
Machine Learning vs. Deep Learning: What’s the Difference? – Gizmodo

Artificial intelligence is everywhere these days, but the fundamentals of how this influential new technology works can be confusing. Two of the most important fields in AI development are machine learning and its sub-field, deep learning. Here's a quick explanation of what these two important disciplines are, and how they're contributing to the evolution of automation.


It's worth reminding ourselves what AI actually is. Proponents of artificial intelligence say they hope to someday create a machine that can think for itself. The human brain is a magnificent instrument, capable of making computations that far outstrip the capacity of any currently existing machine. Software engineers involved in AI development hope to eventually make a machine that can do everything a human can do intellectually but can also surpass it. Currently, the applications of AI in business and government largely amount to predictive algorithms, the kind that suggest your next song on Spotify or try to sell you a similar product to the one you bought on Amazon last week. However, AI evangelists believe that the technology will, eventually, be able to reason and make decisions that are much more complicated. This is where ML and DL come in.

Machine learning (or ML) is a broad category of artificial intelligence that refers to the process by which software programs are taught how to make predictions or decisions. One IBM engineer, Jeff Crume, explains machine learning as "a very sophisticated form of statistical analysis." According to Crume, this analysis allows machines to make predictions or decisions based on data. "The more information that is fed into the system, the more it's able to give us accurate predictions," he says.

Unlike general programming where a machine is engineered to complete a very specific task, machine learning revolves around training an algorithm to identify patterns in data by itself. As previously stated, machine learning encompasses a broad variety of activities.

Deep learning is machine learning. It is one of those previously mentioned sub-categories of machine learning that, like other forms of ML, focuses on teaching AI to think. Unlike some other forms of machine learning, DL seeks to allow algorithms to do much of their work on their own. DL is fueled by mathematical models known as artificial neural networks (ANNs). These networks seek to emulate the processes that naturally occur within the human brain, things like decision-making and pattern identification.

One of the biggest differences between deep learning and other forms of machine learning is the level of supervision that a machine is provided. In less complicated forms of ML, the computer is likely engaged in supervised learning, a process whereby a human helps the machine recognize patterns in labeled, structured data, and thereby improve its ability to carry out predictive analysis.

Machine learning relies on huge amounts of training data. Such data is often compiled by humans via data labeling (many of those humans are not paid very well). Through this process, a training dataset is built, which can then be fed into the AI algorithm and used to teach it to identify patterns. For instance, if a company was training an algorithm to recognize a specific brand of car in photos, it would feed the algorithm huge tranches of photos of that car model that had been manually labeled by human staff. A testing dataset is also created to measure the accuracy of the machine's predictive powers once it has been trained.
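
As a rough illustration of that supervised workflow, the sketch below uses scikit-learn on a synthetic, hypothetical dataset (the labels stand in for the human-made annotations described above): a model is fit on a labeled training split and scored on a held-out testing split.

```python
# Minimal supervised-learning sketch: labeled examples are split into a
# training set (used to learn patterns) and a testing set (used to measure
# how accurate the trained model's predictions are).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical labeled dataset: X holds the examples, y the human-made labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                              # learn from the training dataset
print(accuracy_score(y_test, model.predict(X_test)))     # evaluate on the testing dataset
```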

When it comes to DL, meanwhile, a machine engages in a process called unsupervised learning. Unsupervised learning involves a machine using its neural network to identify patterns in what is called unstructured or raw data, which is data that hasn't yet been labeled or organized into a database. Companies can use automated algorithms to sift through swaths of unorganized data and thereby avoid large amounts of human labor.

ANNs are made up of what are called nodes. According to MIT, one ANN can have thousands or even millions of nodes. These nodes can be a little bit complicated, but the shorthand explanation is that they, like the nodes in the human brain, relay and process information. In a neural network, nodes are arranged in an organized form that is referred to as layers. Thus, deep learning networks involve multiple layers of nodes. Information moves through the network and interacts with its various environs, which contributes to the machine's decision-making process when subjected to a human prompt.

Another key concept in ANNs is the weight, which one commentator compares to the synapses in a human brain. Weights, which are just numerical values, are distributed throughout an AI's neural network and help determine the ultimate outcome of that AI system's final output. Weights are informational inputs that help calibrate a neural network so that it can make decisions. MIT's deep dive on neural networks explains it thusly:

"To each of its incoming connections, a node will assign a number known as a weight. When the network is active, the node receives a different data item (a different number) over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node fires, which in today's neural nets generally means sending the number (the sum of the weighted inputs) along all its outgoing connections."
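
That weighted-sum-and-threshold behavior is simple enough to sketch directly; the numbers below are made up purely for illustration.

```python
import numpy as np

def node_output(inputs, weights, threshold):
    """Toy version of the node described above: multiply each incoming value by
    its weight, add the products, and 'fire' (pass the sum along) only if the
    result exceeds the threshold."""
    weighted_sum = float(np.dot(inputs, weights))
    return weighted_sum if weighted_sum > threshold else 0.0

# Three incoming connections with hypothetical data items and weights.
print(node_output([0.5, 0.2, 0.9], [0.4, 0.3, 0.8], threshold=0.5))  # fires: 0.98
```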

In short: neural networks are structured to help an algorithm come to its own conclusions about data that has been fed to it. Based on its programming, the algorithm can identify helpful connections in large tranches of data, helping humans to draw their own conclusions based on its analysis.

Machine and deep learning help train machines to carry out predictive and interpretive activities that were previously only the domain of humans. This can have a lot of upsides, but the obvious downside is that these machines can (and, let's be honest, will) inevitably be used for nefarious, not just helpful, purposes: things like government and private surveillance systems, and the continued automation of military and defense activity. But they're also, obviously, useful for consumer suggestions or coding and, at their best, medical and health research. Like any other tool, whether artificial intelligence has a good or bad impact on the world largely depends on who is using it.

See more here:
Machine Learning vs. Deep Learning: What's the Difference? - Gizmodo


The Many Ways Biopharma Can Use Artificial Intelligence, Including Generative AI – BioSpace

Pictured: AI robot holding biopharma-related items in hands/Taylor Tieden for BioSpace

As the biopharma industry explores the potential of artificial intelligence, use cases are quickly emerging. Companies are already looking to use generative AI (think ChatGPT) to optimize pharma R&D, from target discovery to drug development to regulatory approval to commercialization and postmarket pharmacovigilance.

Last July, Hong Kong-based Insilico Medicine claimed to be the first company to enter a Phase II clinical trial with a drug fully devised with generative AI. But as with any new and rapidly evolving technology, there are varying viewpoints on how and when to best use generative AI, and there is still plenty of skepticism.

Here are several ways in which the biopharma industry can apply generative AI, according to experts involved in helping to establish AI-based R&D protocols.

Biopharma researchers historically have had to rely on data scientists to hunt down information, and those data scientists then have to figure out whether it's the right data before they can start to do anything with it, said Alister Campbell, vice president and global head of science and technology at R&D-focused software company Dotmatics.

Generative AI changes that equation by automating data collection. Though humans still have to curate and verify the accuracy of machine outputs, Campbell told BioSpace that generative technologies can help optimize leads in discovery and design by speeding up processes.

At the J.P. Morgan Healthcare Conference in January, Jean-Philippe Vert, chief research and development officer at AI biotech Owkin, said that using generative AI helps his company see all of the data in its search for treatments.

Mike King, senior director of product and strategy, technology solutions at IQVIA, noted that Pfizer's Viagra was originally tested for cardiac issues, but the most popular indication today, erectile dysfunction, was discovered by accident. Properly built algorithms could find additional indications to test for so companies don't have to rely on sheer luck, he told BioSpace.

Rachael Brake, chief scientific officer of life sciences informatics startup Zephyr AI, said that AI can help match underserved patient populations with emerging compounds, reducing R&D time and cost in the process. "The value proposition of those novel therapies is that they're solving current unmet need in the field," Brake said. "Making a medicine that doesn't actually solve a problem is not very valuable to anybody."

Generative AI can automate the understanding of basic biological processes. For example, King said, "you could train AI on broad-based biology, science and known protein shapes based on known amino acids, and have it put forward suggestions on how certain proteins would look based on certain structure." That information can then be used to develop novel drug candidates.

Kimberly Powell, VP of healthcare at NVIDIA, suggested that a biopharma company could use Google DeepMinds AlphaFold 2 protein-folding AI technology to predict protein structure, then NVIDIAs MolMIM auto-encoder for small molecule drug discovery.

"The concept of algorithmic approaches and computational approaches to designing new drugs isn't new," Campbell said. "I think what has changed a lot is the methods have improved."

Brake also said that AI might be able to scan patient profiles as well as new literature to uncover emerging drug resistance to approved therapies. "For example, AI can analyze tumor data to understand why they may be considered sensitive or resistant to that particular therapy," Brake said.

King added that drug safety is another area where AI can help parse the data. "The ability to combine structured and unstructured content to look for possible adverse events and product quality issues, that technology is live today," he said. "That's brought about a significant benefit in understanding product performance post-market, but also in identifying possible significant failure modes, where the volume of data isn't necessarily high enough to trigger anything through older pharmacovigilance methods."

Despite the excitement, the biopharma industry has plenty of skepticism and concern about generative AI.

There is a stigma to its use, Campbell said, in part because the work can be opaque, and peer reviewers and regulators need to see explainable methods. This can only be overcome if generative AI provides results that scientists can trust, he said.

The anxiety extends to job security. "Scientists are scared that they are going to be replaced by robots," Campbell said. But at this point, he added, we're so early in the process that that's not realistic. The regular layoffs we see today in the biopharma space are a reflection of the current economic climate, not a shift in staffing strategies as a result of AI implementation.

Neil Versel is a former business editor at BioSpace. Follow him on LinkedIn or X.

Go here to read the rest:
The Many Ways Biopharma Can Use Artificial Intelligence, Including Generative AI - BioSpace


Can Google Give A.I. Answers Without Breaking the Web? – The New York Times

For the past year and a half since ChatGPT was released, a scary question has hovered over the heads of major online publishers: What if Google decides to overhaul its core search engine to feature generative artificial intelligence more prominently and breaks our business in the process?

The question speaks to one of the most fragile dependencies in today's online media ecosystem.

Most big publishers, including The New York Times, receive a significant chunk of traffic from people going to Google, searching for something and clicking on articles about it. That traffic, in turn, allows publishers to sell ads and subscriptions, which pay for the next wave of articles, which Google can then show to people who go searching for the next thing.

The whole symbiotic cycle has worked out fine, more or less, for a decade or two. And even when Google announced its first generative A.I. chatbot, Bard, last year, some online media executives consoled themselves with the thought that Google wouldn't possibly put such an erratic and unproven technology into its search engine, or risk mucking up its lucrative search ads business, which generated $175 billion in revenue last year.

But change is coming.

At its annual developer conference on Tuesday, Google announced that it would start showing A.I.-generated answers, which it calls A.I. Overviews, to hundreds of millions of users in the United States this week. More than a billion users will get them by the end of the year, the company said.

The answers, which are powered by Google's Gemini A.I. technology, will appear at the top of the search results page when users search for things like "vegetarian meal prep options" or "day trips in Miami." They'll give users concise summaries of whatever they're looking for, along with suggested follow-up questions and a list of links they can click on to learn more. (Users will still get traditional search results, too, but they'll have to scroll farther down the page to see them.)

The addition of these answers is the biggest change that Google has made to its core search results page in years, and one that stems from the company's fixation on shoving generative A.I. into as many of its products as possible. It may also be a popular feature with users. I've been testing A.I. Overviews for months through Google's Search Labs program, and have generally found them to be useful and accurate.


More:
Can Google Give A.I. Answers Without Breaking the Web? - The New York Times


OpenAI disbands safety team focused on risk of artificial intelligence causing ‘human extinction’ – New York Post

OpenAI eliminated a team focused on the risks posed by advanced artificial intelligence less than a year after it was formed, and a departing executive warned Friday that safety has "taken a backseat to shiny products" at the company.

The Microsoft-backed ChatGPT maker disbanded its so-called Superalignment team, which was tasked with creating safety measures for artificial general intelligence (AGI) systems that could lead to "the disempowerment of humanity or even human extinction," according to a blog post last July.

The team's dissolution, which was first reported by Wired, came just days after OpenAI executives Ilya Sutskever and Jan Leike announced their resignations from the Sam Altman-led company.

"OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote in a series of X posts on Friday. "But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI."

Sutskever and Leike, who headed OpenAI's safety team, quit shortly after the company unveiled an updated version of ChatGPT that was capable of holding conversations and translating languages for users in real time.

The mind-bending reveal drew immediate comparisons to the 2013 sci-fi film "Her," which features a superintelligent AI voiced by actress Scarlett Johansson.

When reached for comment, OpenAI referred to Altman's tweet in response to Leike's thread.

"I'm super appreciative of @janleike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave," Altman said. "He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days."

Some members of the safety team are being reassigned to other parts of the company, CNBC reported, citing a person familiar with the situation.

AGI broadly refers to AI systems with cognitive abilities equal or superior to those of humans.

In its announcement regarding the safety team's formation last July, OpenAI said it was dedicating 20% of its available computing power toward long-term safety measures and hoped to solve the problem within four years.

Sutskever gave no indication of the reasons that led to his departure in his own X post on Tuesday, though he acknowledged he was "confident that OpenAI will build AGI that is both safe and beneficial" under Altman and the firm's other leads.

Sutskever was notably one of four OpenAI board members who participated in a shocking move to oust Altman from the company last fall. The coup sparked a governance crisis that nearly toppled OpenAI.

OpenAI eventually welcomed Altman back as CEO and unveiled a revamped board of directors.

A subsequent internal review cited a breakdown in trust between the prior Board and Mr. Altman ahead of his firing.

Investigators also concluded that the leadership spat was not related to the safety or security of OpenAI's advanced AI research or the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners, according to a release in March.

More:
OpenAI disbands safety team focused on risk of artificial intelligence causing 'human extinction' - New York Post


Artificial intelligence is already affecting elections – The Mandarin

While AI has the power to be destructive to individuals, it could unravel whole societies too, according to electoral commissioner Tom Rogers.

Speaking to a senate inquiry on Monday, he said artificial intelligence was already affecting elections around the world.

"Countries as diverse as Pakistan, the United States, Indonesia and India have all demonstrated significant and widespread examples of deceptive AI content," Rogers said.

"The AEC does not possess the legislative tools or internal technical capabilities to deter, detect, or adequately deal with false AI-generated content concerning the election process."

"What we're concerned about is AI that misleads citizens about the act of voting; the truth of political statements needs to be lodged somewhere else."

Artificial intelligence has the potential to be as transformative as the Industrial Revolution, and Australia is not ready, a Senate inquiry has heard.

The speed of the development of AI, particularly generative AI, has caught governments around the world flat-footed, and regulators are struggling to keep up with a technological realm they barely understand.

The proprietary nature of most AI models has exacerbated this challenge. When policymakers can't see inside the black box, it is all but impossible for them to know what controls might be needed until people are actually harmed by the technology.

Because misogyny is real, this didn't take long. Concerns about the generation and sharing of abusive images exploded across the internet when AI-generated pornography featuring Taylor Swift was widely shared. About a month later, the Select Committee on Adopting Artificial Intelligence was formed.

In its first hearing on May 20, the committee heard the safeguards around the technology are not sufficient to protect citizens.

ANU's recently minted vice-chancellor and futurist Genevieve Bell said the lack of basic understanding of what AI is was slowing attempts at regulation.

She said the social component of the rise of AI makes up a large part of the public's response and needs to be taken more seriously.

"There's a piece of all of this, which is how people manage AI, that's more of a cultural phenomenon. The ways we think about it are often driven by your age," she said.

"It's driven by the science fiction we grow up with, which is in itself shaped by multiple other points of view."

"So helping our citizens understand that AI is not about to kill John Connor, it does not emerge in a single human form. In fact, it is infinitely more complicated."

"It usually means explaining to people the largest base of robots are vacuum cleaners, and the place AI is most likely to turn up in your life is the algorithm inside Netflix. It's a very different reality to the one we sometimes talk about."

While witnesses all raised concerns about the destructive potential of artificial intelligence, many were quick to remind the committee there was immense productive potential in artificial intelligence too.

The public service is, by and large, not comfortable adopting the technology without a greater understanding of its practical and ethical implications. Since the public service is the key source of information and advice to government, this makes developing informed regulations a non-starter.

But work is underway. The Human Rights Commissioner and CSIRO are working to develop safeguard frameworks for individuals and society and make sure the technology is used for public good.

Healthcare is expected to be one of the greatest beneficiaries, as AI reaches maturity as a diagnostic tool.

Australia is also a signatory to the Bletchley Declaration on AI safety. It calls on signatories to take a proactive approach to both the development and regulation of artificial intelligence.

Rogers said the tech companies have been relatively cooperative with the AEC on AI safety, but less cooperative in other areas of moderation.

He declined to single out any particular tech company because he doesn't want the "Eye of Sauron" to fall upon him.

"When we reach out to them, they ordinarily answer. We're meeting again with Meta later this week, and we've asked them for an overview of tools they've put in place," he said.

"They were part of the 20 companies that signed the Munich Accord earlier this year, where they've pledged to combat disinformation, particularly this year."

Responding to concerns raised by Senator David Pocock, he said there were many instances in which people were able to spread misinformation about which the electoral commission couldn't do anything.

Pocock expressed concern about the use of deepfakes in election campaigns, something that has already taken place in the United States, India and South Korea.

South Korea provides a particularly interesting case study, having introduced legislation banning AI-generated campaign material with a penalty of seven years in prison. Its election was nevertheless rife with AI-generated mis- and disinformation.

He said it was unlikely there would be any changes to legislation before the next election that would enhance protections.

AI is improving the quality of disinformation, making it harder to detect and spreading it more quickly through multiple channels.

"There's misinformation all the time about elections and the AEC; whether that content comes from AI or other sources, we take that seriously."

"Of course, I'd prefer there's no misleading information, but currently if it's authorised, it's lawful; ultimately it's a matter for the Parliament," he said.

"Ultimately, anything that provides extra transparency has got to be a good thing."


See the article here:
Artificial intelligence is already affecting elections - The Mandarin


Screening and diagnosis of cardiovascular disease using artificial intelligence-enabled cardiac magnetic resonance … – Nature.com

Ethics approval

The CMR datasets were acquired retrospectively under the approval of the institutional review boards (IRBs) at each participating institution, including Beijing Fuwai Hospital, Beijing Anzhen Hospital, Guangdong Provincial People's Hospital, the 2nd Affiliated Hospital of Harbin Medical University, the First Hospital of Lanzhou University, Renji Hospital, Tongji Hospital and Peking Union Medical College Hospital. Informed consent was waived by the IRBs. Before model training, testing and reader studies, all data underwent deidentification processes.

The CMR database search was performed for all eight centers to identify CVDs and normal controls. All data were anonymized and deidentified, as per the Health Insurance Portability and Accountability Act Safe Harbor provision56. Inclusion criteria were (1) patients with a definitive diagnosis of CVD and (2) patients with CMR scans at baseline before surgical treatment, if any. Exclusion criteria were (1) incomplete cine or LGE modalities, (2) SAX cine with fewer than five views, (3) CMR images with insufficient scan quality, (4) CVD patients missing clinical data and (5) CMR examinations that could not be interpreted and agreed upon by the committee cardiologists according to the diagnostic criteria (Methods). The detailed diagnostic criteria of the 11 types of CVDs and normal controls included in this study were described in Methods. Table 1 and Extended Data Table 1 present the detailed demographics and distribution of the primary dataset and the external validation sets collected from the other seven medical centers across China. To offer a comprehensive perspective on our primary development dataset, we went the extra mile by collecting the LV ejection fraction (LVEF) metric for all 7,900 subjects (including 1,250 normal controls and 6,650 patients with CVD) within the primary dataset. We meticulously summarized the distribution of demographics and LVEF across the 11 specified CVD classes and the normal control class in Supplementary Table 5. Additionally, we generated density plots to illustrate the distribution of LVEF for each class in the primary dataset, offering a more comprehensive representation (Supplementary Fig. 1).

The fresh consecutive testing set is designed to capture the genuine spectrum of disease phenotypes in the real-world clinical prevalence. To offer a thorough understanding of the severity of cases in alignment with real-world clinical prevalence, we have presented five key cardiac function metrics. These metrics include LVEF, LV mass, LVMi (LV mass index), LV end-diastolic volume and LV end-diastolic volume index. Supplementary Table 6 presents the distribution of demographics and the cardiac functions across 11 CVD classes and the normal control class in the fresh consecutive testing set. For improved visualization and clarity, we have depicted the prevalence of the 11 CVD classes in both the fresh consecutive testing set (n=532 patients with CVD) and the primary discovery dataset (n=6,650 patients with CVD) using pie charts in Supplementary Fig. 2. The fresh consecutive testing set offers a representation of the genuine clinical prevalence. Through direct comparison, it is evident that the primary dataset and the consecutive testing set exhibit very similar CVD prevalence and distribution. The top three most prevalent CVDs referred to the CMR examination remain HCM, DCM and CAD.

All images were acquired by breath-holding and electrocardiographic gating. A balanced steady-state free precession sequence was used for cine images with a continuous sampling from the basal to the apical levels on SAX views and two-chamber, three-chamber and 4CH long-axis views. We included cine MRI from two views in this study: the standard SAX cine and the long-axis 4CH cine. The SAX cine clearly depicts the RV and the LV. The 4CH cine shows the four chambers of heart: right atrium, left atrium, RV and LV.

LGE MRI has been established as the gold standard reference for myocardial viability and replacement fibrosis in the myocardium57,58. In our CMR cohorts, the LGE images were obtained using phase-sensitive inversion recovery sequence with a segmented FLASH readout scheme performed 10-15 min after injection of gadolinium-based contrast with 0.15 mmol kg−1 per bolus. Gadolinium contrast agents can be used to detect areas of fibrosis, as the prolonged washout of the contrast correlates with a reduction in functional capillary density in the irreversibly injured myocardium59. The SAX LGE used in the study was acquired from the SAX view with the same section thickness, covering the entire left ventricle from the base to the apex (nine parallel views for most cases). Note that LGE is an invasive examination that requires contrast injection and was therefore not performed for normal controls.

The typical CMR scan protocol and scanner parameters for the primary and external validation sets are presented in Supplementary Table 7. Extended Data Fig. 2 shows an illustration of cardiac MRIs (SAX cine, 4CH cine and SAX LGE) utilized in model development. Supplementary Videos 1-11 demonstrate example CMR of the 11 types of CVDs.

For each patient in the disease cohort, the textual description of the abnormalities in the CMR and the clinical report was extracted as the main reference. Besides that, all CMR records underwent additional annotation procedures. To annotate the disease cohort, a group of certified CMR experts reviewed all records and clinical reports. Every record was randomly assigned to be reviewed by a single physician specifically for this task, not for any other purpose. All annotators received specific instructions and training regarding how to annotate CMR data to improve labeling consistency. The diagnostic criteria we adopted in this study for each CVD class are described in Methods. CMR examinations that could not be interpreted by physicians received further annotation from a consensus committee of board-certified practicing cardiologists (with >15 years of experience in CMR reading) working in Fuwai Hospital. The CMR examinations that could not be interpreted or agreed upon by the committee were removed from our dataset.

For the independent gold-standard test dataset with 500 patients (Extended Data Table 6) for human-machine comparison, six physicians working in the MRI department at Fuwai Hospital contributed directly to its annotation (the six physicians were not involved in dataset annotation as described above). All participating physicians received specific instructions and training regarding how to annotate CMRs to ensure consistency. We divided the physicians into three groups according to their reading experience in CMR: 3-5 years, 5-10 years and more than 10 years. CMR physicians in each group reviewed a randomly selected set of the 500 CMRs in a nonrepetitive manner.

The CMR preprocessing pipeline aimed to remove the additional burden of the deep neural network learning to find patterns between images for disease classification. All cardiac MRIs were preprocessed to (1) resample MRI images to the same spatial resolution and (2) localize the heart region of interest (ROI) to a crop image. We detailed the preprocessing step for cine and LGE MRI below and in Extended Data Fig. 4.

SAX cine comprises nine parallel views (for most cases) covering the apical to the basal levels of the LV. Each view contains 25 frames (cardiac phases), leading to 225 images in one single SAX cine record. We examined the representational power of different numbers of input views in developing the classification model. Balancing efficiency and effectiveness, the three-view input scheme achieved a greater representation of SAX cine and therefore is adopted throughout the rest of the study. The three-view input scheme includes the middle layer (the mid slice among the parallel layers spanning from the base to the apex), the second layer above the middle layer and the second layer below the middle layer (Extended Data Fig. 2). We extract the ImagePositionPatient tag and the ImageOrientationPatient tag from each Dicom header to locate the three layers. Then, third-order spline interpolation provided by the SimpleITK60 library (https://simpleitk.org/) is applied to resample the raw cine MRIs to the same spatial resolution of 0.994 mm × 0.994 mm, which is the most common spatial resolution across all subjects investigated in this study. We developed a heart ROI segmentation model (the following section) and used it to localize the region of heart for each cine MRI. The heart ROI segmentations predicted by the AI models were manually checked to ensure their accuracy. The extracted ROIs are padded to keep the aspect ratio the same without distortion, and then resized to 224 × 224. The top and bottom 0.1% of the pixels in cine MRI images are clipped to avoid pixels that are outliers of the distribution. The cine images are scaled between 1 and 255, and then normalized by zero mean and unit variance before feeding them to the model. We sample a clip of 25 frames from each full-length cine sequence using a temporal stride of two, resulting in 13 frames as inputs to model development. The 4CH cine shares the same preprocessing pipeline as SAX cine, except that only one single layer (mid slice) is used to represent the 4CH view. For SAX LGE, all layers covering from the base to the apex of the heart are used for diagnostic model development. The preprocessing steps for SAX LGE are similar to that of cine MRI. We resampled SAX LGE along the z-axis to ensure that each LGE sequence contains nine slices because nine is the most common number of views for SAX LGE included in this study.
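
A compressed sketch of the core cine preprocessing steps described above is given below (NumPy only; the SimpleITK resampling, padding and 224 × 224 resizing are omitted, and the ROI box is assumed to come from the detection model described next).

```python
import numpy as np

def preprocess_cine(frames, roi_box, clip_pct=0.1, stride=2):
    """Illustrative cine preprocessing: crop the heart ROI, clip intensity
    outliers, rescale to [1, 255], normalize to zero mean / unit variance,
    and subsample the 25 frames with a temporal stride of 2 (-> 13 frames)."""
    y0, y1, x0, x1 = roi_box                            # heart ROI bounding box
    clip = frames[:, y0:y1, x0:x1].astype(np.float32)

    lo, hi = np.percentile(clip, [clip_pct, 100 - clip_pct])
    clip = np.clip(clip, lo, hi)                        # drop top/bottom 0.1% of pixels

    clip = 1 + 254 * (clip - clip.min()) / (clip.max() - clip.min() + 1e-8)
    clip = (clip - clip.mean()) / (clip.std() + 1e-8)   # zero mean, unit variance
    return clip[::stride]                               # 25 frames -> 13 frames
```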

We developed heart detection DNN models to automatically extract the heart ROI regions (Extended Data Fig. 4). Three DNN models for SAX cine, 4CH cine and SAX LGE were trained and evaluated, respectively. We applied nnU-Net61 as our model backbone and generated the ground-truth segmentation masks for model supervision using a semi-automatic approach. (1) Automatic localization: for SAX cine and 4CH cine, we selected the pixel region with maximum standard deviation across all frames. These regions localize the heart ROI as heart is a beating organ with high standard deviation in its position. Specifically, for each cine movie sequence $s=\{x_1,\ldots,x_n\}$, we computed a single pixel map of standard deviations across all frames, $x_{\mathrm{std}}=\sigma(\{x_1,\ldots,x_n\})$. This map was used to compute an Otsu threshold to binarize and label regions with the greatest variation in cine modality21. For each cine sequence, a binary segmentation mask of the heart ROI is defined for the length of the cardiac cycle. All segmentation masks went through manual checking. The localization procedure captures the heart ROI in around 90% of cases. The rest of the cases are labeled manually. (2) Manual labeling: we manually drew the bounding box capturing the heart ROI, using 3D Slicer62 and ITK-SNAP63. We used the Scissors tool provided by the Segment Editor in 3D Slicer and the Polygon Inspector in ITK-SNAP to locate heart ROI. A binary segmentation mask was saved for each CMR sequence. For SAX LGE, we manually drew the annotations as model supervision.
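
The standard-deviation-plus-Otsu localization lends itself to a short sketch (scikit-image's Otsu threshold standing in for the procedure described above; this is illustrative, not the authors' code).

```python
import numpy as np
from skimage.filters import threshold_otsu

def localize_heart_roi(cine):
    """Rough automatic localization: the beating heart produces the largest
    per-pixel variation across frames, so thresholding the standard-deviation
    map with Otsu's method roughly isolates it. cine: (frames, H, W) array."""
    std_map = cine.std(axis=0)                      # variation over the cardiac cycle
    mask = std_map > threshold_otsu(std_map)        # binarize the high-variation region
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1   # (y0, y1, x0, x1) box
```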

In terms of model architecture, the detection model shares the classic U-net64 backbone with three small adjustments: (1) batch normalization is replaced with instance normalization65, (2) rectified linear unit (ReLU) is replaced with leaky ReLU66 as the activation function and (3) additional auxiliary losses are added in the decoder to all but the two lowest resolutions. The model outputs the binary bounding box that extracts the heart ROI. For model training, we adopted Adam optimizer and stochastic gradient descent (SGD) with Nesterov momentum (µ = 0.99). The initial learning rate was set to be 0.01, and the decay of the learning rate followed the Poly learning rate policy67. Batch size was set to 36. Data augmentation included rotations, scaling, gamma correction and mirroring. The loss function is the sum of cross-entropy and Dice loss68.
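
For reference, the poly learning-rate policy amounts to a simple decay rule; the exponent of 0.9 below is the common default and is an assumption, not stated in the text.

```python
def poly_lr(initial_lr, epoch, max_epochs, power=0.9):
    """Poly learning-rate policy: the rate decays from initial_lr toward zero
    as training progresses. power=0.9 is the usual default (assumed here)."""
    return initial_lr * (1.0 - epoch / max_epochs) ** power

print(poly_lr(0.01, epoch=500, max_epochs=1000))   # about 0.0054 halfway through training
```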

For models based on cine sequence, we sampled a clip of 13 frames from each 25-frame cine video using a temporal stride of 2 and spatial size of 224 × 224, resulting in 7 × 56 × 56 input 3D tokens. The 3D patch partitioning layer obtains the tokens, with each patch/token consisting of a 128-dimensional feature. In practice, 3D convolution without overlapping is applied for this tokenization, and the number of output channels is set to be 128 to project the features of each token to 128 dimensions.
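
A minimal sketch of that tokenization step is shown below; the 3D patch size of 2 × 4 × 4 and the temporal padding are assumptions chosen so that a 13-frame, 224 × 224 clip yields a 7 × 56 × 56 grid of 128-dimensional tokens.

```python
import torch
import torch.nn as nn

# Non-overlapping 3D convolution as the patch-partitioning / embedding layer.
# Patch size (2, 4, 4) and the padding along time are illustrative assumptions.
patch_embed = nn.Conv3d(in_channels=1, out_channels=128,
                        kernel_size=(2, 4, 4), stride=(2, 4, 4), padding=(1, 0, 0))

clip = torch.randn(1, 1, 13, 224, 224)   # (batch, channel, frames, height, width)
tokens = patch_embed(clip)
print(tokens.shape)                       # torch.Size([1, 128, 7, 56, 56])
```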

The developed model consists of four stages, that is, four video swin transformer blocks. Each stage, besides the last stage, performs 2× spatial downsampling in the patch merging layer. It is worth noting that we do not downsample along the temporal dimension. The patch merging layer concatenates the features of each group of 2 × 2 spatially neighboring patches and applies a linear layer to project the concatenated features to half of their dimension. The video swin transformer block consists of a 3D window-based multihead self-attention module and a 3D-shifted window-based multihead self-attention module, followed by a feedforward network, that is, a two-layer multilayer perceptron, with Gaussian error linear unit nonlinearity in between. Layer normalization is applied before each multihead self-attention module and multilayer perceptron, and a residual connection is applied after each module. We used the base version of VST. The number of heads for each stage is 4, 8, 16 and 32. Extended Data Fig. 3a shows the schematic overview of the VST-based framework for modeling SAX cine.

Model performance improved with increasing training data sample size. For the screening model, we used random rotation, random color jitter and addition of a random number. During each step of SGD in the training process, we perturbed each training sample (cine video sequences) with a random rotation (between −45 and +45 degrees for SAX cine and between −20 and +20 degrees for 4CH cine), random color jitter and the addition of a number sampled uniformly between −0.1 and 0.1 to image pixels (pixel values are normalized) to increase or decrease the brightness of the images. For LGE, we used random rotation between −45 and +45 degrees, random color jitter and random flip along the z-axis. Data augmentation resulted in improvement for all models.

First, we developed VST-based models for SAX cine, 4CH cine and SAX LGE, respectively. Then, to fuse information from different modalities, we added a global average pooling layer following the last self-attention module for each VST model. This resulted in a 1,024-dimension feature vector from each modality. We further concatenated the 1,024-dimension vectors and added a fully connected layer on top of that to aggregate the features. The final fully connected softmax layer produces a distribution over the output classes. In terms of training, we loaded and froze the pretrained weights of each VST branch from different modalities using transfer learning69 and only finetuned the last fully connected layers for feature aggregation.
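
The fusion step can be sketched as a small PyTorch module; the 512-unit hidden layer and the 11-class output are assumed sizes, and the three 1,024-dimensional vectors stand for the pooled features of the frozen SAX cine, 4CH cine and SAX LGE branches.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Late-fusion sketch: concatenate per-modality feature vectors and pass
    them through fully connected layers ending in a softmax over the classes."""
    def __init__(self, n_classes=11, feat_dim=1024, n_modalities=3, hidden=512):
        super().__init__()
        self.aggregate = nn.Linear(feat_dim * n_modalities, hidden)
        self.classify = nn.Linear(hidden, n_classes)

    def forward(self, feats):                      # feats: list of (B, feat_dim) tensors
        x = torch.relu(self.aggregate(torch.cat(feats, dim=1)))
        return torch.softmax(self.classify(x), dim=1)

head = FusionHead()
probs = head([torch.randn(2, 1024) for _ in range(3)])   # three frozen VST branches
print(probs.shape)                                        # torch.Size([2, 11])
```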

Following the classic VST configuration27, we employed an AdamW optimizer using a cosine decay learning rate scheduler and 2.5 epochs of linear warmup. A batch size of 32 was used. The backbone VST is initialized from the ImageNet70 and Kinetics-600 (ref. 71) pretrained model; the head is randomly initialized. Model pretraining plays a strikingly important role in VST-based CMR interpretation. We also found that multiplying the learning rate of the backbone by 0.1 improves performance. Specifically, the initial learning rates for the pretrained backbone and randomly initialized head were set to be 1 × 10−4 and 1 × 10−3, respectively. The impact of learning rate modification on the VST backbone was systematically examined as below. We adopt 0.2 stochastic depth rate and 0.05 weight decay for the Swin base model used in this study. To prevent the models from becoming biased toward one class, we balanced the training datasets for both screening and diagnostics using the ClassBalancedDataset sampling strategy72. Each VST branch derived from the single modality was trained for 150 epochs and then fed into the fusion model, following with 20 epochs of finetuning particularly for the fusion layers. For inference, we set the batch size to be one and the number of workers to be four. The training time for model development using four NVIDIA GeForce RTX 3090 graphics processing units with 24 GB VRAM was about 77 h, and the inference time for each subject was only 0.233 s.
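
A sketch of that optimizer and schedule setup in PyTorch follows (the module stand-ins and the rounding of the 2.5-epoch warmup to whole epochs are assumptions made for illustration).

```python
import torch
import torch.nn as nn

# Stand-ins for the pretrained VST backbone and the randomly initialized head.
backbone, head = nn.Linear(8, 8), nn.Linear(8, 11)

# AdamW with the backbone learning rate at 0.1x the head's, 0.05 weight decay.
optimizer = torch.optim.AdamW(
    [{"params": backbone.parameters(), "lr": 1e-4},
     {"params": head.parameters(), "lr": 1e-3}],
    weight_decay=0.05)

# Linear warmup (approximated here as 2 whole epochs) into a cosine decay schedule.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=2)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=148)
scheduler = torch.optim.lr_scheduler.SequentialLR(optimizer, [warmup, cosine], milestones=[2])
```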

The impact of learning rate modification on the VST backbone was systematically examined through a controlled experiment. The experiment encompassed a range of learning rates, from 1 × 10−2 to 1 × 10−6, with a focus on their effects on the AI diagnostic model based on SAX cine. The investigation was conducted on the primary cohort (6,650 CVD patients), utilizing a two-fold configuration for training and the remaining fold for testing. The model was trained for 150 epochs with five different learning rate initializations for the model backbone: 1 × 10−2, 1 × 10−3, 1 × 10−4 (as applied in this study), 1 × 10−5 and 1 × 10−6. Other configurations were kept consistent for a fair and direct comparison, and the training loss for each scheme was plotted for analysis (Supplementary Fig. 3). From the depicted figure, several key observations emerge. When the learning rate is set too high (1 × 10−2, curve in blue color), the model struggles to converge and the training loss fails to descend, in stark contrast to the more optimal setting of 1 × 10−4 (curve in green color). Notably, the model under the 1 × 10−2 learning rate incorrectly classified all samples into the HCM class during testing. Conversely, when the learning rate is set too low (1 × 10−6, curve in purple color), the loss descends very slowly over the training period. As depicted in the figure, the loss curves for 1 × 10−5 and 1 × 10−6 remain at a relatively high level compared with the more effective setting of 1 × 10−4. Further evaluation included the calculation of F1 and area under the receiver operating characteristic curve scores for the testing fold under the aforementioned experimental settings (Supplementary Fig. 3). Notably, the model trained with a learning rate of 1 × 10−2 failed to converge and was consequently excluded from the quantitative metrics. According to the evaluation results, the initialized learning rate of 1 × 10−4 demonstrated superior performance compared with the other settings. Therefore, based on these comprehensive analyses, we selected 1 × 10−4 as the initialized learning rate for our experiment.

We examined the conventional CNN-LSTM architecture in CMR interpretation. The CNN-LSTM consists of a DenseNet encoder with 40 layers and a growth rate of 12 for feature extraction and an LSTM for temporal feature aggregation. The DenseNet encoder comprised a series of two-dimensional convolutions with kernel sizes 1 × 1 and 3 × 3 and global average pooling to extract the feature vector for each input frame. For LSTM, the feature vector for each input frame is fed into the LSTM module sequentially. The LSTM fuses the feature vectors and produces the final classification score after one fully connected layer. For the training configuration of the CNN-LSTM model, we adopt the SGD optimizer with a learning rate of 0.001, a momentum of 0.9 and a weight decay of 0.001. A batch size of four is used for training and one is used for testing. The DenseNet encoder of the CNN-LSTM model is initialized from the pretrained model21 and the LSTM component is randomly initialized. We kept data augmentation, the input scheme and computational resources the same as VST models with the only difference: SAX cine inputs are resized to 64 × 64 due to CNN-LSTM memory constraints.
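
For comparison, the CNN-LSTM baseline boils down to a per-frame encoder followed by a recurrent aggregator; the tiny convolutional encoder below is only a stand-in for the 40-layer DenseNet and is meant to show the data flow, not reproduce the baseline.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of a CNN-LSTM: encode each frame to a feature vector, aggregate
    the sequence with an LSTM, and classify from the last time step."""
    def __init__(self, n_classes=11, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                 # stand-in for the DenseNet encoder
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, clip):                          # clip: (B, T, 1, 64, 64)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)   # per-frame features
        out, _ = self.lstm(feats)                     # temporal aggregation
        return self.fc(out[:, -1])                    # class scores

print(CNNLSTM()(torch.randn(2, 13, 1, 64, 64)).shape)   # torch.Size([2, 11])
```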

The performance of the AI models was evaluated by assessing their sensitivity, specificity, precision and F1 score (harmonic mean of the positive predictive value and sensitivity), with two-sided 95% CIs, as well as the AUC of the ROC with two-sided CIs. The F1 score is complementary to the AUC, which is particularly useful in the setting of multiclass prediction and less sensitive than the AUC in settings of class imbalance. For an aggregate measure of model performance, we computed the class frequency-weighted mean for the F1 score and the AUC73.

The cutoff value was set to 0.5 for screening; the CVD class with the highest probability was the diagnostic prediction. Precision, sensitivity (recall), specificity, PPV, NPV and F1 score of each class are related to true-positive (TP), true-negative (TN), false-positive (FP) and false-negative (FN) rates, with formulas as follows:

$$\text{Sensitivity}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}},$$

$$\text{Specificity}=\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}},$$

$$\mathrm{Precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}},$$

$$\mathrm{PPV}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}},$$

$$\mathrm{NPV}=\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FN}},$$

$$F_1\text{-score}=\frac{2\times \mathrm{Precision}\times \mathrm{Sensitivity}}{\mathrm{Precision}+\mathrm{Sensitivity}}.$$

The ROC space is defined by 1 − specificity and sensitivity as the x axis and the y axis, respectively. It depicts relative trade-offs between true positive and false positive, as the classification threshold goes from zero to one. A random guess will give a point along the diagonal line from the bottom left to the top right. Points above the diagonal line represent good classification results and points below the line represent bad results. We applied the class frequency-weighted F1 score and class frequency-weighted AUC to evaluate the performance of our diagnostic model, with the following formulas:

$$\text{Weighted }F_1\text{-score}=\sum_{i}^{C}\mathrm{ratio}_{i}\,F_1\text{-score}_{i},$$

$$\text{Weighted AUC}=\sum_{i}^{C}\mathrm{ratio}_{i}\,\mathrm{AUC}_{i},$$

where $F_1\text{-score}_{i}$ and $\mathrm{AUC}_{i}$ denote the F1 score and AUC for class $i$, respectively, and $\mathrm{ratio}_{i}$ denotes a frequency ratio for each class $i$.
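
In code, these class-frequency-weighted aggregates correspond to scikit-learn's "weighted" averaging; the sketch below is illustrative and not necessarily the authors' implementation.

```python
from sklearn.metrics import f1_score, roc_auc_score

def weighted_metrics(y_true, y_prob):
    """y_true: integer class labels; y_prob: (n_samples, n_classes) probabilities.
    Per-class F1 and one-vs-rest AUC are averaged with class-frequency weights."""
    y_pred = y_prob.argmax(axis=1)
    weighted_f1 = f1_score(y_true, y_pred, average="weighted")
    weighted_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted")
    return weighted_f1, weighted_auc
```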

In addition, to improve the model interpretability and visualize the features used by the DNN model that determine the final prediction, we used Grad-CAM29 to localize important regions (saliency regions) by visualizing class-specific gradient information. In Grad-CAM, the neuron importance weight $\alpha_{k}^{c}$ is estimated as

$$\alpha_{k}^{c}=\frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^{c}}{\partial A_{ij}^{k}},$$

where $y^{c}$ denotes the gradient score for class $c$ before the softmax and $A^{k}$ denotes the feature map activation of the $k$th layer. After computing the neuron importance weights for each feature map, we can generate a heat map indicating the significant regions related to class $c$ by performing a weighted linear combination of the feature maps, followed with a ReLU activation function as

$$L_{\mathrm{Grad\text{-}CAM}}^{c}=\mathrm{ReLU}\left(\sum_{k}\alpha_{k}^{c}A^{k}\right).$$
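
The two Grad-CAM equations translate directly into a few lines of NumPy, given the feature-map activations and the gradients of the class score with respect to them (however those are extracted from the network).

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (channels, H, W).
    Average the gradients spatially to get the neuron importance weights,
    form the weighted combination of feature maps, and apply ReLU."""
    alpha = gradients.mean(axis=(1, 2))                        # alpha_k^c
    cam = np.tensordot(alpha, feature_maps, axes=([0], [0]))   # sum_k alpha_k^c * A^k
    return np.maximum(cam, 0.0)                                # ReLU
```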

We then used the Shapley values74 to evaluate the influence of each input modality (SAX cine, 4CH cine and SAX LGE). The Shapley value is a principled attribution method used in AI to quantify the contribution of individual input features by assigning each input modality an importance value for a particular prediction. The definition of the Shapley value75 is given in equations below:

$$\phi_{i}(v)=\sum_{S\subseteq N\setminus\{i\}}\binom{n}{1,\,|S|,\,n-|S|-1}^{-1}\left(v(S\cup\{i\})-v(S)\right),$$

where $\phi_{i}(v)$ denotes the contribution value of input component $i$, namely the Shapley value of each input modality (player), $N$ is the set of layers, $n$ is the number of layers and $v$ is a function mapping subsets of layers to the real numbers, $v:2^{N}\to \mathbb{R}$, with $v(\varnothing)=0$, where $\varnothing$ denotes the empty set. A set of players is called a coalition. The function $v$ is called a characteristic function: if $S$ is a coalition of players, then $v(S)$, called the worth of coalition $S$, describes the total expected sum of payoffs the members of $S$ can obtain by cooperation. The sum extends over all subsets $S$ of $N$ not containing input component $i$; also note that $\binom{n}{a,\,b,\,c}$ is the multinomial coefficient. This formula can also be interpreted as

$$\phi_{i}(v)=\frac{1}{\text{Number of layers}}\sum_{\text{coalitions including }i}\frac{\text{Marginal contribution of }i\text{ to coalition}}{\text{Number of coalitions excluding }i\text{ of this size}}.$$
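
With only three input modalities, the Shapley values can be computed exactly by enumerating every coalition. The sketch below assumes a `value` function that scores a subset of modalities (for example, the validation performance of a model restricted to those inputs); the scores shown are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: for each player, average its marginal contribution
    v(S + {i}) - v(S) over all coalitions S, with the usual coalition weights."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical coalition scores for the three CMR modalities.
scores = {frozenset(): 0.0,
          frozenset({"sax_cine"}): 0.80, frozenset({"4ch_cine"}): 0.75,
          frozenset({"sax_lge"}): 0.78,
          frozenset({"sax_cine", "4ch_cine"}): 0.85,
          frozenset({"sax_cine", "sax_lge"}): 0.88,
          frozenset({"4ch_cine", "sax_lge"}): 0.83,
          frozenset({"sax_cine", "4ch_cine", "sax_lge"}): 0.90}
print(shapley_values(["sax_cine", "4ch_cine", "sax_lge"], lambda s: scores[frozenset(s)]))
```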

The diagnosis of myocardial infarction or ischemic cardiomyopathy is based on the European Society of Cardiology, American College of Cardiology and American Heart Association committee criteria76 with significant stenosis on invasive coronary angiography (CAG) or coronary computed tomography angiography, and CMR showed subendocardial or transmural LGE with matching coronary arteries. We excluded cases without available CAG or with inadequate image quality due to arrhythmia or respiratory motion artifact.

We followed the 2020 American Heart Association and American College of Cardiology guidelines for the diagnosis of patients with HCM77. The clinical diagnosis of HCM was made by CMR showing a maximal end-diastolic wall thickness of ≥15 mm anywhere in the LV, in the absence of another cause of hypertrophy in adults. More limited hypertrophy (13-14 mm) can be diagnostic when present in family members of a patient with HCM or in conjunction with a positive genetic test.

We excluded cases with the following conditions:

Valvular heart disease (aortic valve stenosis, etc.)

Long-term uncontrolled hypertension

Inflammatory heart disease (sarcoidosis, etc.)

Infiltrative cardiomyopathy (amyloidosis, Fabry disease, etc.)

Septal myectomy or alcohol ablation before CMR

CMR images with poor quality

The diagnosis of DCM is based on the diagnostic criteria of the World Health Organization78. Inclusion criteria were based on enlarged LV end-diastolic dimension (>60 mm) and reduced LVEF (<45%). The exclusion criteria were as follows:

Significant stenosis of coronary artery (>50% stenosis, assessed on CAG or coronary computed tomography angiography)

Severe valvular disease, hypertension or congenital heart disease

Evidence of acute or subacute myocarditis (T2 weighted image and laboratory tests)

Any other metabolic disease through medical documentation

Inadequate CMR quality

The diagnosis of LVNC is based on previous studies32,79, as follows:

The presence of noncompacted and compacted LV myocardium with a two-layered appearance, with at least involvement of the LV apex

End-diastolic noncompaction/compaction ratio >2.3 on long-axis views and ≥3 on SAX views

Noncompacted mass >20% of the global LV mass

No pathologic (pressure/volume load, for example, hypertension) or physiologic (for example, pregnancy and vigorous physical activity) remodeling factors leading to excessive trabeculation

The diagnostic standards for ARVC were based on the revised Task Force Criteria80 score with either two major criteria, one major and two minor criteria or four minor criteria. The major criteria include regional RV akinesia or dyskinesia or dyssynchronous RV contraction, ratio of RV end-diastolic volume to body surface area >110 ml m−2 (male) or >100 ml m−2 (female) or RV ejection fraction <40%; fibrous replacement of the RV free wall myocardium, with or without fatty replacement of tissue on endomyocardial biopsy; repolarization abnormalities and depolarization or conduction abnormalities on ECG test.

The diagnosis of CAM is based on endomyocardial biopsy or extracardiac biopsy specimens showing positive birefringence with Congo red staining under polarized light, and with native and enhanced CMR imaging in a pattern consistent with CAM: LV wall thickness of more than 12 mm shown by CMR without other known cause, with and without diffuse LGE81.

RCM is characterized by ventricular filling difficulties with increased stiffness of the myocardium. The restrictive cardiomyopathies are defined as restrictive ventricular physiology in the presence of normal or reduced diastolic volumes52,82, as follows:

Nondilated LV or RV with diastolic dysfunction

Bi-atrial dilation

Preserved ejection fraction (LVEF ≥50%)

We excluded subjects that met the following criteria:

With a reduced LV systolic function

Severe atrial fibrillation

Severe valvular disease, hypertension or congenital heart disease

Significant stenosis of coronary artery.

The diagnosis of PAH is based on the results of right heart catheterization examination. Patients are included in this study if they were clinically diagnosed as PAH83:

Mean pulmonary artery pressure (mPAP) ≥25 mmHg

Pulmonary capillary wedge pressure (PCWP) <15 mmHg

Pulmonary vascular resistance (PVR) >3 Wood units at rest

We excluded subjects with the following criteria:

Any evidence of cardiomyopathy, myocarditis, CAD, myocardial infarction, valvular disease, or constrictive pericarditis.

Any evidence of respiratory diseases.

History of cardiac surgery

The diagnosis of Ebstein's anomaly is based on apical displacement of tricuspid valve leaflets (≥8 mm m−2) with fibrous and muscular attachments to the underlying myocardium31. Patients with other concomitant malformation (for example, congenitally corrected transposition with Ebstein's anomaly) and history of cardiac surgery were excluded.

The diagnosis of acute myocarditis is based on the diagnostic criteria for clinically suspected myocarditis, as recommended by the European Society of Cardiology Working Group on Myocardial and Pericardial Diseases84, and is fulfilled by meeting the Lake Louise criteria85 or by confirmation through endomyocardial biopsy.

Patients with clinically acute myocarditis had the following: acute chest pain, signs of acute myocardial injury (electrocardiographic changes and/or elevated troponin level) and increased laboratory markers of inflammation (for example, C-reactive protein level). CAD was excluded before cardiac MRI. Patients with preexisting CVD were excluded.

The diagnostic criteria for HHD include (1) a history of prolonged, uncontrolled arterial hypertension and (2) concentric hypertrophy with left ventricular maximal wall thickness ≥12 mm.

We excluded patients with the following conditions:

Any other causes of LV hypertrophy

Cardiomyopathy

Obstructive coronary heart disease

Severe valvular disease

Inflammatory heart disease

Severe ventricular arrhythmia such as ventricular tachycardia or left bundle branch block

Poor CMR imaging quality

Healthy controls were recruited as volunteers without CVDs (including cardiomyopathy, CAD, severe arrhythmia or conduction block, valvular disease, congenital heart disease and so on) and other organic or systemic diseases on the comprehensive evaluation by patient history, clinical assessment, ECG and echocardiography.

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

More here:
Screening and diagnosis of cardiovascular disease using artificial intelligence-enabled cardiac magnetic resonance ... - Nature.com


Beyond Graduation: How AI Is Reshaping Career Paths for College Students – Chapelboro.com

Story by Jessica Walker

David Kim, a UNC-Chapel Hill senior and computer science major, has submitted 326 job applications so far. He's only reached the interview round for three of them.

Kim said this is normal for computer science majors like himself, who are still applying for jobs with graduation around the corner. He said some of his classmates have applied to around 1,000 jobs and are still waiting to hear back.

Kim said career competitiveness is often attributed to the economy and job market, but he also points to a newer wave of technology: artificial intelligence.

AI aims to match, or even exceed, human intelligence when solving problems. That kind of advanced technology is financially attractive to businesses, Kim said.

"We'll see how [companies] try to automate the jobs away," he said. "I think from an economic standpoint, [AI] is definitely going to take away some jobs. And that's scary."

ChatGPT is one of the most popular generative artificial intelligence tools. AI like ChatGPT leave students wondering, will AI take their jobs? | Photo taken by Adrian Tillman

Mohammad Hossein Jarrahi, a UNC-CH professor in the School of Information and Library Science, studies the relationship between humans and AI in the workforce. To him, the threat of AI boils down to one concept: self-learning.

"[AI systems] are adaptive in their learning, and that is the source of opportunity and source of threats," Jarrahi said. "Because if they're learning really well, what's going to happen to me? If they're learning somehow independent of me as a knowledge worker, then what is my contribution?"

Jarrahi said that blue-collar jobs are usually at risk from new technology and automation, but this new wave of AI also affects degree-seeking knowledge workers, a group that was thought to be immune.

"If you're not worried, you're probably not paying attention," Jarrahi said.

Kim said it's difficult to predict the future, but he knows he will have to find ways to compete in the evolving job market now influenced by AI.

"I think for the industry, [AI] is actually good. But for a graduating senior or even a junior, a sophomore computer science student now, or even graduating high school students who might want to study computer science, that's a pretty big worry," Kim said. "I don't know what's gonna happen. I don't think anyone does. But we'll have to adapt somehow, like always."

Undergraduate uncertainty

The fear of AI replacing jobs isn't just affecting computer science students.

As a UNC-CH sophomore, Sarayu Thondapu has already had multiple conversations about AIs impact on her future.

Thondapu is currently studying economics and political science on the pre-law track. During her winter break back home in Charlotte, North Carolina, her uncle told her to be wary of AI's potential impact on legal professions. For example, the AI program LegalGPT can perform similar tasks to legal assistants or paralegals.

Scott Geier, a professor in the Hussman School of Media and Journalism at UNC-CH, said that junior attorneys, for example, are at risk of losing their jobs because AI can complete the same tasks, like reviewing documents and writing briefs.

"Anything that involves analyzing information and doing so quickly and efficiently, AI can already do that better. So it's going to be better, faster and cheaper," Geier said. "And if something is better, faster and cheaper, they're gonna do the robot."

Now, AI leaves Thondapu questioning her career.

"I wanted to go into law to be someone that can help people, someone that can truly connect with the cases that I'm working with and can be of assistance toward them, and I wouldn't leave them out to dry," Thondapu said. "I guess I worry that if I end up relying on ChatGPT or artificial intelligence too much, I kind of forget the reason why I'm there in the first place."

Halfway through her undergraduate experience, Thondapu thinks about her future graduation and often wonders if it will all be worth it.

"We spent four years of our lives at an undergraduate institution. We worked our butts off. We did a lot to get to the places we're at and we gained a lot of experience," Thondapu said. "But then to realize that something that we're responsible for creating might actually end up dashing all our efforts, I definitely think that's really scary."

Due to her "survival mentality," she said she is now incorporating technology classes and a data science credential into her studies to be competitive in the job market and help secure her future.

"I think this is something that we've all come to terms with: We need to know something about computers to basically exist in a world like this," Thondapu said.

However, she said she fears that with too much attention on computers, this generation will forget how to communicate with passion and humanity. As a result, Thondapu added a creative writing minor to her studies to compete with AI.

Kim also considered altering his graduation plans. He said he originally focused on job applications related to software engineering and web development, but after this wave of AI, he became interested in applying to more machine learning roles.

But it's still a change he said he has to think about, as these roles often require additional schooling, time and money.

AI vs. college degrees

The value of college degrees has always been debated, Jarrahi said. However, higher education's value in the era of AI adds new twists to the argument.

Google offers career certificates to anyone interested in the technology space, with no prior experience necessary. Their website advertises the programs as a real path to in-demand jobs in under six months.

"It costs less than college, takes less time and Google said the program is worth the same as a four-year degree," Geier said.

"Some may think, 'College costs too much. It's not worth the sticker price,'" Geier said. "And people are starting to come to realize that if AI now is part of the equation, that's just going to accelerate that mindset."

Duke University now offers a degree program for AI learning: Duke's AI Master of Engineering program.

Jared Bailey is the current president of Duke's AI Competition Club and a student in the master's program, which costs $75,877 in tuition for the typical 12-month duration: two semesters and a summer session.

The program includes other flexible education options like an extended track of 16 months, which costs up to $95,000, and the online program for 24 months, which costs $98,970.

But Bailey believes the program is worth it.

"A smart student investigates to understand if their education will have a fair return on investment," he said. "I do not see a world where students are unable to find fields to pursue which offer a fair return."

Duke's AI program website posted that the degree will provide great graduate outcomes in jobs around engineering and data science.

Bailey said AI has the potential to help other industries, not just in science, engineering or technology. He said one classmate in the program is a medical doctor who uses computer vision to identify diseases in high-resolution photos.

Bailey related the advancements of AI to the creation of the camera, the internet and personal computers. He said although they all received initial pushback, they ended up enhancing the work of humanity.

"Duke has largely embraced student use of AI," Bailey said. "When I was younger, educators pushed back on student use of calculators and the internet. It's refreshing to see Duke take a different stance and embrace this new technology."

"AI is here to enhance our work and not compete with it," he said.

Embracing AI

Professional editor Erin Servais believes that humans can collaborate with AI, and she's even incorporated it into her career.

Servais has edited professionally, from line editing to developmental editing, since 2008. Last year, Servais' career changed when she used ChatGPT to copy edit for the first time.

"I was so shocked by how accurate and fast it was," Servais said. "And I knew that it was going to have big effects on our profession. I knew that immediately, the very first time I tried it."

She then created the course AI for Editors to prepare and educate editors on how AI programs like ChatGPT are reshaping the profession.

"Because ultimately, AI is replacing editors," she said.

"People are losing their jobs to artificial intelligence and in a really just unintelligent way," Servais said. "It's not a good thing, and it's not going to help readers; it's not going to help the writers; it's not going to help anyone."

But if editors learn how to use AI, editing can be more accurate and efficient for organizations, she said. An editor with knowledge of AI may have better job security and value in the workplace, she said.

The next evolution of jobs will revolve around guiding AI programs and checking their work rather than manually making the changes in a document, Servais said. But humanity is still essential, she said.

"We don't want AI to do our jobs because we still need to double check it and make sure that what it is producing is quality and factual. And humans are needed for that still," Servais said.

Geier said AI might take on the heavy lifting in most professional spaces, but with human oversight as well. He said the change will happen gradually, though.

He doesn't think that students graduating now will lose a job because of AI, as long as they prepare. Those who don't learn about AI, however, will be left behind, he said.

Geier said students need to give themselves an edge by working with AI in a way others can't. And that's what universities need to be teaching, he said.

"You've got to make yourself relevant with using AI in currently what you're doing," Geier said. "The way it's going to be is when you come out of here, the employers are going to ask, 'Can some rando, some stranger off the street, come in and do the same job that [a student] is doing just by writing a prompt into AI?' If the answer is yes, you're out of a job."

Stories from the UNC Media Hub are written by senior students from various concentrations in the Hussman School of Journalism and Media working together to find, produce and market unique stories all designed to capture multiple angles and perspectives from across North Carolina.


See the original post:

Beyond Graduation: How AI Is Reshaping Career Paths for College Students - Chapelboro.com


Dunkirk native receives honor | News, Sports, Jobs – Evening Observer

Dunkirk native David Van Wey graduated from Dunkirk High School in the class of 1985. His principal was Mr. John Mancuso and his computer science teacher was Mr. James Will (a DHS Class of 1961 graduate). Both encouraged Van Wey to further his education in the computer science field, and after graduation he began his college career at the Rochester Institute of Technology.

While furthering his education in a bachelor's-level computer science program at RIT, he simultaneously worked in the office of the National Technical Institute for the Deaf (NTID). Having made many new friends and colleagues, he was referred for a position at the University of Rochester in the Lab for Laser Energetics. Working in the laser lab sparked an interest that led him to further his education, ultimately receiving a master's degree in computer science from the University of Rochester.

In April 2023, he received a letter from the President of the University of Rochester, Sarah C. Mangelsdorf, stating he was chosen to receive one of the three Witmer Awards for Distinguished Service. Mangelsdorf stated in the letter:

"For over 30 years, your commitment to the Laboratory for Laser Energetics (LLE) and University has been invaluable. From creating relational databases to modernizing operations years ago, to standing watch over the laser system one shift per week for over 20 years as a trained power conditioning operator, to becoming a highly respected and appreciated human resources professional today, your contributions exemplify our Meliora values."

He was presented with the award and congratulated by the University of Rochester's Board of Trustees. At this time, Van Wey plans to continue the rest of his career at the University of Rochester, eventually retiring in his hometown of Dunkirk.


See original here:

Dunkirk native receives honor | News, Sports, Jobs - Evening Observer


Artificial intelligence and us: Marquette's Intersection faculty roundtable | Marquette Today – Marquette Today

Intersection is a recurring feature in Marquette Magazine that brings together faculty members from different disciplines to share perspectives on a consequential topic. This time, with large-language models such as ChatGPT and artificial-intelligence-driven image and video generators exploding into our lives, schools and economy, three professors reflect on the change AI is making us grapple with. Following are key excerpts from a conversation with them.

What's the most promising or exciting thing you've seen from the world of AI over the past year?

MZ: When I study how users interact with these technologies, there's a lot of utility. I'm not even thinking about ChatGPT or similar platforms, but about how smart devices are becoming better at processing voice commands and predicting my needs and wants. I joke in my classes that we critique the issues and challenges posed by AI, but I rely on Apple Maps. I rely on Grammarly to autocorrect my grammar. So, there are definitely benefits. AI is helping to make a lot of these tools better for a lot of people.

NY: Generative AI is the most promising thing that I have seen. I have been using it in a lot of useful ways for my research to address challenges that had been hard to solve. I can generate synthetic data to compensate for a lack of enough data. I can remove the biases in my data set. This is something that has amazed me in the last few years.

JSL: The way AI can handle huge sets of data has wonderful implications for our approaches to major ethical problems, such as climate change, because AI can handle calculations that an individual person can't. Same with medicine. So on those issues AI could be really promising, especially when we make accountability and transparency priorities, so people understand, at least somewhat, what's going on behind what AI generated for them.

What has been the most concerning development or the AI-related challenge that most urgently needs addressing?

JSL: For professors in the humanities, what's going on in our circles is the question of what we're trying to teach our students to do, composing an essay, most obviously. Is that being perceived, or is it, in fact, not even useful to them anymore? How can we convince our students that the creative process, which is something we've been given by God, the act of writing and the rigor that entails, is actually important for them? It's like a basketball player practicing their shots. There are some things you just can't have a machine do for you, for your own growth. (Saint-Laurent also cited concerns with the use of deep fakes by malicious actors or states, and with AI potentially worsening inequalities.)

NY: There are a few things I should mention: ethical and privacy risks, the use of generative AI for creating content (fake audio, video, images and even assignments at school) that can put people at risk. My other concern is our need to learn how to have human-AI collaboration, because it's crucial to know what you need from AI and how to use it correctly. A final point in terms of using these algorithms is their explainability. Do their processes make sense? The algorithms are getting better and better, but the interpretability of the AI is another concern.

MZ: It's hard to add to the points already being made. But a broader concern that I share with students is an overall kind of quantification bias that's emerged that assumes AI or anything data-driven will be inherently correct and better than having a human make a decision or a prediction. We rely on algorithmic and AI-driven systems without having that explainability, as Nasim was saying. And now that data, the things we can compute and put into a model, are what matters most, that could have an impact on things like humanness or imperfection or broader kinds of humanistic qualities that we're losing because of the reliance on these models.

How dramatically do you see AI impacting higher education? How should Marquette prepare for that impact?

MZ: Part of me asks, why are we treating this differently than the emergence of the calculator or an online encyclopedia? Students have always been able to copy or find shortcuts for their work, but it does seem to be at a different scale now. Does that require us to change our mode of instruction or what we're expecting of students? I suspect we expect different things in a math classroom than we did 30 years ago because memorizing the multiplication tables is just not as necessary.

Still, in our computer science classrooms, we're struggling because there are tools out there that can write code for our coding assignments. Students can ace the assignments, but when there's an exam and they're forced to do it on their own, they're suddenly struggling, realizing what they're not learning. That puts pressure on us as instructors to help students see that difference.

JSL: It's kind of hard to underestimate the impact. Frankly, I'm actually worried about the divisions in the faculty and the students that this might cause. This has to be an interdisciplinary endeavor for us as a university, so we are talking to one another in important ways and not bringing our own bias against other disciplines, saying, "We know this better than you do." It's impacting all of us, and it's all of our responsibility.

We're not going to have a unified voice, but we must be able to identify what our concerns are, what is in keeping with the Jesuit mission of the university, how are those still really important questions for us all, so we're not becoming like the Luddites over here and the techies over there. No, we have to put our students at the center and also us as professors to really try to get the human heart back in there. We can see what AI can do. Yes, it is amazing, but it also can give us greater awe about what the human person is too, what our capacities are, and to help our students not lose sight of that.

NY: Most of the faculty are now struggling with assignments and things that are given to them by students who may be using AI tools. It's all part of an adjustment process. It reminds me of how people were not ready to use elevators when they were first introduced; people were still using stairs. We need to get ready. As faculty, we need to learn how to use these tools and teach students how to use these tools correctly. There is now software to help us detect fakes and copied information. So, we need to get ready and define rules. Then we can probably even benefit in the classroom. AI may help us find personalized content for the students of the future based on their needs and GPA, and recommendations for custom course content. We could benefit as people did when they began using elevators.

Visit link:

Artificial intelligence and us Marquette's Intersection faculty roundtable | Marquette Today - Marquette Today


The tentacles of retracted science reach deep into social media. A simple button could change that. – EurekAlert

Image: How the interface showing more information about retracted science would work. Credit: Judy Kay and authors, University of Sydney

In 1998, a paper linking childhood vaccines with autism was published in the prestigious journal The Lancet, only to be retracted in 2010 when the science was debunked.

Fourteen years since its retraction, the paper's original claim continues to flourish on social media, fuelling misinformation and disinformation around vaccine safety and efficacy.

A University of Sydney team is hoping to help social media users identify posts featuring misinformation and disinformation arising from now-debunked science. They have developed and tested a new interface that helps users discover further information about potentially fraught claims on social media.

They created and tested the efficacy of adding a "more information" button to social media posts. The button links to a drop-down which allows users to see more details about claims or information in news posts, including information on whether that news is based on retracted science. The researchers say social media platforms could use an algorithm to link posts to details of retracted science.

Testing of the interface among a group of participants showed that when people understand the idea of retraction and can easily find when health news is based on a claim from retracted research, it can help reduce the impact and spread of misinformation as they are less likely to share it.

"Knowledge is power," said Professor Judy Kay from the School of Computer Science, who led the research. During the height of the COVID-19 pandemic, myths around the efficacy and safety of vaccines abounded. "We want to help people to better understand when science has been debunked or challenged so they can make informed decisions about their health," she said.

"The ability to read and properly interpret often complex scientific papers is a very niche skill; not everybody has that literacy or is up to date on the latest science. Many people would have seen posts about now-debunked vaccine research and thought: it was published in a medical journal, so it must be true. Sadly, that isn't the case for retracted publications."

"Social media platforms could do much better than they do now," said co-author and PhD student Waheeb Yaqub. During the height of the COVID-19 pandemic, myths around the efficacy and safety of vaccines spread like wildfire.

"Our approach shows that when people understand the idea of retraction and can find when health news is based on a retracted science article, it can reduce the impact and spread of misinformation," he said.

Tool boosts literacy of processes behind scientific research

The research was conducted with 44 participants who started with little or no understanding of scientific retraction. After completing a five-minute tutorial, they rated how various reasons for retraction make a paper's findings invalid.

The researchers then studied how participants used the "More Information" button. They found the new information altered the participants' beliefs on three health claims based on retracted papers shared on social media.

These claims were: that masks are effective in limiting the spread of coronavirus; that the Mediterranean diet is effective in reducing heart disease; and that snacking while watching an action movie leads to overeating.

The first claim was based on two papers, one of which had been retracted and one of which hadn't. The other two claims were based on retracted papers. The researchers specifically chose papers about which participants would have differing knowledge.

Participants confidently considered masks to be effective. Most didn't know about the Mediterranean diet and so were unsure about whether that claim was true. Many people's personal experience of snacking during films made them believe that claim was true.

The button influenced participants when they knew little about a topic to begin with. When the participants discovered the post was based on a retracted paper, they were less likely to like or share it.

On social media, both misinformation (the inadvertent spread of false information) and disinformation (false information deliberately spread with malicious intent) are rising.

Papers can be retracted when problems with methodology, results or experiments are found.

The researchers say it would be feasible for social media platforms to develop back-end software that links posts to databases of retracted papers.
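The paper does not publish platform code, but the linking step can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not the researchers' implementation: the RETRACTED_DOIS table, the example DOIs and the retraction reasons are made-up placeholders standing in for a real, continuously updated retraction database, and a real system would surface the result through the "more information" drop-down described above.

import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical back-end table mapping retracted DOIs to the reason for retraction.
RETRACTED_DOIS = {
    "10.1000/example.retracted.001": "Errors in methodology invalidated the findings.",
    "10.1000/example.retracted.002": "Results could not be reproduced; paper withdrawn.",
}

# DOIs follow the pattern "10.<registrant>/<suffix>"; this regex captures most of them.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")

@dataclass
class MoreInfo:
    doi: str
    retracted: bool
    reason: Optional[str] = None

def check_post(post_text: str) -> list[MoreInfo]:
    """Return a 'more information' entry for every DOI mentioned in a post."""
    results = []
    for doi in DOI_PATTERN.findall(post_text):
        reason = RETRACTED_DOIS.get(doi)
        results.append(MoreInfo(doi=doi, retracted=reason is not None, reason=reason))
    return results

if __name__ == "__main__":
    post = ("New study says masks don't work! "
            "Source: https://doi.org/10.1000/example.retracted.001")
    for info in check_post(post):
        status = "RETRACTED" if info.retracted else "no retraction on record"
        print(f"{info.doi}: {status}" + (f" ({info.reason})" if info.reason else ""))

In this sketch, the platform only needs to extract any DOIs a post cites and look them up in the retraction table; everything beyond that lookup is presentation.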

"If social media platforms want to maintain their quality and integrity, they should look to implement simple methods like ours," Professor Kay said.

The study was published in Proceedings of the ACM on Human-Computer Interaction.

DECLARATION

The authors declare no conflicts of interest. Waheeb Yaqub is the recipient of a research scholarship.

Journal: Proceedings of the ACM on Human-Computer Interaction

Method of research: Survey

Subject of research: People

Article title: Foundations for Enabling People to Recognise Misinformation in Social Media News based on Retracted Science

Article publication date: 26-Apr-2024


The rest is here:

The tentacles of retracted science reach deep into social media. A simple button could change that. - EurekAlert
