
AlphaFold touted as next big thing for drug discovery, but is it? – Nature.com

This protein, whose structure was predicted by AlphaFold, is part of the nuclear pore complex, which is a gateway for molecules entering a cell's nucleus and is a drug target. Credit: DeepMind

After Google DeepMind's AlphaFold proved that it could predict the 3D shapes of proteins with high accuracy in 2020, chemists became excited about the promise of using the open-source artificial-intelligence (AI) programme to discover drugs more quickly and cheaply. Most drugs work by binding to various sites on proteins, and AlphaFold could predict the structures for proteins that scientists previously knew little about.

Last month, the biotechnology firm Recursion, based in Salt Lake City, Utah, announced that it had calculated how 36 billion potential drug compounds could bind to more than 15,000 human proteins whose structures were predicted by AlphaFold. To pull off the massive computation, Recursion used its own AI tool, MatchMaker, that matched binding pockets on the predicted structures with suitably shaped small molecules, or ligands, from a database called Enamine Real Space.


"Lots of people have predicted how molecules would bind with proteins," says Chris Gibson, Recursion's co-founder and chief executive, "but this many predictions is pretty unprecedented."

But not everyone is as bullish about AlphaFold revolutionizing drug discovery, at least not yet. In a paper published in eLife the day before Recursion's announcement[1], a team of scientists at Stanford University in California showed that AlphaFold's prowess at predicting protein structures doesn't yet translate into solid leads for ligand binding.

"Models like AlphaFold are really good with [protein] structures, but we need to put some thought into how we're going to use them for drug discovery," says Masha Karelina, a biophysicist at Stanford and co-author of the paper.

Others who spoke to Nature agree that this type of effort offers impressive amounts of data, but they aren't yet sure about its quality. Biotech announcements such as the one from Recursion aren't typically accompanied by validation data: confirmation from laboratory experiments that a model has accurately predicted binding. The calculated interactions are also based on predicted, rather than experimentally determined, protein structures, which might not contain the atomic-level resolution that drug developers need to pinpoint where the strongest binding might occur. What's more, the sheer number of predicted interactions (Recursion predicted 2.8 quadrillion) means that even a small percentage of false-positive hits can lead to costly delays while scientists waste valuable time trying to validate them, says Brian Shoichet, a pharmaceutical chemist at the University of California, San Francisco.
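To see why Shoichet's worry bites at this scale, consider a back-of-the-envelope sketch in Python. Only the 2.8-quadrillion figure comes from the article; the false-positive rate and triage speed below are invented purely for illustration.

```python
# Back-of-the-envelope: why a "small" false-positive rate still swamps
# lab capacity. The 2.8e15 count is from the article; the rates are
# hypothetical assumptions, not Recursion's figures.
predictions = 2.8e15          # predicted protein-ligand interactions
false_positive_rate = 1e-4    # suppose just 0.01% are spurious hits

false_leads = predictions * false_positive_rate
print(f"{false_leads:.1e} false leads to triage")   # 2.8e+11, i.e. 280 billion

# Even checking one candidate per minute, around the clock:
minutes_per_year = 60 * 24 * 365
print(f"{false_leads / minutes_per_year:,.0f} lab-years to clear the queue")
```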

The result, Shoichet says, is "a lot of excitement, but also a lot of questions."

"The idea of using computational tools in drug discovery is to make it easier, faster and cheaper to play with all the parameters that make a good drug," says Vsevolod Katritch, a computational biologist at the University of Southern California in Los Angeles. By using AI models to find leads, a drug company might need to test only a few hundred compounds in the lab, instead of thousands. This can shave millions off the cost and bring a compound to market in years instead of decades.


AlphaFold and similar programs, such as RoseTTAFold, which was developed by an international team led by researchers at the University of Washington's Institute for Protein Design, promise to shake up the pharmaceutical industry further, because the structures of many human proteins had been lacking, making it difficult to find treatments for some diseases. The programmes have become so good at predicting 3D protein shapes that, of the 200 million protein structures deposited into a database last year, the European Molecular Biology Laboratory's European Bioinformatics Institute deemed 35% to be highly accurate, as good as experimentally determined structures, and another 45% accurate enough for some applications.

On the surface, making the leap from AlphaFold's and RoseTTAFold's protein structures to the prediction of ligand binding doesn't seem like such a big one, Karelina says. She initially thought that modelling how a small molecule docks to a predicted protein structure (which usually involves estimating the energy released during ligand binding) would be easy. But when she set out to test it, she found that docking to AlphaFold models is much less accurate than to protein structures that are experimentally determined[1]. Karelina's still not 100% sure why, but she thinks that small variations in the orientation of amino-acid side chains in the models versus the experimental structures could be behind the gap. When drugs bind, they can also slightly alter protein shapes, something that AlphaFold structures don't reflect.
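For readers unfamiliar with docking, here is a toy sketch of the kind of pairwise energy estimate Karelina describes. It is a minimal Lennard-Jones-style score, not any production scoring function (real docking programs add electrostatics, desolvation and entropy terms, and all coordinates below are random stand-ins), but it hints at why her side-chain hypothesis is plausible: the repulsive term grows with the twelfth power of inverse distance, so sub-angstrom shifts in modelled atom positions can move the score substantially.

```python
import numpy as np

def toy_binding_score(pocket_xyz, ligand_xyz, epsilon=0.2, sigma=3.5):
    """Toy Lennard-Jones-style energy summed over pocket-ligand atom pairs.

    Lower (more negative) means tighter predicted binding. Illustrative only.
    """
    diffs = pocket_xyz[:, None, :] - ligand_xyz[None, :, :]   # pairwise vectors
    r = np.clip(np.linalg.norm(diffs, axis=-1), 1.0, None)    # distances (angstroms)
    return float(np.sum(4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)))

rng = np.random.default_rng(0)
pocket = rng.uniform(0.0, 15.0, size=(40, 3))                 # stand-in pocket atoms
ligand = pocket.mean(axis=0) + rng.normal(scale=2.0, size=(8, 3))

print(toy_binding_score(pocket, ligand))
nudged = pocket.copy()
nudged[0] += 0.5                       # shift one "side-chain" atom by 0.5 angstroms
print(toy_binding_score(nudged, ligand))   # compare the two scores
```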

Laksh Aithani, chief executive and co-founder of London-based Charm Therapeutics, agrees with Karelina's findings that RoseTTAFold and AlphaFold don't perform well when determining small-molecule docking.

Charm is trying a different way of evaluating protein-drug binding. The technique uses an AI tool, called DragonFold, that is built on a RoseTTAFold backbone. It models the 3D shape of the protein and ligand bound together, which Aithani says allows Charm to account for changes in protein shape that occur with ligand binding and to modify the would-be drug to create tighter, more selective binding. The effort isn't far enough along for Aithani to reveal many details, but he says the project has attracted the interest of pharmaceutical firm Bristol Myers Squibb, based in Lawrenceville, New Jersey.

In the end, the challenge for these groups, says Shoichet, isn't to design a model that will identify how well molecules bind, but to create a system that can identify compounds that bind strongly to proteins about which little is known. To make progress, validation in the lab is necessary, he says.


Industry should be able to do the validation, says Bonnie Berger, a mathematician at the Massachusetts Institute of Technology in Cambridge. At the moment, however, if industry is doing it, it isn't sharing that data.

"There's a lack of transparency from companies like Recursion, who make predictions without fully sharing their methods or results. It's a problem for me and for the field," she says.

Recursion responds that it has shared validation data on MatchMaker in two studies: one in Scientific Reports in 2021[2], and one in the Journal of Chemical Information and Modeling earlier this year[3].

"Sharing these exciting technical milestones in real time as they occur is our way to share how we are thinking about drug discovery with the community and the broader general public," says Recursion spokesperson Ryan Kelly.

Berger says that competitions such as the one that put AlphaFold on the map could not only help drive drug discovery forward, but also shed more light on industry's methods. AlphaFold made headlines when it won the biennial Critical Assessment of protein Structure Prediction (CASP) contest in 2020, in which researchers had to test their prediction models against a set of proteins for which structures were experimentally determined, but not yet publicly released. In the same way, an AI tool's results for drug-protein interactions could be compared with lab results for binding.
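In outline, such a blind benchmark could be as simple as scoring submitted predictions against withheld lab measurements once they are released. The sketch below uses hypothetical ligand names, affinity values and metric, purely to illustrate the protocol:

```python
# Hypothetical CASP-style blind evaluation: predictions are submitted
# before the lab measurements are made public, then scored against them.
measured = {"ligand_A": 7.2, "ligand_B": 4.1, "ligand_C": 5.6}   # held-out binding (pKd)
submitted = {"ligand_A": 6.8, "ligand_B": 5.0, "ligand_C": 5.4}  # a model's predictions

errors = [abs(submitted[name] - measured[name]) for name in measured]
print(f"mean absolute error: {sum(errors) / len(errors):.2f} pKd units")
```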

"There's a huge amount of effort going on to harness models such as AlphaFold for drug discovery," Shoichet says. "But things are still just ramping up."


Instacart IPO; DeepMind Breakthrough; Uber Warns EU – Opto RoW

Every day, we handpick the 5 Top Stories stock market investors need to know. In 5 minutes, you'll learn the stocks, CEOs, and money managers moving markets.

Online grocer Instacart's [CART] shares started trading at $42 on Tuesday, having been priced at $30 in an offering that raised $660m and pushed the firm's valuation to more than $11bn. This may be a lot less than its pandemic-era valuation of $39bn, but the IPO has been hailed as a sign of a return to health for the tech start-up space, reported Bloomberg. Nevertheless, Instacart shares fell by nearly 5% in premarket trading on Wednesday, in a possible sign that its debut rally is already waning.

Researchers at DeepMind, Google's [GOOGL] artificial intelligence (AI) research lab, have used an AI tool to predict whether genetic mutations will prove harmful. An article published in the journal Science details how the tool, called AlphaMissense, evaluated all of the 71 million possible missense mutations, wherein a single letter of the genetic code varies. Separately, Mark Zuckerberg announced on Tuesday that the Chan Zuckerberg Initiative is to build one of the largest computing systems dedicated to non-profit life sciences.

The fact that Huawei's latest smartphone has a 5G chip produced by Chinese semiconductor maker SMIC [0981.HK] raises questions about the effectiveness of US sanctions, and may also constitute a headwind for Apple [AAPL] in China, CNBC reported. Meanwhile, Beijing has accused Washington of infiltrating Huawei servers as far back as 2009, while the German interior ministry is moving to ban components made by Huawei and ZTE [0763.HK] from the country's 5G network, according to Bloomberg.

This week, the EU will begin finalising the text of a new law aimed at protecting gig workers, among them those who drive for Uber [UBER]. Anabel Díaz, Head of EMEA Mobility at Uber, has duly warned that being obliged to treat such workers as employees could cause the firm to shut down in hundreds of cities across the bloc, and to drive up its prices for consumers by some 40%.

The Wall Street Journal reported that federal prosecutors are looking into perks that Tesla [TSLA] may have provided CEO Elon Musk going back as far as 2017. This forms part of their investigation into the plan for a house, known as Project 42, which the company was allegedly going to build for Musk, who has denied the allegation. Elsewhere, in a conversation with Israeli Prime Minister Benjamin Netanyahu, Musk mentioned he is weighing charging users to access X, formerly Twitter.


Where Will Alphabet Be in 3 Years? – The Motley Fool

Tech giant Alphabet (GOOG -0.08%) (GOOGL -0.15%) dominates the search market with its Google search engine and is a powerhouse in streaming video with YouTube. In addition, its cloud division has carved out a solid third-place position in the cloud computing infrastructure market, recently becoming profitable. And the company is also investing in several "moonshot" projects, such as Waymo self-driving cars, which it groups into a segment called "other bets."

In short, Alphabet is a great business. Yet while the release of OpenAI's ChatGPT last year brought up competitive concerns, I would actually lean more toward Alphabet being in a stronger position a few years from now rather than a weaker one.

That's because of three innovations we already know are in its pipeline, not to mention others we might not even know about yet.

At the beginning of this year, some thought ChatGPT could put several of Alphabet's businesses in jeopardy. Notably, some wondered if a ChatGPT-enhanced Bing search engine would lure significant traffic away from Google, or if the answers generated by chatbots would be as monetizable as traditional search queries.

While those questions are still out there, Google's share of the search business hasn't been affected much thus far. Moreover, Google's engineers were the ones who actually cracked the code on transformer technology, the architecture that lets AI models learn from vast amounts of unlabeled data without human annotation, and which has led to recent breakthroughs.
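For readers curious what that architecture boils down to, the sketch below shows its central operation, scaled dot-product self-attention, in minimal NumPy. This is a toy illustration, not Google's implementation; real models add learned query/key/value projections, multiple heads and many stacked layers.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors.

    Each output row is a weighted average of all input rows; the weights
    reflect how strongly each token "attends" to every other token.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ x                              # mix the token vectors

tokens = np.random.default_rng(1).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
print(self_attention(tokens).shape)                    # (4, 8)
```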

So Google should be able to compete in AI effectively. Earlier this year, the company combined its DeepMind and Google Brain AI laboratories into one unit. Now, the company is on the cusp of releasing its own large language model, Gemini, to take on OpenAI and others.

Gemini will attempt to take the best technologies from both DeepMind and Google Brain and combine them in a multimodal model. A multimodal model integrates not only text, but also images and other types of data. The version of ChatGPT you are probably familiar with is not multimodal; however, there are reports that OpenAI is working on a multimodal version of ChatGPT.

According to recent interviews with DeepMind CEO Demis Hassabis, Gemini will have innovative new reasoning, problem-solving, and reinforcement-learning capabilities, and will be made available to customers in different sizes and capability levels. Google is in the process of testing the model with a handful of outside developers, so Gemini should go into beta testing soon.

While we can't know how well Gemini will compete, Hassabis said early results were "promising." And Google not only has deep AI chops, but also unique computing infrastructure expertise from its many years of running Google Search. So, one can easily imagine a scenario in which Google leads the AI field with differentiated technology, just as it has long led search.

Three years from now, that could open up new revenue streams, including better search, a monetized chatbot, and/or more services on Google Cloud. Last quarter, Alphabet CEO Sundar Pichai noted that 70% of AI unicorns used Google Cloud. Meanwhile, the cloud unit could become a major profit center soon, as it just flipped to profitability for the first time in Q1 2023.


Most people don't immediately think of Pixel phones and tablets as a core part of Alphabet's business, but the company has been improving its offerings and supporting the Pixel brand with more resources of late.

Results are starting to show. According to research firm Canalys, Google Pixel grew 20% in the first quarter and reached a 2.46% market share of Android phones in North America. That may not sound like much, but it made Pixel the third-most-used Android smartphone brand in North America. Meanwhile, second-place Motorola, which is now produced by Chinese company Lenovo, fell 40% in the first quarter, to a 4.82% market share.

That seems to indicate Google is gaining momentum in the phone market, as Motorola weakens and other Chinese smartphone brands exit the market. Furthermore, Korea's LG decided to stop making smartphones in 2021, leaving a bigger opening for Google to take market share.

And Pixel is doing even better in some overseas markets. For instance, Counterpoint Research reports that Pixel is now actually the leading Android brand in Japan, with a market share of about 9%.

Investors should also expect Pixel devices to get even better in the coming years. This is because the Pixel team is working on its own proprietary mobile system on a chip (SoC). Like many other mobile handset companies, Pixel started off using third-party hardware. However, Google has the financial resources to make Pixel's inner parts more proprietary, and it's now at work on a fully customized SoC.

The most recent Pixel smartphone actually uses some proprietary chip technology, but currently, Google is combining this with IP from Samsung's Exynos chipsets. However, Google is targeting the 2025 Pixel as the smartphone that will have a fully customized SoC. Given Google's success with in-house chipmaking for its tensor processing units (TPUs), which it uses in its cloud servers, the company could begin to make waves in the smartphone market a few years from now.

Finally, there was an interesting announcement made on Alphabet's last earnings call. CFO Ruth Porat, who has been in that role since 2015, announced she would be adding another title -- chief investment officer. More specifically, that new role will have her overseeing the "other bets" division, while also working with overseas policymakers to "unlock economic growth via technology and investment."

It's a bit unclear what that means, but Porat has been credited with helping Alphabet become more financially disciplined in recent years. And the "other bets" division could certainly use more discipline, as it lost $5.3 billion in 2021 and $6.1 billion in 2022.

With Porat now overseeing that division, it may see significant bottom-line improvements a few years from now, either through better revenue growth or perhaps cost savings from culling unprofitable projects.

Overall, Alphabet looks likely to be a steady earnings compounder for years to come. Meanwhile, it still trades at a reasonable valuation of around 20 times next year's earnings estimates -- and the stock is actually cheaper than that when factoring in Alphabet's $118 billion in cash and the depressive effect of its "other bets" losses.

Investors should look forward to more profitable growth from Alphabet over the next three years.


PLANNING AHEAD: What can happen when the law meets artificial intelligence – The Mercury

JANET COLLITON

It seems sometimes that everywhere you go and in every news outlet you consult, a major subject of interest is Artificial Intelligence, otherwise known as AI. What AI is and what it means for the future has been the subject of television interviews such as the one appearing on the popular television program 60 Minutes between interviewer Scott Pelley and Google CEO Sundar Pichai (July 9, 2023).

AI has also inspired legal writings such as the articles appearing in the July/August 2023 edition of The Pennsylvania Lawyer, a publication of the Pennsylvania Bar Association. For better or for worse, AI has been described as impacting everything from the way we work to the way we write, think and organize data.

Artificial Intelligence has been described as the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

Another simple description is the science of making machines that can think like humans. It can do things that are considered smart. AI technology can process large amounts of data in ways unlike humans. The goal for AI is to be able to do things such as recognize patterns, make decisions, and judge like humans. As early as 2005 and 2006, AI-based chess programs were able to win decisive victories against human international chess champions. In 2023, CodeX, the Stanford Center for Legal Informatics, and the legal technology company Casetext announced what they called a watershed moment: research collaborators had deployed GPT-4, the latest-generation large language model, to take and pass the Uniform Bar Exam. GPT-4 didn't just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior large language models' scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.

Legal use of Artificial Intelligence obviously goes well beyond competition between machines and students in passing the bar exam. How it can be used is a subject of ongoing debate. It is pointed out that AI itself does not think or feel the way we do. It takes massive amounts of data, organizes it and arrives at conclusions. It can even make up answers and deceive, which is a subject of great concern.

In the previously cited Pennsylvania Bar Association magazine, The Pennsylvania Lawyer, two articles, the cover article "A Cautionary Tale of AI as a Research Tool" and "The Not-So-Quiet Revolution: AI and the Practice of Law," explore AI, GPT and the actual and potential effects of this revolution.

In "In re Estate of Bupp: A Cautionary Tale," the author describes his adventures as an associate attorney who was tasked by a partner with researching a statute-of-limitations issue regarding an accounting. The associate decided to use GPT (an AI system) to find the answer and shortly came across the case of Elwood P. Bupp, who had filed a petition to be appointed guardian for Florence P. Zook, an elderly woman. The case described a hearing where Bupp was removed as guardian and cited later appellate decisions. The only problem was that Bupp never existed. Neither did Zook, the hearing dates or the decisions described. The story was completely made up by the AI. This was the cautionary tale.

The second article, "The Not-So-Quiet Revolution: AI and the Practice of Law," gave a more nuanced view of AI and its possible practical uses. The technology could sort massive amounts of data (the kind frequently produced in discovery) and locate and organize information at a speed unknown to humans. The author also suggested it might help some individuals without access to legal services handle some matters on their own. Always, however, I would note, the Bupp concern regarding accuracy would remain.

This was not the end of my learning about AI and the law. Last week I attended the National Elder Law Forum in Chicago where the lead speaker took us through some further positives and negatives. There is still much to be learned.

Janet Colliton, Esq., is a Certified Elder Law Attorney (CELA) certified by the National Elder Law Foundation and limits her practice to elder law, retirement, life care, special needs, and estate planning and administration, with offices at 790 East Market St., Ste. 250, West Chester, 610-436-6674, colliton@collitonlaw.com. She is a member of the National Academy of Elder Law Attorneys and the Pennsylvania Association of Elder Law Attorneys and, with Jeffrey Jones, CSA, co-founder of Life Transition Services LLC, a service for families with long-term care needs.


Artificial intelligence at universities: a pressing issue – University Affairs

The overwhelming rise of text generators raises the need for reflection and guidelines to ensure their ethical use in an academic setting.

Just a year ago, the debate around artificial intelligence (AI) was largely theoretical. According to Caroline Quesnel, president of the Fédération nationale des enseignantes et enseignants du Québec (FNEEQ-CSN) and literature instructor at Collège Jean-de-Brébeuf, the start of the 2023 winter semester marked a turning point as ChatGPT became a focal point in classrooms. Other forms of generative AI are also available to the public, such as QuillBot (text writing and correction), DeepL (translation) and UPDF (PDF summarization).

Martine Peters, a professor of educational science at the Université du Québec en Outaouais, surveyed 900 students and found that 22 per cent were already using ChatGPT (sometimes, often or always) to do their assignments. "And that was in February!" she noted. It is an alarming statistic, particularly as neither faculty nor universities were prepared to deal with the new technology. Trying to ban AI now would be futile, so what can universities do to ensure its ethical use?

Dr. Peters is convinced that AI can be used for educational purposes. It can help a person understand an article by summarizing, translating or serving as a starting point for further research. In her opinion, outside of language courses (which specifically assess language skills), it could also be used to correct a text or improve syntax, much like grammar software or writing services that some students have relied upon for years.

However, plagiarism remains a major concern for academics. And for the moment, there is no effective tool for detecting the use of AI. In fact, OpenAI, the company behind ChatGPT, abandoned its detection software this summer for lack of reliable results. "This is a rat race we're bound to lose," argued Dr. Quesnel. Should professors return to pen-and-paper tests and classroom discussions? Satisfactory solutions have yet to be found, but as Dr. Quesnel added, it's clear that AI creates tension, especially considering the massive pressures in academia. "Right now, we're spending a lot of energy looking at the benefits of AI instead of its pitfalls."

Beyond plagiarism, AI tools raise all kinds of issues (bias, no guarantee of accuracy, etc.) that the academic community needs to better understand. ChatGPT confidently spouts nonsense and makes up references; it's not very good at solving problems in philosophy or advanced physics. "You can't use it with your eyes closed," warned Bruno Poellhuber, a professor in the department of psychopedagogy and andragogy at the Université de Montréal.

More training is needed to help professors and students understand both the potential and drawbacks of these technologies. "You have to know and understand the beast," Dr. Poellhuber added.

Dr. Peters agreed. "For years, we didn't teach how to do proper web searches. If we want our students to use AI ethically, we have to show them how, and right now nobody seems to be taking that on," she said.

Universities are responsible for training their instructors, who can then pass this knowledge on to students. "Students need to know when it's appropriate to use AI," explained Mélanie Rembert, ethics advisor at the Commission de l'éthique en science et en technologie (CEST).

The Université de Montréal and the Pôle montréalais d'enseignement supérieur en intelligence artificielle (PIA) organized a day of reflection and information for the academic community (professors, university management, etc.) in May. "The aim was to demystify the possibilities of generative AI and discuss its risks and challenges," Dr. Poellhuber explained.

This event followed an initial activity organized by the Quebec ministère de l'Enseignement supérieur and IVADO, which gave rise to a joint Conseil supérieur de l'éducation (CSE) and CEST committee. The committee is currently conducting discussions, consultations and analysis among a wide range of experts on the use of generative AI in higher education. "Our two organizations saw the need for more documentation, reflection and analysis around this issue," said Ms. Rembert, who coordinates the expert committee's work. Briefs were solicited from higher education institutions and from student and faculty organizations. The report, scheduled to be released in late fall, will be available online.

Given the scale of the disruption, faculty members could also benefit from the experience of others and the support of a community of practice. That's the idea behind LiteratIA, a sharing group co-founded by Sandrine Prom Tep, associate professor in the management sciences school at the Université du Québec à Montréal. "It's all very well to talk about theory and risks, but teachers want tangible tools. They want to know what to do," she explained. Instead of letting themselves be outpaced by students, who are going to use AI anyway, teachers should adopt a strategy of transparency and sharing. "If we don't get on board, students will be calling the shots."

Universities and government alike will have to take a close look at the situation and set concrete, practical and enforceable guidelines. "We can't dawdle: AI is already in classrooms," said Dr. Quesnel, adding that faculty are currently shouldering a burden that should be shared by teaching institutions and the ministère de l'Enseignement supérieur. "We need tools that teachers can rely on."

So far, very few universities have issued guidelines, and those that exist are often vague and difficult to apply. "There isn't much in terms of procedures, rules or policies, or tools and resources for applying them. Basically, teachers have to decide whether or not to allow AI, and make their own rules," Dr. Prom Tep added. Institutions will need to define clear policies for permissible and impermissible use, including but not limited to plagiarism (for example, how to use it for correcting assignments, how to cite ChatGPT, etc.).

Rolling out policies and legislation can take time. "It's like when the web became prominent: legislation had to play catch-up," noted Dr. Prom Tep. The Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA), funded by the Fonds de recherche du Québec, is expected to make recommendations to the government, as is the joint expert committee. But is that enough? "Do we need broader consultations?" questioned Dr. Prom Tep, who would like to see permanent working groups set up. In her opinion, to avoid each institution reinventing the wheel, these reflections will have to be collective and shared, and neutral places for discussion will have to be created.


PERSPECTIVE: Does Artificial Intelligence Have the Wisdom to … – HSToday

I used to be young and full of hope; now I'm older and full of other things! However, I have been involved in the invention, innovation, and commercialization of emerging technologies for well over 40 years. With this experience, I have learned to be practical and realistic about new, emerging technologies. I clearly remember the very early days of lasers, the advent of nanotechnology, smart robotics, advanced vibration control systems and currently the promise of laser-induced nuclear fusion, to name just a few, and have come to realize that we must be realistic about what these technologies can offer and what they probably cannot offer, at least in the shorter term.

While I'm proud to have been, and still be, involved in these high-tech capabilities, I believe it's important to shed light on the fact that most of these technologies had more hype than reality in reference to their short-term promise. As one of the co-authors of the Strategy for Advanced Manufacturing in the United States for President Obama and a major private-sector proponent of the National Nanotechnology Initiative (NNI) for President George W. Bush, I have learned a lot about what it really takes for a technology to live up to its hype, and in most cases it doesn't live up to its promises, certainly not in the short term. Case in point: I'd like to respectfully recommend taking heed when making claims that Artificial Intelligence (AI) will replace millions and millions of human beings in the short term in virtually every sector of human endeavor.

It's true that AI has made remarkable strides in recent years and does have the potential to revolutionize various aspects of our lives. From self-driving cars to recommendation algorithms, AI systems have demonstrated their ability to process vast amounts of data and make decisions. However, it is essential to recognize that while AI has tremendous benefits, it lacks fundamental qualities that only humans possess.

In this article, we will explore the key differences between AI and the Human Touch. The ultimate human characteristic that we all aspire to possess is wisdom. As we continue to integrate these technologies into our society, it is important that we understand that AI lacks the fundamental quality that humans have: namely, wisdom. Recently, I had the honor of being interviewed by a brilliant physician researcher named Dr. Laura Gabayan. She created The Wisdom Project in 2022 and interviewed 60 wise individuals throughout North America for 20-30 minutes each. The interviews allowed her and her team to scientifically arrive at 8 themes or characteristics that comprise wisdom. Her book discussing these elements and key findings from her encounters will be published in February 2024. Her LinkedIn profile is http://www.linkedin.com/in/lauragabayan. I can highly recommend her approach to this important subject.

What Is Wisdom?

Can it even be scientifically defined? For millennia, wisdom has been defined in a variety of ways, yet they all seem to rely on the idea of "I know it when I see it." Can something so important be left to intuition? Can wisdom actually be quantified? Dr. Gabayan found that wisdom is the combination of the following eight elements:

AI's Capabilities

AI, on the other hand, is a set of technologies and algorithms designed to mimic human intelligence and perform tasks such as data analysis, problem-solving, and decision-making. AI systems can process vast amounts of data quickly and accurately identify patterns, and even learn from data to improve their performance over time. Some of the capabilities of AI include:

Key Differences Between AI and Wisdom

The Role of Humans in AI

Recognizing the distinctions between AI and wisdom underscores the importance of human oversight in AI development and deployment. While AI can perform many tasks efficiently, it must operate within a framework set by humans. Humans should:

The Ethical Responsibility

The ethical responsibility of integrating AI into society lies in acknowledging the limitations of AI and recognizing the need for wisdom in guiding its development and application. Failing to do so may lead to unintended consequences, including the reinforcement of harmful biases, erosion of privacy and the devaluation of human qualities such as empathy and compassion.

Conclusion

AI is a powerful tool that can augment human capabilities and solve complex problems. However, it is essential to distinguish AI from wisdom. Wisdom is a uniquely human quality that encompasses experience and judgment. AI lacks consciousness, moral values, emotional intelligence and the capacity to handle ambiguity. Therefore, it cannot replace the role of a human in decision-making, particularly in situations requiring ethical judgment and compassion.

As we continue to integrate AI into our lives, we must maintain a vigilant commitment to ethical oversight, ensuring that AI systems operate within the bounds of human consciousness. Recognizing the limitations of AI and valuing human qualities that contain elements of wisdom are essential for shaping a future where technology serves humanitys best interests, while upholding our shared values and principles.

Acknowledgement: I would like to take this opportunity to thank Dr. Laura Gabayan for sharing the results of her thorough research into the subject of wisdom.

The views expressed here are the writer's and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email editor @ hstoday.us.


Diana Bracco: "Artificial Intelligence will assist radiologists in making increasingly accurate and reliable diagnoses" – Yahoo Finance

MILAN, Sept. 21, 2023 /PRNewswire/ -- "Unlocking the AI Revolution - A Symposium on the future of the Healthcare Industry and Diagnostic Imaging in the era of Artificial Intelligence" is the title describing the theme of the 2023 edition of Bracco Innovation Day. This event took place at the Human Technopole Auditorium in Milan.

Diana Bracco, President and CEO of Bracco Group

Fulvio Renoldi Bracco, Vice President and CEO of Bracco Imaging, opened the proceedings with a talk in which he observed: "Artificial Intelligence is significantly impacting our lives and its adoption in diagnostic imaging will greatly benefit both patients and healthcare providers. Therefore, we have long since built a dedicated AI team that collaborates with prestigious universities, hospitals, and private companies and that aims to develop algorithms and smart solutions capable of improving the diagnostic performance of contrast media, resulting in increasingly accurate and predictive imaging."

The symposium included three sessions with important international keynote speakers, and concluded with final remarks by Anna Maria Bernini, Minister of University and Research. The first session, which looked at the new capacities of Artificial Intelligence in drug discovery, omics sciences, and pharmaceutical manufacturing, highlighted how AI is destined to play a significant role in many aspects of medicine and the healthcare industry. Specifically, AI will: accelerate the development of new engineered drugs for specific targets, facilitate the study and management of large amounts of omics data for the prevention and treatment of human diseases, and streamline the production sector to maximize yields and minimize environmental impact.

The second session was dedicated to the impact of AI in radiology, where significant topics regarding the adoption of AI in diagnostic imaging were addressed.

The final session addressed the numerous ethical, political and regulatory aspects that national and international institutions are currently addressing in the face of the AI revolution.


This final session was opened by Diana Bracco, President and CEO of Bracco Group, who spoke of the growing importance of diagnostic imaging for patients' health, a sector in which the company is a global leader. "Imaging is consolidating its status as a pillar of contemporary medicine and as an essential tool for the identification of pathologies and the development of innovative medical treatments. Indeed, it is universally understood," she said, "that an early diagnosis not only enables personalized and targeted medicine but also helps address diseases in their initial stages. Precision imaging - thanks also to its non-invasive nature and minimal risk for the patient - will increasingly take center stage in the medicine of the future, where diagnosis and therapy appear to be more closely integrated." Diana Bracco then turned her attention to the potential of the AI revolution for diagnostic imaging. "Artificial Intelligence will assist our radiologists in their work, supporting them in producing increasingly precise and reliable diagnostic reports."

In addition to the many visitors, Bracco Innovation Day was attended by invited researchers from Bracco facilities in Italy, Switzerland, Germany, the United Kingdom, the United States and China. During the session dedicated to 'AI in radiology,' the results of an important study published in the prestigious journal Investigative Radiology (https://journals.lww.com/investigativeradiology/fulltext/9900/amplifying_the_effects_of_contrast_agents_on.129.aspx) were presented. This study was authored by, among others, Alberto Fringuello Mingo, Sonia Colombo Serra, and Giovanni Valbusa, three young researchers from Bracco Imaging. Through the use of Artificial Intelligence, the team successfully "trained" a neural network using an innovative approach to enhance the contrast in Magnetic Resonance Imaging of the brain, all without any impact on the current clinical protocol.

Carolina Joyce Elefante, Ufficio Stampa, Direzione Comunicazione & Immagine, Gruppo Bracco, carolina.elefante@bracco.com, +39 333 426 3484, www.bracco.com


SOURCE Gruppo Bracco


Opinion: How artificial intelligence has changed my students – The … – The San Diego Union-Tribune

Clausen is an author and professor at the University of San Diego. He lives in Escondido.

This academic year I entered my 56th year of college teaching. Yes, that's over half a century of anticipating what I would experience with the newest group of college students when I entered my university classroom. Never before had I been more apprehensive. Let me explain.

I taught my first college-level writing class in 1968. I was a teaching assistant at the University of California, Riverside, and I was only a few years older than the freshmen I would be meeting the very next day. I didn't sleep well that night. As a student, I didn't often participate in classroom discussions. I usually sat in the back of the room and listened. Sometimes I daydreamed. Now, I was expected to lead those discussions.

That thought was more than a little frightening.

I was entering the college teaching profession in the middle of the Vietnam War. That was also a concern. Students at many campuses were known to take over classrooms and use them to deliver antiwar lectures. The very real possibility existed that I would have to yield my classroom to an ardent antiwar extremist. I shared some of their concerns about the Vietnam War, since I had a close friend who lost his brother in the conflict. Still, I was hesitant to give up my class to them.

My first class went OK. I managed to get through the discussion without revealing too much about my amateur status as a college teacher. Subsequent years had other challenges that often made it difficult to teach. Financial struggles, personal losses and a pandemic all took a toll. Still, I overcame those challenges and learned to love the profession I had entered almost by accident.

This year was different. I didn't face antiwar activists or others with deeply felt ideological concerns. I had students who are so dependent on technology that I fear they are turning over the power of thinking to distant, even potentially authoritarian influences.

Cellphone usage has been a problem in our nations classrooms for years. However, the new artificial intelligence technologies and their implications for education exceed anything I have ever confronted. My responsibility as a teacher of literature and writing is to motivate students to confront their own humanity in many different contexts. Then I encourage them to explore their personal reactions to literary classics. Over several days and even weeks, they are challenged to write multiple drafts of an essay. This also requires them to think and rethink the essay prompt until it penetrates to some deeper level of their own consciousness.

Recently, however, I have noticed a growing number of student essays that are more formulaic, written in a tone and style that sounds subtly robotic and seldom penetrates to a deeper level of the student's thinking process.

I did not know it at the time, but that was my first introduction to the presence of AI-generated writing in my classroom. The students' presence in those essays gradually faded and was replaced by an intelligent-sounding, albeit artificially contrived, human voice. That voice seemed to bypass the many stages of deeper thinking that reflect more sophisticated cognitive growth.

I realize my options in confronting these new slightly robotic voices are limited. I can pretend it is the student's own writing, and we can both engage in an elaborate charade of feedback that is meaningless to both of us. I can announce rigid penalties for AI-generated essays, and then read students' written work primarily to determine whether or not it violated those restrictions. I can move all writing in-class and deny students the essential educational experience of learning to think and rewrite their own prose over a sustained period of time. Or I can simply ignore my own lying eyes and accept the many articles and essays that are encouraging educators to work with AI in the classroom.

Yes, I have probably outlasted my time in a university classroom. I admit that. But I can't get over my concerns that a nation that condones plagiarism of intellectual properties and outright cheating is setting a very bad example for future generations. Even more important, it is denying young people the opportunities they truly need to develop their full potential as human beings.

This year, when I entered my classroom, I was concerned I would be looking at many students who will never reach their full potential because they have given up too much of their unique identities to today's electronically driven educational system.

That worried me even more than the sleepless night over half a century earlier when I was about to teach my first class.


Anthropic Lays Out Strategy To Curb Evil AI – The Messenger

Taking cues from how the U.S. government handles bioweapons, artificial intelligence startup Anthropic has laid out a risk assessment and mitigation strategy designed to identify and curb AI before it causes catastrophe.

The Responsible Scaling Policy offers a four-tier system to help judge an AI's risk level: on the low end are models that can't cause any harm whatsoever, and on the high end are models that don't even exist but which hypothetically could achieve a malignant superintelligence capable of acting autonomously and with intent.

"As AI models become more capable, we believe that they will create major economic and social value, but will also present increasingly severe risks," the company said in a blog post on Tuesday. The policy, they clarified, is focused on "catastrophic risks those where an AI model directly causes large scale devastation."

The lowest of the four tiers covers AI like that which powers a gaming app, for example, a computer program that plays chess. The second tier contains models that can be used by a human to cause harm, like ChatGPT being used, for example, to create and spread disinformation. The third tier escalates the risk posed by second-tier models: these models might offer users information not found on the internet as we know it, and they could become autonomous to some degree.

Models in the highest tier, though, are hypothetical. But Anthropic speculated they could eventually produce "qualitative escalations in catastrophic misuse potential and autonomy."

Anthropic's policy gives developers a blueprint for containing models once they've diagnosed the extent of the risk those models pose.

Models in the first tier are so benign that they require no extra strategy or planning, Anthropic said.

For models that fall in the second tier, Anthropic recommends safety guidelines similar to those adopted as part of a White House-led commitment in July: AI programs should be tested thoroughly before they are released into the wild, and AI companies need to tell governments and the public about the risks inherent in their models. They also need to remain vigilant against cyberattacks and manipulation or misuse.

Containing third-tier AI takes this further: companies are required to store their AI models on secure servers and to maintain strict need-to-know protocols for employees working on different facets of the models. Anthropic also recommends that models be kept in secure locations and that whatever hardware was used to design the programs also be kept secure.

Perhaps because it is still hypothetical, Anthropic has no guidance for the advent of an evil, fourth-tier AI system.
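To make the shape of the policy concrete, here is one hypothetical way the four tiers could be encoded as data. The summaries and safeguard lists paraphrase this article; the names, structure and the assumption that safeguards accumulate as risk rises are illustrative sketches, not Anthropic's official scheme or terminology.

```python
from dataclasses import dataclass, field

@dataclass
class RiskTier:
    level: int
    summary: str
    safeguards: list = field(default_factory=list)

# Paraphrased from the article; not Anthropic's official wording.
TIERS = [
    RiskTier(1, "No harm potential (e.g., a chess-playing program)"),
    RiskTier(2, "Usable by humans to cause harm (e.g., disinformation)",
             ["thorough pre-release testing",
              "disclose risks to governments and the public",
              "guard against cyberattacks and misuse"]),
    RiskTier(3, "Offers dangerous non-public information; some autonomy",
             ["store model weights on secure servers",
              "need-to-know access for employees",
              "physical security for design hardware"]),
    RiskTier(4, "Hypothetical: qualitative escalation in misuse and autonomy"),
    # The article notes no guidance exists yet for tier 4.
]

def required_safeguards(level):
    """Collect every safeguard at or below the given tier, assuming
    obligations accumulate as risk rises (an illustrative assumption)."""
    return [s for tier in TIERS if tier.level <= level for s in tier.safeguards]

print(required_safeguards(3))
```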

"We want to emphasize that these commitments are our current best guess, and an early iteration that we will build on," Anthropic said in the post.

Anthropic was founded in 2021 by former members of ChatGPT creator OpenAI. The company has raised more than $1.6 billion in funding and is perhaps best known for its Claude chatbot.

A spokesperson for Anthropic did not immediately reply to a request for comment.


You and AI: A look at artificial intelligence in education – NBC Chicago

L.L. Bean has just added a third shift at its factory in Brunswick, Maine, in an attempt to keep up with demand for its iconic boot.

Orders have quadrupled in the past few years as the boots have become more popular among a younger, more urban crowd.

The company says it saw the trend coming and tried to prepare, but orders outpaced projections. They expect to sell 450,000 pairs of boots in 2014.

People hoping to have the boots in time for Christmas are likely going to be disappointed. The boots are back-ordered through February and even March.

"I've been told it's a good problem to have but I"m disappointed that customers not gettingwhat they want as quickly as they want," said Senior Manufacturing Manager Royce Haines.

Customers like Mary Clifford tried to order boots online, but they were back-ordered until January.

"I was very surprised this is what they are known for and at Christmas time you can't get them when you need them," said Clifford.

People who do have boots are trying to capitalize on the shortage and are selling them on eBay at much higher prices.

L.L. Bean says it has hired dozens of new boot makers, but it takes up to six months to train someone to make a boot.

The company has also spent a million dollars on new equipment to try and keep pace with demand.

Some customers are having luck at the retail stores. They have a separate inventory, and while sizes are limited, those stores have boots on the shelves.
