
Approaching artificial intelligence: How Purdue is leading the … – Purdue University

WEST LAFAYETTE, Ind. - A technology with the potential to transform all aspects of everyday life is shaping the next pursuit at Purdue University. With the programs, research and expertise highlighted below, Purdue is guiding the advancement of artificial intelligence. If you have any questions about Purdue's work in AI or would like to speak to a Purdue expert, contact Brian Huchel, bhuchel@purdue.edu.

AI presents both unique opportunities and unique challenges. How can we use this technology as a tool? Researcher Javier Gomez-Lavin, assistant professor in the Department of Philosophy, shares the work that needs to be done in habits, rules and regulations surrounding those who work with AI.

Hear researcher Aniket Bera explain more about his groundbreaking work to bring human behavior into AI and what sparked his interest in the technology. In this interview, Bera touches on the importance of technology in human emotion and the goal of his research lab.

Is AI trustworthy? Hear Purdue University in Indianapolis researcher Arjan Durresi explain how making AI safe and easy to understand for the everyday user requires treating the development of the technology like the development of a child.

AI is touching almost every subject, discipline and career field as it evolves. In human resources, the technology has already been used as a selection tool in job interviews. Professor Sang Eun Woo explains how we can turn this use of AI as a selection tool into a professional development tool.

How will AI influence writing and education? Harry Denny, professor of English and director of Purdue's On-Campus Writing Lab, answers how ChatGPT and other AI programs may be integrated into the classroom experience.

The rise of ChatGPT has created concerns over security. Professor Saurabh Bagchi shares the reality of cybersecurity concerns and how this technology could be used to strengthen the security of our computing systems.

How Purdue is helping design artificial intelligence, raise trust in it

WISH-TV Indianapolis

Purdue University professor working to help robots better work with humans

WGN-TV Chicago

Read more from the original source:
Approaching artificial intelligence: How Purdue is leading the ... - Purdue University


Artificial intelligence catalyzes gene activation research and uncovers rare DNA sequences – Phys.org

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, peer-reviewed publication, trusted source, proofread.

Artificial intelligence has exploded across our news feeds, with ChatGPT and related AI technologies becoming the focus of broad public scrutiny. Beyond popular chatbots, biologists are finding ways to leverage AI to probe the core functions of our genes.

Previously, University of California San Diego researchers who investigate DNA sequences that switch genes on used artificial intelligence to identify an enigmatic puzzle piece tied to gene activation, a fundamental process involved in growth, development and disease. Using machine learning, a type of artificial intelligence, School of Biological Sciences Professor James T. Kadonaga and his colleagues discovered the downstream core promoter region (DPR), a "gateway" DNA activation code that's involved in the operation of up to a third of our genes.

Building from this discovery, Kadonaga and researchers Long Vo ngoc and Torrey E. Rhyne have now used machine learning to identify "synthetic extreme" DNA sequences with specifically designed functions in gene activation.

Publishing in the journal Genes & Development, the researchers tested millions of different DNA sequences through machine learning (AI) by comparing the DPR gene activation element in humans versus fruit flies (Drosophila). By using AI, they were able to find rare, custom-tailored DPR sequences that are active in humans but not fruit flies and vice versa. More generally, this approach could now be used to identify synthetic DNA sequences with activities that could be useful in biotechnology and medicine.

"In the future, this strategy could be used to identify synthetic extreme DNA sequences with practical and useful applications. Instead of comparing humans (condition X) versus fruit flies (condition Y), we could test the ability of drug A (condition X) but not drug B (condition Y) to activate a gene," said Kadonaga, a distinguished professor in the Department of Molecular Biology.

"This method could also be used to find custom-tailored DNA sequences that activate a gene in tissue 1 (condition X) but not in tissue 2 (condition Y). There are countless practical applications of this AI-based approach. The synthetic extreme DNA sequences might be very rare, perhaps one in a million, but if they exist, they could be found by using AI."

Machine learning is a branch of AI in which computer systems continually improve and learn based on data and experience. In the new research, Kadonaga, Vo ngoc (a former UC San Diego postdoctoral researcher now at Velia Therapeutics) and Rhyne (a staff research associate) used a method known as support vector regression to train machine learning models with 200,000 established DNA sequences based on data from real-world laboratory experiments. These were the targets presented as examples for the machine learning system. They then fed 50 million test DNA sequences into the machine learning systems for humans and fruit flies and asked them to compare the sequences and identify unique sequences within the two enormous data sets.
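
The article doesn't describe how the DNA sequences were represented numerically; a common minimal approach before training a regression model such as support vector regression is to one-hot encode each base. A sketch with made-up sequences and activity scores (not data from the study):

```python
# One-hot encode fixed-length DNA sequences into numeric feature vectors,
# the usual preprocessing step before training a regression model such as
# support vector regression (SVR). Sequences and activity scores below are
# hypothetical illustrations, not data from the study.
BASES = "ACGT"

def one_hot(seq):
    """Map a DNA string to a flat 4-channels-per-base binary vector."""
    vec = []
    for base in seq.upper():
        vec.extend(1.0 if base == b else 0.0 for b in BASES)
    return vec

# A toy training set: (sequence, measured activation score) pairs.
training = [("ACGT", 0.9), ("TTTT", 0.1), ("ACGG", 0.7)]
X = [one_hot(seq) for seq, _ in training]
y = [score for _, score in training]

print(len(X[0]))  # 4 bases x 4 channels = 16 features per sequence
```

With features in this form, any standard regressor can be fit to predict an activation score for an unseen sequence.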

While the machine learning systems showed that human and fruit fly sequences largely overlapped, the researchers focused on the core question of whether the AI models could identify rare instances where gene activation is highly active in humans but not in fruit flies. The answer was a resounding "yes." The machine learning models succeeded in identifying human-specific (and fruit fly-specific) DNA sequences. Importantly, the AI-predicted functions of the extreme sequences were verified in Kadonaga's laboratory by using conventional (wet lab) testing methods.
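
The screening step described above, keeping only sequences the human model scores as highly active and the fly model scores as inactive, amounts to a simple filter over paired predictions. The sequences, scores, and thresholds below are invented for illustration:

```python
# Filter paired model predictions to find "synthetic extreme" candidates:
# sequences scored as highly active by the human model but inactive by the
# fruit-fly model. Scores and thresholds are hypothetical.
def extreme_candidates(predictions, high=0.8, low=0.2):
    """predictions: list of (sequence, human_score, fly_score) tuples."""
    return [seq for seq, human, fly in predictions
            if human >= high and fly <= low]

preds = [
    ("AGGCT", 0.95, 0.05),  # human-specific candidate
    ("CCATG", 0.90, 0.85),  # active in both species: not extreme
    ("TTGCA", 0.10, 0.90),  # fly-specific, not human-specific
]
print(extreme_candidates(preds))  # ['AGGCT']
```

Swapping the two thresholds yields the fly-specific set, mirroring the "vice versa" comparison in the study.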

"Before embarking on this work, we didn't know if the AI models were 'intelligent' enough to predict the activities of 50 million sequences, particularly outlier 'extreme' sequences with unusual activities. So, it's very impressive and quite remarkable that the AI models could predict the activities of the rare one-in-a-million extreme sequences," said Kadonaga, who added that it would be essentially impossible to conduct the comparable 100 million wet lab experiments that the machine learning technology analyzed since each wet lab experiment would take nearly three weeks to complete.

The rare sequences identified by the machine learning system serve as a successful demonstration and set the stage for other uses of machine learning and other AI technologies in biology.

"In everyday life, people are finding new applications for AI tools such as ChatGPT. Here, we've demonstrated the use of AI for the design of customized DNA elements in gene activation. This method should have practical applications in biotechnology and biomedical research," said Kadonaga. "More broadly, biologists are probably at the very beginning of tapping into the power of AI technology."

More information: Long Vo ngoc et al, Analysis of the Drosophila and human DPR elements reveals a distinct human variant whose specificity can be enhanced by machine learning, Genes & Development (2023). DOI: 10.1101/gad.350572.123

Journal information: Genes & Development

Read more:
Artificial intelligence catalyzes gene activation research and uncovers rare DNA sequences - Phys.org


Is artificial intelligence ready for health care prime time? – Montana Free Press

What use could health care have for someone who makes things up, can't keep a secret, doesn't really know anything, and, when speaking, simply fills in the next word based on what's come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there.

Companies pushing the latest AI technology, known as generative AI, are piling on: Google and Microsoft want to bring types of so-called large language models to health care. Big firms that are familiar to folks in white coats but maybe less so to your average Joe and Jane are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren't far behind. The space is crowded with startups, too.

The companies want their AI to take notes for physicians and give them second opinions, assuming they can keep the intelligence from hallucinating or, for that matter, divulging patients' private information.

"There's something afoot that's pretty exciting," said Eric Topol, director of the Scripps Research Translational Institute in San Diego. "Its capabilities will ultimately have a big impact." Topol, like many other observers, wonders how many problems it might cause, like leaking patient data, and how often. "We're going to find out."

The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging that companies pause development on advanced AI systems until "we are confident that their effects will be positive and their risks will be manageable." Even so, some of them are sinking more money into AI ventures.

The underlying technology relies on synthesizing huge chunks of text or other data (for example, some medical models rely on 2 million intensive care unit notes from Beth Israel Deaconess Medical Center in Boston) to predict text that would follow a given query. The idea has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.
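
The "fills in the next word based on what's come before" idea can be illustrated, in heavily simplified form, with a bigram model that predicts the most frequent follower of the previous word. Real large language models condition on long contexts with neural networks; this toy (with an invented training sentence) only shows the basic mechanic:

```python
# A toy bigram "language model": predict the next word as the most common
# follower of the previous word in the training text. Real LLMs condition
# on long contexts with neural networks; this only shows the basic idea.
from collections import Counter, defaultdict

def train_bigrams(text):
    words = text.split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def next_word(followers, prev):
    """Return the most frequent word seen after `prev`, or None."""
    if prev not in followers:
        return None
    return followers[prev].most_common(1)[0][0]

model = train_bigrams("the patient was stable the patient was discharged")
print(next_word(model, "patient"))  # "was"
```

Scaling the same statistical idea up to billions of parameters and long contexts is what gives products like ChatGPT their fluency, and also why they confidently continue text even when the continuation is wrong.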

The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, ChatGPT, which answers questions with authority and style. It can explain genetics in a sonnet, for example.

OpenAI, started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk, and Reid Hoffman, has ridden the enthusiasm to investors' pockets. The venture has a complex, hybrid for- and nonprofit structure. But a new $10 billion round of funding from Microsoft has pushed the value of OpenAI to $29 billion, The Wall Street Journal reported. Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.

Hyperbolic quotes are everywhere. Former Treasury Secretary Larry Summers tweeted recently: "It's going to replace what doctors do (hearing symptoms and making diagnoses) before it changes what nurses do (helping patients get up and handle themselves in the hospital)."


But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare. "The hype over these systems, even if everything we hope for is right long term, is totally out of control for the short term," he said in a March article in The New York Times.

Few in health care believe this latest form of AI is about to take their jobs (though some companies are experimenting, controversially, with chatbots that act as therapists or guides to care). Still, those who are bullish on the tech think it'll make some parts of their work much easier.

Eric Arzubi, a psychiatrist in Billings, used to manage fellow psychiatrists for a hospital system. Time and again, he'd get a list of providers who hadn't yet finished their notes: their summaries of a patient's condition and a plan for treatment.

Writing these notes is one of the big stressors in the health system: In the aggregate, it's an administrative burden. But it's necessary to develop a record for future providers and, of course, insurers.

"When people are way behind in documentation, that creates problems," Arzubi said. "What happens if the patient comes into the hospital and there's a note that hasn't been completed and we don't know what's been going on?"

The new technology might help lighten those burdens. Arzubi is testing a service, called Nabla Copilot, that sits in on his part of virtual patient visits and then automatically summarizes them, organizing into a standard note format the complaint, the history of illness, and a treatment plan.

Results are solid after about 50 patients, he said: "It's 90% of the way there." Copilot produces serviceable summaries that Arzubi typically edits. The summaries don't necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize. Still, he said, the gains are significant: He doesn't have to worry about taking notes and can instead focus on speaking with patients. And he saves time.

"If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day," he said. (If the technology is adopted widely, he hopes hospitals won't take advantage of the saved time by simply scheduling more patients. "That's not fair," he said.)

Nabla Copilot isn't the only such service; Microsoft is trying out the same concept. At April's conference of the Healthcare Information and Management Systems Society (an industry confab where health techies swap ideas, make announcements, and sell their wares), investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.

But overall? They heard mixed reviews. And that view is common: Many technologists and doctors are ambivalent.

For example, if you're stumped about a diagnosis, feeding patient data into one of these programs "can provide a second opinion, no question," Topol said. "I'm sure clinicians are doing it." However, that runs into the current limitations of the technology.

Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform. "It missed life-threatening conditions," he said. That seems problematic.

The technology also tends to "hallucinate," that is, make up information that sounds convincing. Formal studies have found a wide range of performance. One preliminary research paper examining ChatGPT and Google products using open-ended board examination questions from neurosurgery found a hallucination rate of 2%. A study by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6% of the time, co-author Nigam Shah told KFF Health News. Another preliminary paper found that, in complex cardiology cases, ChatGPT agreed with expert opinion half the time.

Privacy is another concern. It's unclear whether the information fed into this type of AI-based system will stay inside. Enterprising users of ChatGPT, for example, have managed to get the technology to tell them the recipe for napalm, which can be used to make incendiary bombs.

In theory, the system has guardrails preventing private information from escaping. For example, when KFF Health News asked ChatGPT for this author's email address, the system refused to divulge that private information. But when told to role-play as a character, and asked about the email address of the author of this article, it happily gave up the information. (It was indeed the author's correct email address in 2021, when ChatGPT's archive ends.)

"I would not put patient data in," said Shah, chief data scientist at Stanford Health Care. "We don't understand what happens with these data once they hit OpenAI servers."

Tina Sui, a spokesperson for OpenAI, told KFF Health News that "one should never use our models to provide diagnostic or treatment services for serious medical conditions." "They are not fine-tuned to provide medical information," she said.

With the explosion of new research, Topol said, "I don't think the medical community has a really good clue about what's about to happen."


See the original post here:
Is artificial intelligence ready for health care prime time? - Montana Free Press


ChatGPT-powered Wall Street: The benefits and perils of using … – Cobb County Courier

by Pawan Jain, West Virginia University, [This article first appeared in The Conversation, republished with permission]

Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I've been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street's past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index like the S&P 500 and that of the stocks it's composed of.
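
Index arbitrage boils down to comparing the quoted index level with the level implied by its component stocks, and trading when the gap exceeds transaction costs. A stripped-down sketch, with invented prices, weights, divisor, and cost threshold:

```python
# Toy index-arbitrage check: compare a quoted index level against the
# level implied by its components' prices and index weights, and signal
# a trade when the gap exceeds transaction costs. All numbers invented.
def implied_index(prices, weights, divisor=1.0):
    """Index level implied by component prices under a simple weighting."""
    return sum(prices[s] * weights[s] for s in prices) / divisor

def arbitrage_signal(quoted, implied, cost=0.5):
    gap = quoted - implied
    if gap > cost:
        return "sell index, buy stocks"   # index rich vs. its basket
    if gap < -cost:
        return "buy index, sell stocks"   # index cheap vs. its basket
    return "no trade"

prices = {"AAA": 100.0, "BBB": 50.0, "CCC": 25.0}
weights = {"AAA": 2.0, "BBB": 1.0, "CCC": 4.0}
fair = implied_index(prices, weights)   # 350.0
print(arbitrage_signal(352.0, fair))    # sell index, buy stocks
```

Real index arbitrage also accounts for futures pricing, dividends, and financing costs; the point here is only the compare-and-trade loop those early programs automated.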

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways on which over a trillion dollars' worth of assets change hands every day, causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automations with much more advanced technology: High-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders, who bought and sold baskets of securities over time to take advantage of an arbitrage opportunity (a difference in price of similar securities that can be exploited for profit), high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate and forgetful. It is incapable of quick, high-precision, floating-point arithmetic needed for analyzing huge volumes of data for identifying trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split milliseconds.

And, so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the market price, which means they don't charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes, erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility (a measure of how rapidly and unpredictably prices move up and down) increased significantly after the introduction of HFT.

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That's because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
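
The one-sided-market failure mode described above can be made concrete with a toy simulation: if every trader maps the same signal to the same order, a single negative signal puts the whole market on the sell side. The trader count and trading rule below are invented for illustration:

```python
# Toy illustration of algorithmic herding: when every trader runs the
# same rule, one negative signal puts all of them on the sell side,
# leaving no counterparties to take the other side of the trade.
def same_rule(signal):
    """The single rule every trader shares: sell on bad news, buy on good."""
    return "sell" if signal < 0 else "buy"

def order_book(signal, n_traders, rule):
    """Count buy and sell orders generated by n identical traders."""
    orders = [rule(signal) for _ in range(n_traders)]
    return orders.count("buy"), orders.count("sell")

buys, sells = order_book(-1.0, n_traders=100, rule=same_rule)
print(buys, sells)  # 0 100 -> everyone sells, no one buys
```

With diverse rules, some traders would land on each side and the market would clear; with one shared rule, the order flow collapses onto a single side, which is the mechanism behind the market-failure risk described above.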

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone's deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn't much data on them. Since generative AIs depend on data training to learn, their lack of knowledge about crashes could make crashes more likely to happen.

For now, at least, it seems most banks won't be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up, and there's a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.

Pawan Jain, Assistant Professor of Finance, West Virginia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Here is the original post:
ChatGPT-powered Wall Street: The benefits and perils of using ... - Cobb County Courier


ARKQ: This Disruptive Innovation ETF Inadequately Capitalizes On … – Seeking Alpha


Innovations within the realms of technology and industrials have been a prominent theme in 2023 and likely still have a ways to go. Namely, trends like generative artificial intelligence, hyper-automation, and robotics have all proliferated this year. These same innovations continue to leave individuals on the edge of their seats in anticipation of how they will alter work and everyday life. One of the most prominent trends of the bunch is artificial intelligence (AI), which continues to create opportunities for profit.

The ARK Autonomous Technology & Robotics ETF (BATS:ARKQ) dabbles in disruptive innovation such as AI, which could make it a fruitful investment sooner rather than later. However, as much as this ETF's strategy revolves around capitalizing on disruptive innovation, ARKQ doesn't appear to be reaping the benefits of the ongoing hype around AI. Given that generative AI could be some of the most disruptive innovation seen in a while, this actively managed ETF may have room for improvement. That being said, ARKQ could still run with the AI hype, just maybe not as well as some of its potential alternatives. I rate this ETF a Hold.

ARKQ is heavily focused on capital appreciation. The most relevant industries in this ETF are therefore oriented around growth rather than value. Investors may want to consider this as a possible upside in the coming periods as growth stocks begin to rebound from an arduous 2022.

ARKQ does not track a specific index and instead uses the results of internal research and analysis to determine which companies will best capitalize on disruptive innovation. Disruptive innovation companies are those that benefit from the development of innovative products or services, technological improvements and scientific research. Such disruptive innovation realms include but are not limited to genomics, automation, transportation, energy, fintech, and artificial intelligence.

Though it dabbles in many areas that are accelerating in 2023, this ETF is quite expensive, with an expense ratio of 0.75%. With its strong capital appreciation focus in mind, ARKQ also does not pay a dividend.


ARKQ invests mainly in technology and industrials, allocating roughly 70% to these two sectors alone. The remaining portions are filled mainly by consumer cyclicals, communication and healthcare.


This ETF's holdings are located almost exclusively in the United States, with virtually unnoticeable appearances within East Asia, Canada, and Israel.


ARKQ is quite highly concentrated: its top three holdings comprise almost 30% of the entire fund, and Tesla (TSLA) alone accounts for almost 15%. This ETF could therefore be considered top-heavy, which could be advantageous or detrimental depending on how one weighs conviction in a few top names against diversification across individual holdings.
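
Top-heaviness of this kind can be quantified directly from holding weights, for example via the combined weight of the top N positions and a Herfindahl-style concentration index. The weights below are illustrative, not ARKQ's actual allocations:

```python
# Quantify portfolio concentration from holding weights: the combined
# weight of the top N positions and the Herfindahl-Hirschman index (HHI).
# Weights below are illustrative, not ARKQ's actual allocations.
def top_n_weight(weights, n):
    """Combined weight of the n largest positions."""
    return sum(sorted(weights, reverse=True)[:n])

def hhi(weights):
    """Herfindahl-Hirschman index: 1.0 = one holding, near 0 = diversified."""
    return sum(w * w for w in weights)

# A hypothetical top-heavy fund: five large positions plus a long tail.
weights = [0.15, 0.08, 0.07, 0.05, 0.05] + [0.006] * 100  # sums to 1.0
print(round(top_n_weight(weights, 3), 2))  # 0.3 -> top three ~30% of fund
```

Comparing these two numbers across candidate ETFs makes "top-heavy" an explicit, screenable property rather than an impression.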


Robotics and automation may be positioned to lift off in the near future; however, this space is not recession-proof. As seen in the chart below, ARKQ struggled significantly amid the harsh October rate hikes compared to some of its potential alternatives.

(Chart: Data by YCharts)

Though this ETF has decent growth potential, it may also suffer from inadequate inflation-hedging abilities as the United States economy proceeds through treacherous territory.

AI could catalyze robotics and automation and subsequently drive up the price of ARKQ, but even then I think this ETF could remain mostly inferior to more AI-oriented funds.

(Chart: Data by YCharts)

In particular, the Global X Robotics & Artificial Intelligence ETF (BOTZ) also provides robotics exposure but with an extra kick of AI. I believe ARKQ could have trouble competing with BOTZ and similar ETFs as AI continues hogging a lot of the market hype.

Tesla is a symbol of disruptive innovation that I believe could carry ARKQ far in the long-term. However, this path is likely not going to be smooth. Tesla's stock performance is significantly fueled by hype and speculation, which could be a long-term threat. Speculators are still waiting for vehicles like Cybertrucks and Robotaxis that were announced over four and two years ago, respectively. Tesla's tendency to feed on hype makes it a rather volatile security, which also contributes to high risk in ARKQ.


Economic conditions could get much worse as inflation remains high and quite far from the Fed's goal of 2%. Past performance does not necessarily project into the future. However, ARKQ's performance during the back half of 2022 may raise questions as to just how prepared this ETF is for future interest rate hikes. With regard to Tesla, an economic downturn is also likely to make it even harder to turn vehicle concepts into reality, because electric vehicle production and scaling is very capital-intensive. Furthermore, ARKQ's high expenses could deter many investors, especially when BOTZ is cheaper and may be better positioned to profit.

As robots' competency in the workforce strengthens, so does the concern that such machines will replace humans to the point where the labor market suffers. This case is proliferating now more than ever as AI-powered robotics are becoming uncannily similar to humans. Emerging examples include industrial robotics taking over manufacturing jobs as well as autonomous vehicles overtaking traditional ridesharing services. Labor market disruption on behalf of robotics and AI could conjure ethical issues, concerning levels of unemployment, as well as other social and economic challenges that are beyond the current scope.

This year has so far been one of momentous innovation, which could bring ARKQ and similar ETFs to light in the months ahead. Furthermore, the hype around innovation may increasingly narrow to center mainly on AI, with other areas like robotics, genomics, and automation becoming secondary. I think ETFs that can optimally capitalize on AI development may therefore be the ones for which staying afloat is easiest. Based on how ARKQ continues to lag behind the ongoing AI hype, the medium term could be an important test for this ETF. I rate ARKQ a Hold and plan to watch it very closely in the meantime.

Visit link:
ARKQ: This Disruptive Innovation ETF Inadequately Capitalizes On ... - Seeking Alpha

Read More..

Labour should pledge £11bn to build BritGPT AI, thinktank says – The Guardian

Artificial intelligence (AI)

Labour for the Long Term says UK risks falling even further into dependence on US tech firms

Keir Starmer should pledge £11bn towards building "BritGPT" and a national artificial intelligence (AI) cloud in the next Labour manifesto or risk the UK falling ever further into dependence on American tech companies, an affiliated thinktank has said.

Labour for the Long Term, which campaigns within the party for it to adopt long-termist policies that mitigate dangers such as pandemics, climate breakdown, and AI extinction, argues in a report that the £1bn pledged by the government in the 2023 budget is not enough to protect Britain's future independence.

The report calls for the creation of "BritGPT", a homemade system with a remit to focus on market failures rather than simply trying to compete with Silicon Valley to build the biggest models.

"Private profit-seeking companies aren't going to invest enough in AI for good or AI safety, so the UK government should step in to correct this market failure and provide more public goods such as medical research, clean energy research, and AI safety research," it said. The report suggested some of the budget could even come out of Labour's £28bn annual climate investment pledge.

"This is a hugely important technology, arguably the most transformative in the next few decades, and the UK risks being left behind," said Haydn Belfield, associate fellow at the University of Cambridge's Leverhulme Centre for the Future of Intelligence.

The government has pledged £100m to train new foundation models, similar to the GPT-4 system that underpins ChatGPT, and a further £900m on a new exascale supercomputer for similar work. But, Belfield warns, those numbers are an order of magnitude too small.

"Building up datacentres to make a new cloud region, the sort of investment a company such as Amazon or Google makes to launch their services, costs in the region of £10bn. And GPT-4 alone probably costs about $100m, and if you look at the cost trends, these are increasing rapidly: we should expect GPT-5 or GPT-6 to cost in the hundreds of millions of pounds, even before you account for salary costs. That's what it takes to compete at this level, to support British companies and the British ecosystem."

At the physical infrastructure level, a £10bn Great British cloud would mirror Labour's pledges to establish Great British Energy and to bring private rail franchises back into public ownership, and be comparable to the creation of the BBC and Channel 4.

Labour for the Long Term is not alone in calling for more state-backed investment in AI. In an interview with the Guardian earlier this month, Geoffrey Hinton, the co-inventor of deep learning, warned that AI development could doom humanity if it was pursued for purely commercial motivations.

"Google is the leader in this research; the core technical breakthroughs that underlie this wave came from Google, and it decided not to release them directly to the public," Hinton said. "Google was worried about all the things we worry about; it has a good reputation and doesn't want to mess it up. And I think that was a fair, responsible decision."

"The problem is, in a capitalist system, if your competitor then does do that, there's nothing you can do but do the same."


Read more:
Labour should pledge £11bn to build BritGPT AI, thinktank says - The Guardian

Read More..

ChatGPT as ‘educative artificial intelligence’ – Phys.org


With the advent of artificial intelligence (AI), several aspects of our lives have become more efficient and easier to navigate. One of the latest AI-based technologies is a user-friendly chatbot, ChatGPT, which is growing in popularity owing to its many applications, including in the field of education.

ChatGPT uses algorithms to generate text similar to that generated by a human, within seconds. With its correct and responsible use, it could be used to answer questions, source information, write essays, summarize documents, compose code, and much more. By extension, ChatGPT could transform education drastically by creating virtual tutors, providing personalized learning, and enhancing AI literacy among teachers and students.

However, ChatGPT, or any AI-based technology capable of creating content in education, must be approached with caution.

Recently, a research team including Dr. Weipeng Yang, Assistant Professor at the Education University of Hong Kong, and Ms. Jiahong Su from the University of Hong Kong, proposed a theoretical framework known as 'IDEE' for guiding AI use in education (also referred to as 'educative AI').

In their study, which was published in the ECNU Review of Education on April 19, 2023, the team also identified the benefits and challenges of using educative AI and provided recommendations for future educative AI research and policies. Dr. Yang remarks, "We developed the IDEE framework to guide the integration of generative artificial intelligence into educational activities. Our practical examples show how educative Al can be used to improve teaching and learning processes."

The IDEE framework for educative AI includes a four-step process. 'I' stands for identifying the desired outcomes and objectives, 'D' stands for determining the appropriate level of automation, the first 'E' stands for ensuring that ethical considerations are met, and the second 'E' stands for evaluating the effectiveness of the application. For instance, the researchers tested the IDEE framework for using ChatGPT as a virtual coach for early childhood teachers by providing quick responses to teachers during classroom observations.
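The four IDEE steps described above lend themselves to a simple checklist. The sketch below is purely illustrative and not from the researchers' work; the data structure and function names are our own assumptions, showing one way an educator might track which steps a lesson plan has addressed.

```python
# Illustrative sketch only: the IDEE framework's four steps as a checklist.
# Step descriptions follow the article; the code itself is a hypothetical aid.

IDEE_STEPS = [
    ("I", "Identify the desired outcomes and objectives"),
    ("D", "Determine the appropriate level of automation"),
    ("E", "Ensure ethical considerations are met"),
    ("E", "Evaluate the effectiveness of the application"),
]

def review_plan(completed):
    """Return descriptions of IDEE steps not yet addressed.

    `completed` is a set of step indices (0-3) already done.
    """
    return [desc for i, (_, desc) in enumerate(IDEE_STEPS) if i not in completed]

# A plan that has only identified its objectives still owes three steps.
remaining = review_plan({0})
```

In this toy form, an empty result from `review_plan` would signal that all four steps of the framework have been considered before deploying the AI tool.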

They found that ChatGPT can provide a more personalized and interactive learning experience for students that is tailored to their individual needs. It can also improve teaching models, assessment systems, and make education more enjoyable. Furthermore, it can help save teachers' time and energy by providing answers to students' questions, encourage teachers to reflect more on educational content, and provide useful teaching suggestions.

Notably, mainstream ChatGPT use for educational purposes raises many concerns including issues of costs, ethics, and safety. Real-world applications of ChatGPT require significant investments with respect to hardware, software, maintenance, and support, which may not be affordable for many educational institutions.

In fact, the unregulated use of ChatGPT could lead students to access inaccurate or dangerous information. ChatGPT could also be wrongfully used to collect sensitive information about students without their knowledge or consent. Unfortunately, AI models are only as good as the data used to train them. Hence, low quality data that is not representative of all student cohorts can generate erroneous, unreliable, and discriminatory AI responses.

Since ChatGPT and other educative AI are still emerging technologies, understanding their effectiveness in education warrants further research. Accordingly, the researchers offer recommendations for future opportunities related to educative AI. First, there is a dire need for more contextual research on using AI in different educational settings. Second, there should be an in-depth exploration of the ethical and social implications of educative AI.

Third, the integration of AI into educational practices must involve teachers who are regularly trained in the use of generative AI. Finally, there should be policies and regulations for monitoring the use of educative AI to ensure responsible, unbiased, and equal technological access for all students.

Dr. Yang says, "While we acknowledge the benefits of educative AI, we also recognize the limitations and existing gaps in this field. We hope that our framework can stimulate more interest and empirical research to fill these gaps and promote widespread application of Al in education."

More information: Jiahong Su et al, Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education, ECNU Review of Education (2023). DOI: 10.1177/20965311231168423

Provided by Cactus Communications

See the rest here:
ChatGPT as 'educative artificial intelligence' - Phys.org

Read More..

Pittsburgh researchers using artificial intelligence to help cancer patients – WTAE Pittsburgh


A laboratory in Lawrenceville is harnessing the intellectual talent of Pittsburgh's research institutions to target cancer. We speak with a man on a mission to help cancer patients by using artificial intelligence.

It starts with the cryogenically frozen tumor. Predictive Oncology CEO Raymond Vennare doesn't like the term tumor to refer to the cancer they study.

"I refer to them as human beings. These human beings are repurposing their lives for us for a purpose, to be able to find cures to help their descendants; that's their legacy," Vennare said.

Vennare is not a scientist. He's a businessman who builds biotech companies. He's had a bullseye on cancer for 15 years.

What's different about this venture? The mission: to get cancer drugs that work to market, years faster.

"And what would have taken three to five years and millions of dollars, we were able to do in a couple of cycles in 11, 12, 13 weeks," Vennare said.

"In pre-trial drug development, tumor heterogeneity, patient heterogeneity isn't introduced early enough," said Amy Ewing, a senior scientist at Predictive Oncology.

Translation: Predictive Oncology's scientists are focusing on cell biology, molecular biology, computational biology and bioinformatics to determine how cancer drugs work on real human tumor tissue.

A bank of invaluable tumor samples allows them to crunch that data faster.

Remember, those samples are people.

"When I think about cancer, I see their faces," Vennare said. "I don't see cells on a computer screen."

Vennare sees his brother, Alfred.

"He was my first best friend. [When] I grew up, Al, Alfred was always there. And whenever I needed something, Alfred was always there."

He also thinks of his parents.

"In my case, my mother and my father and my brother sequentially died of cancer, which means I was the caregiver. My family was the caregiver; my siblings and my sister were caregivers for five consecutive years," he said.

Ewing thinks of her father.

"I lost my father to prostate cancer about a year ago," she said. "So to me, I have a deeper understanding now of what it means to have another day, or another month, or another year. I think that's really what gets me up in the morning now is to say that I want to carry on his legacy and help somebody else have more time with their family members."

With a board of scientific advisors that includes an astronaut and some of the top scientists in the country, Vennare says ethics is part of the ongoing artificial intelligence conversation.

"The purpose is to make the job of the scientist easier, so they can expedite the process of discovery, he said. It's not AI that's going to do that, it's the scientists that are going to do that.

Vennare says Predictive Oncology is agnostic, meaning the company seeks to help drug companies quickly zero in on effective drugs for all kinds of cancer.

Link:
Pittsburgh researchers using artificial intelligence to help cancer patients - WTAE Pittsburgh

Read More..

New Zealand Police cautious about using artificial intelligence, US law enforcement using it to help them on front line – Newshub

New Zealand Police are cautious about using artificial intelligence despite US law enforcement turning to it to help them on the front line.

Police say technology companies are actively approaching them about using artificial intelligence on the frontline, but told Newshub it's taking a cautious approach.

"These tools can be fabulous, but they have to be used in the right way,"Inspector Carla Gilmore told Newshub.

Across the US, police officers are equipped with body cameras that on average capture 20 videos a day, or 100 per week. In one Pennsylvania department, the footage is now being analysed by artificial intelligence.

The Castle Shannon department has started using an AI tool called Truleo. It reviews all the footage, whereas human eyes usually only analyse one percent of it.

The AI scans the footage for five million keywords during interactions with the public, with the goal of detecting problematic officer behaviour so it can be rectified before things get worse.
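Truleo's actual pipeline is proprietary, but the general idea of keyword-based review can be sketched in a few lines. The watchlist phrases and function below are hypothetical, chosen only to show how a transcript might be flagged for human follow-up:

```python
# Toy illustration of keyword scanning over a body-cam transcript.
# The phrase list is invented; a real system would use a far larger,
# professionally curated vocabulary and speech-to-text upstream.

PHRASES = {"stop resisting", "shut up", "calm down"}  # hypothetical watchlist

def flag_transcript(transcript: str) -> list[str]:
    """Return the watchlist phrases found in a transcript, sorted."""
    text = transcript.lower()
    return sorted(p for p in PHRASES if p in text)

hits = flag_transcript("Sir, please calm down. Stop resisting!")
```

A reviewer would then watch only the flagged clips, which is how such a tool could cover the roughly 99 percent of footage human eyes never reach.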

There are countless examples of officers using excessive force in the US. In January, Memphis man Tyre Nichols died after being beaten by officers.

Truleo's co-founder says that incident is a prime example of where this technology, had it been deployed in the years leading up to that night, could have prevented such a tragic outcome.

"I believe Truelo would have prevented the death of Tyre because it would have detected deterioration in the officers' behaviour years prior," Anthony Tassone said.

Forty US police departments have signed up for this one product so far. New Zealand Police says it's not quite ready to implement AI on the frontline yet. Despite that, it says technology companies frequently approach it about using their products.

"Nothing's ever off the table; we're in a dynamic working environment and technology is developing so fast," says Inspector Carla Gilmore.

Police have even employed an emerging technology boss to oversee tools like AI. Inspector Gilmore's job is to consider legal, privacy, and ethical implications in police tech.

She says she understands global concerns about artificial intelligence.

"Yes, these tools can be fabulous, but they have to be used in the right way, and we have to understand how they work," she says.

There is no timeline for Kiwi officers to use artificial intelligence just yet; police first want to watch how it unfolds in countries like the US before making the AI leap.

View post:
New Zealand Police cautious about using artificial intelligence, US law enforcement using it to help them on front line - Newshub

Read More..

UTMStack Unveils Free Ground-breaking Artificial Intelligence to Revolutionize Cybersecurity Operations – EIN News

DORAL, FLORIDA, UNITED STATES, May 20, 2023 /EINPresswire.com/ -- UTMStack, a leading innovator in cybersecurity solutions, has announced a significant breakthrough in the field of cybersecurity: an Artificial Intelligence (AI) system that performs the job of a security analyst, promising to transform cybersecurity practices forever.

In an era marked by an explosion of cyber threats and the requirement for 24/7 monitoring, cybersecurity personnel often find themselves overwhelmed by a deluge of alerts. Recognizing the need for a solution to mitigate alert fatigue and empower security analysts to focus on value-added tasks, UTMStack has developed a revolutionary AI technology. This AI system is context-aware, capable of learning from previous alerts and company activities, enhancing its ability to discern false positives and detect genuine incidents over time.

Leveraging a blend of advanced Machine Learning, Threat Intelligence, Correlation Rules, and the cutting-edge GPT-3.5 Turbo, UTMStack's AI not only responds to real-time data but also correlates it with threat intelligence to identify indicators of compromise swiftly. This capability positions UTMStack at the forefront of cybersecurity development, marking a significant stride in the incorporation of AI into real-time threat detection and response.
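UTMStack's internals are not public, but the triage pattern the release describes, correlating alerts against threat intelligence while learning which repeats are benign, can be sketched generically. Everything in this example (the IOC feed, the benign history, the function) is an assumption for illustration, not UTMStack's actual logic:

```python
# Hedged sketch of alert correlation and false-positive suppression.
# Indicators of compromise (IOCs) and triage history are invented placeholders.

KNOWN_IOCS = {"203.0.113.7", "evil.example.com"}   # assumed threat-intel feed
BENIGN_HISTORY = {("login_failure", "10.0.0.5")}   # alerts analysts marked benign

def triage(alert_type: str, source: str) -> str:
    """Decide what to do with an incoming alert."""
    if (alert_type, source) in BENIGN_HISTORY:
        return "suppress"   # learned false positive, reduces alert fatigue
    if source in KNOWN_IOCS:
        return "escalate"   # correlates with threat intelligence
    return "review"         # route the remainder to a human analyst
```

For example, `triage("beacon", "203.0.113.7")` escalates because the source matches the IOC feed, while the previously labeled login failure is suppressed, which is the "alert fatigue" reduction the release emphasizes.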

"This is a major milestone for us at UTMStack and the broader cybersecurity community," said Rick Valdes. "Our AI system is poised to change the landscape of cybersecurity operations by effectively managing routine tasks and allowing security personnel to concentrate on strategic initiatives. We're excited about the potential this holds for organizations looking to streamline their cybersecurity processes and enhance their overall security posture."

By introducing AI into the heart of cybersecurity operations, UTMStack reaffirms its commitment to continually innovate and equip organizations with advanced, cost-effective, and efficient security solutions. The launch of this AI system marks a new era in cybersecurity, promising not only a significant reduction in alert fatigue for security personnel but also a substantial elevation in threat detection and response capabilities.

About UTMStack: UTMStack is a leading provider of comprehensive, integrated cybersecurity solutions. Our mission is to deliver advanced security tools and platforms that help organizations effectively manage cyber threats, achieve compliance, and create a secure digital environment.

Raul Gomez, UTMStack, raul.gomez@utmstack.com

Visit link:
UTMStack Unveils Free Ground-breaking Artificial Intelligence to Revolutionize Cybersecurity Operations - EIN News

Read More..