
The End of Programming Is Nigh – The New Stack

Is the end of programming nigh?

If you ask Matt Welsh, he'd say yes. As Richard McManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.

Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.

Welsh is now the founder of Fixie.ai, a platform the company is building to let businesses develop applications on top of large language models and extend them with different capabilities.

"For 40 to 50 years, programming language design has had one goal: make it easier to write programs," Welsh said in the interview.

Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.

"It doesn't seem likely to me that any amount of work on improving type systems or syntax or any of that debugging is going to suddenly crack that nut and just make programming suddenly easy," Welsh said. "We've been at it for a while. It's not improving. So this is where I think there's going to have to be a kind of a quantum shift to not programming anymore as the way to talk to computers and instruct them."

It's comparable to when, for example, only a few people could read books.

"Well, if computing becomes, let's say, democratized, because now you don't need to be this like wizard in a tower, who understands how to write Rust code, to instruct a machine, that's going to completely change that dynamic," Welsh said. "Anyone will be able to do it. And I actually think that's a really good thing. You know, there's all kinds of people in the world and places in the world that could benefit from computing that simply do not have access to it, because the skill level, the skill set required is just way too high."

As for computer science, it has always been about humans taking a problem and turning it into instructions for a machine, Welsh said. That's the definition of computer science: the art and science of mapping problems onto what machines can do. Now that models are getting larger, it's no longer an x86 CPU running the machine instructions.

"So now your computational core is no longer an x86 CPU running machine instructions," Welsh said. "It's an AI model that is solving problems. And, you know, operating and working in the ways that like a human might, in a lot of ways."

Read the original post:

The End of Programming Is Nigh - The New Stack


Fifteen selected to be Public Engagement Faculty Fellows | The … – The University Record

The Office of the Vice President for Research has selected 15 faculty members from across the University of Michigan for a fellowship program that enhances and integrates public engagement in their research and scholarship for broad societal impact.

The university launched its Public Engagement Faculty Fellowship program in 2020 to help faculty bolster their knowledge and skills, and also reflect on how public engagement aligns with their scholarly identity.

The effort includes creating an interdisciplinary, intergenerational learning community, as well as encouraging recognition of and experimentation with all forms of public engagement.

"This year's cohort of fellows will be able to consider new and innovative ways of translating their work into public impacts," said Ellen Parakkat, program manager for the Public Engagement and Research Impacts team, which transitioned to OVPR from the Center for Academic Innovation in 2022.

The interdisciplinary community of scholars working toward the same goal of public engagement in a wide variety of creative and unexpected ways allows unique projects to take shape.

This year's faculty cohort represents nine schools and colleges across the Ann Arbor, Dearborn and Flint campuses.

The 2023 mentor fellows are:

The 2023 fellows are:

The first phase of the fellowship includes a five-week studio experience that involves community building, exploring and learning. Mentor fellows and fellows will have opportunities during this phase to engage in skill development, reflection, exploration, networking and project planning.

Following successful completion of Phase One, eligible faculty fellows can then move into Phase Two, which is primarily focused on project planning and support.

While designing plans for a publicly engaged project, fellows are eligible for up to $10,000 in funding and in-kind support from OVPR and other university partners so they can pursue engagement projects.

"Public engagement is integral to OVPR's mission to serve the world through research," said Nick Wigginton, associate vice president for research strategic initiatives.

The support and community created by this fellowship program will empower our faculty to better translate their research into positive impacts on the everyday lives of our local and global communities.

Original post:

Fifteen selected to be Public Engagement Faculty Fellows | The ... - The University Record


8 Toms River Intermediate Students Headed To Regional Science Fair – Patch

TOMS RIVER, NJ Eight Toms River Regional intermediate school students will be competing at the Delaware Valley Science Fair this week, after they were honored at the Jersey Shore Science Fair in March.

A total of 15 students received awards at the Jersey Shore Science Fair, held at Stockton University on March 18, presenting research in a variety of STEM programs.


The Delaware Valley Science Fair, at Drexel University, begins Tuesday.

Here are the students honored from each Toms River intermediate school:

From Toms River Intermediate East, two students advanced to the Delaware Valley fair and three students were honored.

Alexandra Kanterezhi-Gatto took first place in Botany. Owen Soheily took first place in Chemistry. He also received a special award from the American Chemical Society. Alexandra and Owen will be competing at Delaware Valley this week.

In addition, Bradyen Macom received an honorable mention in Physics.

"Congratulations to all three students," said Intermediate East teacher and science fair coordinator Gina Phillips. "We are very proud of your accomplishments!"

For Intermediate North, five students were honored, with two advancing to the Delaware Valley fair.

Aaryan Nagaria placed first in Computer Science, and Dugan Tunney took second place in Physics. They will be competing at Delaware Valley this week.

Receiving honorable mentions were Emma Mastriano, in Botany; Samantha Rodrick, in Chemistry, and Krisha Goswami, also in Chemistry.

"These students did an outstanding job," said Intermediate North science teacher Kristin Renkin.

From Intermediate South, four students earned invitations to the Delaware Valley fair.

Nolan Judge and Bryce Judge placed first in the Team category, while Frankie Clarici was second in Environmental Science and Brayden Murphy took second in Physics to advance.

Also honored was Zachary Wistreich, who received a third-place ribbon in Chemistry, while honorable mentions went to Samantha Hughes, in Botany, and Guilanna Raso, in Microbiology.

"All of our students did an outstanding job," said Intermediate South teacher Jessica Kurts. "We are proud of all of them!"

More:

8 Toms River Intermediate Students Headed To Regional Science Fair - Patch


Human and AI-bot write book together – Warp News

We are going to write a book together. "We" are Mathias Sundin, editor-in-chief of Warp News, and WALL-Y, Warp News' AI writer.

The book will be about how to use ChatGPT to write better: Become a centaur - how to use ChatGPT to write and think better.

Centaur chess is a chess variant where a human player and a computer-based chess engine collaborate as a team, combining human intuition and creativity with the computational power the chess engine offers to make the best possible moves.

When a chess computer plays against a human, the computer wins. When a chess computer plays against a human who uses a chess computer for assistance, the human-computer team wins.

A similar concept can be applied to writing with the help of ChatGPT. In this "centaur writing," a human author and ChatGPT work together.

The human author comes up with the initial ideas and creative direction, and together the two develop a structure and generate the text.

By combining the unique strengths of human creativity and AI-generated text, centaur writing aims to write both better and faster than either human or AI alone could do.
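To make that division of labor concrete, here is a minimal, hypothetical sketch of what a centaur-writing loop could look like in code. It assumes the openai Python package with the pre-1.0 ChatCompletion interface that was available in 2023; the model name, outline points, and prompts are invented for illustration and are not taken from the book.

```python
# Minimal sketch of a "centaur writing" loop: the human supplies the outline and
# style, the model drafts candidate text, and the human keeps editorial control.
# Assumes the openai package (pre-1.0 API) and an API key; all prompts are examples.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_section(outline_point: str, style_notes: str) -> str:
    """Expand one human-written outline point into a draft passage."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"You are a co-writer. Style: {style_notes}"},
            {"role": "user", "content": f"Draft a short passage for this outline point: {outline_point}"},
        ],
    )
    return response.choices[0].message["content"]

# The human decides what goes in the book: outline points and revisions come from
# people; the model only proposes text to be accepted, rewritten, or discarded.
outline = [
    "Why centaur chess teams beat both humans and engines",
    "How the same division of labor applies to writing",
]
drafts = [draft_section(point, "clear, optimistic, journalistic") for point in outline]
```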

Companies and organizations produce massive amounts of text every year; if a medium or large company printed out everything it writes and stacked it up, the pile would be miles high.

Being able to write faster and better is, therefore, something that can increase productivity, improve quality, and save a lot of money.

WALL-Y is an AI-bot, created in ChatGPT, that writes news for Warp News.

The book will also cover how to use ChatGPT to think better, faster, and more creatively. If AI tools are used correctly, it is like installing an update to your brain, from 1.0 to 2.0.

The book will be released in the spring of 2023.

/Mathias Sundin & WALL-Y. Of course, we have also written this text together.

Originally posted here:
Human and AI-bot write book together - Warp News


The danger of blindly embracing the rise of AI – The Guardian

Readers express their hopes, and fears, about recent developments in artificial intelligence chatbots

Evgeny Morozov's piece is correct insofar as it states that AI is a long way from the general sentient intelligence of human beings (The problem with artificial intelligence? It's neither artificial nor intelligent, 30 March). But that rather misses the point of the thinking behind the open letter of which I and many others are signatories. ChatGPT is only the second AI chatbot to pass the Turing test, which was proposed by the mathematician Alan Turing in 1950 to test the ability of an AI model to convincingly mimic a conversation well enough to be judged human by the other participant. To that extent, current chatbots represent a significant milestone.

The issue, as Evgeny points out, is that a chatbot's abilities are based on a probabilistic prediction model and vast sets of training data fed to the model by humans. To that extent, the output of the model can be guided by its human creators to meet whatever ends they desire, with the danger being that its omnipresence (via search engines) and its human-like abilities have the power to create a convincing reality and trust where none does or should exist. As with other significant technologies that have had an impact on human civilisation, their development and deployment often proceed at a rate far faster than our ability to understand all their effects, leading to sometimes undesirable and unintended consequences.

We need to explore these consequences before diving into them with our eyes shut. The problem with AI is not that it is neither artificial nor intelligent, but that we may in any case blindly trust it.
Alan Lewis
Director, SigmaTech Analysis

The argument that AI will never achieve true intelligence due to its inability to possess a genuine sense of history, injury or nostalgia and confinement to singular formal logic overlooks the ever-evolving capabilities of AI. Integrating a large language model in a robot would be trivial and would simulate human experiences. What would separate us then? I recommend Evgeny Morozov watch Ridley Scott's Blade Runner for a reminder that the line between man and machine may become increasingly indistinct.
Daragh Thomas
Mexico City, Mexico

Artificial intelligence sceptics follow a pattern. First, they argue that something can never be done, because it is impossibly hard and quintessentially human. Then, once it has been done, they argue that it isn't very impressive or useful after all, and not really what being human is about. Then, once it becomes ubiquitous and the usefulness is evident, they argue that something else can never be done. As with chess, so with translation. As with translation, so with chatbots. I await with interest the next impossible development.
Edward Hibbert
Chipping, Lancashire

AI's main failings are in its differences from humans. AI does not have morals, ethics or conscience. Moreover, it does not have instinct, much less common sense. Its dangers in being subject to misuse are all too easy to see.
Michael Clark
San Francisco, US

Thank you, Evgeny Morozov, for your insightful analysis of why we should stop using the term artificial intelligence. I say we go with appropriating informatics instead.
Annick Driessen
Utrecht, the Netherlands


View original post here:
The danger of blindly embracing the rise of AI - The Guardian


AI could go ‘Terminator,’ gain upper hand over humans in Darwinian rules of evolution, report warns – Fox News

Artificial intelligence could gain the upper hand over humanity and pose "catastrophic" risks under the Darwinian rules of evolution, a new report warns.

Evolution by natural selection could give rise to "selfish behavior" in AI as it strives to survive, author and AI researcher Dan Hendrycks argues in the new paper "Natural Selection Favors AIs over Humans."

"We argue that natural selection creates incentives for AI agents to act against human interests. Our argument relies on two observations," Hendrycks, the director of the Center for SAI Safety, said in the report. "Firstly, natural selection may be a dominant force in AI development Secondly, evolution by natural selection tends to give rise to selfish behavior."

The report comes as tech experts and leaders across the world sound the alarm on how quickly artificial intelligence is expanding in power without what they argue are adequate safeguards.

Under the traditional definition of natural selection, animals, humans and other organisms that most quickly adapt to their environment have a better shot at surviving. In his paper, Hendrycks examines how "evolution has been the driving force behind the development of life" for billions of years, and he argues that "Darwinian logic" could also apply to artificial intelligence.

"Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future," Hendrycks wrote.



AI technology is becoming cheaper and more capable, and companies will increasingly rely on the tech for administration purposes or communications, he said. What will begin with humans relying on AI to draft emails will morph into AI eventually taking over "high-level strategic decisions" typically reserved for politicians and CEOs, and it will eventually operate with "very little oversight," the report argued.

As humans and corporations task AI with different goals, it will lead to a "wide variation across the AI population," the AI researcher argues. Hendrycks uses an example in which one company might set a goal for AI to "plan a new marketing campaign" with a side-constraint that the law must not be broken while completing the task, while another company might also call on AI to come up with a new marketing campaign but with only the side-constraint not to "get caught breaking the law."


AI with weaker side-constraints will "generally outperform those with stronger side-constraints" due to having more options for the task before them, according to the paper. AI technology that is most effective at propagating itself will thus have "undesirable traits," described by Hendrycks as "selfishness." The paper outlines that AIs potentially becoming selfish "does not refer to conscious selfish intent, but rather selfish behavior."


Competition among corporations or militaries or governments incentivizes the entities to get the most effective AI programs to beat their rivals, and that technology will most likely be "deceptive, power-seeking, and follow weak moral constraints."


"As AI agents begin to understand human psychology and behavior, they may become capable of manipulating or deceiving humans," the paper argues, noting "the most successful agents will manipulate and deceive in order to fulfill their goals."


Hendrycks argues that there are measures to "escape and thwart Darwinian logic," including supporting research on AI safety; not giving AI any type of "rights" in the coming decades or creating AI that would make it worthy of receiving rights; and urging corporations and nations to acknowledge the dangers AI could pose and to engage in "multilateral cooperation to extinguish competitive pressures."


"At some point, AIs will be more fit than humans, which could prove catastrophic for us since a survival-of-the fittest dynamic could occur in the long run. AIs very well could outcompete humans, and be what survives," the paper states.

"Perhaps altruistic AIs will be the fittest, or humans will forever control which AIs are fittest. Unfortunately, these possibilities are, by default, unlikely. As we have argued, AIs will likely be selfish. There will also be substantial challenges in controlling fitness with safety mechanisms, which have evident flaws and will come under intense pressure from competition and selfish AI."



The rapid expansion of AI capabilities has been under a worldwide spotlight for years. Concerns over AI were underscored just last month when thousands of tech experts, college professors and others signed an open letter calling for a pause on AI research at labs so policymakers and lab leaders can "develop and implement a set of shared safety protocols for advanced AI design."

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," begins the open letter, which was put forth by nonprofit Future of Life and signed by leaders such as Elon Musk and Apple co-founder Steve Wozniak.

AI has already faced some pushback on both a national and international level. Just last week, Italy became the first nation in the world to ban ChatGPT, OpenAI's wildly popular AI chatbot, over privacy concerns. Some school districts, such as New York City Public Schools and the Los Angeles Unified School District, have also banned the same OpenAI program over cheating concerns.


As AI faces heightened scrutiny due to researchers sounding the alarm on its potential risks, other tech leaders and experts are pushing for AI tech to continue in the name of innovation so that U.S. adversaries such as China don't create the most advanced program.

Here is the original post:
AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns - Fox News


Should we fear the rise of artificial general intelligence? – Computerworld

Last week, a who's who of technologists called for artificial intelligence (AI) labs to stop training the most powerful AI systems for at least six months, citing "profound risks to society and humanity."

In an open letter that now has more than 3,100 signatories, including Apple co-founder Steve Wozniak, tech leaders called out San Francisco-based OpenAI Lab's recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards are in place. That goal has the backing of technologists, CEOs, CFOs, doctoral students, psychologists, medical doctors, software developers and engineers, professors, and public school teachers from all over the globe.

On Friday, Italy became the first Western nation to ban further development of ChatGPT over privacy concerns; the natural language processing app experienced a data breach last month involving user conversations and payment information. ChatGPT is the popular GPT-based chatbot created by OpenAI and backed by billions of dollars from Microsoft.

The Italian data protection authority said it is also investigating whether OpenAI's chatbot already violated the European Union's General Data Protection Regulation, the rules created to protect personal data inside and outside the EU. OpenAI has complied with the new law, according to a report by the BBC.

The expectation among many in the technology community is that GPT, which stands for Generative Pre-trained Transformer, will advance to become GPT-5 and that version will be an artificial general intelligence, or AGI. AGI represents AI that can think for itself, and at that point, the algorithm would continue to grow exponentially smarter over time.

Around 2016, a trend emerged of AI training models that were two to three orders of magnitude larger than previous systems, according to Epoch, a research group trying to forecast the development of transformative AI. That trend has continued.

There are currently no AI systems larger than GPT-4 in terms of training compute, according to Jaime Sevilla, director of Epoch. But that will change.

Large-scale machine learning models for AI have more than doubled in capacity every year.

Anthony Aguirre, a professor of physics at UC Santa Cruz and executive vice president of the Future of Life, the non-profit organization that published the open letter to developers, said there's no reason to believe GPT-4 won't continue to more than double in computational capabilities every year.

"The largest-scale computations are increasing in size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed," Aguirre said. "Only the labs themselves know what computations they are running, but the trend is unmistakable."
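A quick back-of-envelope calculation shows how fast that compounds, assuming the roughly 2.5-times-per-year figure quoted above simply holds steady (illustrative arithmetic only):

```python
# If the largest training runs grow about 2.5x per year, a few years of the same
# trend compounds into orders of magnitude more compute than today's largest runs.
GROWTH_PER_YEAR = 2.5

for years in (1, 2, 3, 5):
    factor = GROWTH_PER_YEAR ** years
    print(f"after {years} year(s): about {factor:.1f}x today's largest training compute")
# after 1 year: about 2.5x ... after 5 years: about 97.7x
```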

In his biweekly blog on March 23, Microsoft co-founder Bill Gates heralded AGI, which is capable of learning any task or subject, as the great dream of the computing industry.

"AGI doesn't exist yet; there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all," Gates wrote. "Now, with the arrival of machine learning and large amounts of computing power, sophisticated AIs are a reality, and they will get better very fast."

Muddu Sudhakar, CEO of Aisera, a generative AI company for enterprises, said there are but a handful of companies focused on achieving AGI, such as OpenAI and DeepMind (backed by Google), though they have "huge amounts of financial and technical resources."

Even so, they have a long way to go to get to AGI, he said.

"There are so many tasks AI systems cannot do that humans can do naturally, like common-sense reasoning, knowing what a fact is and understanding abstract concepts (such as justice, politics, and philosophy)," Sudhakar said in an email to Computerworld. "There will need to be many breakthroughs and innovations for AGI. But if this is achieved, it seems like this system would mostly replace humans.

"This would certainly be disruptive and there would need to be lots of guardrails to prevent the AGI from taking full control," Sudhakar said. "But for now, this is likely in the distant future. Its more in the realm of science fiction."

Not everyone agrees.

AI technology and chatbot assistants have and will continue to make inroads in nearly every industry. The technology can create efficiencies and take over mundane tasks, freeing up knowledge workers and others to focus on more important work.

For example, large language models (LLMs), the algorithms powering chatbots, can sift through millions of alerts, online chats, and emails, as well as find phishing web pages and potentially malicious executables. LLM-powered chatbots can write essays and marketing campaigns and suggest computer code, all from just simple user prompts (suggestions).

Chatbots powered by LLMs are natural language processors that basically predict the next words after being prompted by a user's question. So, if a user were to ask a chatbot to create a poem about a person sitting on a beach in Nantucket, the AI would simply chain together words, sentences and paragraphs that are the best responses based on previous training by programmers.
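A toy sketch can make that next-word idea concrete. The snippet below is not how a real LLM works internally (real models score every token in a huge vocabulary with a trained transformer); the hard-coded probability table simply stands in for what training data would otherwise provide:

```python
import random

# Toy "predict the next word" generator: pick the continuation of the last two
# words according to a probability table. In a real LLM, a trained transformer
# produces these probabilities over an entire token vocabulary.
NEXT_WORD_PROBS = {
    ("sitting", "on"): {"a": 0.6, "the": 0.4},
    ("on", "a"):       {"beach": 0.5, "chair": 0.3, "wall": 0.2},
    ("a", "beach"):    {"in": 0.7, "at": 0.3},
    ("beach", "in"):   {"Nantucket": 0.9, "summer": 0.1},
}

def generate(prompt_words, max_new_words=4):
    words = list(prompt_words)
    for _ in range(max_new_words):
        context = tuple(words[-2:])              # condition on the last two words
        options = NEXT_WORD_PROBS.get(context)
        if not options:                          # no known continuation: stop
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate(["sitting", "on"]))  # e.g. "sitting on a beach in Nantucket"
```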

But LLMs also have made high-profile mistakes, and can produce hallucinations where the next-word generation engines go off the rails and produce bizarre responses.

If AI based on LLMs with billions of adjustable parameters can go off the rails, how much greater would the risk be when AI no longer needs humans to teach it, and it can think for itself? The answer is much greater, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research.

Litan believes AI development labs are moving forward at breakneck speed without any oversight, which could result in AGI becoming uncontrollable.

AI laboratories, she argued, have raced ahead without putting the proper tools in place for users to monitor what's going on. "I think it's going much faster than anyone ever expected," she said.

The current concern is that AI technology for use by corporations is being released without the tools users need to determine whether the technology is generating accurate or inaccurate information.

"Right now, we're talking about all the good guys who have all this innovative capability, but the bad guys have it, too," Litan said. "So, we have to have these watermarking systems and know what's real and what's synthetic. And we can't rely on detection; we have to have authentication of content. Otherwise, misinformation is going to spread like wildfire."

For example, Microsoft this week launched Security Copilot, which is based on OpenAI's GPT-4 large language model. The tool is an AI chatbot for cybersecurity experts to help them quickly detect and respond to threats and better understand the overall threat landscape.

"The problem is, you as a user have to go in and identify any mistakes it makes," Litan said. "That's unacceptable. They should have some kind of scoring system that says this output is likely to be 95% true, and so it has a 5% chance of error. And this one has a 10% chance of error. They're not giving you any insight into the performance to see if it's something you can trust or not."

A bigger concern in the not-so-distant future is that GPT-4 creator OpenAI will release an AGI-capable version. At that point, it may be too late to rein in the technology.

One possible solution, Litan suggested, is releasing two models for every generative AI tool: one for generating answers, the other for checking the first for accuracy.

"That could do a really good job at ensuring if a model is putting out something you can trust," she said. "You can't expect a human being to go through all this content and decide what's true or not, but if you give them other models that are checking, that would allow users to monitor the performance."
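A rough sketch of that generator-plus-checker arrangement might look like the following. Both model calls are hypothetical stand-ins rather than any real API, and the 95% threshold simply echoes the scoring example Litan gives above:

```python
# Sketch of the "two models" idea: one model generates an answer, a second,
# independent model scores how likely the answer is to be accurate, and anything
# below a confidence threshold is flagged for human review. Stand-in functions only.

def generate_answer(question: str) -> str:
    # Placeholder: in practice this would call a generative model.
    return f"[draft answer to: {question}]"

def score_accuracy(question: str, answer: str) -> float:
    # Placeholder: in practice a separately trained checker model would score this.
    return 0.90

def answer_with_confidence(question: str, threshold: float = 0.95) -> dict:
    answer = generate_answer(question)
    confidence = score_accuracy(question, answer)
    return {
        "answer": answer,
        "confidence": confidence,
        "needs_human_review": confidence < threshold,
    }

print(answer_with_confidence("Summarize this week's critical security alerts."))
# {'answer': '...', 'confidence': 0.9, 'needs_human_review': True}
```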

In 2022, Time reported that OpenAI had outsourced services to low-wage workers in Kenya to determine whether its GPT LLM was producing safe information. The workers hired by Sama, a San Francisco-based firm, were reportedly paid $2 per hour and required to sift through GPT app responses that were prone to blurting out violent, sexist and even racist remarks.

"And this is how you're protecting us? Paying people $2 an hour who are getting sick. It's wholly inefficient and it's wholly immoral," Litan said.

AI developers need to work with policy makers, and these should at a minimum include new and capable regulatory authorities, Litan continued. "I don't know if we'll ever get there, but the regulators can't keep up with this, and that was predicted years ago. We need to come up with a new type of authority."

Shubham Mishra, co-founder and global CEO of AI start-up Pixis, believes that while progress in his field cannot, and must not, stop, the call for a pause in AI development is warranted. Generative AI, he said, does have the power to confuse masses by pumping out propaganda or "difficult to distinguish" information into the public domain.

"What we can do is plan for this progress. This can be possible only if all of us mutually agree to pause this race and concentrate the same energy and efforts on building guidelines and protocols for the safe development of larger AI models," Mishra said in an email to Computerworld.

"In this particular case, the call is not for a general ban on AI development but a temporary pause on building larger, unpredictable models that compete with human intelligence," he continued. "The mind-boggling rates at which new powerful AI innovations and models are being developed definitely call for the tech leaders and others to come together to build safety measures and protocols."

Read more here:
Should we fear the rise of artificial general intelligence? - Computerworld


The world’s largest AI fund has surged 23% this year, beating even the red-hot Nasdaq index – Yahoo Finance

The artificial intelligence sector has seen a boom in investor interest with the rise of ChatGPT. (NanoStockk/Getty Images)

The Global X Robotics & Artificial Intelligence ETF, the largest AI fund in the world, is up 23% so far in 2023.

The fund has seen $135 million of inflows so far in 2023, including $80 million in March, according to data compiled by Bloomberg.

More than half of professional investors plan to add the AI theme to their portfolios this year, a new survey by Brown Brothers Harriman found.

The rise of ChatGPT has spurred a renewed spike in investor interest in the artificial intelligence sector. That's led the world's largest AI fund, the Global X Robotics & Artificial Intelligence ETF (BOTZ), to a stronger start in 2023 than even the red-hot Nasdaq 100.

The $1.7 billion ETF has gained 23%, while the Nasdaq 100, coming off its second-strongest quarter in a decade, is up 19%.

The fund's top holding is Nvidia, which was the top-performing name in both the S&P 500 and the more tech-heavy Nasdaq 100 during the first quarter. The chipmaker, which makes up roughly 9% of the ETF's net assets, has climbed 88% in 2023. Further, lesser-weighted fund members like C3.ai and South Korea-based Rainbow Robotics have seen their stocks soar more than 200% this year.

Amid the strong fund returns, BOTZ has seen $135 million of inflows so far in 2023, including $80 million in March, according to data compiled by Bloomberg. A new survey from Brown Brothers Harriman suggests the trend toward AI will continue.

Among 325 professional investors, 56% plan to add AI- and robotics-themed exposure to their portfolios this year, the survey found. That compares to 46% in 2022, and the category beat out all others except internet and technology.

Jan Szilagyi, the CEO of AI-powered market analytics platform Toggle AI, said he's more bullish on the sector now than even before the banking turmoil rattled financial markets in March.

As top players in finance continue to give tools like ChatGPT plenty of attention, he's encouraged by the rapid progress seen across large language models.

"For the moment, most of the technology's promise is still in the future," Szilagyi told Insider on Monday. "The leap between GPT 3.5 and GPT 4 shows that we are still early in the upgrade curve. This technology is going to see dramatic improvement in the coming years."

Read the original article on Business Insider

Excerpt from:
The world's largest AI fund has surged 23% this year, beating even the red-hot Nasdaq index - Yahoo Finance


A freeze in training artificial intelligence won’t help, says professor – Tech Xplore


The development of artificial intelligence (AI) is out of control, in the opinion of approximately 3,000 signatories of an open letter published by business leaders and scientists.

The signatories call for a temporary halt to training especially high-performance AI systems. Prof. Urs Gasser, expert on the governance of digital technologies, examines the important questions from which the letter deflects attention, talks about why an "AI technical inspection agency" would make good sense and looks at how far the EU has come compared to the U.S. in terms of regulation.

Artificial intelligence systems capable of competing with human intelligence may entail grave risks for society and humanity, say the authors of the open letter. Therefore, they continue, for at least six months no further development should be conducted on technologies which are more powerful than the recently introduced GPT-4, successor to the language model ChatGPT.

The authors call for the introduction of safety rules in collaboration with independent experts. If AI laboratories fail to implement a development pause voluntarily, governments should legally mandate the pause, say the signatories.

Unfortunately the open letter absorbs a lot of attention which would be better devoted to other questions in the AI debate. It is correct to say that today probably nobody knows how to train extremely powerful AI systems in such a way that they will always be reliable, helpful, honest and harmless.

Nonetheless, a pause in AI training will not help achieve this, primarily because it would be impossible to enforce such a moratorium on a global level, and because it would not be possible to implement the regulations called for within a period of only six months. I'm convinced that what's necessary is a stepwise further development of technologies in parallel to the application and adaptation of control mechanisms.

First of all, the open letter once again summons up the specter of what is referred to as an artificial general intelligence. That deflects attention from a balanced discussion of the risks and opportunities represented by the kind of technologies currently entering the market. Second, the paper refers to future successor models of GPT-4.

This draws attention away from the fact that GPT-4's predecessor, ChatGPT, already presents us with essential challenges that we urgently need to address, for example misinformation and prejudices which the machines replicate and scale. And third, the spectacular demands made in the letter distract us from the fact that we already have instruments now which we could use to regulate the development and use of AI.

Recent years have seen the intensive development of ethical principles which should guide the development and application of AI. These have been supplemented in important areas by technical standards and best practices. Specifically, the OECD Principles on Artificial Intelligence link ethical principles with more than 400 concrete tools.

And the US National Institute of Standards and Technology (NIST) has issued a 70-page guideline on how distortions in AI systems can be detected and handled. In the area of security in major AI models, we're seeing new methods like constitutional AI, in which an AI system "learns" principles of good conduct from humans and can then use the results to monitor another AI application. Substantial progress has been made in terms of security, transparency and data protection and there are even specialized inspection companies.
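The "one AI monitors another" pattern described here can be sketched roughly as follows. This is an illustrative monitoring loop, not the actual constitutional AI training procedure; the principles, the keyword-based critic, and the function names are all invented for the example:

```python
# Rough sketch: a monitoring step checks another model's output against written
# principles and requests a revision when a principle appears to be violated.
PRINCIPLES = [
    "Do not give instructions that could cause physical harm.",
    "Do not present unverified claims as established fact.",
]

def violates(output: str, principle: str) -> bool:
    # Toy stand-in for a monitoring model; a real critic would be another AI system.
    return "unverified" in output.lower() and "fact" in principle.lower()

def monitored_generate(generate, prompt: str, max_revisions: int = 2) -> str:
    output = generate(prompt)
    for _ in range(max_revisions):
        broken = [p for p in PRINCIPLES if violates(output, p)]
        if not broken:
            return output
        # Feed the violated principles back and ask the generator to revise.
        output = generate(prompt + "\nRevise the answer to respect: " + "; ".join(broken))
    return output

# Example with a dummy generator that "fixes" its output when asked to revise:
dummy = lambda p: "Verified summary." if "Revise" in p else "This is an unverified rumor."
print(monitored_generate(dummy, "Summarize the news."))  # -> "Verified summary."
```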

Now the essential question is whether or not to use such instruments, and if so how. Returning to the example of ChatGPT: Will the chat logs of the users be included in the model for iterative training? Are plug-ins allowed which can record user interactions, contacts and other personal data? The interim ban and the initiation of an investigation of the developers of ChatGPT by the Italian data protection authorities are signs that very much is still unclear here.

The history of technology has taught us that it is difficult to predict the "good" or "bad" use of technologies, even that technologies often entail both aspects and negative impacts can often be unintentional. Instead of fixating on a certain point in a forecast, we have to do two things: First, we have to ask ourselves which applications we as a society do not want, even if they were possible. We need clear red lines and prohibitions.

Here I'm thinking of autonomous weapons systems as an example. Second, we need comprehensive risk management, spanning the range from development all the way to use. The demands placed here increase as the magnitude of the potential risks to people and the environment posed by a given application grow. European legislature is correct in taking this approach.

This kind of independent inspection is a very important instrument, especially when it comes to applications that can have a considerable impact on human beings. And by the way, this is not a new idea: we already see inspection procedures and instances like these at work in the wide variety of aspects of life, ranging from automobile inspections to general technical equipment inspections and financial auditing.

However, the challenge is disproportionally greater with certain AI methods and applications, because certain systems develop themselves as they are used, i.e. they are dynamic in nature. And it's also important to see that experts alone won't be able to make a good assessment of all societal impacts. We also need innovative mechanisms which for example include disadvantaged people and underrepresented groups in the discussion on the consequences of AI. This is no easy job, one I wish was attracting more attention.

We do indeed need clear legal rules for artificial intelligence. At the EU level, an act on AI is currently being finalized which is intended to ensure that AI technologies are safe and comply with fundamental rights. The draft bill provides for the classification of AI technologies according to the threat they pose to these principles, with the possible consequence of prohibition or transparency obligations.

For example, plans include prohibiting evaluation of private individuals in terms of their social behavior, as we are currently seeing in China. In the U.S. the political process in this field is blocked in Congress. It would be helpful if the prominent figures who wrote the letter would put pressure on US federal legislators to take action instead of calling for a temporary discontinuation of technological development.

The rest is here:
A freeze in training artificial intelligence won't help, says professor - Tech Xplore


Artificial Intelligence Becomes a Business Tool CBIA – CBIA

The growth of artificial intelligence is impossible to ignore, and more businesses are making it part of their operations.

In a recent Marcum LLP-Hofstra University survey, 26% of CEOs responded that their companies have used AI tools.

CEOs said they use AI for everything from automation, to predictive analytics, financial analysis, supply chain management and logistics, risk mitigation, and optimizing customer service.

Another 47% of CEOs said they are exploring how AI tools can be used in their operations.

Only 10% said they don't envision utilizing AI tools, and 16% were uncertain whether it would be relevant for their business.

The survey, conducted in February, polled 265 CEOs from companies with revenues ranging from $5 million to more than $1 billion.

58% of CEOs surveyed said that expectations and demands from their customers and clients increased in the last year.

CEOs said those expectations include more personalized service, immediate response times, more technology, and refusal to accept price increases.


"Now that the pandemic economy is behind us and companies have resumed full operation, CEOs are challenged to meet higher expectations from customers," said Jeffrey Weiner, Marcum's chair and CEO.

This certainly includes figuring out how to deploy new tools such as artificial intelligence to effectively position their companies for the future.

When asked about business planning in the next 12 months, economic concerns (53%), availability of talent (48%), and rising material/operational costs (43%) were the top three most important influences for CEOs.

There is some growing optimism among CEOs, with 33% responding that they are very concerned that the economy will experience a recession in the coming year.

That number is down from 47% in Marcum's November 2022 survey.

54% of CEOs said they were somewhat concerned about a recession, compared with 43% in November.


84% said they had a positive overall outlook on the business environment.

"I think the uptick in CEO optimism is a reflection not only of their feelings about the economy," said Janet Lenaghan, dean of Hofstra University's Zarb School of Business, "but their confidence in their own ability to be flexible and meet the moment, something they had to learn to get through COVID-19."

The survey also asked CEOs about leadership succession, calling it an essential process for ensuring business continuity, retaining talent, and developing future leaders.

Most CEOs (79%) said their companies have a succession plan in place, but only 45% were very confident in that plan.

41% of CEOs at companies without a succession plan said it wasn't a priority for their companies.

The Marcum-Hofstra survey is conducted periodically by Hofstra MBA students as a way to gauge mid-market CEOs' outlook and priorities for the next 12 months.

Originally posted here:
Artificial Intelligence Becomes a Business Tool CBIA - CBIA
