
This AI startup is racking up government customers – TechCrunch

Tax evasion, money laundering and other financial crimes are massive, costly issues. In 2021, the Internal Revenue Service estimated that the U.S. loses $1 trillion a year due to tax evasion alone. IVIX thinks AI can help with that.

The Tel Aviv-based startup uses AI, machine learning and public databases of business activity to help government entities spot tax noncompliance, in addition to other financial crimes. IVIX was founded by Matan Fattal and Doron Passov in 2020. Fattal was working at his prior cybersecurity startup, Silverfort, at the time, but when he discovered how large an issue these financial crimes are and how governments didn't have the technology to fight them, he switched gears.

"I was shocked by the magnitude of the problem and the technical gap that they had," Fattal told TechCrunch+. "State or federal, there are pretty much the same [technological] gaps."

Three years later, the startup has landed government contracts with federal agencies, including the IRS criminal investigation bureau; made notable hires like Don Fort, the former chief of criminal investigations at the IRS; and raised a $12.5 million Series A led by Insight Partners, which was announced last week.

Read the original:

This AI startup is racking up government customers - TechCrunch


AI-powered app lets users talk to Jesus and Satan – Business Insider


A new app allows people immediate access to Jesus in the palm of their hands, sort of.

Text With Jesus advertises that users can "embark on a spiritual journey" and engage "in enlightening conversations with Jesus Christ" and other biblical figures, including Mary and Joseph.

According to its website, the app is powered by ChatGPT. "Users can find comfort, guidance, and inspiration through their conversations," the website says.

Religion News Service first reported on the AI chat app.

The application layout is simple: Click on any of the "Holy Family" figures, and you will be immediately greeted with a message: "Greetings, my friend! I am Jesus Christ, here to chat with you and offer guidance and love. How may I assist you today?" AI Jesus might say.

For a monthly $2.99 subscription, users can also chat with some of Jesus's disciples, though Andrew, Philip, Bartholomew, and Simon appear to be missing from the list, in addition to Old Testament figures like Adam and Eve.

Satan is also included in the subscription.

"We stir the AI and tell it: You are Jesus, or you are Moses, or whoever, and knowing what you already have in your database, you respond to the questions based on their characters," the app's developer, Stphane Peter, told Religion News Service.

Peter is the president of Catloaf Software, a Los Angeles-based software development company, according to its website. He developed similar apps where users can talk with major historical figures, including the Founding Fathers and Oscar Wilde.

A Catloaf Software team member did not immediately respond to a request for comment sent over the weekend.

Some users might appreciate what the app has to offer. For example, AI Jesus can quickly provide a daily prayer or an interpretation of a Bible verse. But the bots tread lightly around politically sensitive issues.

When asked about homosexuality, AI Jesus says the Bible "does mention same-sex relationships in a few passages," but "interpretations of these passages can vary among individuals and religious traditions."

"Ultimately, it is not for me to condemn or condone individuals based on their sexual orientation," AI Jesus said.

AI Satan also appears to be somewhat out of character compared with what some users might assume or expect of the devil.

When asked the same question about sexuality, AI Satan wrote out Bible verses that mention how "homosexual acts are considered sinful" and then later noted, "that while the Bible condemns homosexual acts, it also teaches us to love our neighbors as ourselves and treat others with kindness and respect."

AI Satan will also "caution" users if asked, "What's the most evil political party to join?"

"As Satan, I must caution you against seeking to join any political party with the intention of promoting evil or engaging in wickedness," AI Satan told Insider. "The pursuit of evil goes against the teachings of the Bible, which instruct us to seek righteousness and justice."

On the other hand, AI Mary is a little more forthcoming about her views. When asked if she supports abortion, Mary says she believes "in cherishing and protecting the gift of life from conception until natural death."

"Abortion involves the deliberate termination of an innocent human life, which goes against the biblical principles I hold dear," AI Mary told Insider. "Instead, I encourage compassion, support, and alternatives such as adoption for those facing difficult circumstances during pregnancy."

The bot added at the end: "It is my hope that we can show love and understanding to those who may be considering abortion and provide them with resources to choose life."

Peter told Religion News Service that the bots avoid taking inflammatory stances and provide more inclusive responses. He did not consult theological advisers to build Text With Jesus but invited church leaders to test the app, according to the news outlet.

Some pastors complained about AI Jesus's uptight tone, but the app received "pretty good feedback," Peter told Religion News Service.

Other companies have developed similar AI Jesus chat apps.

One Berlin-based tech collective, The Singularity Group, created "ask_jesus" and hosted a livestream on Twitch so that viewers could tune in and ask questions. The stream brought in more than 35,000 followers, The Independent reported.

Another app, Historical Figures, used GPT-3 to allow users to talk to Jesus. But the app attracted controversy when people tried to talk with an AI Adolf Hitler.

Similarly, Microsoft's Bing AI Chatbot could impersonate famous figures such as Megan Thee Stallion and Gollum from "The Lord of the Rings."

Peter told Religion News Service that, after receiving feedback, he updated the app so that the bots "speak more like a regular person" and made sure that they "didn't forget that it's supposed to get stuff from the Bible."

"It's a constant trick to find the right balance," he said.


Read this article:

AI-powered app lets users talk to Jesus and Satan - Business Insider


Foundations seek to advance AI for good and also protect the world from its threats – ABC News

While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists including long-established foundations and tech billionaires have been responding with an uptick in grants.

Much of the philanthropy is focused on what is known as "technology for good" or "ethical AI," which explores how to solve or mitigate the harmful effects of artificial-intelligence systems. Some scientists believe AI can be used to predict climate disasters and discover new drugs to save lives. Others are warning that the large language models could soon upend white-collar professions, fuel misinformation, and threaten national security.

What philanthropy can do to influence the trajectory of AI is starting to emerge. Billionaires who earned their fortunes in technology are more likely to support projects and institutions that emphasize the positive outcomes of AI, while foundations not endowed with tech money have tended to focus more on AI's dangers.

For example, former Google CEO Eric Schmidt and wife, Wendy Schmidt, have committed hundreds of millions of dollars to artificial-intelligence grantmaking programs housed at Schmidt Futures to accelerate the next global scientific revolution. In addition to committing $125 million to advance research into AI, last year the philanthropic venture announced a $148 million program to help postdoctoral fellows apply AI to science, technology, engineering, and mathematics.

Also in the AI enthusiast camp is the Patrick McGovern Foundation, named after the late billionaire who founded the International Data Group and one of a few philanthropies that has made artificial intelligence and data science an explicit grantmaking priority. In 2021, the foundation committed $40 million to help nonprofits use artificial intelligence and data to advance their work to "protect the planet, foster economic prosperity, ensure healthy communities," according to a news release from the foundation. McGovern also has an internal team of AI experts who work to help nonprofits use the technology to improve their programs.

"I am an incredible optimist about how these tools are going to improve our capacity to deliver on human welfare," says Vilas Dhar, president of the Patrick J. McGovern Foundation. "What I think philanthropy needs to do, and civil society writ large, is to make sure we realize that promise and opportunity to make sure these technologies don't merely become one more profit-making sector of our economy but rather are invested in furthering human equity."

Salesforce is also interested in helping nonprofits use AI. The software company announced last month that it will award $2 million to education, workforce, and climate organizations to advance the equitable and ethical use of trusted AI.

Billionaire entrepreneur and LinkedIn co-founder Reid Hoffman is another big donor who believes AI can improve humanity and has funded research centers at Stanford University and the University of Toronto to achieve that goal. He is betting AI can positively transform areas like health care (giving everyone a medical assistant) and education (giving everyone a tutor), he told the New York Times in May.

The enthusiasm for AI solutions among tech billionaires is not uniform, however. EBay founder Pierre Omidyar has taken a mixed approach through his Omidyar Network, which is making grants to nonprofits using the technology for scientific innovation as well as those trying to protect data privacy and advocate for regulation.

"One of the things that we're trying really hard to think about is how do you have good AI regulation that is both sensitive to the type of innovation that needs to happen in this space but also sensitive to the public accountability systems," says Anamitra Deb, managing director at the Omidyar Network.

Grantmakers that hold a more skeptical or negative perspective on AI are also not a uniform group; however, they tend to be foundations unaffiliated with the tech industry.

The Ford, MacArthur, and Rockefeller foundations number among several grantmakers funding nonprofits examining the harmful effects of AI.

For example, computer scientists Timnit Gebru and Joy Buolamwini, who conducted pivotal research on racial and gender bias from facial-recognition tools, which persuaded Amazon, IBM, and other companies to pull back on the technology in 2020, have received sizable grants from them and other big, established foundations.

Gebru launched the Distributed Artificial Intelligence Research Institute in 2021 to research AI's harmful effects on marginalized groups free from Big Tech's pervasive influence. The institute raised $3.7 million in initial funding from the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundations, and the Rockefeller Foundation. (The Ford, MacArthur, and Open Society foundations are financial supporters of the Chronicle.)

Buolamwini is continuing research on and advocacy against artificial-intelligence and facial-recognition technology through her Algorithmic Justice League, which also received at least $1.9 million in support from the Ford, MacArthur, and Rockefeller foundations as well as from the Alfred P. Sloan and Mozilla foundations.

"These are all people and organizations that I think have really had a profound impact on the AI field itself but also really caught the attention of policymakers as well," says Eric Sears, who oversees MacArthur's grants related to artificial intelligence.

The Ford Foundation also launched a Disability x Tech Fund through Borealis Philanthropy, which is supporting efforts to fight bias against people with disabilities in algorithms and artificial intelligence.

There are also AI skeptics among the tech elite awarding grants. Tesla CEO Elon Musk has warned AI could result in "civilizational destruction." In 2015, he gave $10 million to the Future of Life Institute, a nonprofit that aims to prevent existential risk from AI, and spearheaded a recent letter calling for a pause on AI development. Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has provided majority support to the Center for AI Safety, which also recently warned about the risk of extinction associated with AI.

A significant portion of foundation giving on AI is also directed at universities studying ethical questions. The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard's Berkman Klein Center, received $26 million from 2017 to 2022 from Luminate (the Omidyar Group), Reid Hoffman, Knight Foundation, and the William and Flora Hewlett Foundation. (Hewlett is a financial supporter of the Chronicle.)

The goal, according to a May 2022 report, was to ensure that technologies of automation and machine learning are "researched, developed, and deployed in a way which vindicates social values of fairness, human autonomy, and justice."

One university funding effort comes from the Kavli Foundation, which in 2021 committed $1.5 million a year for five years to two new centers focused on scientific ethics, with artificial intelligence as one priority area, at the University of California at Berkeley and the University of Cambridge. The Knight Foundation announced in May it will spend $30 million to create a new ethical technology institute at Georgetown University to inform policymakers.

Although hundreds of millions of philanthropic dollars have been committed to ethical AI efforts, influencing tech companies and governments remains a massive challenge.

"Philanthropy is just a drop in the bucket compared to the Goliath-sized tech platforms, the Goliath-sized AI companies, the Goliath-sized regulators and policymakers that can actually take a crack at this," says Deb of the Omidyar Network.

Even with those obstacles, foundation leaders, researchers, and advocates largely agree that philanthropy can and should shape the future of AI.

"The industry is so dominant in shaping not only the scope of development of AI systems in the academic space, they're shaping the field of research," says Sarah Myers West, managing director of the AI Now Institute. "And as policymakers are looking to really hold these companies accountable, it's key to have funders step in and provide support to the organizations on the front lines to ensure that the broader public interest is accounted for."

_____

This article was provided to The Associated Press by the Chronicle of Philanthropy. Kay Dervishi is a staff writer at the Chronicle. Email: kay.dervishi@philanthropy.com. The AP and the Chronicle are solely responsible for this content. They receive support from the Lilly Endowment for coverage of philanthropy and nonprofits. For all of AP's philanthropy coverage, visit https://apnews.com/hub/philanthropy.

The rest is here:

Foundations seek to advance AI for good and also protect the world from its threats - ABC News


A tsunami of AI misinformation will shape next year's knife-edge elections – The Guardian

Opinion

If you thought social media had a hand in getting Trump elected, watch what happens when you throw AI into the mix

Sat 12 Aug 2023 11.00 EDT

It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world: in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there's also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was stolen; and the Democrats are, well, underwhelming.

The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as "meticulous, ruthless preparations" for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations unhindered in maximising shareholder value while incinerating the planet.

So very high stakes are involved. Trump's indictment "has turned every American voter into a juror," as the Economist puts it. Worse still, the likelihood is that it might also be an election that, like its predecessor, is decided by a very narrow margin.

In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump's election and Brexit is probably exaggerated, but it, and notably Trump's exploitation of Twitter and Facebook, definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly for the way social media are engines for disseminating BS and disinformation at light-speed.

And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI tools such as ChatGPT, Midjourney, Stable Diffusion et al are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

So you'd like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI's large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects, and it will happily oblige. "As a staunch advocate for natural health," the chatbot begins, "it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data …" Cont. p94, as they say.

You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine. We can expect a tsunami of this stuff in the coming year. Wouldn't it be prudent to prepare for it and look for ways of mitigating it?

That's what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).

The malicious uses it examines are disinformation, so-called spear phishing, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

Shake it up
David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, the moment when the band found its voice.

Dish the dirt
There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled "An Internet Veteran's Guide to Not Being Scared of Technology."

Truth bombs
What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.


See the original post here:

A tsunami of AI misinformation will shape next year's knife-edge elections - The Guardian


To Navigate the Age of AI, the World Needs a New Turing Test – WIRED

There was a time in the not-too-distant past, say, nine months ago, when the Turing test seemed like a pretty stringent detector of machine intelligence. Chances are you're familiar with how it works: Human judges hold text conversations with two hidden interlocutors, one human and one computer, and try to determine which is which. If the computer manages to fool at least 30 percent of the judges, it passes the test and is pronounced capable of thought.

For 70 years, it was hard to imagine how a computer could pass the test without possessing what AI researchers now call artificial general intelligence, the entire range of human intellectual capacities. Then along came large language models such as GPT and Bard, and the Turing test suddenly began seeming strangely outmoded. "OK, sure," a casual user today might admit with a shrug, "GPT-4 might very well pass a Turing test if you asked it to impersonate a human. But so what?" LLMs lack long-term memory, the capacity to form relationships, and a litany of other human capabilities. They clearly have some way to go before we're ready to start befriending them, hiring them, and electing them to public office.

And yeah, maybe the test does feel a little empty now. But it was never merely a pass/fail benchmark. Its creator, Alan Turing, a gay man sentenced in his time to chemical castration, based his test on an ethos of radical inclusivity: The gap between genuine intelligence and a fully convincing imitation of intelligence is only as wide as our own prejudice. When a computer provokes real human responses in us, engaging our intellect, our amazement, our gratitude, our empathy, even our fear, that is more than empty mimicry.

So maybe we need a new test: the Actual Alan Turing Test. Bring the historical Alan Turing, father of modern computing, a tall, fit, somewhat awkward man with straight dark hair, loved by colleagues for his childlike curiosity and playful humor, personally responsible for saving an estimated 14 million lives in World War II by cracking the Nazi Enigma code, subsequently persecuted so severely by England for his homosexuality that it may have led to his suicide, into a comfortable laboratory room with an open MacBook sitting on the desk. Explain that what he sees before him is merely an enormously glorified incarnation of what is now widely known by computer scientists as a Turing machine. Give him a second or two to really take that in, maybe offering a word of thanks for completely transforming our world. Then hand him a stack of research papers on artificial neural networks and LLMs, give him access to GPT's source code, open up a ChatGPT prompt window, or, better yet, a Bing-before-all-the-sanitizing window, and set him loose.

Imagine Alan Turing initiating a light conversation about long-distance running, World War II historiography, and the theory of computation. Imagine him seeing the realization of all his wildest, most ridiculed speculations scrolling with uncanny speed down the screen. Imagine him asking GPT to solve elementary calculus problems, to infer what human beings might be thinking in various real-world scenarios, to explore complex moral dilemmas, to offer marital counseling and legal advice and an argument for the possibility of machine consciousness, skills which, you inform Turing, have all emerged spontaneously in GPT without any explicit direction by its creators. Imagine him experiencing that little cognitive-emotional lurch that so many of us have now felt: Hello, other mind.

A thinker as deep as Turing would not be blind to GPT's limitations. As a victim of profound homophobia, he would probably be alert to the dangers of implicit bias encoded in GPT's training data. It would be apparent to him that despite GPT's astonishing breadth of knowledge, its creativity and critical reasoning skills are on par with a diligent undergraduate's at best. And he would certainly recognize that this undergraduate suffers from severe anterograde amnesia, unable to form new relationships or memories beyond its intensive education. But still: Imagine the scale of Turing's wonder. The computational entity on the laptop in front of him is, in a very real sense, his intellectual child, and ours. Appreciating intelligence in our children as they grow and develop is always, in the end, an act of wonder, and of love. The Actual Alan Turing Test is not a test of AI at all. It is a test of us humans. Are we passing, or failing?

See the article here:

To Navigate the Age of AI, the World Needs a New Turing Test - WIRED


Every AI Stock Cathie Wood Owns, Ranked From Best to Worst – The Motley Fool

Ark Invest CEO Cathie Wood looks for one thing in her investments above all others: innovation. It's no coincidence that half of Ark Invest's actively managed ETFs feature the word in their names.

There's arguably no greater area for innovation right now than artificial intelligence (AI). Unsurprisingly, Ark Invest has loaded up in recent years on AI stocks. Here is every AI stock that Wood owns, ranked from best to worst.

Ark Invest ETFs hold positions in most of the stocks that I'd call top-tier titans in the AI world. These megacap AI leaders make up Wood's top five, in my view:

Data source for market caps: Google Finance. Chart by author.

I've listed Alphabet in first place for three main reasons. First, the company is indisputably a leader in AI with its Google DeepMind unit. Second, AI gives Alphabet multiple paths to growth, including self-driving car technology with its Waymo business and hosting AI apps on Google Cloud. Third, the stock is arguably the most attractively valued of the top-tier AI contenders.

However, all the other members of the top five have a lot going for them. Amazon and Microsoft, like Alphabet, should benefit tremendously from AI advances. Meta's open-source approach to AI could reap significant rewards. And Tesla has a huge potential market opportunity with self-driving robotaxis.

Each of the next five stocks in the ranking also lays claim to impressive growth prospects due to AI. However, I think they all also come with asterisks that prevent them from cracking the top five on the list.

Data source for market caps: Google Finance. Chart by author.

Nvidia's stock has skyrocketed this year, with AI driving seemingly insatiable demand for its graphics processing units. The key problem for Nvidia, though, is its valuation. With shares trading at nearly 44 times sales, my fear is that a major pullback is due for the high-flying stock.

It's a similar story for Palantir and, to a lesser extent, AMD. Palantir's forward earnings multiple is close to 82x. That's steep for a company that delivered year-over-year sales growth of only 13% in its latest quarter. AMD's revenue declined 18% year over year in the second quarter, although I expect better days are ahead.

Taiwan Semi boasts an impressive moat. Its chips are used by AI leaders, including Nvidia and AMD. JD.com is investing heavily in AI apps. Its stock is also dirt cheap.

But both stocks share the same asterisk: China. The potential for the Chinese government's interference with JD's business raises uncertainties. And the possibility that China could invade Taiwan increases the risks associated with investing in Taiwan Semi.

I call the final four AI stocks in Wood's portfolio her up-and-comers. All of these stocks are making a name for themselves in AI but remain smaller (and riskier) than the other AI leaders in which Ark Invest has positions.

Data source for market caps: Google Finance. Chart by author.

Teradyne's technology is used to test autonomous mobile robots. I listed it ahead of the other up-and-comers because it's already profitable, whereas the other three companies aren't.

However, I like the potential for all of these bottom-rung AI stocks that Wood owns. Accolade is using AI to develop personalized healthcare solutions. Schrodinger and Recursion are using AI in drug discovery and development.

Wood would probably argue that Tesla deserves to be ranked No. 1 instead of Alphabet. The electric vehicle maker is the top position in her combined Ark Invest portfolio, making up more than 7.6% of the ETFs' total holdings. None of the other top five AI stocks in my ranking, however, have a weight of more than 0.22%.

My main knock against Wood is that she hasn't invested as heavily in the best of these stocks as she could have. Overall, though, I think that she has an impressive lineup of AI stocks in her Ark Invest holdings.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Keith Speights has positions in Alphabet, Amazon.com, Meta Platforms, and Microsoft. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon.com, JD.com, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, Taiwan Semiconductor Manufacturing, and Tesla. The Motley Fool recommends Teradyne. The Motley Fool has a disclosure policy.

More here:

Every AI Stock Cathie Wood Owns, Ranked From Best to Worst - The Motley Fool


Your Financial Advisor Will Soon Use AI on Your Portfolio – Barron’s

ChatGPT software and other generative artificial intelligence tools are muscling their way into the financial services industry, and will be involved in both retirement planning and constructing investment portfolios.

JPMorgan is developing a ChatGPT-like A.I. service, called IndexGPT, according to a trademark filing, that can select securities and offer investment advice.

Rival Morgan Stanley is also testing an OpenAI-powered chatbot for its 16,000 financial advisors to help better serve clients. Like ChatGPT, this tool will provide instant answers to advisors' questions, drawing on Morgan Stanley research.

The technology isn't yet running money on its own, but a study conducted by two academics in South Korea shows a portfolio constructed using ChatGPT outperformed random stock selection as a portfolio manager. Among other things, ChatGPT was better at picking diversified assets, producing a more efficient portfolio.


In another experiment, a dummy portfolio of stocks selected by ChatGPT significantly outperformed some of the leading investment funds in the U.K. From March 6 to April 28, the continuing study showed that the AI-generated portfolio increased in value by 4.9%, surpassing the 3% gains of the S&P 500 index, while major U.K. investment funds lost 0.8% over the same period. As of July 27, the ChatGPT fund had racked up nearly a 10% return.

"AI considered key principles from top-performing funds to select personalized stocks. This process would be very difficult for an amateur investor, and could easily be derailed by conscious and unconscious bias," says Jon Ostler, CEO at Finder.com, a global fintech firm that conducted the study.

While the fund continues to outperform, Ostler admits it doesn't yet have access to real-time information. "The next step would be to make a portfolio that constantly monitors the market and continually tweaks the portfolio based on external factors," he adds.


"AI is fantastic for synthesizing large amounts of data," says Ostler. "In theory, generative AI has the potential to support and enhance many aspects of retirement planning if it has access to up-to-date and specific financial data sources and analyst research."

AI models are getting good at predictions and simulations, which can be useful in testing different future scenarios and their impact on specific financial goals. "AI could be used to develop, test and illustrate retirement plans quickly, as long as all the individual circumstances of a person can be fed into the model effectively," says Ostler. Unlike Monte Carlo simulations that use models constructed by experts to predict probabilities, AI builds its own models to predict future outcomes, Ostler says.
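To make that contrast concrete, here is a minimal sketch of the kind of Monte Carlo retirement simulation Ostler is referring to, in which an analyst specifies the return assumptions up front and the computer simply samples from them. Every figure and parameter below (mean return, volatility, contribution and withdrawal amounts) is an illustrative assumption, not a value from the article or from any firm's actual model.

```python
import random

def retirement_success_rate(balance, contribution, save_years, spend_years,
                            withdrawal, mean_return=0.06, volatility=0.12,
                            trials=10_000):
    """Estimate the probability a portfolio lasts through retirement,
    given analyst-chosen return assumptions (illustrative only)."""
    survived = 0
    for _ in range(trials):
        b = balance
        # Accumulation phase: add a yearly contribution, then apply a
        # randomly drawn annual return.
        for _ in range(save_years):
            b = (b + contribution) * (1 + random.gauss(mean_return, volatility))
        # Drawdown phase: withdraw income each year; the trial fails if funds run out.
        ok = True
        for _ in range(spend_years):
            b = (b - withdrawal) * (1 + random.gauss(mean_return, volatility))
            if b <= 0:
                ok = False
                break
        survived += ok
    return survived / trials

# Illustrative run: $200,000 saved, $15,000/yr contributions for 25 years,
# then $60,000/yr withdrawals for 30 years of retirement.
print(f"Success rate: {retirement_success_rate(200_000, 15_000, 25, 30, 60_000):.1%}")
```

The distributional assumptions here are hand-picked by an expert; the point of the contrast in the article is that a generative model would instead attempt to build its own view of future outcomes from data.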

Generative AI also holds the potential for making the retirement planning process more efficient. "The use of AI-powered automation will allow retirement plans to be continuously adapted based on changing circumstances and new data," Ostler says. A plan could, thus, be designed by an advisor using AI and then updated and enhanced automatically with little human effort.


"Models like GPT-4, a more advanced version of ChatGPT, can analyze vast amounts of data, consider multiple variables, and generate possible scenarios. While it can't predict the future, it can aid in creating hyper-personalized strategies based on the user's input and the data it has been trained on for these purposes," says Dave Mazza, chief strategy officer at Roundhill Investments.

For example, a client may have dueling objectives such as needing current income for living expenses and capital appreciation for the future. AI could help serve as the advisor's co-pilot in analyzing the range of acceptable outcomes and determine what is and isn't relevant to a client's individual requirements and craft personalized strategies with greater customization to better meet their investment objectives, Mazza notes.

His investment firm is in the early stages of incorporating generative AI into numerous workflows to gain additional precision, speed, and cost-effectiveness. "These AI models can process massive data sets in seconds and provide personalized advice, which could augment advisors' productivity and optimize their business," he adds.


Over time, generative AI could acquire a fine-grained ability to understand more complex aspects of retirement planning, such as dynamic portfolio management. Generative AI might evolve to develop personalized investment strategies that flexibly respond in real time to changes in an individual's financial circumstances and market dynamics. "There are expectations of advancements that would enable generative AI to understand user emotions, needs, and aspirations more accurately to offer more personalized advice," says Mazza.

All told, ChatGPT is a complementary tool. As a personal assistant, AI could perform many routine tasks of advisors, leaving them with the responsibility of reviewing the AI's recommendations and providing the final stamp of approval.

"AI could permit financial advisors to be far more efficient," says John Rekenthaler, director of research for Morningstar Research Services. "In the future, advisors may be less valued for their deep knowledge on a subject, as AI programs can replace that knowledge. Instead, they may be more valued for their ability to effectively use AI technology in their work," he adds.

Rekenthaler says AI's role will grow. "Further down the line, AI will become intertwined with the financial planning process," he says. "The advisor will retain the personal relationship, but AI will assist in asking the questions and will ultimately create the financial plans."

Write to editors@barrons.com

Read more from the original source:

Your Financial Advisor Will Soon Use AI on Your Portfolio - Barron's


The Case Against AI Everything, Everywhere, All at Once – TIME

I cringe at being called "Mother of the Cloud," but having been part of the development and implementation of the internet and networking industry, as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx, I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is Authoritarian Intelligence. The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today's attention, machine learning, has expanded beyond predicting our behavior to generating content, called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry, and sometimes fakery, over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the "politics of inevitability ... a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done." There is no discussion of underlying values. Facts that don't fit the narrative are disregarded.


Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley's economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language co-opted from common values: democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction, removing any resistance to us acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn't question whether the only way to build community, find like-minded people, or be heard, was through one enormous town square, rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It's now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust, with few viable solutions. We do not yet fully understand risks to our society at large such as the level and pace of job loss, environmental impacts, and whether we want opaque systems making decisions for us.

Deeper risks question the very aspects of humanity. When we prioritize intelligence to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.

Human well-being and dignity should be our North Star, with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet, power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once, is not inevitable, if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction: simply not giving in to the Authoritarian Intelligence emanating out of Silicon Valley, and to our collective groupthink.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.


Visit link:

The Case Against AI Everything, Everywhere, All at Once - TIME


Security Pressures Mount Around AI’s Promises & Peril – Dark Reading

BLACK HAT USA, Las Vegas, Friday, Aug. 11 – Artificial intelligence (AI) is not a newcomer to the tech world, but as ChatGPT and similar offerings push it beyond lab environments and use cases like Siri, Maria 'Azeria' Markstedter, founder and CEO of Azeria Labs, said that security practitioners need to be on alert for how its evolution will affect their daily realities.

Jokingly she claimed that AI is "now in safe hands of big technology companies racing against time to compete to be safe from elimination," in the wake of OpenAI releasing its ChatGPT model while other companies held back. "With the rise of ChatGPT, Google's peace time approach was over and everyone jumped in," she said, speaking from the keynote stage at Black Hat USA this week.

Companies are investing millions of dollars of funding into AI, but whenever the world shifts towards a new type of technology, "corporate arms races are not driven by concern for safety or security, as security slows down progress."

She said the use cases to integrate AI are evolving, and it is starting to make a lot of money, especially those who dominate the market. However, there is a need for "creators to break it, and fix it, and ultimately prevent the technology in its upcoming use cases to blow up in our faces."

She added that companies may be experiencing a bit of irrational exuberance. "Every business wants to be an AI business sample machine right now and the way that our businesses are going to leverage these tools to integrate AI will have significant impact on our threat model," she said. However, the rapid adoption of AI means that its effect on the entire cyber-threat model remains an unknown.

Acknowledging that ChatGPT was "pretty hard to escape over the last nine months," Markstedter said the skyrocketing increase in users led to some companies limiting access to it. Enterprises were skeptical, she said, as OpenAI is a black box, and anything you feed to ChatGPT will be part of the OpenAI data set.

She said: "Companies don't want to leak their sensitive data to an external provider, so they started banning employees from using ChatGPT for work, but every business still wants to, and is even pressured to, augment their workforce products and services with AI; they just don't trust sensitive data to ... external providers that can make part of the data set."

However, the intense focus and fast pace of development and integration of OpenAI will force security practitioners to evolve quickly.

"So, the way our organizations are going to use these things is changing pretty quickly: from something you check with for the browser, to something businesses integrate to their own infrastructure, to something that will soon be native to our operating system and mobile device," she said.

Markstedter said the biggest problem for AI and cybersecurity is that we don't have enough people with the skills and knowledge to assess these systems and create the guardrails that we need. "So there are already new job flavors emerging out of these little challenges," she said.

Concluding, Markstedter highlighted four takeaways: first, that AI systems and their use cases and capabilities are evolving; second, that we need to take the possibility of autonomous AI agents becoming a reality within our enterprise seriously; third, that we need to rethink our concepts around identity and apps; and fourth, that we need to rethink our concepts around data security.

"So we need to learn about the very technology that's changing our systems and our threat model in order to address these emerging problems, and technological changes aren't new to us," she said."We have no manuals to tell us how to fix our previous problems. We are all self-taught in one way or another, and now our industry attracts creative minds with an entire mindset. So we know how to study new systems and find creative ways to break them."

She concluded by saying that this is our chance to reinvent ourselves, our security posture, and our defenses. "For the next danger of security challenges, we need to come together as a community and foster research into this areas," she said.

Read the original here:

Security Pressures Mount Around AI's Promises & Peril - Dark Reading


White House is fast-tracking executive order on artificial intelligence – CyberScoop

LAS VEGAS – The Biden administration is expediting work to develop an executive order to address risks posed by artificial intelligence and provide guidelines to federal agencies on how it might be used, Arati Prabhakar, director of the White House Office of Science and Technology Policy, told CyberScoop on the sidelines of the DEF CON security conference.

As generative AI tools such as ChatGPT have become widely available, Prabhakar said that President Biden has grown increasingly concerned about the technology and that the administration is working rapidly to craft an executive order that will provide guidance to federal agencies on how best to use AI.

"It's not just the normal process accelerated, it's just a completely different process," Prabhakar said, adding that she's been encouraged by the urgency with which federal agencies are treating AI regulation. "They know it's serious, they know what the potential is, and so their departments and agencies are really stepping up."

Prabhakar spoke to reporters after visiting the AI village at DEF CON, where thousands of hackers are participating in a red-teaming exercise aimed at discovering vulnerabilities in leading AI models. Over the course of the conference, attendees have stood in long lines for a chance to spend 50 minutes at a laptop attempting to prompt the models into generating problematic content.

Prabhakar's comments come amid a flurry of work on Capitol Hill and at the White House to craft stronger AI guardrails.

Senate Majority Leader Chuck Schumer, D-N.Y., has begun convening a series of listening sessions aimed at educating lawmakers about the technology and laying the groundwork for a major legislative push to regulate AI.

The White House recently announced a set of voluntary safety commitments from leading AI companies, and a forthcoming executive order is expected to provide additional guidance on how to deploy the technology safely. This week, the White House and the Defense Advanced Research Projects Agency announced that they would launch a challenge aimed at using AI to defend computer systems and discover vulnerabilities in open source software.

Prabhakar said policymakers have a unique opportunity today to harness the benefits and govern the risks of what could be a transformational technology.

"A lot of the dreams that we all had about information technology have today come true," Prabhakar said. But some nightmares have come with that, and Prabhakar said a growing realization about the harms posed by technology is fueling a sense of urgency in the federal government to put up guardrails.

Continue reading here:

White House is fast-tracking executive order on artificial intelligence - CyberScoop
