
How Pope Francis became the AI ethicist for world leaders and tech titans – The Washington Post

BARI, Italy – Pope Francis is an octogenarian who says he cannot use a computer, but on a February afternoon in 2019, a top diplomat of American Big Tech entered the papal residence seeking guidance on the ethics of a gestating technology: artificial intelligence.

Microsoft President Brad Smith and the pope discussed the rapid development of the technology, Smith recounted in an interview with The Washington Post, and Francis appeared to grasp its risks. As Smith departed, the pope uttered a warning. "Keep your humanity," he urged, as he held Smith's wrist.

In the five years since that meeting, AI has become unavoidable, as the pope himself found out last year when viral images of him in a Balenciaga puffer jacket heralded a new era of deepfakes. And as the technology has proliferated, the Vatican has positioned itself as the conscience of companies like Microsoft and emerged as a surprisingly influential voice in the debate over AI's global governance.

In southern Italy on Friday, Francis became the first pope to address a Group of Seven forum of world leaders, delivering a moral treatise on the "cognitive-industrial revolution" represented by AI, as he sought to elevate the topic in the same manner he did climate change.

President Biden greeted Pope Francis on June 14 at the Group of Seven roundtable in Fasano, Italy. (Video: Reuters)

In a sweeping speech, the pope sketched out the ramifications of a technology "as fascinating as it is terrifying," saying it could change the way we conceive of our identity as human beings. He decried how AI could cement the dominance of Western culture and diminish human dignity.

AI, he said, stood as a tool that could democratize knowledge, exponentially advance science and alleviate the human condition as people give arduous work to machines. But he warned that it also has the power to destroy, and called for an urgent ban on lethal autonomous weapons. As a ghost of the future, he referenced the 1907 dystopian novel "Lord of the World," in which technology replaces religion and faith in God.

"No machine should ever choose to take the life of a human being," the pope said.


He has previously insisted that AI's risks must be managed through a global treaty, and on Friday he endorsed the need for a set of uniting global principles to guide AI's development.

The Rome Call for AI Ethics, a document that counted the Vatican, Microsoft and IBM among its original signatories in 2020, is emerging as a gold standard of best AI practices. It has informed G-7 discussions about developing a code of conduct. And on Friday, the G-7 leaders, with the Vatican's support, announced that they would create a badge of honor of sorts: a new label for companies that agree to safely and ethically develop AI tools and follow guidelines for the voluntary reporting and monitoring of risks. Echoing Vatican concerns, leaders additionally called for responsible military uses of AI.

The AI issue has provided an opening for the church, diminished by its handling of clerical sex abuse scandals, to reassert its moral authority. Microsoft and at least some other tech companies appear eager for the church's seal of approval, as the industry grapples with the public-relations challenges of a technology that could automate jobs, amplify misinformation and create new cybersecurity risks.

The Vatican has earned a seat at the Big Tech table. An ancient institution with a mixed track record on science (see: the trial of Galileo) is now dispatching representatives to major tech events.

The Rev. Paolo Benanti, the Vatican's leading AI expert, a Franciscan priest and a trained engineer credited with coining the term "algorethics," last year secured a spot on the United Nations Advisory Body on Artificial Intelligence and has become a major player in the crafting of a national AI policy for Italy, a G-7 nation. At the Vatican's request, IBM hosted a global summit of colleges at the University of Notre Dame to bring AI ethics to the forefront of curriculums.

The Vatican's views have influenced concrete business decisions. Microsoft's Smith told The Post: "We developed our own technology that would allow anyone with just a few seconds of anyone's voice to be able to replicate it. And we chose not to release that." The Rome principles, he added, are "definitely part of what has helped us at Microsoft strive to take a broad-minded approach to the development of AI, including within our own four walls. I just think it's provided a broad humanistic and intellectual frame."

The pledge's emphasis on inclusion also influenced the company's decision to launch a fellowship that brings together researchers and civil society leaders, largely from the Global South, to evaluate the impact of the technology, said Natasha Crampton, Microsoft's chief responsible AI officer. Fellows have helped the company develop multilingual evaluations of AI models and ensured that the company understands local context and cultural norms as it develops new products.

Not all companies are on board with the Rome principles. Some have forged ahead with AI-manipulated audio that researchers warn could be abused to dupe voters ahead of elections.

Not everyone has been allowed to join the Rome club, either. "The Chinese company Huawei asked," said Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life. "And we said no, because we don't really know what the [people in charge there] think."

In the meantime, the Vatican remains concerned about the misuse of open-source AI. The technology could produce major benefits in health care and education, Benanti said. "But it can also multiply a lot of bad elements in society, and we cannot spread AI everywhere without any political decision-making, because tomorrow we could wake up with a multiplier of inequality, of bioweapons," he said.

Vatican officials have already sounded alarms over what they view as potentially unethical uses, including the facial recognition systems deployed in the 2019-2020 crackdown on protesters in Hong Kong, as well as algorithms for refugee processing such as those in Germany, where AI-fueled linguistic tests have been used to establish whether asylum seekers are lying about their place of origin.

The relationship between the Vatican and AI innovators had its genesis in a 2018 speech that Benanti delivered on AI ethics. A senior Microsoft representative in Italy had been in the audience, and the two began meeting regularly. They brought in Paglia, who was interested in broadening the remit of his academy beyond core issues such as the ethics of stem cell research.

Ahead of Smith's visit with the pope, Paglia escorted him through Michelangelo's "Last Judgement" in the Sistine Chapel and showed him renderings by Galileo of the Earth revolving around the sun, the theory that landed him under house arrest for life after a church trial.

Yet the Vatican's relationship with science hasn't always been Luddite. In the Middle Ages, Catholic scholars seeded Europe with what would become some of its greatest universities. And although targeted by some individual clerics, Darwin's theory of evolution was never officially challenged by the Vatican.

The church officially declares that faith and reason are not in conflict.

"The Bible doesn't tell us how heaven works, but how to get there," said Paglia, quoting Galileo. The archbishop has made official trips to Microsoft's headquarters near Seattle and IBM offices in New York.

Through aggressive AI investments, Microsoft has become the world's most valuable company, worth more than $3 trillion. But its continued success hinges on curbing negative perceptions of AI. Worries that the tech could displace jobs, exacerbate inequalities, supercharge surveillance and usher in new kinds of warfare are prompting governments around the world to consider stringent regulations that could blunt the company's ambitions.

The European Union is readying a landmark law that could limit more-advanced generative AI models. The Federal Trade Commission is investigating a deal that Microsoft made with the AI start-up Inflection, probing whether the tech giant deliberately set up the investment to avoid a merger review. And U.S. enforcers reached a deal that will open the company to greater scrutiny of how it wields power to dominate artificial intelligence, including its multibillion-dollar investments in ChatGPT maker OpenAI. That relationship has also exposed Microsoft to new reputational risks, as OpenAI chief executive Sam Altman frequently invites controversy.

Under Smiths leadership, Microsoft has built one of the most sophisticated global lobbying organizations to defuse its regulatory challenges and try to convince people that it is the tech titan the world can trust to build AI. Smith regularly meets with heads of state, including appearing last month alongside President Biden at a factory opening. To be an effective business, Microsoft has to find ways to work with governments and to ensure its technology can transcend them, Smith said.

The world's oldest global organization can be a unique teacher and partner in that effort, he said, referring to the Vatican. Catholicism and other religions aren't bound by national borders, much like the applications Microsoft is peddling globally.

"At one level, you might look at the two of us and think we're odd bedfellows," Smith said. "But on the other hand, it's a perfect combination."

Zakrzewski reported from Washington.

Continue reading here:

How Pope Francis became the AI ethicist for world leaders and tech titans - The Washington Post

Read More..

Early bets on AI have helped this global tech fund outperform for a second year – CNBC

An early bet on artificial intelligence, guided by a simple investment framework, is helping T. Rowe Price's Global Technology Fund (PRGTX) outperform the market for a second straight year.

"AI has to be the biggest productivity enhancer for technology, for the economy since electricity," said the fund's portfolio manager, Dominic Rizzo. "Our framework led us to be early to this AI trend, [and] led us to be early to the chip intensity of this AI trend."

The fund has jumped more than 26% in 2024 after surging nearly 56% in 2023. That's due in part to its winning bets on AI names and semiconductor stocks, many of which have outperformed the S&P 500 and Nasdaq Composite.

Rizzo, a T. Rowe Price lifer who took the helm of the fund in 2022, attributes PRGTX's success to a four-step investing framework. The first pillar is what he calls "linchpin technologies" critical to a company's success; this includes artificial intelligence for semiconductor companies such as Nvidia. The fund manager also looks for innovation in secular growth markets, or companies that are taking market share in fast-growing markets more quickly than competitors. The other factors are improving fundamentals, as measured through improved free cash flow or operating margin expansion, and reasonable valuation.

"The way you get burned in tech is if you buy either extremely expensive stocks, or often if you buy extremely cheap stocks," he said. "That's because in tech, extremely cheap stocks are often cheap for a reason, [while] the extremely expensive stocks are too expensive to earn an outsize return."

Early bets on AI

Key to the $4.5 billion fund's recent success have been early positions in AI stocks and chipmaking darling Nvidia. The AI leader, added to the fund at the end of 2021, today accounts for nearly 18% of the portfolio. Shares are up 166% since the start of 2024. "Nvidia is clearly the linchpin of AI," Rizzo said. "They've done such a tremendous job building up all the different pieces that you need, whether it's the central processing units, the graphics processing units, the networking technology, the software ecosystem. They really have all the different pieces necessary."

But the AI darling is far from Rizzo's only semiconductor commitment on the AI theme. Taiwan Semiconductor Manufacturing and Advanced Micro Devices make up 5% and 4% of the portfolio, respectively. Chip equipment maker ASML Holding and semiconductor maker Analog Devices also make the fund's top 10 holdings, at about a combined 5%.

Beyond chipmakers, Rizzo has also made significant bets on Apple and Microsoft, which account for 12% and about 10% of the portfolio, respectively. The pair are the fund's second- and third-largest holdings, behind Nvidia. Rizzo highlighted Microsoft's enterprise software leadership and Apple's consumer dominance. Together, both stocks also lend stability to the portfolio and trade at reasonable valuations with compound earnings growth potential, he added.

For Apple, Rizzo touted healthy growth in the company's services business and smartphone growth in emerging markets as potential catalysts for the stock. More critical still is Apple's AI vision, which Rizzo, who holds an MBA from the University of Chicago Booth School of Business, expects to fuel a smartphone upgrade cycle. Apple unveiled its long-awaited AI plan at its Worldwide Developers Conference this week, calling it Apple Intelligence. Features include upgrades to the Siri digital assistant that integrate ChatGPT.

Microsoft appeal

For Microsoft, Rizzo highlighted the company's partnership with OpenAI, which strengthens its AI prospects, as well as the unique position of its Azure cloud computing business. Rizzo also views software holdings such as SAP and ServiceNow as next-stage beneficiaries of AI tailwinds, well positioned to benefit from the data needed for AI.

"We had the right framework, and the right investing style for this type of market," Rizzo said. "I hope that the investment framework will prove itself to continue to work, regardless of the market environment."

The fund has a $2,500 minimum investment, charges a 0.94% net expense ratio and is rated two stars by Morningstar, which said in a report late last year that the Global Technology Fund is "off to a good start" and "has some promise but has much to prove" after the revamp that brought Rizzo onboard in 2022.

See the original post:

Early bets on AI have helped this global tech fund outperform for a second year - CNBC


Meta is putting its European AI assistant launch on hold – The Verge

Meta is putting plans for its AI assistant on hold in Europe after receiving objections from Ireland's privacy regulator, the company announced on Friday.

In a blog post, Meta said the Irish Data Protection Commission (DPC) asked the company to delay training its large language models on content that had been publicly posted to Facebook and Instagram profiles.

Meta said it is "disappointed" by the request, "particularly since we incorporated regulatory feedback and the European [Data Protection Authorities] have been informed since March." Per the Irish Independent, Meta had recently begun notifying European users that it would collect their data and offered an opt-out option in an attempt to comply with European privacy laws.

Meta said it will "continue to work collaboratively" with the DPC. But its blog post says that Google and OpenAI have already used data from Europeans to train AI, and claims that if regulators don't let it use users' information to train its models, Meta can only deliver an inferior product. "Put simply, without including local information we'd only be able to offer people a second-rate experience. This means we aren't able to launch Meta AI in Europe at the moment."

European regulators, on the other hand, have welcomed the pause.

"We are pleased that Meta has reflected on the concerns we shared from users of their service in the UK, and responded to our request to pause and review plans to use Facebook and Instagram user data to train generative AI," Stephen Almond, the executive director of regulatory risk at the UK Information Commissioner's Office, said in a statement.

The DPC's request followed a campaign by the advocacy group NOYB (None of Your Business), which filed 11 complaints against Meta in several European countries, Reuters reports. NOYB founder Max Schrems told the Irish Independent that the complaint hinged on Meta's legal basis for collecting personal data. "Meta is basically saying that it can use any data from any source for any purpose and make it available to anyone in the world, as long as it's done via AI technology," Schrems said. "This is clearly the opposite of GDPR compliance."

Read more here:

Meta is putting its European AI assistant launch on hold - The Verge


Clearview AI Used Your Face. Now You May Get a Stake in the Company. – The New York Times

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement.

"These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online, which is almost everybody, could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.)

If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearviews revenue, which it would be required to set aside.
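The settlement arithmetic reported in the filings can be sanity-checked directly. A minimal sketch (the valuation and percentages are the figures reported above; the variable names and rounding are mine):

```python
# Figures from the court filings reported above.
company_valuation = 225_000_000   # Clearview AI's stated valuation, in dollars
class_stake = 0.23                # equity share the settlement grants the class
revenue_share = 0.17              # alternative: share of revenue after two years

# Value of the class's collective stake at the current valuation.
stake_value = class_stake * company_valuation
print(f"Stake today: ${stake_value:,.0f}")  # about $52 million, matching the filings
```

Note that the stake's value, unlike a fixed cash fund, would rise or fall with any future sale or public offering, and the 17 percent revenue option depends on revenue Clearview has not disclosed.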


Here is the original post:

Clearview AI Used Your Face. Now You May Get a Stake in the Company. - The New York Times


Bill Gates on his nuclear energy investment, AI’s challenges – NPR

Bill Gates poses for a portrait at NPR headquarters in Washington, D.C., June 13, 2024. (Ben de la Cruz/NPR)

Artificial intelligence may come for our jobs one day, but before that happens, the data centers it relies on are going to need a lot of electricity.

So how do we power them and millions of U.S. homes and businesses without generating more climate-warming gases?

Microsoft founder, billionaire philanthropist and investor Bill Gates is betting that nuclear power is key to meeting that need, and he's digging into his own pockets to try to make it happen.

Gates has invested $1 billion into a nuclear power plant that broke ground in Kemmerer, Wyo., this week. The new facility, designed by the Gates-founded TerraPower, will be smaller than traditional fission nuclear power plants and, in theory, safer because it will use sodium instead of water to cool the reactor's core.

TerraPower estimates the plant could be built for up to $4 billion, which would be a bargain when compared to other nuclear projects recently completed in the U.S. Two nuclear reactors built from scratch in Georgia cost nearly $35 billion, the Associated Press reports.

Construction on the TerraPower plant is expected to be completed by 2030.

Gates sat for an interview at NPR headquarters with Morning Edition host Steve Inskeep to discuss his multibillion-dollar nuclear power investment and how he views the benefits and challenges of artificial intelligence, which the plant he's backing may someday power.

This interview has been edited for length and clarity.

Steve Inskeep: Let me ask about a couple of groups that you need to persuade, and one of them is long-time skeptics of the safety of nuclear power, including environmental groups, people who will put pressure on some of the political leaders that you've been meeting here in Washington. Are you convinced you can make a case that will persuade them?

Bill Gates: Well, absolutely. The safety case for this design is incredibly strong just because of the passive mechanisms involved. People have been talking about it for 60 years, that this is the way these things should work.

Meaning if it breaks down, it just cools off.

Exactly.

Something doesn't have to actively happen to cool it.

There's no high pressure on the reactor. Nothing that's pushing to get out. Water, as it's heated up, creates high pressure. And we have no high pressure and no complex systems needed to guarantee the safety. The Nuclear Regulatory Commission is the best in the world, and they'll question us and challenge us. And, you know, that's fantastic. That's a lot of what the next six years are all about.

Taillights trace the path of a motor vehicle at the Naughton Power Plant, Jan. 13, 2022, in Kemmerer, Wyo. Bill Gates and his energy company are starting construction at their Wyoming site, adjacent to the coal plant, for a next-generation nuclear power plant he believes will revolutionize how power is generated. (Natalie Behring/AP)

Let me ask about somebody else you need to persuade, and that is markets showing them that this makes financial sense. Sam Altman, CEO of OpenAI, is promoting and investing in nuclear power and is connected with a company that put its stock on the market and it immediately fell. Other projects that started to seem too expensive have been canceled in recent years. Can you persuade the markets?

Well, the current reactors are too expensive. There are companies working on fission and there's companies working on fusion. Fusion is further out. I hope that succeeds. I hope that in the long run it is a huge competitor to this TerraPower nuclear fission. Unlike previous reactors, we're not asking the ratepayers in a particular geography to guarantee the costs. So this reactor, all of the costs of building this are with the private company, TerraPower, in which I'm the biggest investor. And for strategic reasons, the U.S. government is helping with the first-of-the-kind costs.

The U.S. Department of Energy is funding half the costs of TerraPower's project, which includes the cost of designing and licensing the reactor, the AP reports.

I wonder if you can approach an ordinary investor and say, "This is a good risk. It's going to pay off in a reasonable time frame"?

You know, we're not choosing to take this company public, because understanding all of these issues are very complex. Many of our investors will be strategic investors who want to supply components, or they come from countries like Japan and Korea, where renewables are not as easy because of the geography. And so they want to go completely green. They, even more than the U.S., will need nuclear to do that.

What is the connection between AI and nuclear power?

Well, I suppose people want innovation to give us even cheaper electricity while making it clean. People who are optimistic about innovation in software and AI bring that optimism to the other things they do. There is a more direct connection, though, which is that the additional data centers that we'll be building look like they'll be as much as a 10% additional load for electricity. The U.S. hasn't needed much new electricity but with the rise in a variety of things from electric cars and buses to electric heat pumps to heating homes, demand for electricity is going to go up a lot. And now these data centers are adding to that. So the big tech companies are out looking at how they can help facilitate more power, so that these data centers can serve the exploding AI demand.

I'm interested in whether you see artificial intelligence as something that potentially could exacerbate income inequality, something that you as a philanthropist would think about.

Well, I think the two domains that I'm most involved in seeing how AI can help are health and education. I was in Newark, New Jersey, recently seeing the Khan Academy AI called Khanmigo being used in math classes, and I was very impressed how the teachers were using it to look at the data, divide students up to have personalized tutoring at the level of a kid who's behind or a kid who's ahead.

Whenever I get, like, a medical bill or a medical diagnosis, I put it in the AI and get it to explain it to me. You know, it's incredible at that. And if we look at countries like in Africa where the shortage of doctors is even more dramatic than in the United States, the idea that we can get more medical advice to pregnant women or anybody suffering from malaria, I'm very excited. And so driving it forward appropriately in those two domains I see as completely beneficial.

Did you understand what I was asking, about the concentration of power?

Absolutely. This is a very, very competitive field. I mean, Google is doing great work. Meta. Amazon. And it's not like there's a limited amount of money for new startups in this area. I mean, Elon Musk just raised $6 billion. It's kind of like the internet was in the year 2000. The barriers to entry are very, very low, which means we're moving quickly.

And the other thing about a concentration of power: Do you worry about, you know, more money for investors and fewer jobs for ordinary people? Like, they can get this wonderful AI technology, but they don't have a job?

I do worry about that. Basically, if you increase productivity, that should give you more options. We don't let robots play baseball. We're just never going to be interested in that. If robots get really good, and AIs get really good, are we in some ways going to want, in terms of job creation, to put limits on that, or tax those things? I've raised that in the past. They're not good enough yet to be raising those issues. But you know, say in three to five years, they could be good enough.

But for now, your hope is the AI doesn't replace my job. It makes me more productive in the job that I already have.

Well, there are few jobs that will replace you, just like computers did. In most things today, AI is a co-pilot; it raises your productivity. But if you're a support person taking support calls and you're twice as productive, some companies will take that productivity and answer more calls and have more quality of answer. Some companies will need less people, now freeing up labor to do other things. Do they go and help reduce class size or help the handicapped or help with the elderly? If we're able to produce more, then the pie is bigger. But are we smart in terms of tax policies or how we distribute that, so we actually take freed-up labor and put it into things we'd like to have?

The Bill & Melinda Gates Foundation is an NPR funder.

The audio version of this story was produced by Kaity Kline and edited by Reena Advani. The digital version was edited by Amina Khan.

Go here to read the rest:

Bill Gates on his nuclear energy investment, AI's challenges - NPR


I put Luma Dream Machine to the test with 7 AI videos here’s how it stacks up to Sora – Tom’s Guide

Luma Labs, the artificial intelligence company that previously put out the Genie generative 3D model, has entered the world of AI video with Dream Machine, and it is impressive.

Demand to try Dream Machine overloaded Luma's servers so much they had to introduce a queuing system. I waited all night for my prompts to become videos, but the actual "dreaming" process takes about two minutes once you reach the top of the queue.

Some of the videos shared on social media by people given early access seemed too impressive to be real, cherry-picked in the way you can with existing AI video models to show what they do best. But I've tried it, and it is that good.

While it doesn't seem to be at Sora's level, or even as good as Kling, what I've seen is one of the best AI video models yet for prompt following and motion understanding. In one way it is significantly better than Sora: anyone can use it today.

Each video generation is about five seconds long, nearly twice as long as those from Runway or Pika Labs without extensions, and there is evidence of some videos with more than one shot.

I created several clips while testing it out. One was ready within about three hours; the rest took most of the night. Some of them have questionable blending or blurring, but for the most part they capture movement better than any model I've tried.

I had them showing walking, dancing and even running. Older models might have people going backwards, or have a dolly zoom on a dancer standing still from prompts requesting that type of motion. Not Dream Machine.

Dream Machine captured the concept of the subject in motion brilliantly with no need to specify the area of motion. It was particularly good at running. But you have minimal fine-tuned or granular control beyond the prompt.

This may be because it's a new model, but everything is handled by the prompt, which the AI automatically improves using its own language model.

This is a technique also used by Ideogram and Leonardo in image generation and helps offer a more descriptive explanation of what you want to see.

It could also be a feature of video models built on transformer diffusion technology rather than straight diffusion. Haiper, the UK-based AI video startup, also says its model works best when you let the prompt do the work, and Sora is said to be little more than a simple text prompt with minimal additional controls.

I came up with a series of prompts to test out Dream Machine. For some of these I've also tried existing AI video models to see how they compare, and none of them achieved the same level of motion accuracy or realistic physics.

In some cases I just gave it a simple text prompt, enabling the enhance feature. For others I prompted it myself with a longer prompt and in a couple of cases I gave it an image I'd generated in Midjourney.

For this video I created a longer form and descriptive prompt. I wanted to create something that looked like it was filmed on a smartphone.

The prompt: "An excited child running towards an ice cream truck parked on a sunny street. The camera follows closely behind, capturing the back of the child's head and shoulders, their arms swinging in excitement, and the brightly colored ice cream truck getting closer. The video has a slight bounce to mimic the natural movement of running while holding a phone."

It created two videos. The first made it look like the ice cream truck was about to run the child over, and the arm movements on the child were a bit weird.

The second video was much better. It was certainly not realistic, but it had an impressive motion blur. The video above is the second shot, as it also captured the idea of a slight bounce in the camera motion.

This time I just gave Dream Machine a simple prompt and told it not to enhance it, just to take what it was given. It actually created two videos that flow from one another, as if they were the first and second shots in a scene.

The prompt: "A man discovers a magic camera that brings any photo to life, but chaos ensues when he accidentally snaps a picture of a dinosaur."

While there is a bit of warping, particularly on the fringes, the motion of the dinosaur crashing into the room respects real-world physics in an interesting way.

Next up, a complex prompt again, specifically one where Dream Machine has to consider light, shaky movement, and a relatively complex scene.

The prompt: "A person walking along a busy city street at dusk, holding a smartphone vertically. The camera captures their hand as they swing it slightly while walking, showing glimpses of shop windows, people passing by, and the glow of streetlights. The video has a slight handheld shake to mimic the natural movement of holding a phone."

This could have gone two ways: the AI could have captured the view from the camera in the person's hand (first person), or shown the person walking while holding the phone (third person). It opted for a third-person view.

It wasn't perfect, with some warping on the fringes, but it was better than I'd have expected given the elements of inconsistency in my prompt.

Next up, I started with an image generated in Midjourney of a dancer in silhouette. I've tried using this with Runway, Pika Labs and Stable Video Diffusion, and in each case the result showed movement into the shot but not movement of the character herself.

The prompt: "Create a captivating tracking shot of a woman dancing in silhouette against a contrasting, well-lit background. The camera should follow the dancer's fluid movements, maintaining focus on her silhouette throughout the shot."

It wasn't perfect. There was a weird warping of the leg as it spins, and the arms seem to combine with the fabric, but at least the character moves. That is a constant with Luma Dream Machine: it is so much better at motion.

One of the first prompts I try with any new generative AI image or video model is "cats dancing on the moon in a spacesuit". It is weird enough to have no existing videos to draw from and complex enough for a video model to struggle with the motion.

My exact prompt for Luma Dream Machine: "A cat in a spacesuit on the moon dancing with a dog." That was it, no refinement and no description of motion type; I left that to the AI.

What this prompt showed is that you do need to give the AI some instruction on how to interpret motion. It didn't do a bad job, better than the currently available alternative models, but far from perfect.

Next up was another one that started with a Midjourney image. It was a picture showing a bustling European food market. The original Midjourney prompt was: "An ultra-realistic candid smartphone photo of a bustling, open-air farmers market in a quaint, European town square."

For Luma Labs Dream Machine I simply added the instruction: "Walking through a busy, bustling food market." No other motion command or character instruction.

I wish I'd been more specific about how the characters should move. It captured the motion of the camera really well, but it resulted in a lot of warping and merging between people in the scene. This was one of my first attempts, so I hadn't yet tried out better techniques for prompting the model.

Finally, I decided to throw Luma Dream Machine a complete curveball. I'd been experimenting with another new AI model, Leonardo Phoenix, which promises impressive levels of prompt following. So I created a complex AI image prompt.

Phoenix did a good job but that was just an image, so I decided to put the exact same prompt into Dream Machine: "A surreal, weathered chessboard floating in a misty void, adorned with brass gears and cogs, where intricate steampunk chess pieces - including steam-powered robot pawns."

It pretty much ignored everything but the chessboard and created this surrealist video of chess pieces being swept off the end of the board as if they were melting. Because of the surrealism element I can't tell if this was deliberate or a failure of its motion understanding. It looks cool, though.

Luma Labs Dream Machine is an impressive next step in generative AI video. It is likely the company has utilized its experience in generative 3D modelling to improve motion understanding in video, but it still feels like a stopgap on the way to true AI video.

Over the past two years, AI image generation has gone from weird, low-res representations of humans with multiple fingers and faces looking more like something Edvard Munch might paint than a photograph, to being near indistinguishable from reality.

AI video is much more complex. Not only does it need to replicate the realism of a photograph, it needs an understanding of real-world physics and how that impacts motion across scenes, people, animals, vehicles and objects.

For now I think even the best AI video tools are meant to be used alongside traditional filmmaking rather than replace it but we are getting closer to what Ashton Kutcher predicts is an era where everyone can make their own feature length movies.

Luma Labs has created one of the closest-to-reality motion tools I've seen yet, but it still falls short of what is needed. I don't think it is at Sora's level, but I can't compare it to videos I've made myself using Sora, only to what I've seen from filmmakers and OpenAI itself, and those are likely cherry-picked from hundreds of failures.

Abel Art, an avid AI artist who had early access to Dream Machine, has created some impressive work. But he said he needed to create hundreds of generations for a single coherent minute of video once unusable clips are discarded.

His ratio is roughly 500 clips for 1 minute of finished video; with each clip at about 5 seconds, he's discarding around 98% of shots to create the perfect scene.
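The arithmetic behind that ratio is easy to verify:

```python
clips = 500
seconds_per_clip = 5
kept_seconds = 60  # one minute of finished video

raw_seconds = clips * seconds_per_clip          # 2,500 seconds generated
discard_rate = 1 - kept_seconds / raw_seconds   # fraction thrown away

print(f"{raw_seconds} s generated, {discard_rate:.0%} discarded")
# → 2500 s generated, 98% discarded
```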

I suspect the ratio for Pika Labs and Runway is higher, and reports suggest Sora has a similar discard rate, at least from the filmmakers that have used it.



Read the original here:

I put Luma Dream Machine to the test with 7 AI videos here's how it stacks up to Sora - Tom's Guide


What is an AI PC and do you actually need one? – Tom’s Hardware

Artificial Intelligence (AI) is the tech term of the moment, and everyone in the PC space wants a piece of it. Every major computer company, from the ones that make the best ultrabooks and laptops to those that put together the best gaming PCs and desktops, as well as most component manufacturers, want to say they're offering an AI PC.

But what is an AI PC, exactly? What does it do differently from the computer you already have? Do you need one at all?

Here's how to cut through the noise to learn what an AI PC actually is:

The many definitions of an AI PC

Since every PC company and component manufacturer wants to say it is offering an AI PC, there are a ton of different definitions out there. Much of the press and the industry has seemingly coalesced around Microsoft's definition, which Intel shared at an AI PC developer program showing off its Core Ultra "Meteor Lake" processors. That definition had three requirements, suggesting that an AI PC must have an NPU alongside its CPU and GPU, ship with Microsoft Copilot, and include a dedicated Copilot key on the keyboard.

This definition did rule out some existing PCs that had AMD and Intel's NPUs and Copilot but hadn't included the Copilot key, though most major laptop releases since then have included that key. If you take the Copilot key away, previous Windows on Arm devices, like those running the Qualcomm 8cx, would also qualify, as those have NPUs and can run Copilot, too. The Copilot key is little more than a branding exercise: it simply launches Copilot by simulating Shift + Windows + F23, and one could just hit Windows + C to get the same effect.

Since then, Microsoft has introduced its Copilot+ PCs, with laptops from Asus, Dell, Acer, Samsung, HP, Lenovo, and Microsoft's Surface brand. Those all use Qualcomm's Snapdragon X Elite and Plus processors at the moment, though Microsoft has said upcoming AMD Strix Point (aka Ryzen AI 300) and Intel Lunar Lake chips may also fit the bill. To be considered a Copilot+ PC, a laptop needs at least 16GB of RAM, 256GB of storage, and an on-board NPU capable of 40 TOPS (trillions of operations per second). The Qualcomm Snapdragon X Elite chips deliver 45 TOPS on the NPU.
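Those minimums reduce to a simple checklist. Here's a toy sketch: the `PCSpec` class and helper function are hypothetical, but the thresholds are the ones quoted above:

```python
from dataclasses import dataclass

@dataclass
class PCSpec:
    ram_gb: int
    storage_gb: int
    npu_tops: float

def is_copilot_plus(spec: PCSpec) -> bool:
    """Check a machine against Microsoft's stated Copilot+ PC minimums."""
    return spec.ram_gb >= 16 and spec.storage_gb >= 256 and spec.npu_tops >= 40

# Snapdragon X Elite laptop: qualifies (45 TOPS NPU)
print(is_copilot_plus(PCSpec(ram_gb=16, storage_gb=512, npu_tops=45)))   # → True
# Meteor Lake laptop: its NPU tops out around 10 TOPS, so it does not
print(is_copilot_plus(PCSpec(ram_gb=32, storage_gb=1024, npu_tops=10)))  # → False
```

Note that the NPU requirement, not memory or storage, is what rules out most current laptops.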

Copilot+ PCs, the first of which will launch on June 18, 2024, will have a series of four unique Windows AI features that other PCs cannot access. These include Cocreator (image generation in Paint), Windows Studio Effects (webcam blurring and special effects), real-time translation and captions for audio, and Recall. Recall, the controversial feature that keeps a record of almost everything you do on your PC so you can remember it, was just pulled from the Copilot+ launch-day build of Windows.

So, anyone who doesn't buy a Snapdragon X-powered laptop will have to wait on those special Windows features, likely for many months. AMD recently confirmed that its Ryzen AI 300 PCs won't be getting the Copilot+ features when they launch later this year, but will eventually. Desktop users are left out in the cold until at least the launch of Intel Arrow Lake in Q4. And anyone with a current-gen laptop or desktop is probably left out permanently.

Given the weak set of Copilot+ features, anyone who is actually paying attention probably isn't that sad about having to miss them or wait for them. There are many other ways to get an offline (or online) AI image generator, to do real-time translation, and to blur your webcam background. Recall is somewhat unique, but many people won't want it because of the privacy risks associated with taking constant screenshots of your work.

Intel, on its website, has taken a more general approach: "An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities."

AMD, via a staff post on its forums, has a similar definition: "An AI PC is a PC designed to optimally execute local AI workloads across a range of hardware, including the CPU (central processing unit), GPU (graphics processing unit), and NPU (neural processing unit)."

Who is making AI PCs?

If you're following the definition that most of the industry is using (CPU, GPU, and NPU), then the answer is that most laptop manufacturers are making them, including Dell, HP, Lenovo, Asus, Acer, Samsung, and Microsoft, among others.

These are laptops with Intel (Core Ultra "Meteor Lake"), AMD (Ryzen 7040 or 8040), or Qualcomm (Snapdragon X Elite or Plus) processors.

What is an NPU, exactly?

A neural processing unit, or NPU, is a processor that specializes in parallel computing specifically for AI workloads (GPUs often also use parallel processing, in their case to render advanced graphics). Intel, AMD, Qualcomm, and Apple attach these to the processor package alongside the CPU and integrated GPU.

They're also highly efficient, allowing for longer battery life than running these processes on a CPU or a GPU (even if, in some cases, those might be the more performant options). NPUs run matrix math, allowing them to do things such as video decoding, upscaling and background removal at a fraction of the power.

An NPU's performance is measured in tera operations per second, or TOPS. Intel's Meteor Lake processors can only do about 10 TOPS on the NPU, while Qualcomm's Snapdragon X processors, AMD's Ryzen AI 300 processors, and Intel's Lunar Lake chips will all deliver 45 or more TOPS from their NPUs.
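TOPS is straightforward throughput arithmetic: multiply-accumulate (MAC) units × operations per MAC × clock speed. The unit count and clock below are illustrative, not any vendor's published figures; they just show the scale needed to hit 45 TOPS:

```python
def peak_tops(mac_units: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in trillions of operations per second.

    Each MAC performs a multiply and an add, hence 2 ops per cycle.
    """
    ops_per_second = mac_units * ops_per_mac * clock_ghz * 1e9
    return ops_per_second / 1e12

# A hypothetical NPU with 11,250 INT8 MAC units clocked at 2.0 GHz
print(peak_tops(11_250, 2.0))  # → 45.0
```

A chip a quarter of that size at the same clock would land near Meteor Lake's roughly 10 TOPS.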

What about desktop PCs? What about GPUs?

At the moment, every PC with an NPU is a laptop (or, in some cases, a tablet or gaming handheld); neither AMD nor Intel has released a desktop chip with an NPU. A big reason to include an NPU in mobile devices is that NPUs are extremely power efficient, which helps with battery life. Desktop systems don't have batteries, so processor manufacturers can simply feed their chips more wattage instead.

Additionally, desktops are more likely to have discrete GPUs, which have also proven to be extremely adept at certain AI tasks (though this mostly applies to high-end parts, like the Nvidia GeForce RTX 4090, which has 24GB of VRAM to work with).

In fact, in a recent blog post, Nvidia's vice president of consumer AI, Jason Paul, suggested that the company started the AI PC boom all the way back in 2018, when it launched its first GPUs with RTX tensor cores and DLSS with the RTX 20-series and Turing architecture. That's yet another different definition from the ones laptop and CPU companies are making.

While this doesn't fit the conventional, NPU-focused definition that many are working with, there are a number of companies putting the AI label on desktop PCs, too.

Newegg, for example, is selling its ABS desktops (which start with consumer-grade parts like the Nvidia GeForce RTX 4070 Super and an Intel Core i5-14400F for $1,800 and go up from there) on an AI PC page alongside laptops using Intel NPUs, as well as desktop parts ("AI CPU," "AI GPU"). MSI lists a number of its desktops using 14th Gen Intel processors as "AI Gaming Desktops" because of software that the company includes.

The desktops that make more sense to call AI PCs are workstations that have the power to train some models. Maingear, for instance, sells its Pro AI systems with Intel Xeon chips and Nvidia's RTX 5000 and RTX 6000 Ada GPUs. These range from $28,000 to $60,000 and are definitely not for people looking just to generate images or photos.

Are Macs AI PCs?

Apple is introducing its take on AI to Mac laptops and desktops this fall. The company is launching a beta version of Apple Intelligence, with generative writing, images, custom emoji, and a more capable version of its Siri assistant, in macOS Sequoia (as well as iPadOS 18 and iOS 18).

Apple will support Apple Intelligence on Macs and iPads using its M1, M2, M3, and M4 families of processors. On the iPhone, it requires the A17 Pro in the iPhone 15 Pro and Pro Max (and presumably whatever chip is in the next iPhone this fall).

Because Apple uses its silicon across all of its Macs, its desktops, like the iMac, Mac Studio, and Mac Pro, also have NPUs (or, as Apple refers to them, Neural Engines). That means macOS will have AI-specific features running on NPUs on the desktop before Windows does.

Do I need an AI PC?

Right now? "Need" is a strong word.

AI features are still in their infancy. In many cases they're still in beta. Many popular chatbots, like OpenAI's ChatGPT and Google Gemini, are totally cloud-based, as is most of what Microsoft Copilot does.

That's not to say there are no features you'll miss. On Copilot+ PCs, image generation built into Windows 11 and Restyle photo editing are exclusive to the new Snapdragon systems. Some other software, like Adobe Photoshop, Lightroom, and Express, as well as DaVinci Resolve, will use NPUs for some AI features and the cloud for others. Those NPU features may be slower or nonexistent on older computers.

NPUs are also being used to power background blur and other camera effects on some PCs, though you don't inherently need an NPU for that kind of work (though it does free up the CPU and GPU).

But across the board, these features are still just rolling out, and it's unclear which will be the most useful to you. Local AI is more secure, as you don't have to send your information to another company's servers, but if you're using it in limited scenarios, the cloud functionality the most popular apps currently offer should more than suffice.

There's a lot of hype around AI. And while it has some legitimately cool uses, there are still plenty of places where it's unclear how much people will want to use it. If your current PC is still doing what you need it to do and is getting security updates, it may be worth waiting as more powerful tech comes out to support, presumably, more local AI tools, and see what you actually need.

It's already clear that companies will lock certain features to newer PCs. While early adopters may jump at the chance to try them, there's also no harm in letting those people be the beta testers (and many of these AI features are being labeled as beta by the companies that make them) and grab something when you know what you want.

Are AI PCs more secure?

One of the biggest pushes for AI on a laptop, rather than in the cloud, is security. After all, running an AI workload on an NPU in your computer means you don't necessarily need to send your information to the cloud.

That said, AI features themselves need to be built securely, too. After security researchers discovered how easy it was to steal data from Recall, Microsoft's new AI feature that takes screenshots of all your activity for later reference, the feature was pulled from the initial Copilot+ set as Microsoft promised more security work and testing with Windows Insider members.

If you are running a business and have an LLM managing top-secret corporate data, having it processed locally would be more meaningful. But most of the AI features currently being marketed are not mission-critical business tools. Perhaps if Microsoft 365 Copilot ran completely locally, that would be a plus for some companies.

No matter what type of PC you're using, you should still adhere to good security practices. Other humans are still outsmarting us there.

Right now, the term AI PC is still somewhat vague. CPU vendors and Microsoft are using the term to sell you new computers (currently, new laptops only) that have powerful NPUs built into their processors. Most of the generative AI features people actually use today (chatbots, image generators) are free to use in the cloud, making them less than must-haves in their local form.

But NPUs do promise to save battery life by performing some common tasks, such as video playback, at much lower power. Some web browsers, such as Edge, can use the GPU today to do AI upscaling of videos, but soon that will be offloaded to the NPU. For creative professionals doing audio, video, or photo editing while unplugged, the NPU will tackle tasks such as background noise removal at much lower power than your CPU or graphics card; of course, the software has to be optimized to do so. And, in the near future, we'll see more tasks transferred to NPUs, which, in turn, will increase system efficiency even more.

So, in the end, the killer feature of AI PCs (at least if they are laptops) could really be longer endurance. If your laptop goes from lasting 12 hours on a charge to lasting 20 hours on a charge because of NPU usage, that could be a huge deal. But the generative features are still in their infancy, so if you're not ready to upgrade, there's still time to wait and see what evolves.

Continue reading here:

What is an AI PC and do you actually need one? - Tom's Hardware


Yahoo resurrects Artifact inside a new AI-powered News app – The Verge

Artifact is dead; long live Yahoo's version of Artifact. The architecture behind Artifact, the news aggregation app built by Instagram co-founders Kevin Systrom and Mike Krieger, will live on inside the body of a brand-new Yahoo News app.

Available to download today on iOS and Android, the new Yahoo News app brings an AI-powered personalized news feed for users based on their interests, while a feature called Key Takeaways can give a bullet summary of a news article when a reader is feeling TL;DR.

Other features of the Yahoo News app include Top Stories, which picks up on trending stories for users to read and will soon include key takeaway summaries. You can block stories with undesired keywords as well as filter out certain publishers to your preference. And just like Artifact, Yahoo News also lets you flag content like clickbait-y headlines, then lets AI write something better.

Yahoo is also taking some of what it's building in the News app to the Yahoo News online homepage. Starting today, the website has a new layout that highlights top news, gives personalized recommendations, and shows trending topics. The new homepage experience is opt-in.

The rest is here:

Yahoo resurrects Artifact inside a new AI-powered News app - The Verge


My Take on the Reactions to Artificial Intelligence – Havana Times

Illustration: https://urbeuniversity.edu/

By Eduardo N. Cordovi Hernandez

HAVANA TIMES Here, in Havana, despite the blackouts and being technologically on par with other contemporary societies like the Australian Aborigines, the Pygmies of the Congo, or the Tuaregs of the Sahara, we still have a few hours of electricity to keep up with the gossip of the global neighborhood.

Since AI emerged, the praise hasn't stopped, and it didn't take long for criticism to appear in equal profusion. There's not a day that goes by without at least one article warning about the dangers of its use, or its incompetence in areas where immediate, emotionally informed decision-making is crucial, something that AI lacks.

Following the premise of the previous paragraph, it is generally considered that its extreme logic can lead to errors since it may accept ambiguous premises as valid. For example: if asked which US president appears on the hundred-dollar bill, it will answer Benjamin Franklin without clarifying that Franklin was not a president, implicitly suggesting he was, which is false.

The issue is that we demand a level of certainty and guarantee from this tool that we ourselves, as humans, cannot provide, even though many are sure that we are the non plus ultra of Creation, which I am not here to debate.

A broad sector of human, non-artificial intelligence considers that AI can make mistakes, and I am astonished, to put it euphemistically: Really? And I immediately ask: And humans don't?

What I see in this problem is the following: if a human makes a mistake, we make them pay for the consequences of their error in some way, whether with their money, their freedom, or their life. In reality, with their freedom or their life they don't pay anything, but because these are so valuable and at risk of being lost, people sharpen their attention to avoid making mistakes. In simpler or less important cases, they pay by being discredited, mocked, and ridiculed.

The fact is that AI has no sense of humor or fear of making mistakes; it doesn't care! It refers to the database it has (which it hasn't worked on, as a Spaniard would say) and that's it.

All these flaws, and others, such as those stemming from ambition or the supposed evil of so many other interests, are concentrated in us humans, and no one seems concerned that individuals with psychopathic personality traits, certain character problems, or emotional imbalances may take on leadership positions in companies or governments!

It's not that such individuals are psychopaths, paranoiacs, or megalomaniacs; it's about certain traits that can be anticipated while the individual remains within the range of normal acceptance, yet in a supposed normalcy like the one we have, these traits can still be factors that enhance talent.

Well, maybe I'm being a bit tragic; but even if we lower the bar, we will see that we remain the same, because someone who does not show emotional maturity, let's say being a bit demanding, someone who loses their cool or their temper quickly, or who, having such a good character, doesn't stress out, is already on the path to being a promising candidate for inefficacy or inefficiency in their own life. Imagine if such a person became president of an important corporation, or presided over the government of a nation, and suddenly believed they were the chosen one and started changing laws to be president for life.

Universal History, meaning the universe, the cosmos, the galaxies, and their adjacent neighborhoods, is full of such cases; I refer to cases that exceeded criminal levels. And now a work tool appears with the same potential as a human like Albert Einstein, of whom I read every day, in the news suggested by Google, that other scientists think, believe, claim, and even prove that he was wrong, that the Theory of Relativity does not explain many things well. The upshot is that all the wise people on the planet also have an opinion about AI. I would say: what does one more stripe matter to the tiger?

It's totally laughable; at least here in my neighborhood, they just cut the electricity. How exquisite!

Read more from Eduardo N. Cordovi here.

Follow this link:

My Take on the Reactions to Artificial Intelligence - Havana Times


Pope to G7: AI is neither objective nor neutral – Vatican News – English

In an address to the G7 summit, Pope Francis discusses the threat and promise of artificial intelligence, the techno-human condition, human vs algorithmic decision-making, AI-written essays, and the necessity of political collaboration on technology.

By Joseph Tulloch

On Friday afternoon, Pope Francis addressed the G7 leaders summit in Puglia, Italy.

He is the first Pope to ever address the forum, which brings together the leaders of the US, UK, Italy, France, Canada, Germany, and Japan.

The Pope dedicated his address to the G7 to the subject of artificial intelligence.

He began by saying that the birth of AI represents a true cognitive-industrial revolution which will lead to complex epochal transformations.

These transformations, the Pope said, have the potential to be both positive (for example, the democratization of access to knowledge, the exponential advancement of scientific research, and a reduction in demanding and arduous work) and negative (for instance, greater injustice between advanced and developing nations or between dominant and oppressed social classes).

Noting that AI is above all a tool, the Pope spoke of what he called the techno-human condition.

He explained that he was referring to the fact that humans' relationship with the environment has always been mediated by the tools they have produced.

Some, the Pope said, see this as a weakness, or a deficiency; however, he argued, it is in fact something positive. It stems, he said, from the fact that we are beings inclined to what lies outside of us, beings radically open to the beyond.

This openness, Pope Francis said, is both the root of our techno-human condition and the root of our openness to others and to God, as well as the root of our artistic and intellectual creativity.

The Pope then moved on to the subject of decision-making.

He said that AI is capable of making algorithmic choices, that is, technical choices among several possibilities, based either on well-defined criteria or on statistical inferences.

Human beings, however, not only choose, but in their hearts are capable of deciding.

This is because, the Pope explained, they are capable of wisdom, of what the Ancient Greeks called phronesis (a type of intelligence concerned with practical action), and of listening to Sacred Scripture.

It is thus very important, the Pope stressed, that important decisions always be left to the human person.

As an example of this principle, the Pope pointed to the development of lethal autonomous weapons, which can take human life with no human input, and said that they must ultimately be banned.

The Pope also stressed that the algorithms used by artificial intelligence to arrive at choices are neither objective nor neutral.

He pointed to the algorithms designed to help judges decide whether to grant home confinement to prison inmates. These programmes, he said, make a choice based on data such as the type of offence, behaviour in prison, psychological assessment, and the prisoner's ethnic origin, educational attainment, and credit rating.

However, the Pope stressed, this is reductive: human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.

A further problem, the Pope emphasised, is that algorithms can only examine realities formalised in numerical terms:

The Pope then turned to consider the fact that many students are increasingly relying on AI to help them with their studies, and in particular, with writing essays.

It is easy to forget, the Pope said, that strictly speaking, so-called generative artificial intelligence is not really generative it does not develop new analyses or concepts but rather repeats those that it finds, giving them an appealing form.

This, the Pope said, risks undermining the educational process itself.

Education, he emphasised, should offer the chance for authentic reflection, but instead runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.

Bringing his speech to a close, the Pope emphasised that AI is always shaped by the worldview of those who invented and developed it.

A particular concern in this regard, he said, is that today it is increasingly difficult to find agreement on the major issues concerning social life: there is less and less consensus, that is, regarding the philosophy that should be shaping artificial intelligence.

What is necessary, therefore, the Pope said, is the development of an algor-ethics, a series of global and pluralistic principles which are capable of finding support from cultures, religions, international organizations and major corporations.

If we struggle to define a single set of global values, the Pope said, we can at least find shared principles with which to address and resolve dilemmas or conflicts regarding how to live.

Faced with this challenge, the Pope said, political action is urgently needed.

Only a healthy politics, involving the most diverse sectors and skills, the Pope stressed, is capable of dealing with the challenges and promises of artificial intelligence.

The goal, Pope Francis concluded, is not stifling human creativity and its ideals of progress but rather directing that energy along new channels.

You can find the full text of the Pope's address to the G7 here.

Read more here:

Pope to G7: AI is neither objective nor neutral - Vatican News - English
