Category Archives: AI

Meta is putting its European AI assistant launch on hold – The Verge

Meta is putting plans for its AI assistant on hold in Europe after receiving objections from Ireland's privacy regulator, the company announced on Friday.

In a blog post, Meta said the Irish Data Protection Commission (DPC) asked the company to delay training its large language models on content that had been publicly posted to Facebook and Instagram profiles.

Meta said it is "disappointed by the request, particularly since we incorporated regulatory feedback and the European [Data Protection Authorities] have been informed since March." Per the Irish Independent, Meta had recently begun notifying European users that it would collect their data, and offered an opt-out option in an attempt to comply with European privacy laws.

Meta said it will continue to work collaboratively with the DPC. But its blog post says that Google and OpenAI have already used data from Europeans to train AI, and claims that if regulators don't let it use users' information to train its models, Meta can only deliver an inferior product. "Put simply, without including local information we'd only be able to offer people a second-rate experience. This means we aren't able to launch Meta AI in Europe at the moment."

European regulators, on the other hand, have welcomed the pause.

"We are pleased that Meta has reflected on the concerns we shared from users of their service in the UK, and responded to our request to pause and review plans to use Facebook and Instagram user data to train generative AI," Stephen Almond, the executive director of regulatory risk at the UK Information Commissioner's Office, said in a statement.

The DPC's request followed a campaign by the advocacy group NOYB (None of Your Business), which filed 11 complaints against Meta in several European countries, Reuters reports. NOYB founder Max Schrems told the Irish Independent that the complaint hinged on Meta's legal basis for collecting personal data. "Meta is basically saying that it can use any data from any source for any purpose and make it available to anyone in the world, as long as it's done via AI technology," Schrems said. "This is clearly the opposite of GDPR compliance."

Read more here:

Meta is putting its European AI assistant launch on hold - The Verge

Clearview AI Used Your Face. Now You May Get a Stake in the Company. – The New York Times

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement.

"These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online (so, almost everybody) could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.)
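That parenthetical figure is easy to verify; here is a quick sanity check, using the valuation and stake reported in the court filings cited above:

```python
# 23 percent of Clearview AI's reported $225 million valuation.
stake, valuation = 0.23, 225_000_000
share = stake * valuation
print(f"${share:,.0f}")  # $51,750,000, i.e. about $52 million
```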

If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearviews revenue, which it would be required to set aside.


Here is the original post:

Clearview AI Used Your Face. Now You May Get a Stake in the Company. - The New York Times

What is an AI PC and do you actually need one? – Tom’s Hardware

Artificial Intelligence (AI) is the tech term of the moment, and everyone in the PC space wants a piece of it. Every major computer company, from those that make the best ultrabooks and laptops to those that build the best gaming PCs and desktops, along with most component manufacturers, wants to say it's offering an AI PC.

But what is an AI PC, exactly? What does it do differently from the computer you already have? Do you need one at all?

Here's how to cut through the noise to learn what an AI PC actually is:

The many definitions of an AI PC

Since every PC company and component manufacturer wants to say it is offering an AI PC, there are a ton of different definitions out there. Much of the press and the industry has seemingly coalesced around Microsoft's definition, which Intel shared at an AI PC developer program showing off its Core Ultra "Meteor Lake" processors. That definition had three requirements, suggesting that an AI PC:

- Has a neural processing unit (NPU)
- Runs Microsoft's Copilot assistant
- Includes a dedicated Copilot key

This definition did rule out some existing PCs that had AMD's and Intel's NPUs and Copilot, but hadn't included the Copilot key. Most major laptop releases since then have included that key. If you take the Copilot key away, previous Windows on Arm devices, like those running Qualcomm's 8cx, would also qualify, as those have NPUs and can run Copilot, too. The Copilot key is little more than a branding exercise, as it simply launches Copilot by simulating Shift + Windows + F23, and one could just hit Windows + C to get the same effect.

Since then, Microsoft has introduced its Copilot+ PCs, with laptops from Asus, Dell, Acer, Samsung, HP, Lenovo, and Microsoft's Surface brand. Those all use Qualcomm's Snapdragon X Elite and Plus processors at the moment, though Microsoft has said upcoming AMD Strix Point (aka Ryzen AI 300) and Intel Lunar Lake chips may also fit the bill. To be considered a Copilot+ PC, laptops need to have at least 16GB RAM, 256GB storage, and an on-board NPU that's capable of 40 TOPS (trillions of operations per second). The Qualcomm Snapdragon X Elite chips support 45 TOPS on the NPU.

Copilot+ PCs, the first of which will launch on June 18, 2024, will have a series of four unique Windows AI features that other PCs cannot access. These include Cocreator (image generation in Paint), Windows Studio Effects (webcam blurring and special effects), real-time translation and captions for audio, and Recall. Recall, the controversial feature that keeps a record of almost everything you do on your PC so you can remember it, was just pulled from the Copilot+ launch-date build of Windows.

So, anyone who doesn't buy a Snapdragon X-powered laptop will have to wait on those special Windows features, likely for many months. AMD recently confirmed that its Ryzen AI 300 PCs won't be getting the Copilot+ features when they launch later this year, but will eventually. Desktop users are left out in the cold until at least the launch of Intel Arrow Lake in Q4. And anyone with a current-gen laptop or desktop is probably left out permanently.

Given the weak set of Copilot+ features, anyone who is actually paying attention probably isn't that sad about having to miss them or wait for them. There are many other ways to get an offline (or online) AI image generator, to do real-time translation, and to blur your webcam background. Recall is somewhat unique, but many people won't want it because of the privacy risks associated with taking constant screenshots of your work.

Intel, on its website, has taken a more general approach: "An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities."

AMD, via a staff post on its forums, has a similar definition: "An AI PC is a PC designed to optimally execute local AI workloads across a range of hardware, including the CPU (central processing unit), GPU (graphics processing unit), and NPU (neural processing unit)."

Who is making AI PCs?

If you're following the definition that most of the industry is using (CPU, GPU, and NPU), then the answer is that most laptop manufacturers are making them. They include Dell, HP, Lenovo, Asus, Acer, Samsung, and Microsoft, among others.

These are laptops with Intel (Core Ultra "Meteor Lake"), AMD (Ryzen 7040 or 8040), or Qualcomm (Snapdragon X Elite or Plus) processors.

What is an NPU, exactly?

A neural processing unit, or NPU, is a processor that specializes in parallel computing for AI workloads (GPUs also often use parallel processing, in order to render advanced graphics). Intel, AMD, Qualcomm, and Apple attach these to the processor package alongside the CPU and integrated GPU.

They're also highly efficient, allowing for longer battery life than running these processes on a CPU or a GPU (even if, in some cases, those might be the more performant options). NPUs run matrix math, allowing them to do things such as video decoding, upscaling and background removal at a fraction of the power.

An NPU's performance is measured in tera operations per second, or TOPS. Intel's Meteor Lake processors and their NPUs can only do about 10 TOPS, while Qualcomm's Snapdragon X processors, AMD's Ryzen AI 300 processors and Intel's Lunar Lake chips will all deliver 45 or more TOPS from their NPUs.
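To put those TOPS figures in perspective, here is a rough, illustrative back-of-envelope sketch (mine, not from any vendor; real NPU throughput varies enormously with numeric precision, memory bandwidth, and sustained utilization, so the 50% utilization figure below is purely an assumption):

```python
def matmul_ops(m: int, n: int, k: int) -> int:
    """Operation count for an (m x k) @ (k x n) matrix multiply,
    counting one multiply and one add per multiply-accumulate."""
    return 2 * m * n * k

def seconds_at_tops(ops: int, tops: float, utilization: float = 0.5) -> float:
    """Time to execute `ops` operations at a given TOPS rating,
    assuming the hardware sustains the stated utilization fraction."""
    return ops / (tops * 1e12 * utilization)

ops = matmul_ops(4096, 4096, 4096)   # ~137 billion operations
t_10 = seconds_at_tops(ops, 10)      # a ~10 TOPS NPU (Meteor Lake class)
t_45 = seconds_at_tops(ops, 45)      # a ~45 TOPS NPU (Snapdragon X class)
print(f"{ops:,} ops: {t_10 * 1e3:.1f} ms vs {t_45 * 1e3:.1f} ms")
```

Under those assumptions, the 45 TOPS part finishes the same workload 4.5x faster, which is the gap the Copilot+ 40 TOPS floor is drawing a line around.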

What about desktop PCs? What about GPUs?

At the moment, every PC with an NPU is a laptop (or, in some cases, a tablet or gaming handheld); neither AMD nor Intel has released a desktop chip with an NPU. A big reason to include an NPU in mobile devices is that NPUs are extremely power efficient, which helps with battery life. Desktop systems don't have batteries, so processor manufacturers can simply feed their processors more wattage.

Additionally, desktops are more likely to have discrete GPUs, which have also proven to be extremely adept at certain AI tasks (though this is mostly high-end parts, like the Nvidia GeForce RTX 4090, which has 24GB of RAM to work with).

In fact, in a recent blog post, Nvidia's vice president of consumer AI, Jason Paul, suggested that the company started the AI PC boom all the way back in 2018, when it launched its first GPUs with RTX tensor cores and DLSS with the RTX 20-series and Turing architecture. That's yet another different definition from the ones laptop and CPU companies are making.

While this doesn't fit the conventional, NPU-focused definition that many are working with, there are a number of companies putting the AI label on desktop PCs, too.

Newegg, for example, is selling its ABS desktops (which start with consumer-grade parts like the Nvidia GeForce RTX 4070 Super and an Intel Core i5-14400F for $1,800 and go up from there) on an AI PC page alongside laptops using Intel NPUs, as well as desktop parts ("AI CPU," "AI GPU"). MSI lists a number of its desktops using 14th Gen Intel processors as "AI Gaming Desktops" because of software that the company includes.

The desktops that make more sense to call AI PCs are workstations that have the power to train some models. Maingear, for instance, sells its Pro AI systems with Intel Xeon chips and Nvidia's RTX 5000 and RTX 6000 Ada GPUs. These range from $28,000 to $60,000 and are definitely not for people looking just to generate images or photos.

Are Macs AI PCs?

Apple is introducing its take on AI to Mac laptops and desktops this fall. The company is launching a beta version of Apple Intelligence, with generative writing, images, custom emoji, and a more capable version of its Siri assistant, in macOS Sequoia this fall (as well as iPadOS 18 and iOS 18).

Apple will support Apple Intelligence on Macs and iPads using its M1, M2, M3, and M4 families of processors. On the iPhone, it will support the A17 Pro in the iPhone 15 Pro and Pro Max (and presumably whatever is in the next iPhone this fall).

Because Apple uses its own silicon across all of its Macs, its desktops, like the iMac, Mac Studio, and Mac Pro, also have NPUs (or, as Apple refers to them, Neural Engines). That means macOS will have AI-specific features running on NPUs on the desktop before Windows does.

Do I need an AI PC?

Right now? "Need" is a strong word.

AI features are still in their infancy. In many cases they're still in beta. Many popular chatbots, like OpenAI's ChatGPT and Google Gemini, are totally cloud-based, as is most of what Microsoft Copilot does.

That's not to say there are no features you'll miss. On Copilot+ PCs, the image generation built into Windows 11 and Restyle photo editing are exclusive to the new Snapdragon systems. Some other software, like Adobe Photoshop, Lightroom, and Express, as well as DaVinci Resolve, will use NPUs for some AI features and the cloud for others. Those NPU features may be slower or non-existent on older computers.

NPUs are also being used to power background blur and other camera effects on some PCs, though you don't inherently need an NPU for that kind of work (though it does free up the CPU and GPU).

But across the board, these features are still just rolling out, and it's unclear which will be the most useful to you. Local AI is more secure, as you don't have to send your information to another company's servers, but if you're using it in limited scenarios, the cloud functionality the most popular apps currently offer should more than suffice.

There's a lot of hype around AI. And while it has some legitimately cool uses, there are still plenty of places where it's unclear how much people will want to use it. If your current PC is still doing what you need it to do and is getting security updates, it may be worth waiting to see what you actually need as more powerful tech comes out to support, presumably, more local AI tools.

It's already clear that companies will lock certain features to newer PCs. While early adopters may jump at the chance to try them, there's also no harm in letting those people be the beta testers (many of these AI features are labeled as beta by the companies that make them) and buying in once you know what you want.

Are AI PCs more secure?

One of the biggest pushes for AI on a laptop, rather than in the cloud, is security. After all, running an AI workload on an NPU in your computer means you don't necessarily need to send your information to the cloud.

That being said, it also means AI features need to be built securely. After security researchers discovered how easy it was to steal data from Recall, Microsoft's new AI feature that takes screenshots of all your activity for later reference, it was pulled from the initial Copilot+ set of features as Microsoft promised more security and testing with Windows Insider members.

If you are running a business and having an LLM manage top-secret corporate data, having it processed locally would be more meaningful. But most of the AI features currently being marketed are not mission-critical business tools. Perhaps if Microsoft 365 Copilot ran completely locally, that would be a plus for some companies.

No matter what type of PC you're using, you should still adhere to good security practices. Other humans are still outsmarting us there.

Right now, the term AI PC is still somewhat vague. CPU vendors and Microsoft are using the term to sell you new computers (currently, new laptops only) that have powerful NPUs built into their processors. Most of the generative AI features people actually use today (chatbots, image generators) are free to use in the cloud, making them less than must-haves in their local form.

But NPUs do promise to save battery life by performing some common tasks, such as video playback, at much lower power. Some web browsers, such as Edge, can use the GPU today to do AI upscaling of videos, but soon that will be offloaded to the NPU. For creative professionals doing audio, video or photo editing while unplugged, the NPU will tackle tasks such as background noise removal at much lower power than your CPU or graphics card; of course, the software has to be optimized to do so. And, in the near future, we'll see more tasks transferred to NPUs, which, in turn, will increase system efficiency even more.

So, in the end, the killer feature of AI PCs (at least if they are laptops) could really be longer endurance. If your laptop goes from lasting 12 hours on a charge to lasting 20 hours on a charge because of NPU usage, that could be a huge deal. But the generative features are still in their infancy, so if you're not ready to upgrade, there's still time to wait and see what evolves.

Continue reading here:

What is an AI PC and do you actually need one? - Tom's Hardware

I put Luma Dream Machine to the test with 7 AI videos: here's how it stacks up to Sora – Tom's Guide

Luma Labs, the artificial intelligence company that previously put out the Genie generative 3D model, has entered the world of AI video with Dream Machine, and it is impressive.

Demand to try Dream Machine overloaded Luma's servers so much they had to introduce a queuing system. I waited all night for my prompts to become videos, but the actual "dreaming" process takes about two minutes once you reach the top of the queue.

Some of the videos shared on social media by people given early access seemed too impressive to be real, cherry-picked the way you can with existing AI video models to show what they do best. But I've tried it, and it is that good.

While it doesn't seem to be Sora level, or even as good as Kling, what I've seen is one of the best prompt-following and motion-understanding AI video models yet. In one way it is significantly better than Sora: anyone can use it today.

Each video generation is about five seconds long, which is nearly twice as long as those from Runway or Pika Labs without extensions, and there is evidence of some videos with more than one shot.

I created several clips while testing it out. One was ready within about three hours; the rest took most of the night. Some of them have questionable blending or blurring, but for the most part they capture movement better than any model I've tried.

I had them showing walking, dancing and even running. Older models might have people going backwards, or have a dolly zoom on a dancer standing still from prompts requesting that type of motion. Not Dream Machine.

Dream Machine captured the concept of the subject in motion brilliantly with no need to specify the area of motion. It was particularly good at running. But you have minimal fine-tuned or granular control beyond the prompt.

This may be because it's a new model, but everything is handled by the prompt, which the AI automatically improves using its own language model.

This is a technique also used by Ideogram and Leonardo in image generation and helps offer a more descriptive explanation of what you want to see.

It could also be a feature of video models built on transformer diffusion technology rather than straight diffusion. Haiper, the UK-based AI video startup, also says its model works best when you let the prompt do the work, and Sora is said to offer little more than a simple text prompt with minimal additional controls.

I came up with a series of prompts to test out Dream Machine. For some of these I've also tried existing AI video models to see how they compare, and none of them achieved the same level of motion accuracy or realistic physics.

In some cases I just gave it a simple text prompt, enabling the enhance feature. For others I prompted it myself with a longer prompt and in a couple of cases I gave it an image I'd generated in Midjourney.

For this video I created a longer form and descriptive prompt. I wanted to create something that looked like it was filmed on a smartphone.

The prompt: "An excited child running towards an ice cream truck parked on a sunny street. The camera follows closely behind, capturing the back of the child's head and shoulders, their arms swinging in excitement, and the brightly colored ice cream truck getting closer. The video has a slight bounce to mimic the natural movement of running while holding a phone."

It created two videos. The first made it look like the ice cream truck was about to run the child over, and the arm movements on the child were a bit weird.

The second video was much better: certainly not realistic, but it had an impressive motion blur. The video above is the second one, as it also captured the idea of a slight bounce in camera motion.

This time I just gave Dream Machine a simple prompt and told it not to enhance the prompt, just to take what it is given. It actually created two videos that flow into one another, as if they were the first and second shots in a scene.

The prompt: "A man discovers a magic camera that brings any photo to life, but chaos ensues when he accidentally snaps a picture of a dinosaur."

While there is a bit of warping, particularly on the fringes, the motion of the dinosaur crashing into the room respects real-world physics in an interesting way.

Next up, a complex prompt again, specifically one where Dream Machine has to consider light, shaky movement and a relatively complex scene.

The prompt: "A person walking along a busy city street at dusk, holding a smartphone vertically. The camera captures their hand as they swing it slightly while walking, showing glimpses of shop windows, people passing by, and the glow of streetlights. The video has a slight handheld shake to mimic the natural movement of holding a phone."

This could have gone two ways: the AI could have captured the view from the camera in the person's hand, or captured the person walking while holding the camera (first versus third person). It opted for a third-person view.

It wasn't perfect, with some warping on the fringes, but it was better than I'd have expected given the elements of inconsistency in my prompt.

Next up, I started with an image generated in Midjourney of a dancer in silhouette. I've tried using this with Runway, Pika Labs and Stable Video Diffusion, and in each case it shows movement into the shot but not of the character moving.

The prompt: "Create a captivating tracking shot of a woman dancing in silhouette against a contrasting, well-lit background. The camera should follow the dancer's fluid movements, maintaining focus on her silhouette throughout the shot."

It wasn't perfect. There was a weird warping of the leg as it spins, and the arms seem to combine with the fabric, but at least the character moves. That is a constant with Luma Dream Machine: it is so much better at motion.

One of the first prompts I try with any new generative AI image or video model is "cats dancing on the moon in a spacesuit." It is weird enough not to have existing videos to draw from and complex enough for video models to struggle with the motion.

My exact prompt for Luma Dream Machine: "A cat in a spacesuit on the moon dancing with a dog." That was it: no refinement and no description of motion type. I left that to the AI.

What this prompt showed is that you do need to give the AI some instruction in how to interpret motion. It didn't do a bad job, better than the currently available alternatives, but far from perfect.

Next up was another one that started with a Midjourney image. It was a picture showing a bustling European food market. The original Midjourney prompt was: "An ultra-realistic candid smartphone photo of a bustling, open-air farmers market in a quaint, European town square."

For Luma Labs Dream Machine I simply added the instruction: "Walking through a busy, bustling food market." No other motion command or character instruction.

I wish I'd been more specific about how the characters should move. It captured the motion of the camera really well, but it resulted in a lot of warping and merging between people in the scene. This was one of my first attempts, so I hadn't yet tried out better techniques for prompting the model.

Finally, I decided to throw Luma Dream Machine a complete curveball. I'd been experimenting with another new AI model, Leonardo Phoenix, which promises impressive levels of prompt following. So I created a complex AI image prompt.

Phoenix did a good job but that was just an image, so I decided to put the exact same prompt into Dream Machine: "A surreal, weathered chessboard floating in a misty void, adorned with brass gears and cogs, where intricate steampunk chess pieces - including steam-powered robot pawns."

It pretty much ignored everything but the chess board and created this surrealist video of chess pieces being swept off the end of the board as if they were melting. Because of the surrealism element I can't tell if this was deliberate or a failure of its motion understanding. It looks cool though.

Luma Labs Dream Machine is an impressive next step in generative AI video. It is likely the company has utilized its experience in generative 3D modelling to improve motion understanding in video, but it still feels like a stopgap on the way to true AI video.

Over the past two years, AI image generation has gone from weird, low-res representations of humans with multiple fingers, and faces looking more like something Edvard Munch might paint than a photograph, to being nearly indistinguishable from reality.

AI video is much more complex. Not only does it need to replicate the realism of a photograph, but it must also understand real-world physics and how that impacts motion across scenes, people, animals, vehicles and objects.

For now I think even the best AI video tools are meant to be used alongside traditional filmmaking rather than replace it but we are getting closer to what Ashton Kutcher predicts is an era where everyone can make their own feature length movies.

Luma Labs has created one of the closest-to-reality motion tools I've seen yet, but it still falls short of what is needed. I don't think it is Sora level, but I can't compare it to videos I've made myself using Sora, only to what I've seen from filmmakers and OpenAI themselves, and those are likely cherry-picked from hundreds of failures.

Abel Art, an avid AI artist who had early access to Dream Machine, has created some impressive work. But he said he needed to create hundreds of generations for just one minute of coherent video, once unusable clips were discarded.

His ratio is roughly 500 clips for one minute of video; with each clip at about 5 seconds, he's discarding about 98% of shots to create the perfect scene.
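That discard figure checks out arithmetically (assuming five-second clips, as stated):

```python
# Abel Art's reported ratio: ~500 five-second clips per finished minute.
clips, clip_seconds, kept_seconds = 500, 5, 60
generated_seconds = clips * clip_seconds          # 2,500 seconds generated
discard_rate = 1 - kept_seconds / generated_seconds
print(f"{discard_rate:.1%} discarded")            # 97.6% discarded
```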

I suspect the ratio for Pika Labs and Runway is higher and reports suggest Sora has a similar discard rate, at least from filmmakers that have used it.



Read the original here:

I put Luma Dream Machine to the test with 7 AI videos: here's how it stacks up to Sora - Tom's Guide

My Take on the Reactions to Artificial Intelligence – Havana Times

Illustration: https://urbeuniversity.edu/

By Eduardo N. Cordovi Hernandez

HAVANA TIMES – Here, in Havana, despite the blackouts and being technologically on par with other contemporary societies like the Australian Aborigines, the Pygmies of the Congo, or the Tuaregs of the Sahara, we still have a few hours of electricity to keep up with the gossip of the global neighborhood.

Since AI emerged, the praise hasn't stopped, and it didn't take long for criticisms to appear in equal profusion. There's not a day that goes by without at least one article warning about the dangers of its use, or its incompetence in certain areas where immediate decision-making from an emotional perspective is crucial, something that AI lacks.

Following the premise of the previous paragraph, it is generally considered that its extreme logic can lead to errors, since it may accept ambiguous premises as valid. For example: if asked which US president appears on the hundred-dollar bill, it will answer "Benjamin Franklin" without clarifying that Franklin was not a president, implicitly suggesting he was, which is false.

The issue is that we demand a level of certainty and guarantee from this tool that we ourselves, as humans, cannot provide, even though many are sure that we are the non plus ultra of Creation, which I am not here to debate.

A broad sector of human, non-artificial intelligence considers that AI can make mistakes, and I am astonished, to put it euphemistically: Really? And I immediately ask: And humans don't?

What I see in this problem is the following: If a human makes a mistake, we make them pay for the consequences of their error in some way, whether with their money, their freedom, or their life. In reality, with their freedom or their life they don't pay anything, but because these are so valuable and at risk of being lost, people sharpen their attention to avoid making mistakes. In simpler or less important cases, they pay by being discredited, mocked, and ridiculed.

The fact is that AI has no sense of humor or fear of making mistakes; it doesn't care! It refers to the database it has (which it hasn't worked on, as a Spaniard would say) and that's it.

All these flaws, and others, such as those stemming from ambition or the supposed evil of so many other interests, are concentrated in us humans, and no one seems concerned that individuals with psychopathic personality traits, certain character problems, or emotional imbalances may take on leadership positions in companies or governments!

It's not that such individuals are psychopaths, paranoiacs, or megalomaniacs; it's about certain traits that can be anticipated while the individuals remain within the range of normal acceptance, but in a supposed normalcy like the one we have, these traits can still be factors that enhance talent.

Well, maybe I'm being a bit tragic; but even if we lower the bar, we will see that we remain the same, because someone who does not show emotional maturity (let's say being a bit demanding), someone who loses their cool or their temper quickly, or who, having such a good character, doesn't stress out, is already on the path to being a promising candidate for inefficacy or inefficiency in their own life. Imagine if such a person became president of an important corporation, or presided over the government of a nation, and suddenly believed they were the chosen one, and started changing laws to be president for life.

Universal History, meaning the universe, the cosmos, the galaxies, and their adjacent neighborhoods, is full of such cases, including cases that went beyond the criminal. And now a work tool appears with the same potential as a human like Albert Einstein, of whom I read every day, in the news suggested by Google, that other scientists think, believe, claim, and even prove that he was wrong, that the Theory of Relativity does not explain many things well. The result, to conclude, is that all the wise people on the planet also have an opinion about AI. I would say: what does one more stripe matter to the tiger?

It's totally laughable, at least here in my neighborhood, as they just cut the electricity. How exquisite!

Read more from Eduardo N. Cordovi here.

Follow this link:

My Take on the Reactions to Artificial Intelligence - Havana Times

Pope to G7: AI is neither objective nor neutral – Vatican News – English

In an address to the G7 summit, Pope Francis discusses the threat and promise of artificial intelligence, the techno-human condition, human vs algorithmic decision-making, AI-written essays, and the necessity of political collaboration on technology.

By Joseph Tulloch

On Friday afternoon, Pope Francis addressed the G7 leaders summit in Puglia, Italy.

He is the first Pope to ever address the forum, which brings together the leaders of the US, UK, Italy, France, Canada, Germany, and Japan.

The Pope dedicated his address to the G7 to the subject of artificial intelligence.

He began by saying that the birth of AI represents a true cognitive-industrial revolution which will lead to complex epochal transformations.

These transformations, the Pope said, have the potential to be both positive (for example, the democratization of access to knowledge, the exponential advancement of scientific research, and a reduction in demanding and arduous work) and negative (for instance, greater injustice between advanced and developing nations, or between dominant and oppressed social classes).

Noting that AI is above all a tool, the Pope spoke of what he called the techno-human condition.

He explained that he was referring to the fact that humans' relationship with the environment has always been mediated by the tools that they have produced.

Some, the Pope said, see this as a weakness, or a deficiency; however, he argued, it is in fact something positive. It stems, he said, from the fact that we are beings inclined to what lies outside of us, beings radically open to the beyond.

This openness, Pope Francis said, is both the root of our techno-human condition and the root of our openness to others and to God, as well as the root of our artistic and intellectual creativity.

The Pope then moved on to the subject of decision-making.

He said that AI is capable of making algorithmic choices, that is, technical choices among several possibilities based either on well-defined criteria or on statistical inferences.

Human beings, however, not only choose, but in their hearts are capable of deciding.

This is because, the Pope explained, they are capable of wisdom, of what the Ancient Greeks called phronesis (a type of intelligence concerned with practical action), and of listening to Sacred Scripture.

The Pope thus stressed that important decisions must always be left to the human person.

As an example of this principle, the Pope pointed to the development of lethal autonomous weapons, which can take human life with no human input, and said that they must ultimately be banned.

The Pope also stressed that the algorithms used by artificial intelligence to arrive at choices are neither objective nor neutral.

He pointed to the algorithms designed to help judges in deciding whether to grant home-confinement to prison inmates. These programmes, he said, make a choice based on data such as the type of offence, behaviour in prison, psychological assessment, and the prisoner's ethnic origin, educational attainment, and credit rating.

However, the Pope stressed, this is reductive: human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.

A further problem, the Pope emphasised, is that algorithms can only examine realities formalised in numerical terms.

The Pope then turned to consider the fact that many students are increasingly relying on AI to help them with their studies, and in particular, with writing essays.

It is easy to forget, the Pope said, that strictly speaking, so-called generative artificial intelligence is not really generative: it does not develop new analyses or concepts, but rather repeats those that it finds, giving them an appealing form.

This, the Pope said, risks undermining the educational process itself.

Education, he emphasised, should offer the chance for authentic reflection, but instead runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.

Bringing his speech to a close, the Pope emphasised that AI is always shaped by the worldview of those who invented and developed it.

A particular concern in this regard, he said, is that today it is increasingly difficult to find agreement on the major issues concerning social life: there is less and less consensus, that is, regarding the philosophy that should be shaping artificial intelligence.

What is necessary, therefore, the Pope said, is the development of an algor-ethics, a series of global and pluralistic principles which are capable of finding support from cultures, religions, international organizations and major corporations.

If we struggle to define a single set of global values, the Pope said, we can at least find shared principles with which to address and resolve dilemmas or conflicts regarding how to live.

Faced with this challenge, the Pope said, political action is urgently needed.

Only a healthy politics, involving the most diverse sectors and skills, the Pope stressed, is capable of dealing with the challenges and promises of artificial intelligence.

The goal, Pope Francis concluded, is not stifling human creativity and its ideals of progress but rather directing that energy along new channels.

You can find the full text of the Pope's address to the G7 here.

Read more here:

Pope to G7: AI is neither objective nor neutral - Vatican News - English

Bill Gates on his nuclear energy investment, AI’s challenges – NPR

Bill Gates poses for a portrait at NPR headquarters in Washington, D.C., on June 13, 2024. Ben de la Cruz/NPR

Artificial intelligence may come for our jobs one day, but before that happens, the data centers it relies on are going to need a lot of electricity.

So how do we power them and millions of U.S. homes and businesses without generating more climate-warming gases?

Microsoft co-founder, billionaire philanthropist and investor Bill Gates is betting that nuclear power is key to meeting that need, and he's digging into his own pockets to try to make it happen.

Gates has invested $1 billion into a nuclear power plant that broke ground in Kemmerer, Wyo., this week. The new facility, designed by the Gates-founded TerraPower, will be smaller than traditional fission nuclear power plants and, in theory, safer because it will use sodium instead of water to cool the reactor's core.

TerraPower estimates the plant could be built for up to $4 billion, which would be a bargain when compared to other nuclear projects recently completed in the U.S. Two nuclear reactors built from scratch in Georgia cost nearly $35 billion, the Associated Press reports.

Construction on the TerraPower plant is expected to be completed by 2030.

Gates sat for an interview at NPR headquarters with Morning Edition host Steve Inskeep to discuss his multibillion dollar nuclear power investment and how he views the benefits and challenges of artificial intelligence, which the plant hes backing may someday power.

This interview has been edited for length and clarity.

Steve Inskeep: Let me ask about a couple of groups that you need to persuade, and one of them is long-time skeptics of the safety of nuclear power, including environmental groups, people who will put pressure on some of the political leaders that you've been meeting here in Washington. Are you convinced you can make a case that will persuade them?

Bill Gates: Well, absolutely. The safety case for this design is incredibly strong just because of the passive mechanisms involved. People have been talking about it for 60 years, that this is the way these things should work.

Meaning if it breaks down, it just cools off.

Exactly.

Something doesn't have to actively happen to cool it.

There's no high pressure on the reactor. Nothing that's pushing to get out. Water, as it's heated up, creates high pressure. And we have no high pressure and no complex systems needed to guarantee the safety. The Nuclear Regulatory Commission is the best in the world, and they'll question us and challenge us. And, you know, that's fantastic. That's a lot of what the next six years are all about.

Taillights trace the path of a motor vehicle at the Naughton Power Plant, Jan. 13, 2022, in Kemmerer, Wyo. Bill Gates and his energy company are starting construction at their Wyoming site adjacent to the coal plant for a next-generation nuclear power plant he believes will revolutionize how power is generated. Natalie Behring/AP

Let me ask about somebody else you need to persuade, and that is markets, by showing them that this makes financial sense. Sam Altman, CEO of OpenAI, is promoting and investing in nuclear power and is connected with a company whose stock fell immediately after it went on the market. Other projects that started to seem too expensive have been canceled in recent years. Can you persuade the markets?

Well, the current reactors are too expensive. There are companies working on fission and there are companies working on fusion. Fusion is further out. I hope that succeeds. I hope that in the long run it is a huge competitor to this TerraPower nuclear fission. Unlike previous reactors, we're not asking the ratepayers in a particular geography to guarantee the costs. So this reactor, all of the costs of building this are with the private company, TerraPower, in which I'm the biggest investor. And for strategic reasons, the U.S. government is helping with the first-of-the-kind costs.

The U.S. Department of Energy is funding half the costs of TerraPower's project, which includes the cost of designing and licensing the reactor, the AP reports.

I wonder if you can approach an ordinary investor and say, "This is a good risk. It's going to pay off in a reasonable time frame"?

You know, we're not choosing to take this company public, because understanding all of these issues is very complex. Many of our investors will be strategic investors who want to supply components, or they come from countries like Japan and Korea, where renewables are not as easy because of the geography. And so they want to go completely green. They, even more than the U.S., will need nuclear to do that.

What is the connection between AI and nuclear power?

Well, I suppose people want innovation to give us even cheaper electricity while making it clean. People who are optimistic about innovation in software and AI bring that optimism to the other things they do. There is a more direct connection, though, which is that the additional data centers that we'll be building look like they'll be as much as a 10% additional load for electricity. The U.S. hasn't needed much new electricity, but with the rise in a variety of things, from electric cars and buses to electric heat pumps for heating homes, demand for electricity is going to go up a lot. And now these data centers are adding to that. So the big tech companies are out looking at how they can help facilitate more power, so that these data centers can serve the exploding AI demand.

I'm interested in whether you see artificial intelligence as something that potentially could exacerbate income inequality, something that you as a philanthropist would think about.

Well, I think the two domains that I'm most involved in seeing how AI can help are health and education. I was in Newark, New Jersey, recently seeing the Khan Academy AI called Khanmigo being used in math classes, and I was very impressed how the teachers were using it to look at the data, divide students up to have personalized tutoring at the level of a kid who's behind or a kid who's ahead.

Whenever I get, like, a medical bill or a medical diagnosis, I put it in the AI and get it to explain it to me. You know, it's incredible at that. And if we look at countries like in Africa where the shortage of doctors is even more dramatic than in the United States, the idea that we can get more medical advice to pregnant women or anybody suffering from malaria, I'm very excited. And so driving it forward appropriately in those two domains I see as completely beneficial.

Did you understand what I was asking, about the concentration of power?

Absolutely. This is a very, very competitive field. I mean, Google is doing great work. Meta. Amazon. And it's not like there's a limited amount of money for new startups in this area. I mean, Elon Musk just raised $6 billion. It's kind of like the internet was in the year 2000. The barriers to entry are very, very low, which means we're moving quickly.

And the other thing about a concentration of power: do you worry about, you know, more money for investors and fewer jobs for ordinary people? Like they can get this wonderful AI technology, but they don't have a job?

I do worry about that. Basically, if you increase productivity, that should give you more options. We don't let robots play baseball. We're just never going to be interested in that. If robots get really good, and AIs get really good, are we in some ways going to want, in terms of job creation, to put limits on that, or tax those things? I've raised that in the past. They're not good enough yet to be raising those issues. But you know, say in three to five years, they could be good enough.

But for now, your hope is the AI doesn't replace my job. It makes me more productive in the job that I already have.

Well, there are a few jobs where it will replace you, just like computers did. In most things today, AI is a co-pilot; it raises your productivity. But if you're a support person taking support calls and you're twice as productive, some companies will take that productivity and answer more calls with higher-quality answers. Some companies will need fewer people, freeing up labor to do other things. Do they go and help reduce class size or help the handicapped or help with the elderly? If we're able to produce more, then the pie is bigger. But are we smart in terms of tax policies or how we distribute that, so we actually take freed-up labor and put it into things we'd like to have?

The Bill & Melinda Gates Foundation is an NPR funder.

The audio version of this story was produced by Kaity Kline and edited by Reena Advani. The digital version was edited by Amina Khan.

Go here to read the rest:

Bill Gates on his nuclear energy investment, AI's challenges - NPR

Yahoo resurrects Artifact inside a new AI-powered News app – The Verge

Artifact is dead, long live Yahoo's version of Artifact. The architecture behind Artifact, the news aggregation app built by Instagram co-founders Kevin Systrom and Mike Krieger, will live on inside the body of a brand-new Yahoo News app.

Available to download today on iOS and Android, the new Yahoo News app brings an AI-powered personalized news feed for users based on their interests, while a feature called Key Takeaways can give a bullet summary of a news article when a reader is feeling TL;DR.

Other features of the Yahoo News app include Top Stories, which picks up on trending stories for users to read and will soon include key takeaway summaries. You can block stories with undesired keywords as well as filter out certain publishers to your preference. And just like Artifact, Yahoo News also lets you flag content like clickbait-y headlines, then lets AI write something better.

Yahoo is also taking some of what it's building in the News app to the Yahoo News online homepage. Starting today, the website has a new layout that highlights top news, gives personalized recommendations, and shows trending topics. The new homepage experience is opt-in.

The rest is here:

Yahoo resurrects Artifact inside a new AI-powered News app - The Verge

Tempus AI Surges in Nasdaq Debut Following IPO – Investopedia

Shares of healthcare technology firm Tempus AI surged Friday as they began trading on the Nasdaq.

Trading under the ticker symbol "TEM," Tempus AI shares opened at $40 each, $3 above their initial public offering (IPO) price. They jumped nearly 18% to $43.64 at one point before fading to $38.30 as of 2 p.m. ET, a rise of 3.5%.

The company sold 11.1 million shares in the IPO, and the $37-per-share price was at the high end of its target range of $35 to $37. Tempus said the sale raised $410.7 million.

The company uses AI to analyze medical tests to help doctors better treat their patients. Alphabet's (GOOGL) Google is among the investors in the firm.

Founder and Chief Executive Officer (CEO) Eric Lefkofsky, who was also the founder of e-commerce marketplace Groupon (GRPN), told CNBC that while Tempus is still a money-losing business, it expects "sometime in 2025 to turn the corner and be both cash flow positive and adjusted EBITDA positive."

More here:

Tempus AI Surges in Nasdaq Debut Following IPO - Investopedia

Pope to G7: AI a ‘cognitive-industrial revolution’ that could threaten human dignity – National Catholic Reporter

Pope Francis on June 14 issued a stark warning to world leaders that artificial intelligence has led to a "cognitive-industrial revolution" that could undermine human dignity, in an historic speech where he became the first pontiff to ever address the annual "G7" summit.

Artificial intelligence (or "AI"), said the pope, is both "an exciting and fearsome tool" where the "benefits or harm it will bring will depend on its use. We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs."

"Human dignity itself depends on it," he cautioned.

The 20-minute speech took place in the southern Italian region of Puglia, where the "Group of Seven" leaders from Canada, France, Germany, Italy, Japan, the United Kingdom and the United States are gathered from June 13-15 for the intergovernmental political and economic forum known as the "G7."

"We are enthusiastic when we imagine the advances that can result from artificial intelligence," the pope told the world leaders, "but, at the same time, we are fearful when we acknowledge the dangers inherent in its use."

Characterizing AI as a "sui generis" tool, Francis, who in 2023 was himself the victim of a "deep-fake" AI-generated image that went viral, said that human freedom requires tremendous responsibility when navigating how to develop and use such technologies.

"When our ancestors sharpened flint stones to make knives, they used them both to cut hides for clothing and to kill each other," he said. "The same could be said of other more advanced technologies, such as the energy produced by the fusion of atoms, as occurs within the sun, which could be used to produce clean, renewable energy or to reduce our planet to a pile of ashes."

Francis was invited to address the G7 by its host, Italian Prime Minister Giorgia Meloni, who said that she hoped global leaders would benefit from the Vatican's ongoing ethical reflections on the usage of AI.

The pope, along with a number of Vatican officials, has been sounding the alarm about both the possibilities and peril of AI since the launch of the "Rome Call for AI Ethics" in 2020.

The document identifies six core principles for AI ethics including transparency, inclusion and responsibility with a number of leading big tech firms such as Cisco, IBM and Microsoft, alongside a range of international organizations and religious leaders, signing onto the principles.

More recently, the pope dedicated his 2024 World Day of Peace message to the theme of AI, where he called for a binding international treaty to regulate its development and warned that it could lead to a "technological dictatorship" if not properly regulated.

Last year, the European Union reached a landmark agreement that provided the first-ever global framework for artificial intelligence regulation. While a bipartisan group of lawmakers in the United States has proposed similar legislation, the timeline for its consideration remains unclear.

During his June 14 address to the world leaders, Francis specifically identified lethal autonomous weapons (or "killer robots"), which can independently search for and engage targets, and called for a ban of their use.

"This starts from an effective and concrete commitment to introduce ever greater and proper human control," the pope said. "No machine should ever choose to take the life of a human being."

While the United States is not known to currently possess such weapons, there are no U.S. laws banning their development or usage.

In his speech, the pope cited a number of specific examples where he said AI programs revealed their limitations: judges using computer programs to determine prison sentences, chatbots that mimic human interactions and students who use such technologies to prepare papers.

In each scenario, the pope noted that AI offered some utility, but ultimately offered diminished or flawed outcomes.

"Indeed, we seem to be losing the value and profound meaning of one of the fundamental concepts of the West: that of the human person," the pope lamented.

Along with the heads of state of the traditional G7 countries, a number of other world leaders were on hand for what was labeled an "outreach" session, which also included Argentine President Javier Milei, Indian Prime Minister Narendra Modi and Brazilian President Lula da Silva, among others. Francis greeted each head of state individually before offering his remarks to the roundtable.

The pope, who traveled to the summit via helicopter and is spending less than 10 hours on the ground in Puglia before returning to Rome, is scheduled to hold closed-door bilateral meetings with at least nine heads of state, including U.S. President Joe Biden, later today.

As he concluded his remarks, Francis pleaded with the politicians, many of whom face upcoming elections and are managing fragile governing coalitions at home, to use their power in service of the common good and to engage in a "healthy politics" to navigate the challenges artificial intelligence presents.

"It is up to everyone to make good use of it," the pope said, "but the onus is on politics to create the conditions for such good use to be possible and fruitful."

Go here to see the original:

Pope to G7: AI a 'cognitive-industrial revolution' that could threaten human dignity - National Catholic Reporter