
First international benchmark of artificial intelligence and machine learning in nuclear reactor physics – Nuclear Energy Agency

Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML) have led to unprecedented interest among nuclear engineers. Despite the progress, the lack of dedicated benchmark exercises for the application of AI and ML techniques in nuclear engineering analyses limits their applicability and broader usage. In line with the NEA strategic target to contribute to building a solid scientific and technical basis for the development of future generation nuclear systems and deployment of innovations, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) of the Nuclear Science Committee's Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS). The Task Force will focus on designing benchmark exercises that will target important AI and ML activities, and cover various computational domains of interest, from single physics to multi-scale and multi-physics.

A significant milestone has been reached with the successful launch of the first comprehensive benchmark of AI and ML to predict the Critical Heat Flux (CHF). In a boiling system, the CHF is the limit beyond which wall heat transfer decreases significantly; it is often referred to as the critical boiling transition, boiling crisis and, depending on operating conditions, departure from nucleate boiling (DNB) or dryout. In a heat transfer-controlled system, such as a nuclear reactor core, exceeding the CHF can result in a significant wall temperature increase leading to accelerated wall oxidation, and potentially to fuel rod failure. While constituting an important design limit criterion for the safe operation of reactors, CHF is challenging to predict accurately due to the complexities of the local fluid flow and heat exchange dynamics.

Current CHF models are mainly based on empirical correlations developed and validated for a specific application domain. Through this benchmark, improvements in CHF modelling are sought using AI and ML methods that directly leverage a comprehensive experimental database provided by the US Nuclear Regulatory Commission (NRC), which forms the cornerstone of the benchmark exercise. The improved modelling can lead to a better understanding of the safety margins and provide new opportunities for design or operational optimisations.
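To give a sense of what a participant's baseline might involve, here is a minimal sketch of a data-driven CHF surrogate: a generic tabular regressor fitted to measured CHF values and scored on held-out data. This is an illustration only; the file name and column names are hypothetical placeholders, not the actual schema of the NRC database.

```python
# Minimal sketch of a data-driven CHF surrogate, assuming the experimental
# database has been exported to CSV. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("chf_experiments.csv")  # hypothetical export of the database
features = ["pressure", "mass_flux", "inlet_subcooling", "diameter", "heated_length"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["chf"], test_size=0.2, random_state=0
)

# A generic tabular regressor stands in for an empirical correlation.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Held-out error against measurements: the basic quantity participants
# would compare across modelling approaches.
mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"Held-out MAPE: {mape:.1%}")
```

The benchmark's harder questions, such as how well a model extrapolates outside the experimental domain and whether its outputs can be trusted for safety analysis, are exactly what such a baseline helps expose.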

The CHF benchmark phase 1 kick-off meeting on 30 October 2023 gathered 78 participants, representing 48 institutions from 16 countries. This robust engagement underscores the profound interest and commitment within the global scientific community toward integrating AI and ML technologies into nuclear engineering. The ultimate goal of the Task Force is to leverage insights from the benchmarks and distill lessons learnt to provide guidelines for future AI and ML applications in scientific computing in nuclear engineering.


See the rest here:
First international benchmark of artificial intelligence and machine ... - Nuclear Energy Agency


11+ best Cyber Monday Chromebook deals from $79 – Laptop Mag

The best Cyber Monday Chromebook deals offer deep end-of-year discounts on Chrome OS-powered laptops. Beyond cheap budget Chromebooks, we're seeing significant price cuts on Plus Chromebooks. These more powerful Chromebooks feature the same Intel hardware found in Windows laptops, making them a great alternative. Google's lightweight and efficient Chrome OS running on Intel's powerful chips provides near-instant boot-ups, snappy performance, and long battery life.

So whether you want a budget laptop for basic tasks or a Plus Chromebook for productivity and gaming, it's a wonderful time to save. I'm sharing the best Chromebook deals you can get for Cyber Monday so you get first dibs on the best end-of-year discounts. See my recommended deals below.

Cyber Monday is upon us, and deals on the most wished-for holiday tech are still live. Visit our Cyber Monday 2023 deals hub for the best deals still available.


Visit link:
11+ best Cyber Monday Chromebook deals from $79 - Laptop Mag


I tried hard to criticize the $249 HP Core i3 laptop but failed … – TechRadar

HP has yet another fantastic deal, this time coming from Walmart, from the same family as the $179 HP laptop deal we found earlier. The HP Laptop 15 costs just $249 and yet delivers the sort of firepower you'd see on a laptop twice the price.

At its heart is an Intel Core i3-1215U processor. This is no ordinary Core i3: it has six cores, which makes it far more powerful than previous Core i3 models. For example, it is about 80% faster than the Core i3-1115G4, which is only one year older.

Like many mainstream laptops, it also has 8GB of RAM and a 256GB SSD. Unlike most, though, the memory is dual-channel, which means you can expect slightly better performance. Its 15.6-inch display is a full HD model, rather than the HD resolution I'm used to seeing in this price range. The color gamut and brightness, however, are poor, as expected, but it is an acceptable compromise at this price.

It has a separate numeric keypad, five ports, two microphones, an HD webcam, a card reader, two speakers, Wi-Fi 6, and a 41WHr battery (which HP claims will last around 450 minutes). It weighs in at 1.7kg, which is really not bad at all for a laptop with a 15.6-inch display.

Note that it is made mostly of recycled plastic, a step in the right direction toward more sustainable technology. The laptop runs Windows 11 Home in S mode, but you can switch it to the classic Windows 11 Home for free.

There are two other notable features that set it apart from other laptops costing $300 or less. The first is that it comes with 25GB of Dropbox cloud storage for a year. The second is the inclusion of a fingerprint reader, making it ideal for anyone looking for an affordable business laptop with added security features.

Read this article:
I tried hard to criticize the $249 HP Core i3 laptop but failed ... - TechRadar


Artificial Intelligence images of ‘average’ person from each US state … – UNILAD

Artificial Intelligence has been asked to create a host of things since its creation.

Another thing AI has created is what it believes your average Joe might look like depending on which US state they live in.

And it's safe to say the results are questionable.

We should really all know by now that there's no such thing as the 'average person', but there are stereotypes, fashion trends and local traditions, and it's these factors that seem to have inspired AI when it came to creating images of the 'average' human from a specific US state.

In a post on the Reddit thread r/midjourney, a Redditor shared a series of AI-generated images from a variety of states, along with the caption: "The most stereotypical person in [state name]."

The caption presumably represented the prompt they'd given to the AI program before letting it do its thing, with the chosen states including Texas, California, Colorado, Florida, Oregon and Maine.

And the results of the prompt are interesting, to say the least. Where to begin?

Kicking things off with Texas, we have a man dressed in some 'cowboy'-style attire, including a large cowboy hat, a brown shirt tucked into blue jeans and a wide belt buckle.

It's all flower-power in California, where the AI human has long hair blowing in the breeze, big sunglasses and a floral shirt.

While in Colorado it's a different kind of plant getting all the attention, with a woman perched on what looks to be a mountaintop packed with marijuana plants.

She's wearing a green hoodie and headband, with what looks to be a smoking joint in her hand.

I'm not sure how many people hike up weed mountains to get a hit in Colorado, but okay.

Next let's head to Florida, where a man with a long white beard stands on a road with long blue shorts, a baggy pink shirt and a sunhat, before moving to Oregon, where we're greeted by a woman with short greyish-blue hair.

And things take a dramatic turn as we head to Maine, a state known for its lobster.

To represent this, our AI man stands with a hat featuring an actual lobster on his head.

Again, I'm not sure how 'average' that is, but I've never been to Maine myself.

The AI-generated images have sparked mixed responses after being shared online, with one outraged Reddit user claiming the original poster 'clearly used unflattering prompts for the red states'.

Another unimpressed viewer commented: "Hi. Maine here. Can you not put dead lobsters all over everything? K thx."

The creations have left many people intrigued, though, with a lot of comments calling for more AI-generated images from even more states.

Read more:
Artificial Intelligence images of 'average' person from each US state ... - UNILAD


WD Executes Next Phase of HDD Roadmap, Begins 24TB CMR … – Embedded Computing Design

By Ken Briodagh

Senior Technology Editor

Embedded Computing Design

November 20, 2023

News

As new applications, use cases and connected devices multiply, Western Digital is deploying advanced hard disk drive (HDD) technologies to help data centers design more cost-efficient, scalable and sustainable infrastructure. In a recent release, the company announced that it is now shipping its new 10-disk 24TB CMR HDD family for hyperscale, cloud and enterprise data center customers.

According to the announcement, the new 28TB SMR HDD is also ramping up, and the 26TB SMR HDD reportedly accounted for nearly half of the company's data center exabytes shipped in the first quarter of fiscal year 2024.

"With these new offerings, Western Digital is once again proving that hard drives are not just keeping pace, they are forging a path forward, ensuring that data-intensive applications of today and tomorrow have a strong foundation to build on while the industry prepares for HAMR," said Ed Burns, Research Director, Hard Disk Drive and Storage Technologies, IDC. "We are seeing strong momentum for Western Digital's SMR HDDs and believe that SMR adoption will continue to grow as their new 28TB SMR HDD offers the next compelling TCO value proposition that cloud customers cannot ignore."

"This new generation of drives is built on a proven platform and is designed for data center customers who are consistently looking for the highest capacity storage to help them achieve the lowest possible total cost of ownership (TCO)," WD said in the release. These high-capacity HDDs reportedly are a step forward in meeting the company's sustainability targets, and toward helping data center customers meet theirs, in that the 28TB and 24TB HDDs are built with 40 percent (by weight) recycled content and are more than 10 percent more energy efficient per terabyte.

Product Details

At 28TB, the new Ultrastar DC HC680 SMR HDD is designed for storage density in hyperscale, cloud and enterprise applications, WD said. The company said it is ideal for sequential write workloads where storage density, watt/TB and $/TB are critical parameters, and it targets environments such as bulk storage, online backup and archive, video surveillance, cloud storage, storage for regulatory compliance, big data storage, and other applications where data may be infrequently accessed.

The Ultrastar DC HC580 24TB CMR HDD with improved OptiNAND technology is the next step in data density, according to the announcement, and is set to allow data center customers to maximize storage within the same footprint, including in power-constrained environments. For better storage density, the HC580 reportedly can enable up to 612TB of raw storage per rack unit in a 4U 102-drive bay solution. These drives are also more power efficient, providing 12 percent lower watt/TB compared to the company's previous 22TB version.
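The quoted density figure follows directly from the enclosure geometry; a quick arithmetic check of the 102-drive, 4U configuration mentioned above:

```python
# Arithmetic behind the quoted rack density: 102 bays of 24TB drives in 4U.
drives_per_shelf = 102
tb_per_drive = 24
rack_units = 4

raw_tb = drives_per_shelf * tb_per_drive   # 2448 TB of raw capacity per shelf
tb_per_u = raw_tb / rack_units             # 612 TB per rack unit, as quoted
print(f"{raw_tb} TB raw per shelf, {tb_per_u:.0f} TB per rack unit")
```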

The Ultrastar DC HC680 and HC580 HDDs are currently being qualified by select hyperscalers, CSPs and OEM customers, and are now available for large enterprise customers looking for the highest capacity with lower power per terabyte for designing more efficient storage systems and data centers.

The new Ultrastar drives are also being qualified and integrated into the company's Ultrastar Data60 and Data102 JBOD hybrid storage platforms, which are key building blocks for designing next-generation disaggregated storage and software-defined storage (SDS) infrastructure, WD said. Each storage platform comes with IsoVibe and ArcticFlow technologies for improved performance and reliability. Ultrastar Data60 and Data102 platforms with the new HDDs will be available beginning next month.

Western Digital also announced that it is now shipping its 24TB WD Gold CMR SATA HDD into the channel. These drives, the company said, are specifically designed for system integrators and resellers serving enterprises and SMBs who need more reliable storage for big data and enterprise storage workloads than traditional client HDDs provide. Leveraging innovations from the Ultrastar technology platform, the WD Gold features vibration-protection technology, is designed to handle workloads of up to 550TB per year, and comes with a five-year limited warranty.

"New and existing endpoints from industries, connected devices, digital platforms, AI innovations, autonomous machines and more create a staggering amount of data each day. This relentless creation of data ultimately finds its way to the cloud, which is underpinned by our continued advancements in HDDs," said Ashley Gorakhpurwalla, EVP and GM, HDD Business Unit, Western Digital. "As a strategic partner to cloud and data center customers around the world, we are extending the value of our proven HDD platform and technology innovations to deliver the highest capacity HDD storage and TCO value that our customers demand."

Ken Briodagh is a writer and editor with two decades of experience under his belt. He is in love with technology and if he had his druthers, he would beta test everything from shoe phones to flying cars. In previous lives, he's been a short order cook, telemarketer, medical supply technician, mover of the bodies at a funeral home, pirate, poet, partial alliterist, parent, partner and pretender to various thrones. Most of his exploits are either exaggerated or blatantly false.


Read the rest here:
WD Executes Next Phase of HDD Roadmap, Begins 24TB CMR ... - Embedded Computing Design


Pentagon faces future with lethal AI weapons on the battlefield – NBC Chicago

Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces missions and helped Ukraine in its war against Russia. It tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative, dubbed Replicator, seeks to galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are "small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy - including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will have fully autonomous lethal weapons within the next few years. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

It's unclear if the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.

Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

"The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough, said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon's portfolio boasts more than 800 AI-related unclassified projects, much of it still in testing. Typically, machine learning and neural networks are helping humans gain insights and create efficiencies.

"The AI that we've got in the Department of Defense right now is heavily leveraged and augments people," said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot. "There's no AI running around on its own. People are using it to try to understand the fog of war better."

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.


The U.S. aims to keep pace.

An operational prototype called Machina used by Space Force keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs, drawing instantly on astrodynamics and physics datasets, Col. Wallace Rhet Turnbull of Space Systems Command told a conference in August.

Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.

Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division, more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East, spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda, and now aggregates and analyzes a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine help organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

To survive on the battlefield these days, military units must be small, mostly invisible and move quickly, because exponentially growing networks of sensors let anyone "see anywhere on the globe at any moment," then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. "And what you can see, you can shoot."

To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks called Joint All-Domain Command and Control to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.

Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they "may be winning here to a certain extent."

"The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it -- and on the rapid timelines required," he said. Brose's 2020 book, "The Kill Chain," argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is hard at work on "human-machine teaming." Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril's autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

Industry advances in computer vision have been essential. Shield AI lets drones operate without GPS, communications or even remote pilots. It's the key to its Nova, a quadcopter, which U.S. special operations units have used in conflict areas to scout buildings.

On the horizon: The Air Force's "loyal wingman" program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

The loyal wingman timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may be partly intended to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of "Four Battlegrounds."

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

Nathan Michael, chief technology officer at Shield AI, estimates they will have an autonomous swarm of at least three uncrewed aircraft ready in a year using its V-BAT aerial drone. The U.S. military currently uses the V-BAT -- without an AI mind -- on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. "Everything is crawl, walk, run -- unless you're setting yourself up for failure."

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.


The department's current chief digital and AI officer, Craig Martell, is determined not to let that happen.

"Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable -- and will always take the responsibility," said Martell, who previously headed machine learning at LinkedIn and Lyft. "That will never not be the case."

As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. "As the responsible agent, I would not deploy that except in very constrained situations," he said. "Now extrapolate that to the military."

Martell's office is evaluating potential generative AI use cases -- it has a special task force for that -- but focuses more on testing and evaluating AI in development.

One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science PhDs with AI-related skills can earn more than the military's top-ranking generals and admirals.

Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

Might that mean the U.S. one day fielding, under duress, autonomous weapons that don't fully pass muster?

"We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible," said Pinelis. "I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision."

Read more:
Pentagon faces future with lethal AI weapons on the battlefield - NBC Chicago


Breda O’Brien: Artificial intelligence is only as awful as the humans … – The Irish Times

Sam Altman, in the news for his dizzyingly fast transition from being fired to being reinstated as chief executive of OpenAI, likes to tell people that he shares a birthday with the father of the atomic bomb, J Robert Oppenheimer.

Altman, one of the founders of OpenAI which developed ChatGPT, believes that work on artificial intelligence resembles the famous Manhattan Project, which gathered the best minds to beat Germany in the race to produce nuclear weapons.

It would seem to be an unfortunate analogy, but Altman believes that by foreseeing the potential for disaster we can work to avoid it and benefit human beings instead.

The recent debacle demonstrates how unlikely it is that this optimistic vision will prevail. OpenAI started as a non-profit in 2015 but soon ran into funding difficulties.

A for-profit subsidiary was initiated in 2019 under the scrutiny of the non-profit board, which was to ensure that safe artificial intelligence is developed and benefits all of humanity. This was to take precedence over any obligation to create a profit.

Loosely speaking, the board had more doomsayers, those who worry that AI has the potential to be dangerous to the extent of wiping out all of humanity, while Altman is more of an accelerationist, who believes that the potential benefits far outweigh the risks.

What happened when the board no longer had faith in Altman because he was "not consistently candid" in his communications with the board? Altman jumped ship to Microsoft, followed by Greg Brockman, another founder, and the majority of OpenAI employees threatened to do likewise. Yes, Microsoft, which was last year criticised by a group of German data-protection regulators over its compliance with GDPR.


The pressure to reinstate Altman may not have been motivated purely by uncritical adoration, as staff and investors knew that firing him meant that a potential $86 billion deal to sell employee shares would probably not happen.

The board's first real attempt to rein Altman in failed miserably, in other words. The new board includes Larry Summers, former US treasury secretary and superstar economist, who has been the subject of a number of recent controversies, including over his connection to Jeffrey Epstein. When he was president of Harvard, Summers was forced to apologise for substantially understating the impact of socialisation and discrimination on the numbers of women employed in higher education in science and maths. He had suggested that it was mostly down to genetic factors rather than discrimination.

At a recent seminar in Bonnevaux, France, at the headquarters of the World Community of Christian Meditators, former Archbishop of Canterbury Rowan Williams addressed the question of how worried we should be about artificial intelligence. He made a valid point, echoed by people such as Jaron Lanier, computer scientist and virtual reality pioneer, that artificial intelligence is a misnomer for what we now have. He compared the kind of holistic learning that his two-year-old grandson demonstrates with the high-order data processing of large language models. His grandson is learning to navigate a complex landscape without bumping too much into things or people, to code and decode messages including metaphors and singing, all in a holistic way where it is difficult to disentangle the strands of what is going on. Unlike AI, his grandson is also capable of wonder.

While Archbishop Williams's distinction between human learning and machine learning is sound, the problem may not be the ways in which AI does not resemble us, or learn like us. We may need to fear AI most when it mirrors the worst aspects of our humanity without the leavening influence of our higher qualities.


Take hallucinations, the polite term for when ChatGPT lies to you, such as falsely accusing a legal scholar of sexual harassment, as outlined in a Washington Post article this year. (To add insult to injury, it cited a non-existent Washington Post article as evidence of the non-existent harassment.) As yet, no one has succeeded in programming a large language model so that it does not hallucinate, partly for technical reasons and partly because these chatbots are scraping enormous amounts of information from the internet and reassembling it in plausible ways. As the early computer scientists used to say, garbage in, garbage out.

Human beings used the internet from the beginning to lie and spread disinformation. Human beings created the large language models that mimic humanity so effectively. We allow them to continue to develop even though OpenAI has not shared, for commercial reasons, how it designed and trained its model.

Talking about regulation, as Altman does with plausible earnestness, is meaningless if we do not understand what we are regulating. The real fears of potential mass destruction are brushed aside.

As cartoonist Walt Kelly had his character, Pogo, say in an Earth Day poster, "We have met the enemy and he is us." Our inability to cry halt or even pause shows our worst qualities: greed, naive belief in inevitable progress, and the inability to think with future generations in mind. We should perhaps focus less on the terrors of AI, and more on the astonishing hubris of those who have created and unleashed them.

Read the rest here:
Breda O'Brien: Artificial intelligence is only as awful as the humans ... - The Irish Times


Live chat: A new writing course for the age of artificial intelligence – Yale News

How is academia dealing with the influence of AI on student writing? Just ask ChatGPT, and it'll deliver a list of 10 ways in which the rapidly expanding technology is creating both opportunities and challenges for faculty everywhere.

On the one hand, for example, while there are ethical concerns about AI compromising students' academic integrity, there is also growing awareness of the ways in which AI tools might actually support students in their research and writing.

Students in Writing Essays with AI, a new English seminar taught by Yale's Ben Glaser, are exploring the many ways in which the expanding number of AI tools are influencing written expression, and how they might help or harm their own development as writers.

"We talk about how large language models are already and will continue to be quite transformative," Glaser said, "not just of college writing but of communication in general."

An associate professor of English in Yale's Faculty of Arts and Sciences, Glaser sat down with Yale News to talk about the need for AI literacy, ChatGPT's love of lists, and how the generative chatbot helped him write the course syllabus.

Ben Glaser: It's more the former. None of the final written work for the class is written with ChatGPT or any other large language model or chatbot, although we talk about using AI research tools like Elicit and other things in the research process. Some of the small assignments directly call for students to engage with ChatGPT, get outputs, and then reflect on it. And in that process, they learn how to correctly cite ChatGPT.

The Poorvu Center for Teaching and Learning has a pretty useful page with AI guidelines. As part of this class, we read that website and talked about whether those guidelines seem to match students own experience of usage and what their friends are doing.

Glaser: I don't get the sense that they are confused about it in my class because we talk about it all the time. These are students who simultaneously want to understand the technology better, maybe go into that field, and they also want to learn how to write. They don't think they're going to learn how to write by using those AI tools better. But they want to think about it.

That's a very optimistic take, but I think that Yale makes that possible through the resources it has for writing help, and students are often directed to those resources. If you're in a class where the writing has many stages -- drafting, revision -- it's hard to imagine where ChatGPT is going to give you anything good, partly because you're going to have to revise it so much.

That said, it's a totally different world if you're in high school or a large university without those resources. And then of course there are situations that have always led to plagiarism, where you're strung out at the last minute and you copy something from Google.

Glaser: First of all, it's a really interesting thing to study. That's not what you're asking -- you're asking what it can do, or where it belongs in a writing process. But when you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. There's no understanding behind the model. It's based on statistical probabilities -- it's guessing which word comes next. It sometimes does so in a way that speeds things along.

If you say, give me some points and counterpoints in, say, AI use in second-language learning, it might spit out 10 good things and 10 bad things. It loves to give lists. And there's a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.
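Glaser's point that a model is "guessing which word comes next" can be made concrete with a toy example. The sketch below is a deliberately tiny bigram counter over a one-line corpus, not how production chatbots are built; real models use neural networks trained on vast amounts of text, but the next-word principle is the same.

```python
# Toy "next word" guesser: count which word follows which in a corpus,
# then predict the most frequent follower. Statistics, no understanding.
from collections import Counter, defaultdict

corpus = "the model guesses the next word and the next word follows the last".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often after this word.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "next", the most frequent follower of "the"
```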

Glaser: I don't love the word brainstorming, but I think there is a moment where you have a blank page, and you think you have a topic, and the process of refining that involves research. ChatGPT's not the most wonderful research tool, but it sure is an easy one.

I asked it to write the syllabus for this course initially. What it did was help me locate some researchers that I didn't know, and it gave me some ideas for units. And then I had to write the whole thing over again, of course. But that was somewhat helpful.

Glaser: It can be. I think that's a limited and effective use of it in many contexts.

One of my favorite class days was when we went to the library and had a library session. It's an insanely amazing resource at Yale. Students have personal librarians, if they want them. Also, Yale pays for these massive databases that are curating stuff for the students. The students quickly saw that these resources are probably going to make things go smoother long-term if they know how to use them.

So it's not a simple AI tool bad, Yale resource good. You might start with the quickly accessible AI tool, and then go to a librarian, and say, like, here's a different version of this. And then you're inside the research process.

Glaser: One thing that some writers have done is, if you interact with it long enough, and give it new prompts and develop its outputs, you can get something pretty cool. At that point you've done just as much work, and you've done a different kind of creative or intellectual project. And I'm all for that. If everything's cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you're just doing something wild and interesting.

Glaser: I'm glad that I could offer a class that students who are coming from computer science and STEM disciplines, but also want to learn how to write, could be excited about. AI-generated language, that's the new medium of language. The Web is full of it. Part of making students critical consumers and readers is learning to think about AI language as not totally separate from human language, but as this medium, this soup if you want, that we're floating around in.

Go here to read the rest:
Live chat: A new writing course for the age of artificial intelligence - Yale News


Bill Gates says using AI could lead to 3-day work week – Fox Business


Bill Gates is weighing in on the potential of artificial intelligence (AI) and how it could allow humans to work just three days a week.

"If you zoom out, the purpose of life is not just to do jobs," the Microsoft co-founder said Monday on an episode of Trevor Noah's "What Now? With Trevor Noah" podcast. "So if you eventually get a society where you only have to work three days a week or something, thats probably OK if the machines can make all the food and the stuff and we dont have to work as hard."

"The demand for labor to do good things is still there if you match the skills to it, and then if you ever get beyond that, then, OK, you have a lot of leisure time and will have to figure out what to do with it," Gates said.

Bill Gates, co-chairman of the Bill and Melinda Gates Foundation, during the EEI 2023 event in Austin, Texas, on June 12, 2023. (Jordan Vonderhaar/Bloomberg via Getty Images / Getty Images)

Gates also acknowledged that job displacement happens with new technologies.

"If they come slow enough, theyre generational," he said. The billionaire gave an example of fewer farmers in this generation compared to prior ones.


"So if it proceeds at a reasonable pace, and the government helps those people who have to learn new things, then its all good," he told Noah. "Its the aging society, its OK because the software makes things more productive."

Gates argued earlier this year that AI could provide major benefits to productivity, health care and education. He has also more recently talked about the potential of AI-powered personal assistants called "agents" that eventually "will be able to help with virtually any activity and any area of life" online.

Microsoft co-founder Bill Gates reacts during a visit with Britain's Prime Minister Rishi Sunak at Imperial College University on Feb. 15, 2023, in London. (Photo by Justin Tallis - WPA Pool/Getty Images / Getty Images)


In March, while touting AI, Gates also called for establishing "rules of the road" so "any downsides of artificial intelligence are far outweighed by its benefits."

The potential future impact of AI on jobs and workflows has come up more as companies increasingly move to embrace the technology.


In April, the World Economic Forum found that nearly three-quarters of the companies it surveyed around the world indicated they would likely adopt AI. Half of the surveyed companies said they expected AI to create job growth, while 25% thought it would lead to job cuts.


LinkedIn recently reported that it experienced a surge in the number of job advertisements referencing AI compared with November of last year.

Go here to see the original:
Bill Gates says using AI could lead to 3-day work week - Fox Business


Artificial General Intelligence will make you feel special, very special – Deccan Herald


Last Updated 26 November 2023, 04:27 IST

Roger Marshall is a computer scientist, a newly minted Luddite and a cynic

Thrilled that Artificial General Intelligence (AGI) will make your life even more efficient? Wait till you find out what else is in the works. If you feel you count for nothing in this world, you are mistaken. AGI will make you feel special, very special. AGI takes education seriously -- because it is always learning new things about you.

If you have transited through any of the major airports in the US, Western Europe or Japan, no doubt you will have noticed that, except for the travellers and the security people, hardly anyone else is around. The shops, restaurants and airline check-in counters, the few that are still in existence, don't have any staff to speak of. They have all gone self-service.

If you want to get a bite to eat, there is no paper menu for you to look at nor are there any wait staff to take your order. You need a smartphone to satisfy your thirst or your hunger. You point your phone at the QR code prominently displayed at each table, place your order electronically, and sullenly glare at your phone while waiting for it to be delivered by the sole hapless soul from the kitchen. Good luck finding someone to complain to if an incorrect or badly prepared item was delivered. Robotic service, but no robots. Not yet. Even graveyards are livelier than modern airports or restaurants.

In a few short months, you will be surprised to learn that the price you will be charged for anything you buy -- food, clothing, travel tickets, taxi rides, etc. -- is dependent on who you are and how much you can afford. This is profiling at its finest, with no personal detail too small to be used. Items sold in stores will no longer have fixed prices attached to them. They can be instantaneously changed by AGI programmes to suit the customer. Or the occasion. Remember congestion pricing?

Advanced systems such as ChatGPT-4 are based on large language models. Other large AI models in the fields of physics, chemistry, economics, medicine, and climate (several such models already exist in rudimentary form) will be deployed in the near-future. Predictions are that when these models interact in a neural network learning environment, a vast treasure trove of new knowledge will be created that can ultimately prove beneficial to humanity. All in the belief that the private sector is much better than the public sector in all manner of things. However, no one engaged in AI research has come up with a satisfactory explanation for how exactly a system such as AGI would work and, more importantly, why the output of the system should be trusted.

In his New York Times essay of June 30, 2023, titled "The True Threat of Artificial Intelligence," author Evgeny Morozov takes issue with Silicon Valley IT stalwarts and their fawning admirers in scientific and academic circles who are cheerleading ongoing efforts to promote AGI. Morozov argues that for-profit corporations' solutionist approach to what ails public spaces and organisations is always market-based, and that the privatisation of public enterprises and the laissez-faire economic policies of the 1980s, which are still in vogue, serve to explain why AGI is a harbinger of bad things waiting to happen.

If you live in the US, you simply cannot avoid interacting with AI systems (intelligent agents) that are already in place. These software agents evaluate your applications for jobs, college admissions, loans, etc., and pronounce judgement. If any problems are encountered, you are prevented from contacting a human to resolve the situation.

Make no mistake, the handful of companies calling for a new Manhattan Project to develop AGI will end up controlling the world, and nation-states will become a thing of the past. These for-profit companies would love to get rid of any restrictions placed on their operations, privatise all services provided by any nation-state (including the US) to its citizens, and gain control of State-owned entities. In short, no more public schools, community hospitals, fire and police departments, public utilities (water, sewer, electricity, gas, transportation), etc. Why, even the military, what with mercenary soldiers (human and/or robot) lurking in the background.

The after-effects of the original Manhattan Project, which produced the A-bomb, are still with us and are not distant memories. Just ask the survivors or their descendants in Hiroshima, Nagasaki, Three Mile Island, Chernobyl, and Fukushima.

The IT crowd is not schooled in the arts, humanities or social sciences. But most of all, history.

View original post here:
Artificial General Intelligence will make you feel special, very special - Deccan Herald
