Category Archives: Artificial Intelligence

NSF Funds Groundbreaking Research Project to ‘Democratize’ AI – Northeastern University

Groundbreaking research by Northeastern University will investigate how generative AI works and provide industry and the scientific community with unprecedented access to the inner workings of large language models.

Backed by a $9 million grant from the National Science Foundation, Northeastern will lead the National Deep Inference Fabric that will unlock the inner workings of large language models in the field of AI.

The project will create a computational infrastructure that will equip the scientific community with deep inferencing tools in order to develop innovative solutions across fields. An infrastructure with this capability does not currently exist.

At a fundamental level, large language models such as OpenAI's ChatGPT or Google's Gemini are considered black boxes, which limits the ability of researchers and companies across multiple sectors to leverage large-scale AI.

Sethuraman Panchanathan, director of the NSF, says the impact of NDIF will be far-reaching.

"Chatbots have transformed society's relationship with AI, but how they operate is yet to be fully understood," Panchanathan says. "With NDIF, U.S. researchers will be able to peer inside the black box of large language models, gaining new insights into how they operate and greater awareness of their potential impacts on society."

"Even the sharpest minds in artificial intelligence are still trying to wrap their heads around how these and other neural network-based tools reason and make decisions," explains David Bau, a computer science professor at Northeastern and the lead principal investigator for NDIF.

"We fundamentally don't understand how these systems work, what they learned from the data, what their internal algorithms are," Bau says. "I consider it one of the greatest mysteries facing scientists today: what is the basis for synthetic cognition?"

David Madigan, Northeastern's provost and senior vice president for academic affairs, says the project will help address one of the most pressing socio-technological problems of our time: how does AI work?

"Progress toward solving this problem is clearly necessary before we can unlock the massive potential for AI to do good in a safe and trustworthy way," Madigan says.

In addition to establishing an infrastructure that will open up the inner workings of these AI models, NDIF aims to democratize AI, expanding access to large language models.

Northeastern will be building an open software library of neural network tools that will enable researchers to conduct their experiments without having to bring their own resources, as well as sets of educational materials to teach them how to use NDIF.

The project will build an AI-enabled workforce by training scientists and students to serve as networks of experts, who will train users across disciplines.

"There will be online and in-person educational workshops that we will be running, and we're going to do this geographically dispersed at many locations, taking advantage of Northeastern's physical presence in a lot of parts of the country," Bau says.

Research emerging from the fabric could have worldwide implications outside of science and academia, Bau explains. It could help demystify the underlying mechanisms of how these systems work for policymakers, creatives and others.

"The goal of understanding how these systems work is to equip humanity with a better understanding for how we could effectively use these systems," Bau says. "What are their capabilities? What are their limitations? What are their biases? What are the potential safety issues we might face by using them?"

Large language models like ChatGPT and Google's Gemini are trained on huge amounts of data using deep learning techniques. Underlying these techniques are neural networks, synthetic processes that loosely mimic the activity of a human brain and enable these chatbots to make decisions.

"But when you use these services through a web browser or an app, you are interacting with them in a way that obscures these processes," Bau says.

"They give you the answers, but they don't give you any insights as to what computation has happened in the middle," Bau says. "Those computations are locked up inside the computer, and for efficiency reasons, they're not exposed to the outside world. And so, the large commercial players are creating systems to run AIs in deployment, but they're not suitable for answering the scientific questions of how they actually work."

At NDIF, researchers will be able to take a deeper look at the neural pathways these chatbots form, Bau says, allowing them to see what's going on under the hood while these AI models actively respond to prompts and questions.

Researchers won't have direct access to OpenAI's ChatGPT or Google's Gemini, as the companies haven't opened up their models for outside research. They will instead be able to access open-source AI models from companies such as Mistral AI and Meta.

"What we're trying to do with NDIF is the equivalent of running an AI with its head stuck in an MRI machine, except the difference is the MRI is in full resolution. We can read every single neuron at every single moment," Bau says.
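Bau's MRI analogy corresponds to a standard technique in interpretability work: instrumenting an open model so that each forward pass exposes its internal activations. The sketch below is a minimal illustration of the idea, using PyTorch forward hooks on the openly available GPT-2 model from Hugging Face; it is not NDIF's actual software, and the prompt is an arbitrary example.

```python
# Minimal sketch of inspecting a model "while it responds": PyTorch forward
# hooks record each transformer block's hidden states during a forward pass.
# This illustrates the idea only; it is not NDIF's actual tooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

captured = {}  # block name -> hidden-state tensor

def make_hook(name):
    def hook(module, inputs, output):
        # GPT-2 blocks return a tuple; the hidden states come first.
        hidden = output[0] if isinstance(output, tuple) else output
        captured[name] = hidden.detach()
    return hook

# Attach a hook to every transformer block so a single forward pass
# exposes the residual stream, layer by layer.
for i, block in enumerate(model.transformer.h):
    block.register_forward_hook(make_hook(f"block_{i}"))

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

for name, tensor in captured.items():
    print(name, tuple(tensor.shape))  # e.g. block_0 (1, 5, 768)
```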

But how are they doing this?

Such an operation requires significant computational power on the hardware front. As part of the undertaking, Northeastern has teamed up with the University of Illinois Urbana-Champaign, which is building data centers equipped with state-of-the-art graphics processing units (GPUs) at the National Center for Supercomputing Applications. NDIF will leverage the resources of the NCSA DeltaAI project.

NDIF will partner with New America's Public Interest Technology University Network, a consortium of 63 universities and colleges, to ensure that the new NDIF research capabilities advance interdisciplinary research in the public interest.

Northeastern is building the software layer of the project, Bau says.

"The software layer is the thing that enables the scientists to customize these experiments and to share these very large neural networks that are running on this very fancy hardware," he says.

Northeastern professors Jonathan Bell, Carla Brodley, Byron Wallace and Arjun Guha are co-PIs on the initiative.

Guha explains the barriers that have hindered research into the inner workings of large generative AI models up to now.

"Conducting research to crack open large neural networks poses significant engineering challenges," he says. "First of all, large AI models require specialized hardware to run, which puts the cost out of reach of most labs. Second, scientific experiments that open up models require running the networks in ways that are very different from standard commercial operations. The infrastructure for conducting science on large-scale AI does not exist today."

NDIF will have implications beyond the scientific community in academia. The social sciences and humanities, as well as neuroscience, medicine and patient care can benefit from the project.

"Understanding how large networks work, and especially what information informs their outputs, is critical if we are going to use such systems to inform patient care," Wallace says.

NDIF will also prioritize the ethical use of AI with a focus on social responsibility and transparency. The project will include collaboration with public interest technology organizations.

Read more here:
NSF Funds Groundbreaking Research Project to 'Democratize' AI - Northeastern University

Are we rushing ahead AI artificial intelligence machine learning lab – Chemistry World

The emergence of self-driving labs and automated experimentation has brought with it the promise of increased rates of productivity and discovery in chemistry beyond what humans can achieve alone. But the black box nature of AI means we cannot see how or why deep learning systems make their decisions, making it difficult to know how it can best be used to optimise scientific research or if the outcomes can ever be trusted.

In November 2023, a paper was published in Nature reporting the discovery of over 40 novel materials using an autonomous laboratory guided by AI. However, researchers were quick to question the autonomous lab's results. A preprint followed in January, reporting that there were systematic errors all the way through owing to issues with both the computational and experimental work.

One of the authors of the critique, Robert Palgrave, a materials chemist at University College London, UK, said that although AI had made big advances, there was "a bit of a tendency to feel that AI had to change everything right now" and that, actually, we should not expect things to change overnight.

Milad Abolhasani, who leads a research group that uses autonomous robotic experimentation to study flow chemistry strategies at North Carolina State University in the US, says the hype has taken over somewhat when it comes to AI and it is time to pause. "As humans we are great at envisioning what the future is going to look like and what the possibilities are, but you have to move step by step and make sure things are done correctly."

For many, the draw of AI comes from a need to enhance productivity. "Whether that's reviewing the literature faster, running experiments faster, generating data faster, the productivity outcomes of AI are very appealing," explains Lisa Messeri, an anthropologist at Yale University in the US. "And that has to do with institutional pressures to publish, to get your research done so you can do all the other things that you have to do."

Messeri says AI also holds the tantalising prospect of "the promise of objectivity": the idea that scientists are always in pursuit of tools that they feel are robust and limit human biases and interventions. While AI could indeed provide these benefits for some research, there are risks associated with relying too heavily on it, and a need for us to remember the importance of including a diverse set of thinkers in the production of scientific knowledge. And, of course, AI models are only as good as the data that trains them.

There's a rush for everyone to start doing the kind of science that's well suited for AI tools

Molly Crockett, Princeton University

For Messeri and her colleague Molly Crockett, a neuroscientist at Princeton University in the US, who co-wrote a perspective on the topic in Nature, the risks fall into three categories, all of which arise from the "illusion of understanding", a phenomenon well documented in the cognitive sciences relating to our tendency to overestimate how well we understand something.

"The first risk arises when an individual scientist is trying to solve a problem using an AI tool and, because the AI tool performs well, the scientist mistakenly believes that they understand the world better than they actually do," explains Crockett.

The other two concern scientists as a collective and the inadvertent creation of a scientific monoculture. "If you plant only one type of crop in a monoculture, this is very efficient and productive, but it also makes the crop much more vulnerable to disease, to pests," explains Crockett.

"We're worried about two kinds of monocultures," she continues. "The first is a monoculture of knowing: we can use lots of different approaches to solve problems in science and AI is one approach, but because of the productivity gains promised by AI tools, there's a rush for everyone to start doing the kind of science that's well suited for AI tools [and the] questions that are less well suited for AI tools get neglected."

They are also concerned about the development of a monoculture of knowers where, instead of drawing on the knowledge of an entire team with disciplinary and cognitive diversity, only AI tools are used. "We know that it's really beneficial to have interdisciplinary teams if you're solving a complicated problem," says Crockett.

"It's great if you have people on your team who come from a lot of different backgrounds or have different skill sets. In an era where we are increasingly avoiding human interactions in favour of digital interactions, there may be a temptation to replace collaborators with AI tools, [but] that is a really dangerous practice because it's precisely in those cases where you lack expertise that you will be less able to determine whether the outputs returned by an AI are actually valid."

The question is, how can we tailor AI-driven tools such as self-driving labs to address specific research questions? Abolhasani and his colleague at NC State University, Amanda Volk, recently defined seven performance metrics to help unleash the power of self-driving labs, something he was shocked to find did not already exist in the published literature.

"The metrics are designed based on the notion that we want the machine-learning agent of self-driving labs to be as powerful as possible to help us make more informed decisions," he says. However, if the data the lab is trained on is not of a high enough quality, the decisions made by the lab are not going to be helpful, he adds.

A lot of self-driving labs do not even mention what the total chemical consumption was per experiment

Milad Abolhasani, North Carolina State University

The performance metrics they describe include degree of autonomy, which covers the level of influence a human has over the system; operational lifetime; throughput; experimental precision; material usage; accessible parameter space, which represents the range of experimental parameters that can be accessed; and optimisation efficiency, or the overall system performance.
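To make the reporting idea concrete, here is a hypothetical sketch of how those seven metrics could be captured as a structured record. The field names and example values are illustrative assumptions, not Abolhasani and Volk's notation.

```python
# Hypothetical record for reporting the seven self-driving-lab metrics
# described above. Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class SelfDrivingLabReport:
    degree_of_autonomy: str            # level of human influence over the system
    operational_lifetime_h: float      # hours of operation before breakdown/refill
    throughput_per_h: float            # experiments completed per hour
    experimental_precision_rsd: float  # relative std. dev. across replicates
    material_usage_ml: float           # chemical consumption per experiment
    accessible_parameter_space: str    # range of experimental parameters reachable
    optimisation_efficiency: float     # overall system performance score

report = SelfDrivingLabReport(
    degree_of_autonomy="closed-loop, human approves each campaign",
    operational_lifetime_h=120.0,
    throughput_per_h=12.0,
    experimental_precision_rsd=0.03,
    material_usage_ml=0.5,
    accessible_parameter_space="T: 20-80 C; flow rate: 0.1-2.0 mL/min",
    optimisation_efficiency=0.85,
)
print(report)
```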

"We were surprised when we did the literature search that 95% of papers on self-driving labs did not report how long they could run the platform before it broke down [or] before they had to refill something," he explains. "I would like to know how many experiments can that self-driving lab do per hour, per day; what is the precision of running the experiments; how much can I trust the data you're producing?"

"A lot of self-driving labs do not even mention what the total chemical consumption [was] per experiment and per optimisation that they did," he adds.

Abolhasani and Volk say that by clearly reporting these metrics, research can be guided towards more productive and promising technological areas, and that without a thorough evaluation of self-driving labs, the field will lack the necessary information for guiding future research.

However, optimising the role AI can play within intricate fields such as synthetic chemistry will require more than improved categorisation and larger quantities of data. In a recent article in the Journal of the American Chemical Society, digital chemist Felix Strieth-Kalthoff, alongside such AI chemistry pioneers as Alán Aspuru-Guzik, Frank Glorius and Bartosz Grzybowski, argues that algorithm designers need to form closer ties with synthetic chemists to draw on their specialist knowledge.

They argue that such a collaboration would be mutually beneficial by enabling synthetic chemists to develop AI models for synthetic problems of particular interest, transplanting the AI know-how into the synthetic community.

For Abolhasani, the success of autonomous experimentation in chemistry will ultimately come down to trust. "Autonomous experimentation is a tool that can help scientists [but] in order to do that the hardware needs to be reproducible and trustworthy," he explains.

It's a must for the community in order to expand the user base

Milad Abolhasani, North Carolina State University

And to build this trust, entry barriers need to be lowered to give more chemists the opportunity to use self-driving labs in their work. "It has to be as intuitive as possible so that chemists with no expertise in autonomous experimentation can interact with self-driving labs," he explains.

In addition, he says, the best self-driving labs are currently very expensive, so lower-cost options need to be developed while still maintaining their reliability and reproducibility. "It's a must for the community in order to expand the user base," he says.

"Once [self-driving labs] become a mainstream tool in chemistry [they] can help us digitise chemistry and material science and provide access to high-quality experimental data, but the power of that expert data is when the data is reproducible, reliable and is standardised for everybody to use."

Messeri believes AI will be most useful when it is seen only as an augmentation to humans, rather than a replacement. To do this, she says, the community will need to be much more particular about when and where it is used. "I am very confident that creative scientists are going to be able to come up with cases in which this can be responsibly and productively implemented," she adds.

Crockett suggests scientists consider AI tools as another approach to analysing data, one that is different to a human mind. "As long as we respect that then we can strengthen our approach by including these tools as another diverse node in the network," she says.

Importantly, Crockett says this moment could also serve as a wake-up call about the institutional pressures that may be driving scientists towards AI in order to improve productivity without necessarily improving understanding. But this problem is much bigger than any individual and requires widespread institutional acceptance before any solution can be found.

More here:
Are we rushing ahead AI artificial intelligence machine learning lab - Chemistry World

3 Stocks Poised to Profit from the Rise of Artificial Intelligence – InvestorPlace

While artificial intelligence may be all the rage, the usual suspects in the space have already flourished handsomely, which strengthens the case for underappreciated AI stocks to buy.

Rather than simply focusing on technology firms that have a direct link to digital intelligence, it's useful to consider companies, whether they're tech enterprises or not, that are using AI in their businesses. Yes, the semiconductor space is exciting, but AI is so much more than that.

These less-appreciated ideas just might surprise Wall Street. With that, below are intriguing AI stocks to buy that don't always get the spotlight.


At first glance, agricultural equipment specialist Deere (NYSE:DE) doesn't seem a particularly relevant idea for AI stocks to buy. Technically, you'd be right. After all, this is an enterprise with roots going back to 1837. That said, an old dog can still learn new tricks.

With so much talk about autonomous mobility, Deere took a page out of the playbook and invested in an automated tractor. Featuring 360-degree cameras, a high-speed processor and a neural network that sorts through images and determines whether objects are safe to drive over, Deere's invention is the perfect marriage between a traditional industry and innovative methodologies.

Perhaps most importantly, Deere is meeting a critical need. Unsurprisingly, fewer young people are interested in an agriculture-oriented career. Therefore, these automated tractors are entering the market at the right time.

Lastly, DE trades at a modest price/earnings-to-growth (PEG) ratio of 0.54X. That's lower than the sector median of 0.82X. It's a little bit out there, but Deere is one of the underappreciated AI stocks to buy.
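For readers unfamiliar with the metric, the PEG ratio divides a price-to-earnings multiple by the expected annual earnings growth rate, so readings below 1 suggest a price that is cheap relative to growth. A toy calculation follows; the inputs are hypothetical, chosen only to show how a 0.54X reading could arise, and are not Deere's actual figures.

```python
# PEG = (P/E) / expected annual EPS growth rate (in percent).
# Hypothetical inputs for illustration only; not Deere's actual numbers.
def peg_ratio(price_to_earnings: float, eps_growth_pct: float) -> float:
    return price_to_earnings / eps_growth_pct

print(round(peg_ratio(10.8, 20.0), 2))  # 0.54
```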


While it's just my opinion, grocery store giant Kroger (NYSE:KR) sells itself. No, the grocery industry is hardly the most exciting arena available. At the same time, people have to eat. Further, the company benefits from the trade-down effect: if economic conditions become even more challenging, people will eschew eating out for cooking in. Overall, that would be a huge plus for KR stock.

With that baseline bullish thesis out of the way, Kroger is also an enticing idea for hidden-gem AI stocks to buy. Earlier this year, the company announced that it will use AI technology for content management and product descriptions for marketplace sellers. Last year, Kroger's head executive mentioned AI eight times during an earnings call.

Fundamentally, Kroger should benefit from revenue predictability. While the consensus sales target calls for a 1% decline in the current fiscal year, the high-side estimate is aiming for $152.74 billion. Last year, the print came out to just over $150 billion. With shares trading at only 0.27X trailing-year sales, KR could be a steal.


Billed as a platform for live online learning, Nerdy (NYSE:NRDY) represents a legitimate tech play for AI stocks to buy. Indeed, its corporate profile states that its purpose-built proprietary platform leverages myriad innovations, including AI, to connect students, users and parents/guardians to tutors, instructors and subject matter experts.

Fundamentally, Nerdy should benefit from two key factors. Number one, the Covid-19 crisis disrupted education, particularly for young students. That could have a cascading effect down the line, making it all the more vital to play catch-up. Nerdy can help in that department.

Number two, U.S. students have continued to fall behind in international tests. It's imperative for social growth and stability for students to get caught up, especially in the digital age. Therefore, NRDY is especially attractive.

Finally, analysts anticipate fiscal 2024 revenue to hit $237.81 million, up 23% from last year's tally of $193.4 million. And in fiscal 2025, experts are projecting sales to rise to $293.17 million. That's up more than 23% from forecasted 2024 sales. Therefore, it's one of the top underappreciated AI stocks to buy.
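Those growth rates follow directly from the revenue figures quoted above; a quick arithmetic check:

```python
# Year-over-year growth implied by the revenue figures in the article
# ($193.4M actual, $237.81M and $293.17M estimated), in percent.
def yoy_growth_pct(new: float, old: float) -> float:
    return (new - old) / old * 100

print(round(yoy_growth_pct(237.81, 193.40), 1))  # ~23.0 for fiscal 2024
print(round(yoy_growth_pct(293.17, 237.81), 1))  # ~23.3 for fiscal 2025
```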

On the date of publication, Josh Enomoto did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

A former senior business analyst for Sony Electronics, Josh Enomoto has helped broker major contracts with Fortune Global 500 companies. Over the past several years, he has delivered unique, critical insights for the investment markets, as well as various other industries including legal, construction management, and healthcare. Tweet him at @EnomotoMedia.

Continued here:
3 Stocks Poised to Profit from the Rise of Artificial Intelligence - InvestorPlace

Who in Europe is investing the most in artificial intelligence? – Euronews

Europe faces challenges in the adoption of artificial intelligence, including regulatory barriers and a shortage of skilled professionals. The Next Generation EU instrument has committed €4.4 billion to AI initiatives, with two Southern European countries leading the way.

Artificial Intelligence (AI) is reshaping the global economic landscape, emerging as a pivotal force in the digital domain and driving innovation across various sectors.

By 2030, AI is expected to have injected more than €11 trillion into the global economy, according to industry forecasts. It's anticipated that AI and robotics will jointly spark the creation of around 60 million new jobs globally by 2025, underscoring the critical importance of digitalisation in propelling economic growth.

In a concerted effort to match global tech leaders, the European Union is intensifying its push to integrate and advance AI, with a particular emphasis on bolstering digital infrastructure and capabilities across its member states.

However, despite these optimistic projections, challenges persist.

Velina Lilyanova, a researcher at the European Parliamentary Research Service, has highlighted Europe's slow AI adoption in critical sectors such as healthcare and public administration.

"Europe has a weakness in this respect," she claims in her recent study entitled Investment in Artificial Intelligence in the National Recovery and Resilience Plans.

Lilyanova points out that Europe faces several challenges that hinder broader AI uptake, including regulatory barriers, trust issues, a shortage of digital skills, and low levels of company digitalisation.

"Member States need to address these barriers to facilitate widespread uptake," she stated, emphasising the need for regulatory reforms, enhancing digital skills, and boosting company digitalisation.

The European Commission has laid out ambitious goals for 2030: aiming for 90% of EU small and medium-sized enterprises (SMEs) to achieve at least a basic level of digital intensity and for 75% of EU companies to adopt technologies like cloud computing, AI, and big data.

Investment strategies in AI vary significantly among EU member states, ranging from direct research and development (R&D) funding to indirect support via business and public service digitalisation, as detailed by Lilyanova.

Spain's National Recovery and Resilience Plan (NRRP) specifically allocates funds to strengthen AI development, aiming to position the country as a leader in AI scientific excellence and innovation. The plan focuses on developing AI tools and applications in the Spanish language to enhance productivity in the private sector and efficiency in public administration.

Italy's Strategic Programme on AI (2022-2024), aligning with the broader EU AI strategy, aims to make Italy a global hub for AI research and innovation by enhancing skills and attracting leading AI talents.

Denmark is leveraging its strong R&D ecosystem and high digital intensity among SMEs to enhance its national digital strategy, incorporating AI to improve public administration through reforms.

The European Commission's Joint Research Centre has conducted an exhaustive analysis of AI-related funding across EU countries.

According to a 2023 study by Papazoglu et al., the Next Generation EU (NGEU) instrument and its Recovery and Resilience Facility (RRF) account for 70% of total investments in digital transformation.

Specifically, of the €116.8 billion allocated by the NGEU RRF for the "Digital Decade", €4.376 billion is earmarked for AI projects.

A breakdown of national investments reveals Italy as the frontrunner, allocating €1.895 billion to AI-related projects. Spain follows with €1.2 billion. Together, the two Southern European nations represent 71% of the total investments allocated to AI-related projects within the NGEU RRF.
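That 71% figure is simple arithmetic on the allocations quoted; a quick check:

```python
# Italy's and Spain's combined share of the NGEU RRF funds earmarked
# for AI projects, using the figures quoted above (billions of euros).
italy, spain, total_ai = 1.895, 1.2, 4.376
print(round((italy + spain) / total_ai * 100))  # 71
```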

Denmark leads on a relative basis, dedicating 8.7% of its Digital RRF budget to AI projects, followed by Spain at 6.4% and Ireland at 5.2%.

European countries are allocating an average of nearly 3% of their digitalisation funds to AI projects.

Sweden, the Netherlands, Belgium, and Austria are at the lower end, committing less than 1% of their RRF budgets to AI-related projects.

Read the rest here:
Who in Europe is investing the most in artificial intelligence? - Euronews

EY survey reveals artificial intelligence is creating new hiring needs, while also making it more challenging to source … – PR Newswire

NEW YORK, April 29, 2024 /PRNewswire/ -- Ernst & Young LLP (EY US) today announced the release of its latest Technology Pulse Poll, which examines the impact of artificial intelligence (AI) on the future of work, from integration to talent and culture. The poll, which was conducted in March 2024 and surveyed more than 250 leaders in the technology industry, highlights the ongoing push and pull around AI in the workplace.

The poll finds that 50% of business leaders anticipate a combination of both layoffs and hiring over the next six months as a direct result of AI adoption. Yet, even with hiring plans in place, three out of five technology leaders (61%) say that emerging technology has made it more challenging for their company to source top technology talent.

"One thing is certain: Companies are reshaping their workforce to be more AI savvy. With this transition, we can anticipate a continuous cycle of strategic workforce realignment, characterized by simultaneous layoffs and hiring, and not necessarily in equal volumes," says Vamsi Duvvuri, EY Technology, Media and Telecommunications AI Leader. "But it's not all doom and gloom. Employees and companies alike continue to show enthusiasm around AI, specifically when it comes to opportunities to scale and compete more effectively in the marketplace."

According to the poll, 72% of respondents say their employees are using AI at least daily in the workplace, with top use cases being coding and software development (51%), data analysis (51%), and internal and external communication (47%). Though many leaders report concerns about AI and believe that more regulation is needed around this technology, most technology business leaders (85%) believe that emerging technology has had a positive impact on their workplace culture.

"AI is transforming the way we work, creating new opportunities for innovation and growth, while simultaneously posing unprecedented challenges, especially when it comes to talent," saysKen Englund, EY Americas Technology, Media and Telecommunications Leader. "Our recent pulse poll demonstrates that technology companies generally have a positive sentiment toward the next productivity wave. There's a lot of excitement at these companies in terms of how they will successfully apply their own industry tools to themselves."

The EY survey also found that:

Methodology

EY US commissioned Atomik Research to conduct an online survey of 255 business leaders in the technology industry throughout the United States. The margin of error is +/- 6 percentage points with a confidence level of 95%. Fieldwork took place between March 8 and March 16, 2024.
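The quoted margin of error is consistent with the standard formula for a sample proportion at 95% confidence, MoE = z * sqrt(p(1-p)/n), evaluated at the worst case p = 0.5:

```python
# Margin of error for n = 255 at 95% confidence (z = 1.96), assuming
# maximum variance (p = 0.5), as is conventional in poll reporting.
import math

z, p, n = 1.96, 0.5, 255
moe = z * math.sqrt(p * (1 - p) / n)
print(round(moe * 100, 1))  # ~6.1 percentage points
```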

About EY

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.

SOURCE EY

Excerpt from:
EY survey reveals artificial intelligence is creating new hiring needs, while also making it more challenging to source ... - PR Newswire

Meet the Newest Artificial Intelligence (AI) Stock in the S&P 500. It Soared 1700% in 2 Years, and Wall Street Says the … – sharewise

The S&P 500 (SNPINDEX: ^GSPC) is the most popular benchmark for the U.S. stock market. The index includes 500 large-cap companies, currently defined as companies worth at least $18 billion, and it covers about 80% of domestic equities by market capitalization. To be considered for inclusion, a company must also be profitable, and its stock must be sufficiently liquid.

Super Micro Computer (NASDAQ: SMCI) became the newest artificial intelligence (AI) company in the S&P 500 when it joined the index in March 2024, little more than a year after it joined the S&P MidCap 400 in December 2022. Meanwhile, shares soared over 1,700% over the last two years as strong demand for AI computing products fueled rapid sales growth.

The stock still carries a consensus rating of "buy" among Wall Street analysts, and the median price target of $965 per share implies 26% upside from its current price of $762 per share. Here's what investors should know about Supermicro.
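The implied upside is simple arithmetic on the two prices quoted:

```python
# Upside implied by the median analyst price target versus the current price.
target, price = 965, 762
print(round((target - price) / price * 100, 1))  # 26.6, in line with ~26%
```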


Source: Fool.com

Read the original here:
Meet the Newest Artificial Intelligence (AI) Stock in the S&P 500. It Soared 1700% in 2 Years, and Wall Street Says the ... - sharewise

The U.S. Needs to ‘Get It Right’ on AI – TIME

Artificial intelligence has been a tricky subject in Washington.

Most lawmakers agree that it poses significant dangers if left unregulated, yet there remains a lack of consensus on how to tackle these concerns. But speaking at a TIME100 Talks conversation on Friday ahead of the White House Correspondents' Dinner, a panel of experts with backgrounds in government, national security, and social justice expressed optimism that the U.S. government will finally "get it right" so that society can reap the benefits of AI while safeguarding against potential dangers.

"We can't afford to get this wrong, again," Shalanda Young, the director of the Office of Management and Budget in the Biden Administration, told TIME Senior White House Correspondent Brian Bennett. "The government was already behind the tech boom. Can you imagine if the government is a user of AI and we get that wrong?"


The panelists agreed that government action is needed to ensure the U.S. remains at the forefront of safe AI innovation. But the rapidly evolving field has raised a number of concerns that can't be ignored, they noted, ranging from civil rights to national security. "The code is starting to write the code and that's going to make people very uncomfortable, especially for vulnerable communities," says Van Jones, a CNN host and social entrepreneur who founded the Dream Machine, a non-profit that fights overcrowded prisons and poverty. "If you have biased data going in, you're going to have biased decision-making by algorithms coming out. That's the big fear."

The U.S. government might not have the best track record of keeping up with emerging technologies, but as AI becomes increasingly ubiquitous, Young says there's a growing recognition among lawmakers of the need to prioritize understanding, regulation, and ethical governance of AI.

Michael Allen, managing director of Beacon Global Strategies and former National Security Council director for President George W. Bush, suggested that in order to address a lack of confidence about the use of artificial intelligence, the government needs to ensure that humans are at the forefront of every decision-making process involving the technology, especially when it comes to national security. "Having a human in the loop is ultimately going to make the most sense," he says.

Asked how Republicans and Democrats in Washington can talk to each other about tackling the problems and opportunities that AI presents, Young says there's already been a bipartisan shift around science and technology policy in recent years, from President Biden's signature CHIPS and Science Act to funding for the National Science Foundation. The common theme behind the resurgence in this bipartisan support, she says, is a strong anti-China movement in Congress.

"There's a big China focus in the United States Congress," says Young. "But you can't have a China focus and just talk about the military. You've got to talk about our economic and science competition aspects of that. Those things have created an environment that has given us a chance for bipartisanship."

Allen noted that in this age of geopolitical competition with China, the U.S. government needs to be at the forefront of artificial intelligence. He likened the current moment to the Nuclear Age, when the U.S. government funded atomic research. "Here in this new atmosphere, it is the private sector that is the primary engine of all of the innovative technologies," Allen says. "The conventional wisdom is that the U.S. is in the lead, we're still ahead of China. But I think that's something, as you begin to contemplate regulation: how can we make sure that the United States stays at the forefront of artificial intelligence, because our adversaries are going to move way down the field on this."

Congress is yet to pass any major AI legislation, but that hasn't stopped the White House from taking action. President Joe Biden signed an executive order to set guidelines for tech companies that train and test AI models, and has also directed government agencies to vet future AI products for potential national security risks. Asked how quickly Americans can expect more guardrails on AI, Young noted that some in Congress are pushing to establish a new, independent federal agency that can help inform lawmakers about AI without a political lens, offering help on legislative solutions.

"If we don't get this right," Young says, "how can we keep trust in the government?"

TIME100 Talks: Responsible A.I.: Shaping and Safeguarding the Future of Innovation was presented by Booking.com.

Continue reading here:
The U.S. Needs to 'Get It Right' on AI - TIME

Pope to take part in G7 summit in June to talk Artificial Intelligence – Crux Now

ROME – Adding to what was already a busy papal schedule for 2024, the Vatican confirmed Friday that Pope Francis will participate, in person, in a G7 summit scheduled for the southern Italian region of Puglia June 13-15.

According to a statement from Italian Prime Minister Giorgia Meloni, the pope will take part in a session of the G7 summit dedicated to artificial intelligence, a subject of mounting concern to this papacy.

The pope's participation will mark the first time a pontiff has taken part in a G7 summit, which has been meeting on a regular basis since 1975 and is considered the most important annual gathering of the leaders of the major Western powers.

The Pontifical Academy for Life organized a summit in 2020 along with major global technology firms such as Microsoft and IBM, which produced a document known as the Rome Call for AI Ethics. More recently, Francis devoted his messages for both the 2024 World Day of Peace and the World Day of Social Communications to the theme of artificial intelligence.

"We carried a movement forward from the base; now the pope at the G7 will speak to governments," said Italian Archbishop Vincenzo Paglia, head of the Pontifical Academy for Life.

The Rome Call is premised in part on what the document calls "algorethics", meaning an ethical code for the digital age.

Signatories committed to request "the development of an artificial intelligence that serves every person and humanity as a whole; that respects the dignity of the human person, so that every individual can benefit from the advances of technology; and that does not have as its sole goal greater profit or the gradual replacement of people in the workplace," the document says.

Italian Father Paolo Benanti, an advisor to both the Vatican and the Italian government on AI issues, said the Rome Call for AI ethics demonstrates the wisdom of religions on the subject, so that a future of peace and prosperity can be assured for humanity.

"In this context, the participation of the pope at the G7 in Puglia is of great importance," Benanti said.

The G7 summit brings together the United States, the United Kingdom, France, Germany, Italy, Canada and Japan, as well as the European Union.

This year Italy holds the rotating presidency of the G7. It will mark the fifth time the summit has been held in Italy, with the most recent coming in Genoa in 2001, when the body was still known as the G8 with the participation of Russia.

"I'm convinced that the presence of His Holiness will give a decisive contribution to the definition of a regulatory, ethical and cultural framework for artificial intelligence," Meloni said in a video statement announcing the pope's presence.

Last Wednesday, Pope Francis met the CEO of Cisco Systems, Chuck Robbins, who was in the Vatican to sign on to the 2020 Rome Call for AI Ethics.

Robbins said at the time that the Rome Call principles "align with Cisco's core belief that technology must be built on a foundation of trust at the highest levels in order to power an inclusive future for all."

Recently, Paglia announced that a group of leaders of Asian religions will meet in Hiroshima, Japan, in July, in order to sign the Rome Call for AI Ethics. That summit follows a similar event in 2022 when Jewish and Muslim leaders signed on to the document.

Read the original post:
Pope to take part in G7 summit in June to talk Artificial Intelligence - Crux Now

Pope Francis to attend G7 summit to speak on artificial intelligence – Catholic World Report

Vatican City, Apr 27, 2024 / 12:30 pm (CNA).

Pope Francis will attend the G7 summit in June to speak about the ethics of artificial intelligence, Italian Prime Minister Giorgia Meloni announced Friday.

The Group of Seven (G7) industrialized nations summit is being held in the southern Italian region of Puglia from June 13-15 and will bring together leaders from Britain, Canada, France, Germany, Italy, Japan, and the United States.

Meloni, who will chair the summit, said in a video message on April 26 that Pope Francis had accepted her invitation to attend a session of the summit on the topic of artificial intelligence.

This is the first time in history that a pontiff will participate in the work of a G7, Meloni said.

"I am convinced that the presence of His Holiness will make a decisive contribution to the definition of a regulatory, ethical, and cultural framework for artificial intelligence," she added.

The Vatican has been heavily involved in the conversation of artificial intelligence ethics, hosting high-level discussions with scientists and tech executives on the ethics of artificial intelligence in 2016 and 2020.

The pope has hosted Microsoft President Brad Smith, IBM Executive John Kelly III, and most recently, Chuck Robbins, the chief executive of Cisco Systems, in Rome, each of whom has signed the Vatican's artificial intelligence ethics pledge, the Rome Call for AI Ethics.

The Rome Call, a document by the Pontifical Academy for Life, underlines the need for the ethical use of AI according to the principles of transparency, inclusion, accountability, impartiality, reliability, security, and privacy.

Pope Francis chose artificial intelligence as the theme of his 2024 peace message, which recommended that global leaders adopt an international treaty to regulate the development and use of AI.

The pope established the RenAIssance Foundation in April 2021 as a Vatican nonprofit foundation to support anthropological and ethical reflection on the impact of new technologies on human life.

The Vatican has confirmed the popes participation in the G7 summit.


Originally posted here:
Pope Francis to attend G7 summit to speak on artificial intelligence - Catholic World Report

Pope Francis to participate in G7 session on AI – Vatican News – English

Pope Francis will take part in the upcoming G7 session on Artificial Intelligence under Italy's presidency of the group.

By Vatican News

The Holy See Press Office on Friday confirmed that Pope Francis will intervene in the G7 Summit in Italys southern Puglia region in the session devoted to Artificial Intelligence (AI).

The confirmation of the Holy Father's participation in the Summit, which will take place from June 13 to 15 at Borgo Egnazia in Puglia, follows the announcement made by Italian Prime Minister Giorgia Meloni.

"This is the first time in history that a pontiff will participate in the work of a G7," she said, adding that the Pope would attend the "outreach session" for guest participants at the upcoming Group of Seven industrialised nations meeting.

The Summit foresees the participation of the United States, Canada, France, the United Kingdom, Germany, and Japan.

"I heartily thank the Holy Father for accepting Italy's invitation. His presence honours our nation and the entire G7," Meloni explained, emphasizing how the Italian government intends to enhance the contribution given by the Holy See on the issue of artificial intelligence, particularly with the "Rome Call for AI Ethics of 2020," promoted by the Pontifical Academy for Life, in a process "that leads to the concrete application of the concept of algorithmic ethics, namely giving ethics to algorithms."

"I am convinced," she added, "that the Pope's presence will provide a decisive contribution to defining a regulatory, ethical, and cultural framework for artificial intelligence, because on this ground, on the present and future of this technology, our capacity will once again be measured, the capacity of the international community to do what another Pope, Saint John Paul II, recalled on October 2, 1979, in his famous speech to the United Nations."

"Political activity, whether national or international, comes from man, is exercised by man, and is for man," Meloni quoted.

Pope Francis dedicated his Message for the 57th World Day of Peace on 1 January 2024 to "Artificial Intelligence and Peace", urging humanity to cultivate "wisdom of the heart" which, he says, can help us to put systems of artificial intelligence "at the service of a fully human communication."

Go here to read the rest:
Pope Francis to participate in G7 session on AI - Vatican News - English