
Four questions about AI that need answering now, according to leading physicist Brian Cox – 9News

If humans don't manage to find a solution to four key questions involving artificial intelligence now, "deeper challenges" could be faced down the road, British physicist Professor Brian Cox believes.

The warning echoes that of dozens of AI industry leaders, academics and celebrities who recently called for mitigation measures to be enacted to reduce the risk of global annihilation due to AI systems.

Cox told 9news.com.au there are both immediate and long-term questions about AI that humanity needs to start answering as soon as possible - but conceded an extinction-level event is not likely.


"I think there are big questions, really practical questions, now about AI systems in use," he said.

"The immediate problem is when you give control to an AI system, as we do now, then who decides how those decisions are made?

"Is it just left to this AI thing, or does society have some kind of democratic control?"


Cox went on to cite a fictitious example of what would happen should an automated driving system be put in a situation where the driver and another person are about to be injured.

"This is an unlikely scenario," he admitted, before continuing.

"But is it going to injure a pedestrian, injure the driver, injure a child, does it say I don't want to injure a child, rather than an adult.

"Those questions need addressing."


The trickier questions, Cox said, arise over longer-term fears the systems could become as intelligent as humans one day.

"We don't have those at the moment," he said, adding those systems are called Artificial General Intelligence (AGI).

"I'm not an expert, but there are varying estimates about time scales, I mean it's not actually completely accepted we will ever be able to build something of that intelligence; it's in the unknown box, but it might happen quite soon".

Cox said a raft of questions would be raised should AGI systems ever "out-evolve" humans and claim the same rights.

"Ultimately, this all comes down to thinking carefully now about how we regulate these systems; who controls them, who programs them, how much autonomy do they have, how do we deal with them legally...

"Those are real questions and if we answer them now we will be better prepared if we run into deeper challenges in the future."



The comment comes as the renowned professor of particle physics prepares to make his return to Australia for the world premiere of his live stage show Symphonic Horizons.

Cox and the Sydney Symphony Orchestra will join forces at the Sydney Opera House for four shows to answer fundamental questions about human existence and space.

He said if he could have one question answered about the universe tomorrow it would be "are we alone".


"There's a lot of them (questions), but one of the ones I think we have a chance in answering is 'are we alone', or let's put it another way, how far we have to go to discover life beyond Earth?".

Cox explained that the mystery of consciousness, and the question of where we sit in the universe, is one of the aspects that fascinates him most about the unexplored frontier.

This will form a core aspect of Symphonic Horizons.

"I really think that one of the fundamental questions about the universe is, why is it able to support structures, living things - things like you and me - that can understand the universe," he said.

"And why can the universe understand itself, and how does that happen?

"That for me is as fundamental as asking questions about the nature of space and time, and the origins of the universe itself.

"We will all come out (of the shows) with a different view of our place in the universe, but I don't know what that view will be until we have done it."



GPT-5: The Future of AI? – Fagen wasanni

OpenAI's latest model, GPT-5, is generating a lot of excitement in the AI community. As the successor to the impressive GPT-3, GPT-3.5, and GPT-4 models, GPT-5 carries high expectations. While the release date remains unknown, the filing of a trademark application for GPT-5 by OpenAI has fueled speculation that it could be coming soon.

Building on the advancements of its predecessors, GPT-5 is expected to push the boundaries of AI even further. It promises to go beyond language generation and reasoning capabilities and also delve into audio, video, and other mediums. GPT-5 aims to uncover obscure scientific concepts and lesser-known subjects with ease. Additionally, there is talk of potential integrations with third-party services, bringing us closer to the elusive goal of Artificial General Intelligence (AGI).

The question on everyone's mind is whether GPT-5 will achieve AGI, the ultimate goal of AI. Some early reports suggest that GPT-5 could indeed reach this milestone. However, experts caution that true AGI is still a distant dream and could be years away. If GPT-5 does achieve AGI, it would be a significant moment for the AI community, with far-reaching implications in various fields such as healthcare and education.

However, not everyone is in favor of rapidly advancing AI models. Concerns about the ethical and societal implications of such advanced technology have led to calls for caution. The responsible development of AGI is essential to minimize potential negative impacts on society.

In conclusion, the anticipation surrounding GPT-5 is palpable. With its potential multimodal capabilities and the possibility of AGI, GPT-5 has the potential to revolutionize the field of AI. However, it is important to remember that these expectations are still speculative. The true capabilities of GPT-5 will only be known once OpenAI officially releases it. Until then, the AI community eagerly awaits its arrival.


Drafting patent claims for AI-based inventions: navigating eligibility … – Reuters

August 7, 2023 - In the rapidly evolving landscape of artificial intelligence (AI), innovation is at an all-time high. From machine learning algorithms that predict disease onset to neural networks that make financial predictions, AI has been instrumental in pushing technological boundaries. However, as technology accelerates, our legal systems, particularly the patent system, struggle to keep pace.

Central to patent law is the Alice/Mayo patent eligibility test. This test is designed to evaluate whether a particular invention is eligible for patent protection. The basic requirements for patent eligibility encompass "useful process, machine, manufacture, or composition of matter."

However, the U.S. Supreme Court decisions in the cases of Mayo Collaborative Services v. Prometheus Laboratories, Inc. and Alice Corp. v. CLS Bank International held that mere laws of nature, natural phenomena, and abstract ideas are not sufficient for patent eligibility. The Court emphasized that while abstract ideas or natural phenomena in isolation aren't patent eligible, their transformation into a practical application serving as an innovative building block could be.

The Alice/Mayo test, a two-part framework, aids in making this distinction. It first evaluates if claims are targeting an abstract idea, law of nature, or natural phenomenon. If they are, the test then checks for additional elements that make the claim innovative and distinct from the foundational concept.

However, when applied to AI technologies, the test can yield subjective, inconsistent and sometimes contentious results. The crux of the issue lies in the fact that many AI innovations can be viewed as abstract since they involve algorithms and mathematical processes. Determining where the line is drawn between a patent-ineligible abstract idea and a patent-eligible inventive concept in the realm of AI can be challenging.

Earlier this year, the Supreme Court had an opportunity to clarify this complex issue. Three pivotal cases (Interactive Wearables LLC v. Polar Electro Oy, No. 21-1281 (U.S. May 15, 2023); Tropp v. Travel Sentry Inc., No. 22-22 (U.S. May 15, 2023); and Avery Dennison Corp. v. ADASA Inc., No. 22-822 (U.S. May 30, 2023)) were anticipated to provide more definitive guidelines on patent eligibility.

Interactive Wearables LLC v. Polar Electro Oy questioned the appropriate standard for determining whether a patent claim is directed to a patent-ineligible concept under the Alice/Mayo framework.

Tropp v. Travel Sentry Inc. involved an appeal of a decision holding patent ineligible a method of providing consumers with special dual-access luggage locks that a screening entity would access in accordance with a special procedure and a corresponding key controlled by the luggage screening entity, all while allowing the luggage to remain locked following screening.

Avery Dennison Corp. v. ADASA Inc. questioned whether a claim for a patent to subdivide a binary serial number and assign the "most significant bits" such that they remain identical across Radio Frequency Identification Device (RFID) tags constitutes patent-eligible subject matter.

While not directly related to AI, decisions in these cases could have set precedent or offered insights clarifying some of the complexities encountered when dealing with AI-based inventions. The cases were seen as potential vehicles to address the ambiguities surrounding what is considered an abstract idea versus an inventive concept, a central dilemma in AI patent evaluations. By choosing not to review them, the Supreme Court perpetuated uncertainty, particularly for stakeholders in the AI domain.

AI, by its very nature, often blurs the line between abstract and concrete. Abstract ideas, like mathematical formulas, are not patent-eligible. But what happens when such a formula becomes a machine-learning model driving real-world outcomes? This gray area is where AI innovations typically reside.

Specifically, while the algorithms themselves may seem abstract and theoretical, when implemented, they can drive revolutionary changes in various sectors, including health care, finance, and entertainment. For example, an AI-based system in cybersecurity within a financial network may employ a mathematical model like a neural network, which, while abstract in nature, leads to the practical application of enhanced detection of fraudulent transactions.

The inherent subjectivity in making this distinction means that AI technologies often tread a thin line between acceptance and rejection during the patent examination and evaluation process. This ambiguity can deter innovators from pursuing patents, given the unpredictability of the outcomes, thereby potentially stifling advancement.

The onus now falls on inventors, businesses and their legal teams to craft patent claims that can stand up to this ambiguous eligibility test. Here are some key general considerations for drafting patent claims for AI-based inventions:

1. Specify the domain

Specifying the domain in AI patent claims is pivotal for multiple reasons. By clearly defining the field of application, inventors delineate the scope of their invention, ensuring protection in a specific niche. This clarity avoids overbroad claims that risk rejection and highlights the novelty within a particular context.

For patent examiners, a clear domain offers context and understanding, streamlining the examination process. Moreover, a well-defined domain strengthens both the enforcement and defense of patents while simplifying licensing and commercialization efforts. In the world of AI patents, precision in detailing the domain is important when capturing the invention's true essence and value.

2. Detail the AI mechanism

Rather than make a generic reference to "AI," highlighting the precise mechanism, be it a neural network, deep learning, or reinforcement learning, provides clarity on the technology's foundation. This granularity not only emphasizes the uniqueness of the invention but also aids patent examiners in understanding its intricacies.

A well-defined AI mechanism enhances the patent's robustness, ensuring that its distinctiveness is clearly captured and protected. In the realm of AI patents, specificity in describing the AI mechanism is a linchpin for safeguarding innovation.

3. Highlight technical advantages

Emphasizing the technical advantages of the AI in patent claims is essential. When a claim articulates clear benefits, such as improved efficiency or error reduction, the intrinsic value of AI's contribution to the invention becomes evident. This not only underscores the invention's uniqueness but also showcases its practical significance. In the patenting of AI technologies, focusing on these technical advantages can bolster the patent's strength, clearly differentiating it from mere abstract ideas or generic applications.

4. Avoid over-reliance on algorithms

Though algorithms form the backbone of AI, an excessive focus on them might overshadow their real-world applications. It's essential to strike a balance: outline the algorithm's essence but pivot to its tangible applications and results. This approach not only captures the innovation's full spectrum but also strengthens its patentability by emphasizing its practical impact over mere theoretical constructs.

Given the nuanced nature of AI and the intricacies of patent law, specific strategies can optimize the chances of obtaining robust patent protection. Here are some essential tips for drafting AI-based patent claims, each addressing distinct facets of the patenting process to ensure that AI innovators secure the recognition and protection they deserve.

1. Multi-layered claiming

Using a multi-layered claiming approach in AI patent applications is strategic. Beginning with a broader claim and then transitioning to more specific, dependent claims ensures a comprehensive coverage. This layered strategy acts as a safety net; if a broad claim faces rejection, the subsequent, more detailed claims might still secure approval. In essence, it's a way to cast a wide net while also having targeted catches, optimizing the chances of patent protection across varying depths of the invention's scope.
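
For illustration, a multi-layered claim set in this style might read as follows. The invention, wording, and numbering here are entirely hypothetical, invented for this example rather than drawn from any actual application:

    1. A computer-implemented method comprising: receiving transaction data from a
       financial network; processing the transaction data with a trained machine
       learning model to produce anomaly scores; and flagging transactions whose
       anomaly scores exceed a threshold as potentially fraudulent.
    2. The method of claim 1, wherein the machine learning model is a neural network
       trained on labeled historical transaction records.
    3. The method of claim 2, wherein flagging a transaction triggers an automated
       hold on the transaction within the financial network.

Claim 1 casts the broad net; claims 2 and 3 supply the narrower fallback positions that may survive even if the independent claim is rejected as abstract.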

2. Functional claiming

Opting for functional claiming in AI patent applications emphasizes the tool's practical utility over its internal workings. By focusing on what the AI tool accomplishes, like "identifying anomalies in X data," instead of delving into the intricate algorithmic steps, the claim is anchored in tangible outcomes.

This not only simplifies the claim's language but also broadens its protective scope, covering potential variations in algorithmic implementations that achieve the same functional result. In the AI patent landscape, functional claiming offers a way to capture the essence of an invention's real-world impact.

3. Include data specificity

In AI patent applications, data specificity is integral. Given AI's intrinsic reliance on data, delineating the data type, its processing method and its significance to the invention offers a clearer picture of the technology in action. By anchoring the claim in the specifics of the data utilized, the invention's distinctiveness and practical utility are underscored.

In essence, detailing the data landscape not only clarifies the AI tool's operation but also strengthens the patent's grounding in tangible and innovative use cases.

4. Avoid over-generalizing

Steering clear of over-generalization in AI patent claims is essential. While casting a wide net might seem attractive, exceedingly broad claims risk being tagged as abstract and face rejection. It's more prudent to focus on the distinct facets and practical applications of the invention. Such focused claims not only stand a better chance of securing patent protection but also effectively highlight the value proposition and innovation the AI tool brings to the table.

5. Stay updated

Staying abreast of developments is crucial in the fluid landscape of AI patenting. Given the swift advancements in software and AI, coupled with shifting patent regulations, it's vital to keep tabs on current case law, patent office directives and industry shifts. Being informed ensures that your patent strategies remain aligned with the latest legal precedents and can adapt to emerging trends, maximizing the chances of securing robust and relevant protection for AI innovations.

In conclusion, drafting patent claims for AI-based inventions requires a careful blend of technical detailing, legal foresight and strategic breadth. By ensuring patent eligibility and crafting a precise claim language, inventors can significantly enhance the chances of their AI tool being protected by a patent.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

Anup Iyer is an associate with Moore & Van Allen. He specializes in assisting clients with obtaining patent and trademark rights across diverse technology sectors such as artificial intelligence (AI), optical communication, high performance computing, computer processor architecture, wireless communication technologies, and cybersecurity. He is based in Charlotte, North Carolina, and may be reached at anupiyer@mvalaw.com.

Nick Russell is a member with the firm, based out of the Charlotte office. He provides patent portfolio management and guidance to institutions in the network science sectors, financial sectors, and health care sectors. He has extensive experience in patent preparation and prosecution in computer and network science technologies, payment networks, authentication methods, cybersecurity, cloud computing, and mobile device technology. He may be reached at nickrussell@mvalaw.com.


Rakuten Group and OpenAI Collaborate to Bring Conversational AI … – Fagen wasanni

Rakuten Group has announced a partnership with OpenAI to offer advanced conversational artificial intelligence (AI) experiences for consumers and businesses globally. This collaboration aims to revolutionize the way customers shop and interact with businesses, while improving productivity for merchants and business partners.

As a global innovation company, Rakuten operates Japan's largest online shopping mall and provides various services in e-commerce, fintech, digital content, and telecommunications. With over 70 services and 1.7 billion members worldwide, Rakuten possesses high-quality data and extensive knowledge in different domains.

OpenAI, an AI research and deployment company, is dedicated to ensuring that artificial general intelligence benefits humanity as a whole. Through this partnership, Rakuten will integrate AI services into its products and services, utilizing its valuable data and domain expertise. OpenAI will provide Rakuten with priority access to its APIs and support, exploring mutually beneficial commercial opportunities.

The collaboration will also see Rakuten integrating Rakuten AI experiences into ChatGPT products using OpenAI's plugin architecture. This will enable businesses to interact with AI agents using natural language, performing tasks such as research, data analysis, inventory optimization, pricing, and business process automation.

This partnership holds tremendous potential for the online services landscape, leveraging Rakuten's diverse ecosystem and 100 million members in Japan. By combining Rakuten's operational capabilities and unique data with OpenAI's cutting-edge technology, the collaboration aims to provide value to millions of people in Japan and around the world.


AI is muddying the truth. We've known how to fix it for centuries. – The Boston Globe

Not only does our concept of truth feel more slippery today, but the long-established ways we arrive at insights and decisions are being compromised. Having worked in the data and information sector for a combined five decades, we are very concerned that, left unchecked, the rapid rollout of generative AI could erode the epistemological foundations of society, that is, the ways in which we construct knowledge. As the cognitive scientist Douglas Hofstadter wrote in The Atlantic, it could well undermine the very nature of truth on which our society, and I mean all of human society, is based.

The White House's recent announcement that it has secured voluntary commitments from a handful of companies to improve the safety of their AI technology is a start, but does not address the fundamental risk humanity faces: the end of our ability to discern truth. As our society faces existential crises, from climate change and pandemic preparedness to systemic racism and the fragility of democracy, we urgently need to protect trust in evidence-based decision making.

Among the White House's proposals to regulate generative AI is a watermarking system, a step in the right direction, but one that falls far short of enforcing transparency and verifiability. Should this actually be adopted, some will see the AI watermark and reflexively discount the content as fake news; some won't see the watermark at all; and others, scrolling through their social media feeds or otherwise trying to digest massive amounts of information, will trust the output purely out of convenience.

More fundamentally, the question of whether a news story or journal article is AI-generated or not is distinct from whether that content is fact-based or credible. To truly enhance trust and support in evidence-based decisions, the public (and our regulatory agencies) needs an audit-trail back to underlying data sources, methodologies, and prompts. We need to be able to answer questions like: How was the conclusion arrived at? How was the diagnosis made?

Despite its well-known flaws, the centuries-old scientific method, and its counterparts across law, medicine, and journalism, is the best approach humanity has found to arrive at testable, reliable and revisable conclusions and predictions about the world. We observe, hypothesize, test, analyze, report, and repeat our way to a truer understanding of the world and more effective solutions for how to improve it.

Decision making in modern, democratic society is underpinned by this method. Tools such as peer review in scientific journals and fact-checking ensure meritocracy, reliability, and self-correction. Randomized controlled trials ensure effectiveness; jurisprudence takes legal precedents into account. Also built into the scientific method is humility about the limitations of what is knowable by a given means at a given point in time, and honesty about the confidence we can place in any conclusion based on how it was arrived at.

An answer generated by an AI chatbot that is trained to sound authoritative, but that has no actual observed, experienced, or measured model of the world to align with, and that is unable to cite its sources or explain how it used those sources, violates these principles and standards. If you haven't yet experienced an AI hallucination, just ask a chatbot to create a bio of you. It is likely to attribute work to you that you had no hand in, and cities of residence where you never lived.

There is also an important historical relationship between how we know and how we govern. It can be argued that the reason and logic that defined the Scientific Revolution in the 16th and 17th centuries was also the foundation for democratic thought in Europe, and later, the Declaration of Independence. At this already-perilous moment for democracy around the world, we should at least ponder this link.

Some might argue that letting generative AI technologies run unchecked is the right thing in the name of technological progress; the path to artificial general intelligence may produce breakthroughs that reveal deeper truths about the universe or better solutions to the world's challenges. But that should be society's assessment to make, not a handful of corporations', before these technologies are more widely deployed.

We must build trust and transparency into any AI system that is intended to support decision making. We could train AI systems on source material that adheres to society's highest standards of trust, such as peer-reviewed scientific literature, corrected for retractions. We could design them to extract facts and findings about the world from reliable source material and use them exclusively to generate answers. We could require that they cite their sources and show their work, and be honest about their limitations and bias, reflecting uncertainty back to the user. Efforts are already underway to build these mechanisms into AI, with the hope they can actually level up society's expectations for transparency and accountability.

Evidence-based decision making should immediately become a principle of nascent international AI governance efforts, especially as countries with diverse models of governance introduce AI regulations. Appropriate governance need not compromise scientific and technological progress.

We should also keep in mind that the methods and legitimacy of science have been and continue to be appropriated for scientific racism. As we consider how decisions are made in both the private and public sectors from those about hiring and college admissions to government policies we must consider the sources we base them on. Modern society is full of historical bias, discrimination, and subjugation. AI should be used to shine awareness on these inequities not calcify them further into the training data of automated and impenetrable decisions for decades to come.

We have a once-in-a-century opportunity to define, collectively, a more rational, explainable, systemic, inclusive, and equitable basis for decision making powered by AI. Perhaps we can even chart a future in which AI helps inoculate humanity against our own fallibility, gullibility, and bias in the interest of a fairer society and healthier public sphere.

Lets not waste this moment.

Adam Bly is the founder and CEO of System. He was formerly vice president of data at Spotify and a visiting fellow in science, technology, and society at Harvard Kennedy School. Amy Brand is director and publisher of the MIT Press. Send comments to magazine@globe.com.


Microsoft kills Cortana in Windows as it focuses on next-gen AI – TechCrunch

Microsoft is shutting down its digital assistant app Cortana this month, having now put more of its focus on modern-day AI advances, like its ChatGPT-like Bing Chat and other AI-powered productivity features across Windows and its web browser Edge. A support page confirms the end of Cortana as a standalone app in Windows, starting in August 2023.

The company also confirmed to TechCrunch the page was first published earlier in June, but declined to share more of its thinking on the matter beyond what was referenced on the page itself.

However, reading between the lines of the explanation provided, it appears that Microsoft sees Cortana as a stepping stone toward this new AI future, where users will instead rely on a smarter chatbot running GPT-4, powered by Microsoft's partnership with OpenAI. The company also announced in May that it would build this new ChatGPT-based Bing experience right into Windows 11.

In the meantime, Windows users will be in a transitional period where Cortana will still be around in some form, though the standalone Windows app will no longer be supported. For now, however, Cortana will continue to be available in Outlook mobile, Teams mobile, Microsoft Teams display and Microsoft Teams rooms, the company notes.

Those Cortana-powered experiences may not be long for this world either, as Microsoft has already detailed its plans to bring Bing Chat to the enterprise, where Microsoft 365 Copilot will be integrated into its productivity software, plus Outlook, Teams and more.

"We know that this change may affect some of the ways you work in Windows, so we want to help you transition smoothly to the new options," Microsoft explains on the support page. "Instead of clicking the Cortana icon and launching the app to begin using voice, now you can use voice and satisfy your productivity needs through different tools."

The company then points users to Cortana alternatives like Windows 11 voice access, which lets users control their PC with voice commands, the new AI-powered Bing, Microsoft 365 Copilot and Windows Copilot, which offers centralized AI assistance for Windows users.

The website Windows Latest (not affiliated with Microsoft) was the first to report on the Cortana app's shutdown, having noticed that the latest update for the Cortana Windows app caused the app to stop working. Upon launching the app, a message informed users that "Cortana in Windows as a standalone app is deprecated" and pointed to the support page through a Learn More button.


Microsoft's shift to Bing Chat from its first-gen assistant Cortana may be later mirrored by other Big Tech companies.

This week, The Information reported, for example, that Amazon promoted its head scientist for Alexa, Rohit Prasad, to run a team developing artificial general intelligence. That signals that Amazon, too, may be thinking about how Alexa could evolve into something more capable than the digital assistant it is today. Apple has also been developing its own generative AI tools, Bloomberg reported, but hasn't yet decided how they would be released to customers.


Past, Present, Future: AI, Geopolitics, and the Global Economy – Tech Policy Press

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center.

Spurred by ChatGPT and similar generative technologies, the news is filled with articles about AI replacing humans. Sometimes the concern is over AI replacing employees, displacing jobs; sometimes it's about AI serving as a relationship partner, fulfilling human social and emotional needs. Most often, it's even more direct, taking the form of fears that AI will dispense with humanity entirely.

But as powerful as AI technologies are, these fears are little more than science fiction in the present day. They're also a distraction, but not yet, it seems, from ongoing efforts to regulate AI systems or invest in greater accountability. News and updates on both of these fronts continue to advance every day.

Rather, digital replacement fears are distracting the US from thinking about two other ways in which AI will shape our future. On the one hand, AI offers a major upside: It can amplify today's massive investments in revitalizing the country's industrial leadership. On the other, a major downside: It could contribute to breaking the already fragile post-World War II international order. These possibilities are intertwined, and their prospects will depend on US technology policy actions or the lack thereof.

First, the upside. Through what's increasingly being called Bidenomics, the US is witnessing a resurgence of domestic industrial and manufacturing capacity. The Inflation Reduction Act included $369 billion in incentives and direct investments specifically directed to climate change, catalyzing massive new and expanded battery and electric vehicle plants on American soil. It was followed by another $40 billion to connect every American to high speed internet. The CHIPS and Science Act adds money for semiconductor manufacturing, as does the Bipartisan Infrastructure Law for roads and bridges.

Along with private investment, the net result is double or triple past years' investments in core US capacities. And the economic benefits are showing. Inflation is improving faster in the US than in other countries, and unemployment remains at record lows; the nation's economy is alive and well.

These investments also offer perhaps the clearest benefits of machine learning systems: improving logistics and efficiency, and handling repetitive and automatable tasks for businesses. Whether or not large language models can ever outscore top applicants to the world's best graduate schools, AI offers massive improvements in areas that the EU's AI Act would categorize as minimal risk of harm.

And the US has significant advantages in its capacity for developing and deploying AI to amplify its industrial investments, notably including its workforce, an advantage built in part through many years of talent immigration. Together, this is a formula for the US to reach new heights of global leadership, much as it reached after its massive economic investments in the mid-20th century.

Meanwhile, AI has long been regarded as the 21st century's Space Race, given how the technology motivates international nation-state level competition for scientific progress. And just as the Space Race took place against the tense backdrop of the Cold War, the AI Race is heating up at another difficult geopolitical moment, following Russia's unprovoked invasion of Ukraine. But the international problems are not just in eastern Europe. Although denied by US officials, numerous foreign policy experts indicate a trajectory toward economic decoupling of the US and China, even as trans-Pacific tensions rise over Taiwan's independence (the stakes of which are complicated in part by Taiwan's strategically important semiconductor industry).

Global harmony in the online world is no clearer than offline. Tensions among the US, China, and Europe are running high, and AI will exacerbate them. Data flows between the US and EU may be in peril if an active privacy law enforcement case against Meta by the Irish data protection authority cannot be resolved with a new data transfer agreement. TikTok remains the target of specific legislation restricting its use in the United States and Europe because of its connections to China. Because of AI, the US is considering increased export controls limiting China's access to hardware that can power AI systems, expanding on the significant constraints already in place. The EU has also expressed a goal of de-risking from China, though whether its words will translate to action remains an open question.

For now, the US and EU are on the same side. But in the Council of Europe, where a joint multilateral treaty for AI governance is underway, US reticence may put the endeavor in jeopardy. And the EU continues to outpace (by far) the US in passing technology laws, with significant costs for American technology companies. AI will further this disparity and the tensions it generates, as simultaneously the EU moves forward with its comprehensive AI Act, US businesses continue to flourish through AI, and Congress continues to stall on meaningful tech laws.

It seems more a matter of when, not whether, these divisions will threaten Western collaboration, including in particular on relations with China. If, for example, the simmering situation in Taiwan boils over, will the West be able to align even to the degree it did with Ukraine?

The United Nations, with Russia holding a permanent security council seat, proved far less significant than NATO in the context of the Ukraine invasion; China, too, holds such a seat. What use the UN, another relic of the mid-20th century, will hold in such a future remains to be seen.

These two paths, one of possible domestic success, the other of potential international disaster, present a quandary. But technology policy leadership offers a path forward. The Biden Administration has shown leadership on the potential for societal harms of AI through its landmark Blueprint for an AI Bill of Rights and the voluntary commitments for safety and security recently adopted by leading AI companies. Now it needs to follow that with second and third acts: taking bolder steps to align with Europe on regulation and risk mitigation, and integrating support for industrial AI alongside energy and communications investments, to ensure that the greatest benefits of machine learning technologies can reach the greatest number of people.

The National Telecommunications and Information Administration (NTIA) is taking a thoughtful approach to AI accountability which, if turned into action, can dovetail with the EU's AI Act and build a united democratic front on AI. And embracing modularity (a co-regulatory framework describing modules of codes and rules implemented by multinational, multistakeholder bodies without undermining government sovereignty) as the heart of AI governance could further stabilize international tensions on policy, without the need for a treaty. It could be a useful lever in fostering transatlantic alignment on AI through the US-EU Trade and Technology Council, for example. This would provide a more stable basis for navigating tensions with China arising from the AI Race, as well as a foundation of trust to pair with US investment in AI capacity for industrial growth.

Hopefully, such sensible policy ideas will not be drowned out by the distractions of dystopia, the grandiose ghosts of which will eventually disperse, like the confident predictions of imminent artificial general intelligence made lately (just as they were many decades ago). While powerful, over time AI seems less likely to challenge humanity than to cannibalize itself, as the outputs of LLM systems inevitably make their way into the training data of successor systems, creating artifacts and errors that undermine the quality of the output and vastly increase confusion over its source. Or perhaps the often pablum output of LLMs will fade into the miasma of late-stage online platforms, producing just "[a]nother thing you ignore or half-read," as Ryan Broderick writes in Garbage Day. At minimum, the magic we perceive in AI today will fade over time, with generative technologies revealed as what Yale computer science professor Theodore Kim calls "industrial-scale knowledge sausages."

In many ways, these scenarios (the stories of AI, the Space Race, US industrial leadership, and the first tests of the UN) began in the 1950s. In that decade, the US saw incredible economic expansion, cementing its status as a world-leading power; the Soviet Union launched the first orbiting satellite; the UN, only a few years old, faced its first serious tests in the Korean War and the Suez Crisis; and the field of AI research was born. As these stories continue to unfold, the future is deeply uncertain. And AI's role in shaping the future of US industry and the international world order may well prove to be its biggest legacy.

Previously, Riley was a senior fellow for internet governance at the R Street Institute. He has worked on tech policy in D.C. and San Francisco for nonprofit and public sector employers and has managed teams based in those cities as well as Brussels, New Delhi, London, and Nairobi. He earned his PhD from Johns Hopkins University and a law degree from Yale Law School.



NSF CAREER Award advancing UK researcher’s work integrating AI … – UKNow

LEXINGTON, Ky. (Aug. 7, 2023) – As industry experts navigate the excitement and concerns surrounding the use of artificial intelligence (AI), one University of Kentucky researcher is exploring its potential integration into manufacturing through a National Science Foundation (NSF) Faculty Early Career Development (CAREER) Award.

Peng Edward Wang, Ph.D., is the Robley D. Evans Faculty Fellow and an assistant professor with a joint appointment in the Electrical and Computer Engineering and Mechanical and Aerospace Engineering departments in the UK Stanley and Karen Pigman College of Engineering. He joined the university in 2019.

"The focus of my NSF CAREER Award will be developing the first large-scale generalizable machine learning model specifically tailored for manufacturing applications like welding. We aspire to develop a groundbreaking AI machine learning solution that can be likened to the manufacturing counterpart of ChatGPT," said Wang.

ChatGPT is an AI language model that processes language to generate relevant responses, but it has its limitations, including the inability to process image-based manufacturing data.

The NSF will support Wang with $567,930 over five years for his research on AI and machine learning (ML), where he will utilize real-world production data to develop a system that can be applied broadly to manufacturing systems to improve autonomy, manufacturing efficiency, quality control and sustainability.

This award is one of the most prestigious awards "in support of the early career-development activities of teacher-scholars who most effectively integrate education and research within the context of their organization's mission," according to the NSF website.

"I have been working in the field of AI and ML, developing solutions specifically tailored for manufacturing applications, for several years," said Wang. "However, it has come to my attention that most AI/ML solutions are currently being developed in controlled laboratory environments. There are significant disparities between the data generated in labs and the real-world production data obtained from manufacturing plants."

The disparities in the data are visible in areas like data variety, quality and labeling. For researchers like Wang, this prevents applying these AI/ML solutions to practical manufacturing settings.

Over the next five years, Wang and his team will work to address these challenges and bridge the gap between AI/ML research in controlled environments and real-world manufacturing plants. Crucial to that work is the access to complex, real-world manufacturing plant data.

"I think all people working in the field of AI and ML know that the truth is, if you don't have data, you have nothing to develop a successful model," said Wang.

The research team is partnering with industry powerhouses General Motors (GM) and General Electric (GE) to provide the data necessary for this work. GM has shared its welding plant and laboratory data, which serves as a necessary foundation for Wang's project.

"Collaborating with GM and GE not only provides us with invaluable resources but also offers us the opportunity to test and refine our AI/ML models in real manufacturing environments," said Wang. "By working closely with these industry leaders, we can ensure that our solutions are robust, practical and aligned with the needs of the manufacturing sector."

As part of this work, Wang will also collaborate with the Kentucky Association of Manufacturers (KAM) to share knowledge and provide guidance to small and medium manufacturers on transitioning from traditional manufacturing practices to a smart manufacturing model that can suit their unique needs.

This NSF-funded project will also build a partnership with the Math Science & Technology Center (MSTC) at Paul Laurence Dunbar High School in Lexington. Up to four students will have the opportunity to work on a specifically designed research project, such as utilizing or developing an intelligent robotic welding module to design mechanical parts with specific characteristics.

Wang recognizes this work could pragmatically change the future of manufacturing with an envisioned model applicable to diverse materials, machines and production lines. It could also accommodate a range of downstream application tasks in areas like product quality inspection, defect detection and even process optimization and control.

"To me, the NSF CAREER Award is not only a recognition but, more importantly, it provides me an opportunity to work on a real-world challenge," said Wang. "So if I can be very successful on this project, our group will develop the first successful generalizable machine learning solution for manufacturing, and that will be very meaningful."

Research reported in this publication was supported by the National Science Foundation under Award Number 2237242. The opinions, findings, and conclusions or recommendations expressed are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


AI Researchers Claim They Can Double the Efficiency of Chatbots – Decrypt

Have you ever noticed that your AI chatbot gets lost in the middle of a conversation, or simply says it cannot handle prompts that are too long? That is because each model has a limit on its processing capacity, and it starts to suffer once it goes over that limit, much as if it had a digital attention deficit disorder. But this could soon change thanks to a new method for supercharging LLM capabilities.

Current LLMs have limited context capacities. For example, ChatGPT taps just 8,000 tokens of context, while Claude handles 100,000. Tokens are the basic units of text or code that an LLM uses to process and generate language; in English, a token averages roughly four characters of text. This restricts how much background information the models can harness when formulating replies. Abacus AI has developed a method that allegedly doubles the usable context length for open-source LLMs like Meta's Llama without compromising the model's accuracy in practical application.
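
For readers who want to see what a token budget looks like in practice, here is a minimal sketch using the open-source tiktoken library; the choice of library and encoding is our illustration, not something the article specifies.

    import tiktoken

    # cl100k_base is the encoding used by recent OpenAI chat models.
    enc = tiktoken.get_encoding("cl100k_base")

    text = "Tokens are the basic units an LLM uses to process language."
    tokens = enc.encode(text)

    print(len(tokens))                  # how much of the context window this text consumes
    print(enc.decode(tokens) == text)   # tokens round-trip back to the original string

Any prompt whose token count exceeds the model's context window has to be truncated or rejected, which is precisely the limit the scaling work targets.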

Their technique involves "scaling" the position embeddings that track word locations in input texts. According to its GitHub page, Abacus AI claims that this scaling method drastically increases the number of tokens a model can handle.
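
The article doesn't reproduce the technique, but the description matches position-interpolation-style scaling: divide the position indices fed into Llama's rotary position embeddings by a scale factor, so that long inputs map back into the position range the model was trained on. Below is a minimal illustrative sketch in Python/PyTorch; the function name and exact formulation are our own assumptions, not Abacus AI's actual code.

    import torch

    def scaled_rotary_angles(head_dim: int, max_positions: int,
                             scale: float = 16.0, base: float = 10000.0) -> torch.Tensor:
        """Rotary-embedding angles with position indices divided by `scale`,
        so a model trained on N positions can address roughly N * scale tokens."""
        # Standard rotary frequency spectrum: one frequency per pair of dimensions.
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        # Interpolate: compress positions into the trained range rather than
        # extrapolating past it, which is what degrades unmodified models.
        positions = torch.arange(max_positions).float() / scale
        return torch.outer(positions, inv_freq)  # shape: (max_positions, head_dim // 2)

A scale factor of 16 in this scheme would correspond to the "scale 16" variant evaluated below; as the researchers note, scaling must still be paired with fine-tuning to preserve output quality.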

The researchers evaluated two scaled Llama variants on tasks like substring location and open-book QA. The scale-16 model maintained accuracy on real-world examples up to 16,000-word contexts, versus only 2,000 words for baseline Llama. It even showed some coherence at 20,000+ words, something that was not possible to achieve with fine-tuning techniques alone.

The significance of context extension cannot be overstated. A narrow context window makes the model accurate but not very usable for complex tasks that require background. Conversely, with an expanded context, LLMs can draw on more material when formulating responses, but they either take more time to do so or return subpar results. Handling longer contexts efficiently could enable LLMs to absorb whole documents, or multiple documents, as background when generating text. This may lead to outputs that are more knowledge-grounded and consistent across long conversations.

However, the gains are not perfectly proportional to the scale factors.

Fine-tuning strategies are still necessary, because scaling alone doesn't guarantee high-quality outputs. The Abacus team is also exploring advanced position-encoding schemes from recent papers to further extend context capacity.

Their work suggests that scaling up existing LLMs is a viable path to expanding usable context length. This could democratize access to Large Language Models capable of handling lots of context at once.

Abacus AI has opened the doors of its repository for research purposes only, sharing code specific to its fine-tuning projects. This makes it possible to iterate further on the development and to apply the fine-tuning methods to virtually any open-source Large Language Model.

With applications from personalized chatbots to creative writing aids, more memory-empowered LLMs could soon enable next-generation AI assistants that are conversant across diverse topics. For now, researchers are progressing rapidly to overcome technical constraints in pursuit of artificial general intelligence, meaning generalized human cognitive abilities in an AI model. Maybe someday our digital friends will handle as many tabs as we humans can, but without the headache!


Are we destined to reiterate Oppenheimer's missteps in the development of AI? – Times of India

The stirring narrative encapsulated in the recent 2023 film Oppenheimer echoes a historical recollection that holds vital lessons for our present-day world, teetering on the brink of its next major technological shift: Artificial General Intelligence (AGI). The film narrates the tale of J. Robert Oppenheimer, the architect of the atomic bomb, a man ensnared in an ethical conundrum that left him tormented by the potential devastation of his creation.

Oppenheimer's story is a stark reminder that technology, unbridled and unchecked, can precipitate a Pandora's box of unwelcome outcomes. The atomic bomb, conceived in a world fraught with the horror of World War II, ostensibly a deterrent to further global conflict, obliterated Hiroshima and Nagasaki in a cataclysmic spectacle of power, annihilating hundreds of thousands of innocent lives.

The question thus raised is, are we poised to repeat such a grievous oversight with the genesis of Artificial General Intelligence?

AGI, in the simplest terms, can be defined as a class of artificial intelligence equal to human cognitive capabilities. It is an entity capable of learning, reasoning, comprehending and performing tasks at a level indistinguishable from a human being. It promises a utopian future where intractable global issues such as climate change and poverty could potentially be alleviated. However, just as the dawn of the nuclear age held both the promises of atomic energy and the threat of nuclear warfare, the emergence of AGI holds a similarly dual-edged sword.

The digital age presents us with the challenge to wield this formidable technology with caution and reverence, lest it transform into a potential Digital Oppenheimer. The destructive capabilities of AGI, when weaponised, could be exponentially more devastating than any conventional armament. The danger lies not just in its misuse but also in the potential for an uncontrollable, self-evolving AGI that might eventually surpass our understanding and control.

To avert such a scenario, we mustn't lose sight of the invaluable lessons from Oppenheimer's narrative. It underscores the consequence of unchecked ambition and the dire need for ethical foresight. It echoes the warning that the unbridled pursuit of power can culminate in unforeseen catastrophe.

We must endeavour to instil safety and ethics as foundational elements in the development of AGI. Just as nuclear non-proliferation treaties were vital to managing the proliferation of nuclear weapons, we require international consensus and regulations to impede the militarisation of AGI. Furthermore, significant investment is required to probe the ethical quandaries surrounding AGI and its potential societal impact.

Transparency and public engagement are crucial components of ethical AI development. It is only through open dialogues and the fostering of public trust that we can hope to navigate the tumultuous journey to AGI without seeding mass apprehension.

Moreover, educating the public about the potential risks and benefits of AGI is paramount. A well-informed populace can constructively participate in shaping policy discourse and ensure that the development of AGI aligns with societal interests rather than being monopolised by a handful of corporate or state entities.

Developing ethical guidelines for AI use is of equal importance. These should not merely be prescriptive but must be woven into the very fabric of AGI development. They should address not just how AI should be used, but also the moral implications of its application, the equity of access, and its potential to impact employment, privacy, and society at large.

In conclusion, the development of AGI, a daunting challenge, is a responsibility we cannot shirk. It offers the promise of solving some of humanity's most complex problems, but it also carries the potential for unimaginable destruction. We must strive to not replicate the mistakes of Oppenheimer, ensuring that we build robust safety mechanisms, foster transparency, and engage in a public discourse that shapes the development and use of AGI.

The future of AGI may be uncertain, but it is our responsibility to ensure that it is used for good. It's incumbent upon us to channel our intelligence and compassion to shape a future where AGI is a force for good rather than a harbinger of destruction. We must strike a careful balance between ambition and caution, ensuring that AGI becomes a tool for progress, not a weapon for mass destruction.

The endeavour of harnessing AGI is complex, but it is a task we must undertake if we wish to safeguard our future. We must learn from our past missteps and build a culture of safety, transparency, and ethics in AGI development. We must strive to create an AI that benefits humanity, not one that poses a threat to our very existence. The lessons from Oppenheimer's tale must guide us towards a future where AGI is a tool for good, a tool for progress, a tool for humanity.

Views expressed above are the author's own.

