Category Archives: Artificial General Intelligence

The Future is Now: Understanding and Harnessing Artificial … – North Forty News

Image created with AI (by Monika Lea Jones and Bo Maxwell Stevens, AI Fusion Insights)

By:

Monika Lea Jones, Chief Creative Officer, AI Fusion Insights; Local Contributor, North Forty News

Bo Maxwell Stevens, Founder and CEO, AI Fusion Insights; Local Contributor, North Forty News

Artificial Intelligence (AI) is no longer a concept of the future; it's a present reality transforming our world. AI language models like ChatGPT, with over 100 million users, are revolutionizing the way we communicate and access information. AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intellect. This includes learning from experience, understanding language, and making decisions.

AI is not just a single technology but a blend of various technologies and algorithms. These models (especially large language models like ChatGPT) currently don't reason but instead work by detecting patterns in the preexisting human-generated materials they are trained on. Josiah Seaman, Founder of Creative Contours, describes AI as a multiplier for human creativity and a vessel for human skill.

AI's ubiquity is undeniable. It's integrated into our daily lives, from YouTube recommendations to Spotify's music suggestions. Spotify even introduced an AI DJ, X, that personalizes music based on your preferences and listening history. AI is expected to become even more advanced and integrated into our lives in the coming months and years.

Nikhil Krishnaswamy, a computer science professor at CSU, emphasizes the importance of everyone having input in AI's deployment. He believes that AI should be used to the maximum benefit of everyone, not just those who already have power and resources. He also emphasizes that humans should remain the final decision-makers in situations requiring value judgments and situational understanding.

AI's future promises more personalized experiences, improved data analysis, and possibly new forms of communication. However, ethical considerations are crucial. Krishnaswamy and Seaman agree that AI should eliminate undesirable tasks, not jobs. Seaman's vision of the future of AI is similar to that of Star Trek, where AI disrupts our current system of capitalism, currency, and ownership, but people can strive for loftier goals.

The impact of AI on jobs is a topic of debate. Dan Murray, founder of the Rocky Mountain AI Interest Group, suggests that while some jobs will be lost, new ones will be created. Murray has heard it said that you won't be replaced by AI, but you might be replaced by someone who uses AI. Seaman believes AI can improve quality of life by increasing productivity, potentially reducing the need for work. This aligns with the concept of Universal Basic Income, a topic of interest for organizations like OpenAI.

Northern Colorado is already a supportive community for arts, culture, and leisure activities such as outdoor sports in nature. These activities are often considered luxuries when our budgets are tight, but how could these areas of our lives flourish when our basic needs are met?

AI is already improving lives in various ways. Krishnaswamy cites AI's role in language learning for ESL students, while Murray mentions Furhat Robotics' social robots, which help autistic children communicate. Seaman encourages community leaders to envision a future where AI fosters inclusive, nature-protective communities. CSU philosophy professor Paul DiRado suggests AI will shape our lives as the internet did, raising questions about how we'll interact with future Artificial General Intelligence systems that have their own motivations or interests. How can collaboration between humans and AI help influence what essentially becomes the realization of desires, human or otherwise?

While not everyone needs to use AI, staying informed about developments and understanding potential benefits is important. Murray encourages non-technical people to try the free versions of AI tools, which are often easy to use and can solve everyday problems. He also suggests sharing knowledge and joining AI interest groups.

Dan Murray notes, "Some people may think AI is hard to use. It's actually very easy, and the programming language, if you will, is simply spoken or written English. What could be easier?"

Artificial Intelligence is here and evolving rapidly. Its potential is boundless, but it must be embraced responsibly. As we integrate AI into our lives, we must consider ethical implications. There are harms AI can perpetuate, such as surveillance, amplified human biases, and widening inequality. Currently, AI is a tool. Like a match, which can light a campfire or burn down a forest, it can be used for both benefit and harm. The future of AI is exciting, and we're all part of its journey. As we experience the dawn of AI, we should consider how it can improve efficiency, creativity, and innovation in our lives.

GPT-5: The Future of AI? – Fagen wasanni

OpenAI's anticipated GPT-5 model is generating a lot of excitement in the AI community. As the successor to the impressive GPT-3, GPT-3.5, and GPT-4 models, there are high expectations for what GPT-5 can achieve. While the release date remains unknown, the filing of a trademark application for GPT-5 by OpenAI has fueled speculation that it could be coming soon.

Building on the advancements of its predecessors, GPT-5 is expected to push the boundaries of AI even further. It promises to go beyond language generation and reasoning and delve into audio, video, and other media. GPT-5 aims to handle obscure scientific concepts and lesser-known subjects with ease. Additionally, there is talk of potential integrations with third-party services, bringing us closer to the elusive goal of Artificial General Intelligence (AGI).

The question on everyone's mind is whether GPT-5 will achieve AGI, the ultimate goal of AI. Some early reports suggest that GPT-5 could indeed reach this milestone. However, experts caution that true AGI is still a distant dream and could be years away. If GPT-5 does achieve AGI, it would be a significant moment for the AI community, with far-reaching implications in fields such as healthcare and education.

However, not everyone is in favor of rapidly advancing AI models. Concerns about the ethical and societal implications of such advanced technology have led to calls for caution. The responsible development of AGI is essential to minimize potential negative impacts on society.

In conclusion, the anticipation surrounding GPT-5 is palpable. With its potential multimodal capabilities and the possibility of AGI, GPT-5 has the potential to revolutionize the field of AI. However, it is important to remember that these expectations are still speculative. The true capabilities of GPT-5 will only be known once OpenAI officially releases it. Until then, the AI community eagerly awaits its arrival.

Four questions about AI that need answering now, according to leading physicist Brian Cox – 9News

If humans don't manage to find a solution to four key questions involving artificial intelligence now, "deeper challenges" could be faced down the road, British physicist Professor Brian Cox believes.

The warning echoes that of dozens of AI industry leaders, academics and celebrities who recently called for mitigation measures to be enacted to reduce the risk of global annihilation due to AI systems.

Cox told 9news.com.au there are both immediate and long-term questions about AI that humanity needs to start answering as soon as possible - but conceded an extinction-level event is not likely.

"I think there are big questions, really practical questions, now about AI systems in use," he said.

"The immediate problem is when you give control to an AI system, as we do now, then who decides how those decisions are made?

"Is it just left to this AI thing, or does society have some kind of democratic control?"

Cox went on to cite a fictitious example of what would happen should an automated driving system be put in a situation where the driver and another person are about to be injured.

"This is an unlikely scenario," he admitted, before continuing.

"But is it going to injure a pedestrian, injure the driver, injure a child? Does it say, 'I don't want to injure a child rather than an adult'?

"Those questions need addressing."

The trickier questions, Cox said, arise over longer-term fears the systems could become as intelligent as humans one day.

"We don't have those at the moment," he said, adding those systems are called Artificial General Intelligence (AGI).

"I'm not an expert, but there are varying estimates about time scales. I mean, it's not actually completely accepted we will ever be able to build something of that intelligence; it's in the unknown box, but it might happen quite soon."

Cox said a raft of questions would be raised should AGI systems ever "out-evolve" humans and claim the same rights.

"Ultimately, this all comes down to thinking carefully now about how we regulate these systems; who controls them, who programs them, how much autonomy do they have, how do we deal with them legally...

"Those are real questions and if we answer them now we will be better prepared if we run into deeper challenges in the future."

The comment comes as the renowned professor of particle physics prepares to make his return to Australia for the world premiere of his live stage show Symphonic Horizons.

Cox and the Sydney Symphony Orchestra will join forces at the Sydney Opera House for four shows to answer fundamental questions about human existence and space.

He said if he could have one question answered about the universe tomorrow it would be "are we alone".

"There's a lot of them (questions), but one of the ones I think we have a chance of answering is 'are we alone', or, to put it another way, how far do we have to go to discover life beyond Earth?"

Cox explained that the mystery of consciousness, and the question of where we sit in the universe, is one of the aspects that fascinates him most about the unexplored frontier.

This will form a core aspect of Symphonic Horizons.

"I really think that one of the fundamental questions about the universe is, why is it able to support structures, living things - things like you and me - that can understand the universe," he said.

"And why can the universe understand itself, and how does that happen?

"That for me is as fundamental as asking questions about the nature of space and time, and the origins of the universe itself.

"We will all come out (of the shows) with a different view of our place in the universe, but I don't know what that view will be until we have done it."

The Impact of Artificial Intelligence on Society: Advancements … – Fagen wasanni

Artificial intelligence (AI) is increasingly becoming a force to be reckoned with, as it continues to shape and impact various sectors of society. From education to music, AI is proving to be transformative. However, there is a pressing concern that emerges from this technological progress: what happens when machines surpass human intelligence?

The concept of General AI or Artificial General Intelligence (AGI) raises questions about whether machines can achieve a level of intelligence equivalent to humans and also possess self-awareness. AGI would be capable of performing any task that a human mind can, and potentially even better, by adapting to new situations and circumstances.

While the idea of AGI is theoretically possible, it remains uncertain when, or if, it can be realized. The main obstacle lies in replicating human creative and abstract thinking. Developing algorithms that can reason abstractly and think creatively, skills that even advanced AI systems lack, is essential for achieving AGI.

Despite the challenges, there have been significant advancements in AI across various fields. From advanced chatbots and music generation, such as ChatGPT and Google's Bard, to the imminent arrival of self-driving cars, the breadth of AI's influence is vast.

In medicine, AI is revolutionizing the industry by utilizing Big Data for compiling possible diagnoses and improving treatments. AI can help find better ways of using and combining drugs, as well as potentially creating better drugs, which can be crucial in preventing the spread of highly infectious or dangerous diseases.

However, AI's impact on the arts, particularly music, has been met with controversy. AI-generated art and music are challenging conventions and raising concerns about the potential replacement of artists.

Looking ahead, AI holds immense potential for advancing scientific methods, making science more efficient, and addressing major challenges like the climate crisis by aiding early-warning systems and predictive modeling.

But along with these benefits, there are also fears surrounding the dangers of AI. Professor Stephen Hawking famously stated that the development of full artificial intelligence "could spell the end of the human race." If AI becomes capable of redesigning itself at a rapid rate, it could outpace human capabilities and pose a threat to our existence.

The military application of AI also raises significant concerns. Many countries are already incorporating AI into weaponry and systems, with the potential for autonomous weapon systems that can identify and attack targets without human intervention. The question arises: should decisions regarding the use of AI in military contexts be made without human control?

Some countries have taken measures to restrict AI's control over weapons, recognizing the potential dangers. However, the moral stance of all nations remains uncertain, which raises the question of the risks posed by rogue states.

As AI continues to advance, it is imperative to navigate the challenges and risks while harnessing its potential for the betterment of society. By ensuring responsible development and ethical guidelines, we can leverage AIs capabilities for the collective benefit of humanity.

Drafting patent claims for AI-based inventions: navigating eligibility … – Reuters

August 7, 2023 - In the rapidly evolving landscape of artificial intelligence (AI), innovation is at an all-time high. From machine learning algorithms that predict disease onset to neural networks that make financial predictions, AI has been instrumental in pushing technological boundaries. However, as technology accelerates, our legal systems, particularly the patent system, struggle to keep pace.

Central to patent law is the Alice/Mayo patent eligibility test. This test is designed to evaluate whether a particular invention is eligible for patent protection. The basic requirements for patent eligibility encompass "useful process, machine, manufacture, or composition of matter."

However, the U.S. Supreme Court decisions in the cases of Mayo Collaborative Services v. Prometheus Laboratories, Inc. and Alice Corp. v. CLS Bank International held that mere laws of nature, natural phenomena, and abstract ideas are not sufficient for patent eligibility. The Court emphasized that while abstract ideas or natural phenomena in isolation aren't patent eligible, their transformation into a practical application serving as an innovative building block could be.

The Alice/Mayo test, a two-part framework, aids in making this distinction. It first evaluates if claims are targeting an abstract idea, law of nature, or natural phenomenon. If they are, the test then checks for additional elements that make the claim innovative and distinct from the foundational concept.

However, when applied to AI technologies, the test can yield subjective, inconsistent and sometimes contentious results. The crux of the issue lies in the fact that many AI innovations can be viewed as abstract since they involve algorithms and mathematical processes. Determining where the line is drawn between a patent-ineligible abstract idea and a patent-eligible inventive concept in the realm of AI can be challenging.

Earlier this year, the Supreme Court had an opportunity to clarify this complex issue. Three pivotal cases, Interactive Wearables LLC v. Polar Electro Oy, No. 21-1281 (U.S. May 15, 2023); Tropp v. Travel Sentry Inc., No. 22-22 (U.S. May 15, 2023); and Avery Dennison Corp. v. ADASA Inc., No. 22-822 (U.S. May 30, 2023), were anticipated to provide more definitive guidelines on patent eligibility.

Interactive Wearables LLC v. Polar Electro Oy questioned the appropriate standard for determining whether a patent claim is directed to a patent-ineligible concept under the Alice/Mayo framework.

Tropp v. Travel Sentry Inc involved an appeal of a decision that determined that patents for a method of providing consumers with special dual-access luggage locks that a screening entity would access in accordance with a special procedure and corresponding key controlled by the luggage screening entity, all while allowing the luggage to remain locked following screening, were patent ineligible.

Avery Dennison Corp. v. ADASA Inc. questioned whether a claim for a patent to subdivide a binary serial number and assign the "most significant bits" such that they remain identical across Radio Frequency Identification Device (RFID) tags constitutes patent-eligible subject matter.
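
For readers unfamiliar with the terminology, the "most significant bits" subdivision at issue in Avery Dennison can be sketched in a few lines of Python. The 96-bit serial width and the 64/32 split below are invented for illustration and are not taken from the patent itself:

```python
def split_serial(serial: int, total_bits: int = 96, prefix_bits: int = 64):
    """Split a serial number into a shared most-significant-bit block
    prefix and a per-tag suffix, illustrating the claimed subdivision."""
    suffix_bits = total_bits - prefix_bits
    prefix = serial >> suffix_bits               # the most significant bits
    suffix = serial & ((1 << suffix_bits) - 1)   # the remaining low bits
    return prefix, suffix

# Tags allocated from the same block share an identical prefix.
block = 0xABCDEF0123456789 << 32   # 64-bit prefix, 32-bit suffix space
tag_a = block | 1
tag_b = block | 2
assert split_serial(tag_a)[0] == split_serial(tag_b)[0]
```

The legal question was whether claiming such a bit-partitioning scheme amounts to patent-eligible subject matter or an abstract idea; the sketch only shows the mechanics being claimed.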

While not directly related to AI, decisions in these cases could have set precedent, or offered insights, that would clarify some of the complexities encountered when dealing with AI-based inventions. These cases were seen as potential vehicles to address the ambiguities surrounding what is considered an abstract idea versus an inventive concept, a central dilemma in AI patent evaluations. By choosing not to review these cases, the Supreme Court perpetuated uncertainty, particularly for stakeholders in the AI domain.

AI, by its very nature, often blurs the line between abstract and concrete. Abstract ideas, like mathematical formulas, are not patent-eligible. But what happens when such a formula becomes a machine-learning model driving real-world outcomes? This gray area is where AI innovations typically reside.

Specifically, while the algorithms themselves may seem abstract and theoretical, when implemented, they can drive revolutionary changes in various sectors, including health care, finance, and entertainment. For example, an AI-based system in cybersecurity within a financial network may employ a mathematical model like a neural network, which, while abstract in nature, leads to the practical application of enhanced detection of fraudulent transactions.

The inherent subjectivity in making this distinction means that AI technologies often tread a thin line between acceptance and rejection during the patent examination and evaluation process. This ambiguity can deter innovators from pursuing patents, given the unpredictability of the outcomes, thereby potentially stifling advancement.

The onus now falls on inventors, businesses and their legal teams to craft patent claims that can stand up to this ambiguous eligibility test. Here are some key general considerations for drafting patent claims for AI-based inventions:

1. Specify the domain

Specifying the domain in AI patent claims is pivotal for multiple reasons. By clearly defining the field of application, inventors delineate the scope of their invention, ensuring protection in a specific niche. This clarity avoids overbroad claims that risk rejection and highlights the novelty within a particular context.

For patent examiners, a clear domain offers context and understanding, streamlining the examination process. Moreover, a well-defined domain strengthens both the enforcement and defense of patents while simplifying licensing and commercialization efforts. In the world of AI patents, precision in detailing the domain is important when capturing the invention's true essence and value.

2. Detail the AI mechanism

Rather than make a generic reference to "AI," highlighting the precise mechanism, be it a neural network, deep learning, or reinforcement learning, provides clarity on the technology's foundation. This granularity not only emphasizes the uniqueness of the invention but also aids patent examiners in understanding its intricacies.

A well-defined AI mechanism enhances the patent's robustness, ensuring that its distinctiveness is clearly captured and protected. In the realm of AI patents, specificity in describing the AI mechanism is a linchpin for safeguarding innovation.

3. Highlight technical advantages

Emphasizing the technical advantages of the AI in patent claims is essential. When a claim articulates clear benefits, such as improved efficiency or error reduction, the intrinsic value of AI's contribution to the invention becomes evident. This not only underscores the invention's uniqueness but also showcases its practical significance. In the patenting of AI technologies, focusing on these technical advantages can bolster the patent's strength, clearly differentiating it from mere abstract ideas or generic applications.

4. Avoid over-reliance on algorithms

Though algorithms form the backbone of AI, an excessive focus on them might overshadow their real-world applications. It's essential to strike a balance: outline the algorithm's essence but pivot to its tangible applications and results. This approach not only captures the innovation's full spectrum but also strengthens its patentability by emphasizing its practical impact over mere theoretical constructs.

Given the nuanced nature of AI and the intricacies of patent law, specific strategies can optimize the chances of obtaining robust patent protection. Here are some essential tips for drafting AI-based patent claims, each addressing distinct facets of the patenting process to ensure that AI innovators secure the recognition and protection they deserve.

1. Multi-layered claiming

Using a multi-layered claiming approach in AI patent applications is strategic. Beginning with a broader claim and then transitioning to more specific, dependent claims ensures a comprehensive coverage. This layered strategy acts as a safety net; if a broad claim faces rejection, the subsequent, more detailed claims might still secure approval. In essence, it's a way to cast a wide net while also having targeted catches, optimizing the chances of patent protection across varying depths of the invention's scope.

2. Functional claiming

Opting for functional claiming in AI patent applications emphasizes the tool's practical utility over its internal workings. By focusing on what the AI tool accomplishes, like "identifying anomalies in X data," instead of delving into the intricate algorithmic steps, the claim is anchored in tangible outcomes.

This not only simplifies the claim's language but also broadens its protective scope, covering potential variations in algorithmic implementations that achieve the same functional result. In the AI patent landscape, functional claiming offers a way to capture the essence of an invention's real-world impact.

3. Include data specificity

In AI patent applications, data specificity is integral. Given AI's intrinsic reliance on data, delineating the data type, its processing method and its significance to the invention offers a clearer picture of the technology in action. By anchoring the claim in the specifics of the data utilized, the invention's distinctiveness and practical utility are underscored.

In essence, detailing the data landscape not only clarifies the AI tool's operation but also strengthens the patent's grounding in tangible and innovative use cases.

4. Avoid over-generalizing

Steering clear of over-generalization in AI patent claims is essential. While casting a wide net might seem attractive, exceedingly broad claims risk being tagged as abstract and face rejection. It's more prudent to focus on the distinct facets and practical applications of the invention. Such focused claims not only stand a better chance of securing patent protection but also effectively highlight the value proposition and innovation the AI tool brings to the table.

5. Stay updated

Staying abreast of developments is crucial in the fluid landscape of AI patenting. Given the swift advancements in software and AI, coupled with shifting patent regulations, it's vital to keep tabs on current case law, patent office directives and industry shifts. Being informed ensures that your patent strategies remain aligned with the latest legal precedents and can adapt to emerging trends, maximizing the chances of securing robust and relevant protection for AI innovations.

In conclusion, drafting patent claims for AI-based inventions requires a careful blend of technical detailing, legal foresight and strategic breadth. By ensuring patent eligibility and crafting a precise claim language, inventors can significantly enhance the chances of their AI tool being protected by a patent.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

Anup Iyer is an associate with Moore & Van Allen. He specializes in assisting clients with obtaining patent and trademark rights across diverse technology sectors such as artificial intelligence (AI), optical communication, high performance computing, computer processor architecture, wireless communication technologies, and cybersecurity. He is based in Charlotte, North Carolina, and may be reached at anupiyer@mvalaw.com.

Nick Russell is a member with the firm, based out of the Charlotte office. He provides patent portfolio management and guidance to institutions in the network science sectors, financial sectors, and health care sectors. He has extensive experience in patent preparation and prosecution in computer and network science technologies, payment networks, authentication methods, cybersecurity, cloud computing, and mobile device technology. He may be reached at nickrussell@mvalaw.com.

Rakuten Group and OpenAI Collaborate to Bring Conversational AI … – Fagen wasanni

Rakuten Group has announced a partnership with OpenAI to offer advanced conversational artificial intelligence (AI) experiences for consumers and businesses globally. This collaboration aims to revolutionize the way customers shop and interact with businesses, while improving productivity for merchants and business partners.

As a global innovation company, Rakuten operates Japan's largest online shopping mall and provides various services in e-commerce, fintech, digital content, and telecommunications. With over 70 services and 1.7 billion members worldwide, Rakuten possesses high-quality data and extensive knowledge in different domains.

OpenAI, an AI research and deployment company, is dedicated to ensuring that artificial general intelligence benefits humanity as a whole. Through this partnership, Rakuten will integrate AI services into its products and services, utilizing its valuable data and domain expertise. OpenAI will provide Rakuten with priority access to its APIs and support, exploring mutually beneficial commercial opportunities.

The collaboration will also see Rakuten integrating Rakuten AI experiences into ChatGPT products using OpenAI's plugin architecture. This will enable businesses to interact with AI agents using natural language, performing tasks such as research, data analysis, inventory optimization, pricing, and business process automation.

This partnership holds tremendous potential for the online services landscape, leveraging Rakuten's diverse ecosystem and 100 million members in Japan. By combining Rakuten's operational capabilities and unique data with OpenAI's cutting-edge technology, the collaboration aims to provide value to millions of people in Japan and around the world.

AI is muddying the truth. Weve known how to fix it for centuries. – The Boston Globe

Not only does our concept of truth feel more slippery today, but the long-established ways we arrive at insights and decisions are being compromised. Having worked in the data and information sector for a combined five decades, we are very concerned that, left unchecked, the rapid rollout of generative AI could erode the epistemological foundations of society, that is, the ways in which we construct knowledge. As the cognitive scientist Douglas Hofstadter wrote in The Atlantic, it could well undermine the very nature of truth on which our society (and I mean all of human society) is based.

The White House's recent announcement that it has secured voluntary commitments from a handful of companies to improve the safety of their AI technology is a start, but it does not address the fundamental risk humanity faces: the end of our ability to discern truth. As our society faces existential crises, from climate change and pandemic preparedness to systemic racism and the fragility of democracy, we urgently need to protect trust in evidence-based decision making.

Among the White House's proposals to regulate generative AI is a watermarking system, a step in the right direction, but one that falls far short of enforcing transparency and verifiability. Should this actually be adopted, some will see the AI watermark and reflexively discount the content as fake news; some won't see the watermark at all; and others, scrolling through their social media feeds or otherwise trying to digest massive amounts of information, will trust the output purely out of convenience.

More fundamentally, the question of whether a news story or journal article is AI-generated or not is distinct from whether that content is fact-based or credible. To truly enhance trust and support in evidence-based decisions, the public (and our regulatory agencies) needs an audit-trail back to underlying data sources, methodologies, and prompts. We need to be able to answer questions like: How was the conclusion arrived at? How was the diagnosis made?

Despite its well-known flaws, the centuries-old scientific method, and its counterparts across law, medicine, and journalism, is the best approach humanity has found to arrive at testable, reliable and revisable conclusions and predictions about the world. We observe, hypothesize, test, analyze, report, and repeat our way to a truer understanding of the world and more effective solutions for how to improve it.

Decision making in modern, democratic society is underpinned by this method. Tools such as peer review in scientific journals and fact-checking ensure meritocracy, reliability, and self-correction. Randomized controlled trials ensure effectiveness; jurisprudence takes legal precedents into account. Also built into the scientific method is humility about the limitations of what is knowable by a given means at a given point in time, and honesty about the confidence we can place in any conclusion based on how it was arrived at.

An answer generated by an AI chatbot that is trained to sound authoritative, but that has no observed, experienced, or measured model of the world to align with and is unable to cite its sources or explain how it used them, violates these principles and standards. If you haven't yet experienced an AI hallucination, just ask a chatbot to create a bio of you. It is likely to attribute work to you that you had no hand in and cities of residence where you never lived.

There is also an important historical relationship between how we know and how we govern. It can be argued that the reason and logic that defined the Scientific Revolution in the 16th and 17th centuries was also the foundation for democratic thought in Europe, and later, the Declaration of Independence. At this already-perilous moment for democracy around the world, we should at least ponder this link.

Some might argue that letting generative AI technologies run unchecked is the right thing in the name of technological progress; the path to artificial general intelligence may produce breakthroughs that reveal deeper truths about the universe or better solutions to the world's challenges. But that should be society's assessment to make, not one made by a handful of corporations, before these technologies are more widely deployed.

We must build trust and transparency into any AI system that is intended to support decision making. We could train AI systems on source material that adheres to society's highest standards of trust, such as peer-reviewed scientific literature, corrected for retractions. We could design them to extract facts and findings about the world from reliable source material and use them exclusively to generate answers. We could require that they cite their sources and show their work, and be honest about their limitations and biases, reflecting uncertainty back to the user. Efforts are already underway to build these mechanisms into AI, with the hope that they can actually raise society's expectations for transparency and accountability.
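The cite-your-sources design the authors describe can be sketched in a few lines. This is a purely illustrative toy, not anyone's actual system: the corpus, the keyword matching, and the DOI strings are all hypothetical stand-ins for real retrieval over vetted, retraction-corrected literature. The point it demonstrates is the contract: answers are composed only from facts in a trusted corpus, every claim carries a citation back to its source, and the system admits when it has no supported answer rather than improvising one.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str
    source: str  # citation back to the underlying material

# Toy corpus standing in for a vetted, retraction-corrected source base.
CORPUS = [
    Fact("Water boils at 100 C at sea level.", "doi:10.0000/example-1"),
    Fact("The Earth orbits the Sun.", "doi:10.0000/example-2"),
]

def answer(question: str) -> str:
    """Answer only from vetted facts, cite sources, and admit uncertainty."""
    terms = set(question.lower().split())
    # Crude keyword overlap in place of real retrieval/ranking.
    hits = [f for f in CORPUS if terms & set(f.claim.lower().split())]
    if not hits:
        # Honesty about limits: no supported answer means saying so.
        return "No supported answer in the vetted corpus."
    return " ".join(f"{f.claim} [{f.source}]" for f in hits)
```

Even in this toy form, the audit trail the authors call for falls out naturally: every sentence in the output can be traced to a specific source identifier.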

Evidence-based decision making should immediately become a principle of nascent international AI governance efforts, especially as countries with diverse models of governance introduce AI regulations. Appropriate governance need not compromise scientific and technological progress.

We should also keep in mind that the methods and legitimacy of science have been, and continue to be, appropriated for scientific racism. As we consider how decisions are made in both the private and public sectors, from those about hiring and college admissions to government policies, we must consider the sources we base them on. Modern society is full of historical bias, discrimination, and subjugation. AI should be used to shine awareness on these inequities, not calcify them further into the training data of automated and impenetrable decisions for decades to come.

We have a once-in-a-century opportunity to define, collectively, a more rational, explainable, systemic, inclusive, and equitable basis for decision making powered by AI. Perhaps we can even chart a future in which AI helps inoculate humanity against our own fallibility, gullibility, and bias in the interest of a fairer society and healthier public sphere.

Let's not waste this moment.

Adam Bly is the founder and CEO of System. He was formerly vice president of data at Spotify and a visiting fellow in science, technology, and society at Harvard Kennedy School. Amy Brand is director and publisher of the MIT Press. Send comments to magazine@globe.com.

See the rest here:

AI is muddying the truth. We've known how to fix it for centuries. - The Boston Globe

NSF CAREER Award advancing UK researcher’s work integrating AI … – UKNow

LEXINGTON, Ky. (Aug. 7, 2023) -- As industry experts navigate the excitement and concerns surrounding the use of artificial intelligence (AI), one University of Kentucky researcher is exploring its potential integration into manufacturing through a National Science Foundation (NSF) Faculty Early Career Development (CAREER) Award.

Peng Edward Wang, Ph.D., is the Robley D. Evans Faculty Fellow and an assistant professor with a joint appointment in the Electrical and Computer Engineering and Mechanical and Aerospace Engineering departments in the UK Stanley and Karen Pigman College of Engineering. He joined the university in 2019.

"The focus of my NSF CAREER Award will be developing the first large-scale generalizable machine learning model specifically tailored for manufacturing applications like welding. We aspire to develop a groundbreaking AI machine learning solution that can be likened to the manufacturing counterpart of ChatGPT," said Wang.

ChatGPT is an AI language model that processes language to generate relevant responses, but it has its limitations, including the inability to process image-based manufacturing data.

The NSF will support Wang with $567,930 over five years for his research on AI and machine learning (ML), where he will utilize real-world production data to develop a system that can be applied broadly to manufacturing systems to improve autonomy, manufacturing efficiency, quality control and sustainability.

This award is "one of the most prestigious awards in support of the early career-development activities of teacher-scholars who most effectively integrate education and research within the context of their organization's mission," according to the NSF website.

"I have been working in the field of AI and ML, developing solutions specifically tailored for manufacturing applications, for several years," said Wang. "However, it has come to my attention that most AI/ML solutions are currently being developed in controlled laboratory environments. There are significant disparities between the data generated in labs and the real-world production data obtained from manufacturing plants."

These disparities show up in areas like data variety, quality, and labeling, and for researchers like Wang they prevent AI/ML solutions from being applied in practical manufacturing settings.

Over the next five years, Wang and his team will work to address these challenges and bridge the gap between AI/ML research in controlled environments and real-world manufacturing plants. Crucial to that work is access to complex, real-world manufacturing plant data.

"I think all people working in the field of AI and ML know that the truth is, if you don't have data, you have nothing to develop a successful model," said Wang.

The research team is partnering with industry powerhouses General Motors (GM) and General Electric (GE) to provide the data necessary for this work. GM has shared its welding plant and laboratory data, which serves as a necessary foundation for Wang's project.

"Collaborating with GM and GE not only provides us with invaluable resources but also offers us the opportunity to test and refine our AI/ML models in real manufacturing environments," said Wang. "By working closely with these industry leaders, we can ensure that our solutions are robust, practical and aligned with the needs of the manufacturing sector."

As part of this work, Wang will also collaborate with the Kentucky Association of Manufacturers (KAM) to share knowledge and provide guidance to small and medium manufacturers on transitioning from traditional manufacturing practices to a smart manufacturing model that can suit their unique needs.

This NSF-funded project will also build a partnership with the Math Science & Technology Center (MSTC) at Paul Laurence Dunbar High School in Lexington. Up to four students will have the opportunity to work on a specially designed research project, such as using or developing the intelligent robotic welding module to design mechanical parts with specific characteristics.

Wang recognizes this work could pragmatically change the future of manufacturing with an envisioned model applicable to diverse materials, machines and production lines. It could also accommodate a range of downstream application tasks in areas like product quality inspection, defect detection and even process optimization and control.

"To me, the NSF CAREER Award is not only a recognition but, more importantly, it provides me an opportunity to work on a real-world challenge," said Wang. "So if I can be very successful on this project, our group will develop the first successful generalizable machine learning solution for manufacturing. That will be very meaningful."

Research reported in this publication was supported by the National Science Foundation under Award Number 2237242. The opinions, findings, and conclusions or recommendations expressed are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Continue reading here:

NSF CAREER Award advancing UK researcher's work integrating AI ... - UKNow

Past, Present, Future: AI, Geopolitics, and the Global Economy – Tech Policy Press

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center.

Spurred by ChatGPT and similar generative technologies, the news is filled with articles about AI replacing humans. Sometimes the concern is over AI replacing employees, displacing jobs; sometimes it's about AI serving as a relationship partner, fulfilling human social and emotional needs. Most often, it's even more direct, taking the form of fears that AI will dispense with humanity entirely.

But as powerful as AI technologies are, these fears are little more than science fiction in the present day. They're also a distraction, though not yet, it seems, from ongoing efforts to regulate AI systems or invest in greater accountability. News and updates on both of these fronts continue to advance every day.

Rather, digital replacement fears are distracting the US from thinking about two other ways in which AI will shape our future. On the one hand, AI offers a major upside: It can amplify today's massive investments in revitalizing the country's industrial leadership. On the other, a major downside: It could contribute to breaking the already fragile post-World War II international order. These possibilities are intertwined, and their prospects will depend on US technology policy actions, or the lack thereof.

First, the upside. Through what's increasingly being called Bidenomics, the US is witnessing a resurgence of domestic industrial and manufacturing capacity. The Inflation Reduction Act included $369 billion in incentives and direct investments specifically directed at climate change, catalyzing massive new and expanded battery and electric vehicle plants on American soil. It was followed by another $40 billion to connect every American to high-speed internet. The CHIPS and Science Act adds money for semiconductor manufacturing, as does the Bipartisan Infrastructure Law for roads and bridges.

Along with private investment, the net result is double or triple past years' investments in core US capacities. And the economic benefits are showing. Inflation is improving faster in the US than in other countries, and unemployment remains at record lows; the nation's economy is alive and well.

These investments also offer perhaps the clearest benefits of machine learning systems: improving logistics and efficiency, and handling repetitive and automatable tasks for businesses. Whether or not large language models can ever outscore top applicants to the world's best graduate schools, AI offers massive improvements in areas that the EU's AI Act would categorize as minimal risk of harm.

And the US has significant advantages in its capacity for developing and deploying AI to amplify its industrial investments, notably including its workforce, an advantage built in part through many years of talent immigration. Together, this is a formula for the US to reach new heights of global leadership, much as it reached after its massive economic investments in the mid-20th century.

Meanwhile, AI has long been regarded as the 21st century's Space Race, given how the technology motivates international nation-state-level competition for scientific progress. And just as the Space Race took place against the tense backdrop of the Cold War, the AI Race is heating up at another difficult geopolitical moment, following Russia's unprovoked invasion of Ukraine. But the international problems are not just in eastern Europe. Although denied by US officials, numerous foreign policy experts indicate a trajectory toward economic decoupling of the US and China, even as trans-Pacific tensions rise over Taiwan's independence (the stakes of which are complicated in part by Taiwan's strategically important semiconductor industry).

Global harmony in the online world is no clearer than offline. Tensions among the US, China, and Europe are running high, and AI will exacerbate them. Data flows between the US and EU may be in peril if an active privacy law enforcement case against Meta by the Irish data protection authority cannot be resolved with a new data transfer agreement. TikTok remains the target of specific legislation restricting its use in the United States and Europe because of its connections to China. Because of AI, the US is considering increased export controls limiting China's access to hardware that can power AI systems, expanding on the significant constraints already in place. The EU has also expressed a goal of de-risking from China, though whether its words will translate to action remains an open question.

For now, the US and EU are on the same side. But in the Council of Europe, where a joint multilateral treaty for AI governance is underway, US reticence may put the endeavor in jeopardy. And the EU continues to outpace the US, by far, in passing technology laws, with significant costs for American technology companies. AI will widen this disparity and the tensions it generates, as the EU moves forward with its comprehensive AI Act, US businesses continue to flourish through AI, and Congress continues to stall on meaningful tech laws.

It seems more a matter of when, not whether, these divisions will threaten Western collaboration, including in particular on relations with China. If, for example, the simmering situation in Taiwan boils over, will the West be able to align even to the degree it did with Ukraine?

The United Nations, with Russia holding a permanent Security Council seat, proved far less significant than NATO in the context of the Ukraine invasion; China, too, holds such a seat. What role the UN, another relic of the mid-20th century, will play in such a future remains to be seen.

These two paths, one of possible domestic success, the other of potential international disaster, present a quandary. But technology policy leadership offers a path forward. The Biden Administration has shown leadership on the potential societal harms of AI through its landmark Blueprint for an AI Bill of Rights and the voluntary commitments for safety and security recently adopted by leading AI companies. Now it needs to follow that with second and third acts: taking bolder steps to align with Europe on regulation and risk mitigation, and integrating support for industrial AI alongside energy and communications investments, to ensure that the greatest benefits of machine learning technologies can reach the greatest number of people.

The National Telecommunications and Information Administration (NTIA) is taking a thoughtful approach to AI accountability, which, if turned into action, can dovetail with the EU's AI Act and build a united democratic front on AI. And embracing modularity, a co-regulatory framework describing modules of codes and rules implemented by multinational, multistakeholder bodies without undermining government sovereignty, as the heart of AI governance could further stabilize international tensions on policy without the need for a treaty. It could be a useful lever in fostering transatlantic alignment on AI through the US-EU Trade and Technology Council, for example. This would provide a more stable basis for navigating tensions with China arising from the AI Race, as well as a foundation of trust to pair with US investment in AI capacity for industrial growth.

Hopefully, such sensible policy ideas will not be drowned out by the distractions of dystopia, the grandiose ghosts of which will eventually disperse like the confident predictions of imminent artificial general intelligence made lately (just as they did many decades ago). While powerful, over time AI seems less likely to challenge humanity than to cannibalize itself, as the outputs of LLM systems inevitably make their way into the training data of successor systems, creating artifacts and errors that undermine the quality of the output and vastly increase confusion over its source. Or perhaps the often-pablum output of LLMs will fade into the miasma of late-stage online platforms, producing just "[a]nother thing you ignore or half-read," as Ryan Broderick writes in Garbage Day. At minimum, the magic we perceive in AI today will fade over time, with generative technologies revealed as what Yale computer science professor Theodore Kim calls "industrial-scale knowledge sausages."

In many ways, these scenarios (the stories of AI, the Space Race, US industrial leadership, and the first tests of the UN) began in the 1950s. In that decade, the US saw incredible economic expansion, cementing its status as a world-leading power; the Soviet Union launched the first orbiting satellite; the UN, only a few years old, faced its first serious tests in the Korean War and the Suez Crisis; and the field of AI research was born. As these stories continue to unfold, the future is deeply uncertain. And AI's role in shaping the future of US industry and the international world order may well prove to be its biggest legacy.

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center. Previously, he was a senior fellow for internet governance at the R Street Institute. He has worked on tech policy in D.C. and San Francisco for nonprofit and public sector employers and managed teams based in those cities as well as Brussels, New Delhi, London, and Nairobi. Chris earned his PhD from Johns Hopkins University and a law degree from Yale Law School.


Originally posted here:

Past, Present, Future: AI, Geopolitics, and the Global Economy - Tech Policy Press

Microsoft kills Cortana in Windows as it focuses on next-gen AI – TechCrunch

Microsoft is shutting down its digital assistant app Cortana this month, having now put more of its focus on modern-day AI advances, like its ChatGPT-like Bing Chat and other AI-powered productivity features across Windows and its web browser Edge. A support page confirms the end of Cortana as a standalone app in Windows, starting in August 2023.

The company also confirmed to TechCrunch the page was first published earlier in June, but declined to share more of its thinking on the matter beyond what was referenced on the page itself.

However, reading between the lines of the explanation provided, it appears that Microsoft sees Cortana as a stepping stone toward this new AI future, where users will instead rely on a smarter chatbot running GPT-4, powered by Microsoft's partnership with OpenAI. The company also announced in May that it would build this new ChatGPT-based Bing experience right into Windows 11.

In the meantime, Windows users will be in a transitional period where Cortana will still be around in some form, though the standalone Windows app will no longer be supported. For now, however, "Cortana will continue to be available in Outlook mobile, Teams mobile, Microsoft Teams display and Microsoft Teams rooms," the company notes.

Those Cortana-powered experiences may not be long for this world either, as Microsoft has already detailed its plans to bring Bing Chat to the enterprise, where Microsoft 365 Copilot will be integrated into its productivity software, plus Outlook, Teams and more.

"We know that this change may affect some of the ways you work in Windows, so we want to help you transition smoothly to the new options," Microsoft explains on the support page. "Instead of clicking the Cortana icon and launching the app to begin using voice, now you can use voice and satisfy your productivity needs through different tools."

The company then points users to Cortana alternatives like Windows 11 voice access, which lets users control their PC with voice commands, the new AI-powered Bing, Microsoft 365 Copilot and Windows Copilot, which offers centralized AI assistance for Windows users.

The website Windows Latest (not affiliated with Microsoft) was the first to report on the Cortana app's shutdown, having noticed that the latest update for the Cortana Windows app caused it to stop working. Upon launching the app, a message informed users that "Cortana in Windows as a standalone app is deprecated" and pointed to the support page through a Learn More button.

Image Credits: Windows Latest

Microsoft's shift from its first-gen assistant Cortana to Bing Chat may later be mirrored by other Big Tech companies.

This week, The Information reported, for example, that Amazon promoted its head scientist for Alexa, Rohit Prasad, to run a team developing artificial general intelligence. That signals that Amazon, too, may be thinking about how Alexa could evolve into something more capable than the digital assistant it is today. Apple has also been developing its own generative AI tools, Bloomberg reported, but hasn't yet decided how they would be released to customers.

Read the original here:

Microsoft kills Cortana in Windows as it focuses on next-gen AI - TechCrunch