Category Archives: Artificial General Intelligence

Beyond the Buzz: Clear Language is Necessary for Clear Policy on AI | TechPolicy.Press – Tech Policy Press

Based on the number of new bills across the states and in Congress, the number of working groups and reports commissioned by city, state, and local governments, and the drumbeat of activity from the White House, it would appear that this is an agenda-setting moment for artificial intelligence (AI) policy in the United States. But the language describing AI research and applications continues to generate confusion and seed the ground for potentially harmful missteps.

Stakeholders agree that AI warrants thoughtful legislation, but struggle for consensus around problems and corresponding solutions. Part of this confusion is embodied in the words we use. It is imperative that we not only know what we are talking about regarding AI, but also agree on how we talk about it.

Last fall, the US Senate convened a series of closed-door meetings to inform US AI strategy. It brought together academics and civil society leaders, but was disproportionately headlined by prominent industry voices who have an interest in defining the terms of the discussion. From the expanding functionality of ever-larger AI models to the seemingly far-off threat to human existence, lawmakers and the public are immersed in AI branding and storytelling. Loaded terminology can mislead policymakers and stakeholders, ultimately causing friction between competing aspects of an effective AI agenda. While speculative and imprecise language has always permeated AI, we must emphasize nomenclature leaning more towards objectivity than sensationalism. Otherwise, US AI strategy could be misplaced or unbalanced.

Intelligence represents the promise of AI, yet it's a construct that's difficult to measure. The very notion is multifaceted and characterized by a fraught history. The intelligence quotient (IQ), the supposed numerical representation of cognitive ability, remains misused and misinterpreted to this day. Corresponding research has led to contentious debates regarding purported fundamental differences between the IQ scores of Black, White, and Hispanic people in the US. There is a long record of dubious attempts to quantify intelligence that have caused serious harm, and there is a real danger that language about AI will do the same.

Modern discussions in the public sphere give full credence to AI imbued with human-like attributes. Yet this idea serves as a shaky foundation for debate about the technology. Evaluating the power of current AI models relies on how they're tested, but the relationship between test results and what the models can actually do is often unclear. AI taxonomy today is predominantly defined by commercial institutions. Artificial general intelligence (AGI), for example, is a phrase intended to mark the point at which AI matches or surpasses humans on a variety of tasks. It suggests a future where computers serve as equally competent partners. One by one, industry leaders have made AGI a business milestone. But it's uncertain how we will know once we've crossed that threshold, and so the mystique seeps into the ethos.

Other examples illustrate this sentiment as well. The idea of a model's emergent capabilities nods to AI's inherent capacity to develop, and even seem to learn, in unexpected ways. Similar developments have convinced some users of a large language model's (LLM) sentience.

These concepts remain disputed, however. Some scientists contend that even though bigger LLMs typically yield better performance, the appearance of these phenomena ultimately depends on a practitioner's choice of test metrics.
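The metric-dependence point can be illustrated with a toy calculation (the numbers below are invented for illustration, not drawn from any benchmark): a model whose per-token accuracy improves smoothly with scale can appear to gain a capability abruptly when scored with an all-or-nothing exact-match metric.

```python
# Toy illustration: the same smooth improvement in per-token accuracy
# looks like a sudden "emergent" jump when scored with an all-or-nothing
# exact-match metric over a multi-token answer.
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # improves gradually with scale
answer_length = 10  # every one of these tokens must be correct for an exact match

# Probability that all tokens in the answer are correct (assuming independence).
exact_match = [round(p ** answer_length, 3) for p in per_token_accuracy]
print(exact_match)  # [0.001, 0.006, 0.028, 0.107, 0.349, 0.599]
```

Under the smooth metric the model improves steadily; under exact match it appears nearly incapable until late in the progression, then "suddenly" capable, even though nothing discontinuous happened underneath.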

The language and research of the private sector disproportionately influence how society views AI. Perhaps that's their prerogative; entrepreneurs and industry experts aren't wrong to characterize their vision in their own way, and aspirational vocabulary helps them aim higher and broader. But it may not always be in the public interest.

These terms aren't technical jargon buried deep in a peer-reviewed article. They are tossed around every day in print, on television, and in congressional hearings. There's an ever-present tinge of not-quite-proven positive valence. On one hand, the technology is propped up with bold attributes full of potential; on the other, it is often dismissed and reduced to a mechanical implement when things go wrong.

The potential societal impact is inevitable when unproven themes are parroted by policymakers who may not always have time to do their homework.

Politicians are not immune to the hype. Examples abound in the speeches of world leaders like UK Prime Minister Rishi Sunak and in the statements of President Joe Biden. Congressional hearings and global meetings of the United Nations have adopted language from the loudest, most visible voices, applying a wholesale dressing to the entire sector.

What's missing here is the acknowledgement of how much language sets the conditions for our reality, and how these conversations play out in front of the media and public. We lack common, empirical, and objective terminology. Modern AI descriptors mean one thing to researchers, but may express something entirely different to the public.

We must call for intentional efforts to define and interrogate the words we use to describe AI products and their potential functionality. Claims must also be justified by exhaustive and appropriate test metrics. Ultimately, hypothetical metaphors can distort the understanding of the public and lawmakers, influencing the suitability of laws or inspiring emerging AI institutions with ill-defined missions.

We can't press reset, but we can provide more thoughtful framing.

The effects of AI language are broad and indirect but, in total, can be enormously impactful. Steady, small-scale steps may deliver us to a place where our understanding of AI has been shaped gradually, with behavior modified through the reinforcement of small and successive approximations that bring us ever closer to a desired belief.

By the time we ask how we got here, the ground may have shifted underneath our feet.


Whoever develops artificial general intelligence first wins the whole game – ForexLive


What is AGI? How is it linked with Chat GPT 5? – Analytics Insight

Artificial General Intelligence (AGI) is an advanced form of artificial intelligence that aims to mimic human intelligence across a wide range of cognitive tasks. Unlike narrow AI systems, which are designed for specific tasks like image recognition or natural language processing, AGI seeks to exhibit general intelligence comparable to that of humans, allowing it to learn, reason, and adapt to new situations autonomously.

AGI represents the pinnacle of AI research, where machines possess the ability to understand, learn, and apply knowledge across various domains, like the breadth of human intelligence. Achieving AGI requires breakthroughs in several key areas, including machine learning, natural language understanding, reasoning, and problem-solving.

ChatGPT-5, an advanced language model developed by OpenAI, represents a significant step towards the realization of AGI. While ChatGPT-5 is not an AGI system itself, it exhibits characteristics that align with the goals of AGI research. Here's how ChatGPT-5 is linked to the concept of AGI:

Natural Language Understanding: ChatGPT-5 demonstrates a remarkable ability to understand and generate human-like text based on context. It can engage in conversations, answer questions, and generate coherent responses across a wide range of topics, showcasing a level of language understanding that approaches human fluency.

Adaptability and Learning: AGI systems are expected to exhibit adaptive learning capabilities, allowing them to acquire new knowledge and skills over time. ChatGPT-5 leverages advanced machine learning techniques, such as transformer architectures and large-scale training data, to continuously improve its performance and adapt to different contexts and tasks.

Generalization: AGI systems should be capable of generalizing knowledge across diverse domains, applying insights gained from one task to solve new and unfamiliar problems. While ChatGPT-5 is primarily a language model, its ability to generate text spans a wide range of topics and domains, indicating a degree of generalization in its understanding and reasoning abilities.

Human-like Interaction: AGI systems are envisioned to interact with humans in a natural and intuitive manner, much like conversing with another person. ChatGPT-5 simulates human-like conversation, engaging users in dialogue and providing responses that are contextually relevant and coherent, fostering a sense of interaction and engagement.

Continuous Improvement: AGI research emphasizes the importance of continuous improvement and self-learning mechanisms. ChatGPT-5 incorporates feedback loops and iterative training processes, allowing it to learn from user interactions and refine its language generation capabilities over time, akin to the learning process observed in human intelligence.

Artificial General Intelligence (AGI) represents the pursuit of creating intelligent systems that exhibit human-like cognitive abilities across a broad range of tasks. While AGI remains a long-term goal of AI research, models like ChatGPT-5 offer glimpses into the potential of achieving AGI-like capabilities. By leveraging advanced machine learning techniques and large-scale training data, ChatGPT-5 demonstrates impressive natural language understanding, adaptability, generalization, and human-like interaction, highlighting its role in advancing the quest for AGI. As AI technology continues to evolve, the intersection between models like ChatGPT-5 and the principles of AGI research paves the way for future breakthroughs in artificial intelligence and human-machine interaction.



The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings – Brookings Institution

In April 2023, a Stanford study found a rapid acceleration in U.S. federal government AI spending in 2022. In parallel, the House Appropriations Committee was reported in June 2023 to be focused on advancing legislation to incorporate artificial intelligence (AI) into an increasing number of programs, and third-party reports tracking the progress of this legislation corroborate those findings. In November 2023, both the Department of Defense (DoD) and the Department of State (DoS) released AI strategies, illustrating that policy is starting to catch up to, and potentially shape, expenditures. Recognizing the criticality of this domain for government, the Brookings Institution's Artificial Intelligence and Emerging Technology Initiative (AIET) has been established to advance good governance of transformative new technologies and promote effective solutions to the most pressing challenges posed by AI and emerging technologies.

In this second in a series of articles on AI spending in the U.S. federal government, we continue to follow the trail of money to understand the federal market for AI work. In our last article, we analyzed five years of federal contracts. Key findings included that over 95% of AI-labeled expenditures were in NAICS 54 (professional, scientific, and technical services); that within this category over half of the contracts and nearly 90% of contract value sit within the Department of Defense; and that the vast majority of vendors had a single contract, reflecting a very fragmented vendor community operating in very narrow niches.

All of the data for this series has been taken directly from federal contracts and was consolidated and provided to us by Leadership Connect. Leadership Connect has an extensive repository of federal contracts and their data forms the basis for this series of papers.

In this analysis, we examined all new federal contracts since our original report that had the term "artificial intelligence" (or "AI") in the contract description. Our dataset thus included 489 new contracts to compare with 472 existing contracts. Existing values are based on our previous study, tracking the five years up to August 2022; new values are based on the year following, to August 2023.

Out of the 15 NAICS code categories we identified in the first paper, only 13 NAICS codes were still in use across previous contracts, and only five were used in new contracts, suggesting a refinement and focusing of how AI work is categorized. In the current analysis, we differentiate between funding obligated and potential value of award, as the former is indicative of current investment while the latter is representative of future appetite. During the period of the study, funding obligated increased over 150%, from $261 million to $675 million, while potential value of award increased almost 1200%, from $355 million to $4.561 billion. For funding obligated, NAICS 54 (Professional, Scientific and Technical Services) was the most common code, followed by NAICS 51 (Information and Cultural Industries): NAICS 54 increased from $219 million in existing contracts to $366 million in new contracts, while NAICS 51 grew from $5 million to $17 million. For potential value of award, NAICS 54 increased from $311 million in existing contracts to $1.932 billion in new contracts, while NAICS 51 grew from $5 million to $2.195 billion, eclipsing all other NAICS codes.
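As a quick sanity check, the growth percentages above can be recomputed directly from the reported dollar amounts (values in millions of USD, taken from this analysis):

```python
# Reported totals, in millions of USD.
obligated_existing, obligated_new = 261, 675      # funding obligated
potential_existing, potential_new = 355, 4_561    # potential value of award

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(round(pct_increase(obligated_existing, obligated_new)))   # 159, i.e. "over 150%"
print(round(pct_increase(potential_existing, potential_new)))   # 1185, i.e. "almost 1200%"
```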

The number of federal agencies with AI contracts rose from 17 to 23 in the last year, with notable additions including the Department of the Treasury, the Nuclear Regulatory Commission, and the National Science Foundation. With astounding growth from 254 contracts to 657 in the last year, the Department of Defense continues to dominate in AI contracts, with NASA and Health and Human Services a distant second and third at 115 and 49 contracts respectively. From a potential value perspective, defense rose from $269 million, or 76% of all federal funding, to $4.323 billion, or 95%. In comparison, NASA and HHS each increased their AI contract values by 25% to 30%, but still fell to 1% each, from 11% and 6% respectively, of overall federal government AI contract potential value, owing to the roughly 1500% increase in DoD AI contract values. In essence, DoD grew its AI investment to such a degree that all other agencies become a rounding error.
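The DoD share and growth figures follow directly from the reported totals (again in millions of USD, from this analysis):

```python
# Federal potential AI contract value versus the DoD portion, in millions of USD.
total_existing, total_new = 355, 4_561
dod_existing, dod_new = 269, 4_323

share_existing = dod_existing / total_existing * 100    # DoD share of existing potential value
share_new = dod_new / total_new * 100                   # DoD share of new potential value
dod_growth = (dod_new - dod_existing) / dod_existing * 100  # one-year DoD growth

print(round(share_existing), round(share_new), round(dod_growth))  # 76 95 1507
```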

For existing contracts, there were four vendors with over $10 million in contract value, of which one was over $50 million. For new contracts, there were 205 vendors with over $10 million in contract value, of which six were over $50 million and a seventh was over $100 million. The driver for the change in potential value of contracts appears to be the proliferation of $15 million and $30 million maximum-potential-value contracts, of which 226 and 25 respectively were awarded in the last year, though none have funds obligated to them yet. We posit that these are contract vehicles established at the maximum signing-authority value for future funding allocation and expenditure. It is notable that only one of the firms in the top 10 by potential contract value in the previous study was in the top 10 of new contract awards (MORSE Corp), that the top firm in previous years did not receive any new contracts (AI Signal Research), and that the new top firm did not receive any contracts in previous study years (Palantir USG).

In our previous analysis, we reported 62 firms with multiple awards, while over the past year there were 72 firms receiving multiple awards. However, the maximum number of awards has changed significantly: the highest number of existing contracts was 69 (AI Solutions), while for new contracts the maximum was four. In fact, there were 10 vendors with four or more existing contracts but only three vendors with four or more new ones (Booz Allen Hamilton, Leidos, and EpiSys Science). This reflects a continued fragmented vendor community operating in very narrow niches with a single agency.

Growth in private-sector R&D has been above 10% per year for a decade, while the federal government has shown more modest growth over the last five years after a period of stagnation. However, the 1200% one-year increase in AI potential value of awards, to over $4.2 billion, is indicative of a new imperative in government AI R&D leading to deployment.

In our previous analysis, we noted that the vendor side of the market was highly fragmented, with many small players whose main source of revenue was likely a single contract with a nearby federal client. The market remains fragmented with smaller vendors, but larger players such as Accenture, Booz Allen Hamilton, General Atomics, and Lockheed Martin are moving quickly into the market, following, or perhaps driving, the significant increase in the value of contracts. In our previous analysis, we identified that these larger firms would be establishing beachheads for entry into AI, and we expect this trend to continue with other large defense players such as RAND, Northrop Grumman, and Raytheon, among others, as vendors integrate AI into their offerings.

From the client side, we had previously discussed the large number of relatively small contracts demonstrating an experimental phase of purchasing AI. The explosion of large, maximum-potential-value contracts appears to mark a shift from experimentation to implementation, which would be bolstered by the shift from almost uniquely NAICS 54 to a balance between NAICS 54 and 51. While research and experimentation are still ongoing, there are definite signs of vendors bringing concrete technologies and systems to the federal market. The thousand flowers are starting to bloom, and agencies, particularly DoD, are tending to them carefully.

We had identified that the focus of federal AI spending was DoD, and over the last year this focus has proportionally become almost total. Defense AI applications have long been touted as a potential long-term growth area, and it appears that 2022/23 has been a turning point in the realization of those aspirations. While other agencies are continuing to invest in AI, either adding to existing investment or just starting, DoD is massively investing in AI as a new technology across a range of applications. In January 2024, Michael C. Horowitz (deputy assistant secretary of defense for force development and emerging capabilities) confirmed a wide swath of investments in research, development, test and evaluation, and new initiatives to speed up experimentation with AI within the department.

We have noted in other analyses that there are different national approaches to AI development: the U.S. and its allies have been focusing on the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform), spreading their expenditures between governance and capacity development, while potential adversaries are almost exclusively focused on building up their R&D capacity and largely ignoring the guardrails. While we had identified risks of a broad-based approach leading to a winnowing of projects for a focused ramp-up of investment, we instead see a more muscular approach in which a wide range of projects are receiving considerable funding. The vast increase in overall spending, particularly in defense applications, appears to indicate that the U.S. is substantially ramping up its investment in this area to address the threat of potential competitors. At the same time, public statements by federal agency leaders often strike a balance between the potential benefits and the risks of AI, outlining potential legislative and policy avenues as agencies seek means of controlling AI's potential negative impacts. The recent advancement of U.S. congressional legislation and agency strategies, coupled with the significant investment increase identified in the current study, demonstrates that well-resourced countries such as the U.S. can have both security and capacity when it comes to AI.

The current framework for solving this coordination issue is the National Artificial Intelligence Initiative Office (NAIIO), which was established by the National Artificial Intelligence Initiative Act of 2020. Under this Act, the NAIIO is directed to sustain consistent support for AI R&D, support AI education, support interdisciplinary AI research, plan and coordinate Federal interagency AI activities, and support opportunities for international cooperation ... for trustworthy AI systems. While the intent of this Act and its formal structure are admirable, current federal spending does not seem to reflect these lofty goals. Rather, we are seeing a federal market that appears to be much more chaotic than desirable, especially given the lead that China already has on the U.S. in AI activities. This fragmented federal market may resolve itself as the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs agency engagement on the monitoring and regulation of AI.

In conclusion, the analysis of the U.S. federal government's AI spending over the past year reveals a remarkable surge in investment, particularly within the DoD. The shift from experimental contracts to large, maximum-potential-value contracts indicates a transition from testing to implementation, with a significant increase in both funding obligated and potential value of awards. The federal government's focus on AI, as evidenced by the substantial investments and legislative initiatives, reflects a strategic response to global competition and security challenges. While the market remains fragmented with smaller vendors, the concentration of investments in defense applications signals a turning point in the realization of AI's potential across various government agencies. The current trajectory, led by the DoD, aligns with the broader national approach that combines governance and capacity development to ensure both security and innovation in AI technologies.

As we noted in our first article in this series, if one wants to know what the real strategy is, one must follow the money. In the case of the U.S. federal government, the strategy is clearly focused on defense applications of AI. The likely spillover of this focus is that defense and security priorities, needs, and values will be the dominant ones in government applications. This is a double-edged sword: while it may lead to more secure national systems or more effective defenses against hostile uses of AI against the U.S. and its allies, it may also involve trade-offs in individual privacy or decision-making transparency. However, the appropriate deployment of AI by government has the potential to increase both security and freedom, as noted in other contexts such as surveillance.

The AI industry is in a rapid growth phase, as demonstrated by the exponential growth in the sector's potential revenues. As virtually all new markets go through the same industry growth cycle, the increasing value of the AI market will likely continue to draw in new firms in the short term, including previously absent large players whose attention and capacity have now been drawn by the scale of actual and potential market capitalization. While an industry consolidation phase of start-up and smaller-player acquisitions will likely happen in the future, if the AI market continues to grow at a similar rate this winnowing process is likely still several years away. That being said, the government may start to look more towards its established partner firms, particularly in the defense and security sector, which have the track record and industrial capacity to meet the high-value contracting vehicles being put in place.

Despite the commendable intentions outlined in the National Artificial Intelligence Initiative Act of 2020, the current state of federal spending on AI raises concerns about coordination and coherence. The NAIIO is tasked with coordinating interagency AI activities and promoting international cooperation, but the observed chaotic nature of the federal market calls into question the effectiveness of the existing framework. The fragmented market may see resolution as the recent executive order on AI guides agencies towards a more cohesive and coordinated approach. As the U.S. strives to maintain its technological leadership and address security challenges posed by potential adversaries, the coordination of AI initiatives will be crucial. The findings emphasize the need for continued policy development, strategic planning, and collaborative efforts to ensure the responsible and effective integration of AI technologies across the U.S. federal government.


Fetch.ai, Ocean Protocol and SingularityNET to Partner on Decentralized AI – PYMNTS.com

Three entities in the field of artificial intelligence (AI) plan to combine to create the Artificial Superintelligence Alliance.

Fetch.ai, Ocean Protocol and SingularityNET aim to create a decentralized alternative to existing AI projects controlled by Big Tech, the companies said in a Wednesday (March 27) press release.

The proposed alliance is subject to approval from the three entities' respective communities, per the release.

As part of this alliance, the tokens that fuel the members' networks ($FET, $OCEAN and $AGIX) will be merged into a single $ASI token that will function across the combined decentralized network created by this partnership, according to the release.

The combined value of the three tokens is $7.6 billion as of Tuesday (March 26), per the release.

"The creation of the largest open-sourced, decentralized network through a multi-billion token merger is a major step that accelerates the race to artificial general intelligence (AGI)," the release said.

The Artificial Superintelligence Alliance also brings together SingularityNET's decentralized AI network, Fetch.ai's Web3 platform and Ocean Protocol's decentralized data exchange platform, according to the release.

"The deal provides an unparalleled opportunity for these three influential leaders to create a powerful, compelling alternative to Big Tech's control over AI development, use and monetization," the release said.

Leveraging blockchain technology, it will turn AI systems into open networks for coordinating machine intelligence, rather than hiding their inner workings from the public, according to the release.

The alliance will also facilitate the commercialization of the technology and enable greater access to AI platforms and large databases, advancing the path to AGI on the blockchain, the release said.

In another recent development in this space, Stability AI announced Friday (March 22) that its founder and CEO Emad Mostaque has resigned as CEO and stepped down from the company's board to pursue decentralized AI.

"We should have more transparent & distributed governance in AI as it becomes more and more important," Mostaque said when announcing his move. "It's a hard problem, but I think we can fix it ... The concentration of power in AI is bad for us all. I decided to step down to fix this at Stability & elsewhere."


What it means for nations to have "AI sovereignty" – Marketplace

Imagine that you could walk into one of the world's great libraries and leave with whatever you wanted: any book, map, photo or historical document, forever. No questions asked.

There is an argument that something like that is happening to the digital data of nations. In a lot of places, anyone can come along and scrape the internet for the valuable data that's the backbone of artificial intelligence. But what if raw data generated in a particular country could be used to benefit not outside interests, but that country and its people?

Some nations have started building their own AI infrastructure to that end, aiming to secure AI sovereignty. And according to venture capitalist Vinod Khosla, the potential implications, and opportunities, are huge.

The following is an edited transcript of Khosla's conversation with Marketplace's Lily Jamali.

Vinod Khosla: These language models are trained in English, but there are 13 Indian scripts, and within those there are probably a couple of hundred languages or language variants. So the cultural context for these languages is different. We do think it deserves an effort to have cultural context and nuances, like in India: you don't speak Hindi and you don't speak English, you mix the two, what's sometimes called Hinglish. So those kinds of things have to be taken into account. Then you go to the other level: will India rely on a technology that could be banned, like a U.S. model?

Lily Jamali: So you were just talking about the cultural context. There is a huge political overlay

Khosla: Political and national security. So imagine India is buying oil [from] Iran, which it does. If there's an embargo on Iranian trade, is it possible that they can't get oil, or that they can't get AI models? So every country will need some level of national security independence in AI. And I think that's a healthy thing. Maybe it'll make the world more diversified and a little bit safer.

Jamali: More safe. Why? Why do you say that?

Khosla: Because everybody can't be held hostage to just an American model. The Chinese are doing this for sure. But if there's a conflict between India and China, can India 100% predict what the U.S. will do? They may care more about Taiwan than the relationship between India and China, for example.

Jamali: And can you explain why you think it is important for each country to have its own model?

Khosla: I'm not saying in India they'll only use the Indian model. They will use all sorts of models from all over the world, including open-source models. Now China, I have a philosophical view [that we are] competitors and enemies, and I take a somewhat hawkish view on China. The best way to protect ourselves is to be well-armed, to be safe against China and avoid conflict if it's mutually assured destruction, so to speak. In countries like India or Japan, they'll use all sorts of models from everywhere in the world, including their own local models, depending upon the application or the context.

Jamali: As some of our listeners may know, you were very early to the AI trend, and we'd love to know what you think might come next. So what do you think?

Khosla: Here's what I would say. AI has surprised us in the last two years. But it's taken us 10 years to get to that ChatGPT moment, if you will. What has happened since is that a lot of resources have poured in. And that will accelerate development. But it also diversified the kinds of things we worked on pretty dramatically. And so I think we'll see a lot of progress. Some things are predictable, like systems will get much better at reasoning and logic, some things that they get critiqued for. But then there'll be surprises that we can't predict.

Jamali: Although we may try.

Khosla: Other kinds of capabilities will show up in these systems. Reasoning is an obvious one. The embodied world, which is generally meant to represent what happens in the real world, which is mostly robotics, will see a lot of progress in the next five years. So think of logic and reasoning, rapid progress. Think of robotics and artificial intelligence, rapid progress. Think of diversity in the kinds of algorithms being used. They'll be really interesting and probably not the ones people are generally expecting.

Jamali: Diversity in the kinds of algorithms. What kind of diversity are we talking about?

Khosla: If you take the human brain, sometimes we do pattern matching, and there's all kinds of emergent behavior that emerges from that. And [large language models] are going to keep going. And they may do everything. And we may reach AGI, or artificial general intelligence, just with LLMs. But it's possible there are other approaches, what's sometimes called neurosymbolic computing. Reasoning is symbolic computing; planning is being able to make long-term plans, things like that. We do a lot of probabilistic thinking: this might happen or that might happen, what's the likelihood of this happening? That's generally called probabilistic thinking. Those will start to emerge. So those are just some examples. And of course, I'll be surprised.
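The "probabilistic thinking" Khosla describes can be illustrated with a single Bayesian update over competing hypotheses. This toy sketch is not from the interview; the rain scenario and all numbers are hypothetical.

```python
# Toy illustration of probabilistic reasoning: a Bayesian update that
# revises belief in each hypothesis after observing a piece of evidence.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and evidence likelihoods."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior belief: rain tomorrow is 30% likely.
priors = {"rain": 0.3, "no_rain": 0.7}
# Evidence: a falling barometer, much more likely if rain is coming.
likelihoods = {"rain": 0.8, "no_rain": 0.2}

posterior = bayes_update(priors, likelihoods)
print(round(posterior["rain"], 3))  # → 0.632, belief rises after the evidence
```

Chaining updates like this over many observations is one simple form of the probabilistic planning he contrasts with pure pattern matching.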

Another person talking a lot about this is Jensen Huang, CEO of Nvidia, which designs industry-leading graphics processing units. This week, the company announced a collaboration with software and cloud company Oracle to deliver sovereign AI solutions to customers around the world.

Huang envisions AI factories that can run cloud services within a country's borders. The pitch: Countries and organizations need to protect their most valuable data, and Oracle CEO Safra Catz said in a statement that strengthening one's digital sovereignty is key to making that happen.


OpenAI’s Sam Altman Weighs In on AI Risks and His Current Stance on AGI Fears – CCN.com


As one of the most important figures in AI, Sam Altman is better placed than most to understand where the technology is going and has a unique perspective on its risks.

In a recent episode of Lex Fridman's podcast, the OpenAI CEO discussed Artificial General Intelligence (AGI), how he anticipates AI evolving in the coming years, and the risks it poses to humanity.

When asked by Fridman when he thought humanity would build AGI, Altman observed that the question misses the complexity of the debate:

"I used to love to speculate on that question. I have realized since that I think it's very poorly formed, and that people use extremely different definitions for what AGI is. So I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z, rather than when we fuzzily cross this one-mile marker."

Nonetheless, he predicted that by the end of the decade we will have "quite capable systems" that will look remarkable compared to today's technology.

While Altman prefers to talk in terms of fuzzy milestones rather than radical epistemic shifts, he doesn't underestimate the potential for AI to transform the global economy.

He observed that it is a huge deal when a new technology can significantly increase the rate of scientific discovery. But he suggested that OpenAI's most advanced AI models have yet to instigate the kind of profound economic transformation he expects to see in the coming years.

As such, he concluded that the road to AGI will be "a giant power struggle."

Asked by Fridman whether he trusts himself with the kind of power AGI could create, Altman hesitated.

Referring to the drama of his firing and subsequent reinstatement as the CEO of OpenAI last year, Altman acknowledged that it is now harder to argue that the board could easily fire him if it needed to.

Nonetheless, he said he still believes it is important that no single person has total control over the company or the technology it creates.

"I continue to not want super-voting control over OpenAI. I never have," he stated. "Even after all this craziness, I still don't want it. I continue to think that no company should be making these decisions and that we really need governments to put rules of the road in place."



SingularityNET to Hold Live Stream on YouTube on March 26th – TradingView

Coindar

SingularityNET will host a live stream on YouTube on March 26th at 5 pm UTC. The event will focus on the advancements in Hyperon's cognitive components and scalability improvements at all computational levels.

Refer to the official tweet by AGIX:

Join us this Tuesday, March 26th, at 5 PM UTC to explore the technical advancements of the OpenCog Hyperon #AGI framework, including advances in Hyperon's cognitive components & scalability improvements at all computational levels.

Set a reminder now: https://t.co/eXKFJh6VNm

AGIX Info

SingularityNET (AGI) is a blockchain-based marketplace and framework for artificial intelligence (AI) services. Founded by Dr. Ben Goertzel, it allows organizations and individuals to create, share, and monetize AI services at scale. The platform's native token, AGI (Artificial General Intelligence), is used for transactions within the network. By using SingularityNET, developers can access a broad set of AI algorithms, services, and agents, and can also contribute their own models to earn AGI tokens. The goal of SingularityNET is to create a decentralized, open market for AI services, thus reducing reliance on tech giants for such services.


Experts explain how AI is supercharging innovation – Newswise

Newswise: Rapidly moving advances in artificial intelligence have stirred controversy and debate, but they have also raised enticing prospects for supercharged technological innovation. Researchers at Virginia Tech who are exploring these frontiers can offer previews of the potential positive developments that could derive from AI.

Advancing autonomous systems to assist in their diagnostics, safety, and human training

Ella Atkins, Fred D. Durham professor and head of the Kevin T. Crofton Aerospace and Ocean Engineering Department at Virginia Tech, investigates aerospace information systems for advanced air mobility, uncrewed aircraft systems, and space robotics applications. Her research explores how emerging AI and machine learning techniques can assist in a range of tasks from safe autonomy and self-diagnostics to tutoring human pupils.

"AI and machine learning can make an autonomous vehicle safer through self-monitoring diagnostics and prognostics and data-informed decision making. Maintenance and repair operations for aircraft have been revolutionized with this technology. For example, this technology can assist modern vehicles in avoiding or recovering from problems such as unexpectedly low battery energy reserves," Atkins said.
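A minimal sketch of the kind of self-monitoring prognostic Atkins describes: extrapolating a vehicle's battery reserve forward to flag a low-energy problem before it happens. The function, sample readings, and thresholds are hypothetical illustrations, not taken from her research.

```python
# Toy battery prognostic: fit a line to recent (minute, percent) readings
# and estimate when the reserve will fall below a safety floor.

def minutes_until_depleted(samples, floor=10.0):
    """Estimate the minute at which reserve drops below `floor`,
    using a least-squares linear fit; None if not discharging."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_p = sum(p for _, p in samples) / n
    slope = (sum((t - mean_t) * (p - mean_p) for t, p in samples)
             / sum((t - mean_t) ** 2 for t, _ in samples))
    if slope >= 0:
        return None  # reserve is flat or rising, no depletion predicted
    # Solve mean_p + slope * (t - mean_t) = floor for t.
    return mean_t + (floor - mean_p) / slope

# Reserve dropping 2% per minute from 50%.
readings = [(0, 50.0), (1, 48.0), (2, 46.0), (3, 44.0)]
print(minutes_until_depleted(readings))  # → 20.0
```

A real system would feed an estimate like this into the vehicle's decision making, e.g. triggering an early return-to-base before the floor is reached.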

"Large language models, or LLMs, powered by deep neural network machine learning, enable a person to interact with a computer more naturally, more conversationally. This can help a person learn even difficult concepts, because the first step is to get past anxiety with that concept, and LLM conversation interacts more like a teaching assistant than an encyclopedia or textbook," she said.

Deploying human-robot interactions

"I am interested in assistive technology, such as wheelchair-mounted robot arms," says Dylan Losey, an assistant professor of mechanical engineering with a specialization in robotics who directs the Collaborative Robotics Laboratory (Collab). "My lab is focused on the fundamentals of human-robot interaction. This includes how robots learn from humans, how robots collaborate with humans, and how humans understand the robots they are working with."

"My main interest is developing robots that can learn from humans and communicate what they have learned back to the human operator. I see this mutual understanding between humans and robots as key to avoiding the pitfalls of AI. I want AI that helps people do what they want, but is also clear and transparent to the human," Losey said.

Enabling self-sufficient communication systems

Walid Saad, a professor of electrical and computer engineering and the Next-G wireless lead at the Virginia Tech Innovation Campus, is enthusiastic about the next steps in the evolution of AI and how it could intersect with forthcoming 6G wireless systems. "Current AI systems exhibit prowess in learning but struggle with reasoning," he said. "The central challenge for the upcoming years lies in equipping AI systems with common sense abilities, enabling these systems to think critically, reason logically, and plan proactively. This marks an initial stride toward the development of what's known as artificial general intelligence (AGI), aiming to approach intelligence levels seen in animals, if not eventually reaching human-level intelligence."

"AI can help automate and augment existing functionalities within wireless systems like 6G," Saad said. "Nevertheless, as we peer into a more distant horizon, the notion of AI-native wireless systems presents limitations. AGI could potentially herald a revolutionary paradigm in wireless technology by enabling systems capable of human-like cognition, that is, reasoning, planning, and the application of 'common sense' where contextually relevant."

"While we realized for a while that 6G needs AI and potentially AGI, it is also worth noting that creating AGI needs an understanding of the physical world that 6G system can potentially provide, hence we foresee these two technologies truly flowing together in the future," Saad said.

Revolutionizing the construction industry

"AI can help designers and engineers to optimize their designs for energy consumption, user comfort, evacuation and disaster plans, conformity with codes and regulations, environmental impact, and even more, at a level that was not possible before," says Ali Shojaei of Virginia Tech's Myers-Lawson School of Construction, who is working to revolutionize the construction industry through digital innovations.

"AI-driven automation and robotics can significantly speed up the construction process and also reduce human error," he said. "AI can optimize the supply chain in construction. From predicting the need for materials to tracking their delivery, AI can ensure that projects are not delayed due to material shortages or logistical errors."

"In construction, AI-driven automation and robotics can significantly increase efficiency and precision, assisting in tasks like bricklaying, painting, or even complex tasks like installing electrical systems. And post-construction, AI can aid in the maintenance and lifecycle management of buildings, predicting when maintenance is needed and preventing costly repairs," Shojaei said.

Schedule an interview: To schedule interviews with these experts, contact Mike Allen in the media relations office at [emailprotected] or 540-400-1700.


Will AI save humanity? U.S. tech fest offers reality check – Japan Today

Artificial intelligence aficionados are betting that the technology will help solve humanity's biggest problems, from wars to global warming, but in practice, these may be unrealistic ambitions for now.

"It's not about asking AI 'Hey, this is a sticky problem. What would you do?' and AI is like, 'well, you need to completely restructure this part of the economy,'" said Michael Littman, a Brown University professor of computer science.

Littman was at the South By Southwest (or SXSW) arts and technology festival in Austin, Texas, where he had just spoken on one of the many panels on the potential benefits of AI.

"It's a pipe dream. It's a little bit science fiction. Mostly what people are doing is they're trying to bring AI to bear on specific problems that they're already solving, but just want to be more efficient.

"It's not just a matter of pushing this button and everything's fixed," he said.

With their promising titles ("How to Make AGI Beneficial and Avoid a Robot Apocalypse") and the ever-present tech giants, the panels attract big crowds, but they often have more pragmatic objectives, like promoting a product.

At one meeting called "Inside the AI Revolution: How AI is Empowering the World to Achieve More," Simi Olabisi, a Microsoft executive, praised the tech's benefits on Azure, the company's cloud service.

When using Azure's AI language feature in call centers, "maybe when a customer called in, they were angry, and when they ended the call, they were really appreciative. Azure AI Language can really capture that sentiment, and tell a business how their customers are feeling," she explained.
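The sentiment shift Olabisi describes, from an angry opening to an appreciative close, can be sketched with a toy lexicon-based scorer. This is a hypothetical illustration of the idea, not the Azure AI Language API; the word lists and example transcript are invented.

```python
import re

# Toy call-center sentiment tracker: score each utterance by counting
# words from small positive/negative lexicons (hypothetical lists).
POSITIVE = {"appreciative", "great", "thanks", "helpful"}
NEGATIVE = {"angry", "terrible", "frustrated", "unacceptable"}

def sentiment(utterance):
    """Score an utterance: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", utterance.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

call = [
    "This is terrible, I am angry about my bill",
    "Thanks, that was helpful, I am appreciative",
]
print(sentiment(call[0]), sentiment(call[-1]))  # → -2 3
```

Comparing the first and last scores captures the "angry at the start, appreciative at the end" trajectory; production services replace the word lists with trained models that also handle negation and context.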

The notion of artificial intelligence, with its algorithms capable of automating tasks and analyzing mountains of data, has been around for decades.

But it took on a whole new dimension last year with the success of ChatGPT, the generative AI interface launched by OpenAI, the now iconic AI start-up mainly funded by Microsoft.

OpenAI claims to want to build artificial "general" intelligence or AGI, which will be "smarter than humans in general" and will "elevate humanity," according to CEO Sam Altman.

That ethos was very present at SXSW, with talk about "when" AGI will become a reality, rather than "if."

Ben Goertzel, a scientist who heads the SingularityNET Foundation and the AGI Society, predicted the advent of general AI by 2029.

"Once you have a machine that can think as well as a smart human, you're at most a few years from a machine that can think a thousand or a million times better than a smart human, because this AI can modify its own source code," said Goertzel.

Wearing a leopard-print faux-fur cowboy hat, he advocated the development of AGI endowed with "compassion and empathy," and integrated into robots "that look like us," to ensure that these "super AIs" get on well with humanity.

David Hanson, founder of Hanson Robotics and designer of Desdemona, a humanoid robot that runs on generative AI, brainstormed about the pluses and minuses of AI with superpowers.

AI's "positive disruptions...can help to solve global sustainability issues, although people are probably going to be just creating financial trading algorithms that are absolutely effective," he said.

Hanson fears the turbulence from AI, but pointed out that humans are doing a "fine job" already of playing "existential roulette" with nuclear weapons and by causing "the fastest mass extinction event in human history."

But "it may be that the AI could have seeds of wisdom that blossom and grow into new forms of wisdom that can help us be better," he said.

Initially, AI should accelerate the design of new, more sustainable drugs or materials, said believers in AI.

Even if "we're not there yet... in a dream world, AI could handle the complexity and the randomness of the real world, and... discover completely new materials that would enable us to do things that we never even thought were possible," said Roxanne Tully, an investor at Piva Capital.

Today, AI is already proving its worth in warning systems for tornadoes and forest fires, for example.

But we still need to evacuate populations, or get people to agree to vaccinate themselves in the event of a pandemic, stressed Rayid Ghani of Carnegie Mellon University during a panel titled "Can AI Solve the Extreme Weather Pandemic?"

"We created this problem. Inequities weren't caused by AI, they're caused by humans and I think AI can help a little bit. But only if humans decide they want to use it to deal with" the issue, Ghani said.
