Category Archives: Artificial General Intelligence

Silicon Landlords: On the Narrowing of AI’s Horizon – The Nation

Culture / December 19, 2023

The one thing science fiction couldn't imagine is the world we have now: the near-complete control of artificial intelligence by a few corporations whose only goal is profit.

As HAL 9000, the true star of Stanley Kubrick's landmark film 2001: A Space Odyssey, died a silicon death by memory module removal, the machine, reduced to its infant state (the moment it became operational), recited:

"Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song…"

In HAL's fictional biography, written by Arthur C. Clarke for both the film's script and its novelization, HAL, a Heuristically programmed ALgorithmic computer, was theorized, engineered, and built at the University of Illinois's Coordinated Science Laboratory, where the real Illinois Automatic Computer (ILLIAC) supercomputers were built from the 1950s through the 1970s. Embedded within the idea of HAL is an assumption: that the artificial intelligence (AI) research programs of the mid-to-late 20th century, centered on universities and scientific inquiry (and, yes, military imperatives), would continue uninterrupted into the future, eventually producing thinking machines that would be our partners.

Ironically for a work of the imagination, it turns out that what was unimaginable in the late 1960s for the makers of 2001 was the eventual near-complete control of the field of AI by a small group of North American corporations (the Silicon Valley triumvirate of Amazon, Microsoft, and Google) whose only goal, hyped claims and declarations of serving humanity aside, is profit. These companies claim to be producing HAL-esque machines (which, they suggest, exhibit signs of AGI: artificial general intelligence) but are actually producing narrow systems that extract profit for these digital landlords while allowing them to maintain control over access to a technology they dominate and tirelessly work to insert into every aspect of life.

On December 5 of this year, MIT Technology Review published an article titled "Make no mistake: AI is owned by Big Tech," written by Amba Kak, Sarah Myers West, and Meredith Whittaker. The article, focused on the political economy and power relations of the AI industry, begins with this observation:

"Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms' vast consumer market reach to deploy and sell their AI products."

There is no AI without Big Tech. Before the era of Silicon Valley dominance, AI research programs were largely funded by government agencies such as DARPA and by universities, and driven, at least at the level of researchers, by scientific inquiry (it was far from a utopia; Cold War imperatives were always a significant factor). The financing required to build the systems researchers used, and the direction of research itself, were subject to public scrutiny, at least in principle if not always in practice.

Today, the most celebrated and hyped methods, such as the resource-hungry large language models (LLMs) behind ChatGPT, Google's recently released Gemini, and Amazon's Q, are the product of a concentration of capital and computational resources put into service to advance the profit and market objectives of private entities such as Microsoft (the primary source of funding for OpenAI), completely beyond the reach of public scrutiny.

The Greek economist Yanis Varoufakis uses the term "technofeudalism" to describe what he sees as the tech industry's post-capitalist nature (closer in character to feudal lords, only this time with data centers rather than walled castles or robber barons). I have problems with his argument, but I will grant Varoufakis one key point: the industry's wealth and power are indeed built almost entirely on a rentier model that places the largest firms between us and the things we need. Rather than controlling land (although that too is part of the story: data centers require lots of land), the industry controls access to our entertainments, our memories, and our means of communication.

To this list we can add the collection of algorithmic techniques called AI, promoted as essential and inevitable, owned and commanded by the cloud giants who have crowded out earlier research efforts with programs requiring staggering amounts of data, computational power, and resources. As 20th-century Marxists were fond of saying, it is no accident that the very methods that depend on techniques controlled at scale by the tech giants are the ones we are told we can't live without (how many times have you been told ChatGPT is the future, as inevitable as death and taxes?).

Continuing their analysis, the authors of the MIT Technology Review article describe the power relationships, seldom discussed in most breathlessly adoring tech media accounts, that shape how the AI industry actually works:

"Microsoft now has a seat on OpenAI's board, albeit a nonvoting one. But the true leverage that Big Tech holds in the AI landscape is the combination of its computing power, data, and vast market reach. In order to pursue its bigger-is-better approach to AI development, OpenAI made a deal. It exclusively licenses its GPT-4 system and all other OpenAI models to Microsoft in exchange for access to Microsoft's computing infrastructure.

"For companies hoping to build base models, there is little alternative to working with either Microsoft, Google, or Amazon. And those at the center of AI are well aware of this…"

A visit to the Microsoft website for what it calls its Azure OpenAI Service (the implementation of OpenAI's platform via Microsoft's Azure cloud computing service) shows the truth of the statement "there is little alternative to working with either Microsoft, Google, or Amazon." Computing hardware for AI research costs oceans of money (Microsoft's $10 billion investment in OpenAI is an example) and demands constant maintenance, things smaller firms can scarcely afford. By offering a means through which start-ups and, really, all but the deepest-pocketed organizations can get access to what are considered cutting-edge methods, Microsoft and its fellow travelers have become the center of the AI ecosystem. The AI in your school, hospital, or police force (the list goes on) can, like roads leading to Rome, be traced back to Microsoft et al.
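
To make the dependency concrete, here is a minimal, hypothetical sketch of what building on the Azure OpenAI Service looks like from a startup's side, using the OpenAI Python client current in late 2023 (the endpoint, key, and deployment name are placeholders, not real values):

```python
# A hedged sketch: every request a "startup's own AI" serves routes through
# Microsoft-hosted infrastructure. Endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-startup.openai.azure.com",  # Microsoft-hosted
    api_key="YOUR_AZURE_OPENAI_KEY",
    api_version="2023-05-15",
)

reply = client.chat.completions.create(
    model="gpt-35-turbo",  # the *deployment* name configured inside Azure
    messages=[{"role": "user", "content": "Draft a reply to this customer email."}],
)
print(reply.choices[0].message.content)
```

Whatever the product's branding, the compute, the model weights, and the terms of access all remain with the landlord.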

In the fictional world of HAL 9000, thinking machines, built at universities, watched over by scientists and engineers, and disconnected from profit incentives, emerged onto the world stage, becoming a part of life, even accompanying us to the stars. In our world, now 22 years past the 2001 imagined in the film, a small and unregulated group of corporations steers the direction of research, owns the computers used for that research, and sells the results as products the world can't do without. These products (generative AI image generators like Dall-E, text calculators like ChatGPT, and a host of other systems, all derivative) are being pushed into the world not as partners, as with the fabled HAL, but as profit vectors.

Power, like life itself, is not eternal. The power of the tech industry, facilitated by the purposeful neglect of unconcerned or poorly informed governments and by modern laissez-faire policies, is not beyond challenge. There are groups, such as the Distributed AI Research Institute, and even legislation, like the flawed EU AI Act, that offer glimpses of a different approach.

To borrow from linguistics professor Emily Bender, we must "resist the urge to be impressed" and focus our thoughts and efforts instead on ensuring that the tech industry and the AI systems it sells are firmly brought under democratic control.

The alternative is a chaotic dystopia in which we're all at the mercy of the profit-driven whims of a few companies. This isn't a future anyone deserves. Not even Elon Musk's (dwindling) army of reality-challenged fans.

Dwayne Monroe is a cloud architect, Marxist tech analyst, and Internet polemicist based in Amsterdam. He is currently writing a book, Attack Mannequins, exploring the use of AI as propaganda.


The Era of AI: 2023’s Landmark Year – CMSWire

The Gist

As we approach the end of another year, it's becoming increasingly clear that we are navigating through the burgeoning era of AI, a time reminiscent of the early days of the internet, yet carrying transformative potential that reaches far beyond it. While we might still be at what could be called the "AOL stages" of AI development, the pace of progress has been relentless, with new applications and capabilities emerging daily, reshaping every facet of our lives and businesses.

In a manner once attributed to divine influence and later to the internet itself, AI has become a pervasive force: it touches everything it changes and, indeed, changes everything it touches. This article will recap the events that shaped the world of AI in 2023, including the evolution and growth of AI; regulations, legislation and petitions; the saga of Sam Altman; and the pursuit of artificial general intelligence (AGI).

The latest chapter in the saga of AI began late last year, on Nov. 30, 2022, when OpenAI announced the release of ChatGPT, a chatbot powered by GPT-3.5, a major release of the GPT language model capable of generating human-like text, which signified a major step in improving how we communicate with machines. Since then, it's been a very busy year for AI, and there has rarely been a week that hasn't seen some announcement relating to it.

The first half of 2023 was marked by a series of significant developments in the field of AI, reflecting the rapid pace of innovation and its growing impact across various sectors. The rest of the year hasn't shown any signs of slowing down; in fact, the emergence of AI applications across industries has only picked up pace. Here is an abbreviated timeline of the major AI news of the year:

February 13, 2023: Stanford scholars developed DetectGPT, the first in a forthcoming line of tools designed to differentiate between human- and AI-generated text, addressing the need for oversight in an era where discerning the source of information is crucial. The tool came after the release of ChatGPT prompted teachers and professors to become alarmed at its potential to be used for cheating.
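
The intuition behind DetectGPT (Mitchell et al., 2023) is that text sampled from a model sits near a local peak of that model's log-probability, so perturbed copies score noticeably worse, while human-written text shows no such gap. The sketch below illustrates the idea in heavily simplified form; the published method perturbs text with T5 mask-filling, for which the random word dropout here is only a crude stand-in:

```python
# Simplified DetectGPT-style score: compare a text's log-probability under a
# language model with the average log-probability of perturbed copies.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def log_prob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item() * ids.shape[1]      # approximate total log-probability

def perturb(text: str, drop: float = 0.1) -> str:
    # Crude stand-in for the paper's T5 mask-filling perturbations.
    return " ".join(w for w in text.split() if random.random() > drop)

def detect_score(text: str, n: int = 20) -> float:
    base = log_prob(text)
    perturbed = sum(log_prob(perturb(text)) for _ in range(n)) / n
    return base - perturbed  # a large gap suggests machine-generated text

print(detect_score("The quick brown fox jumps over the lazy dog."))
```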

February 23, 2023: The launch of an open-source project called AgentGPT, which runs in a browser and uses OpenAI's ChatGPT to execute complex tasks, further demonstrated the versatility and practical applications of AI.

February 24, 2023: Meta, formerly known as Facebook, launched LLaMA, a family of large language models with up to 65 billion parameters, setting new benchmarks in the AI industry.

March 14, 2023: OpenAI released GPT-4, a significantly enhanced model over its predecessor, GPT-3.5, raising discussions in the AI community about the potential inadvertent achievement of artificial general intelligence (AGI).

March 20, 2023: Studies examined the responses of GPT-3.5 and GPT-4 to clinical questions, highlighting the need for refinement and evaluation before relying on AI language models in healthcare. GPT-4 outperformed previous models, achieving average scores of 86.65% and 86.7% on the Self-Assessment and Sample Exam components of the USMLE, versus 53.61% and 58.78% for GPT-3.5.

March 21, 2023: Google opened public access to Bard, its ChatGPT competitor, alongside other significant announcements about its forthcoming large language models and their integration into Google Workspace and Gmail.

March 21, 2023: Nvidia's announcement of Picasso, a cloud service for building and running generative visual models aimed at larger enterprises, underscored the increasing interest of major companies in AI technologies.

March 23, 2023: OpenAI's launch of plugins for ChatGPT expanded the capabilities of its GPT models, allowing them to connect to third-party services via an API.

March 30, 2023: AutoGPT was released, with the capability to chain together and iteratively improve its own prompts and responses autonomously. This advancement showcased a significant step toward greater autonomy in AI systems, and the tool could be installed on users' local PCs, though it still relied on calls to OpenAI's hosted models rather than running a language model entirely offline.

April 4, 2023: An unsurprising study discovered that participants could differentiate between human- and AI-generated text with only about 50% accuracy, no better than random chance.
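
To spell out what "no better than random chance" means: under a two-sided binomial test, a hit rate near 50% cannot be statistically distinguished from coin-flipping. A quick check with invented numbers (the study's actual participant counts are not given here):

```python
# Hypothetical illustration: 102 correct judgments out of 200 trials.
from scipy.stats import binomtest

result = binomtest(k=102, n=200, p=0.5)  # null hypothesis: pure guessing
print(f"p-value = {result.pvalue:.3f}")   # a large p-value: cannot reject guessing
```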

April 13, 2023: AWS announced Bedrock, a service making foundation models from various labs accessible via an API, streamlining the development and scaling of generative AI-based applications.
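
For a sense of what "accessible via an API" looks like in practice, here is a minimal, hypothetical sketch using the boto3 Bedrock runtime client AWS shipped later in 2023; the model ID and request schema below are assumptions, since each provider's models use their own formats:

```python
import json
import boto3

# Hypothetical sketch: region, model ID, and request body are assumptions.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumed available model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "\n\nHuman: Summarize our Q3 report.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)
print(json.loads(response["body"].read()))
```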

May 23, 2023: OpenAI revealed plans to enhance ChatGPT with web browsing capabilities powered by Microsoft Bing, along with additional plugins, both of which would initially become available to ChatGPT Plus subscribers.

July 18, 2023: In a study, ChatGPT, particularly when running GPT-4, was found to outperform medical students in responding to complex clinical care exam questions.

August 6, 2023: The EU AI Act, announced on this day as one of the world's first legal frameworks for AI, saw major developments and negotiations in 2023, with potential global implications, though its final text was still being hashed out in mid-December.

September 8, 2023: A study revealed that AI detectors, designed to identify AI-generated content, exhibit low reliability, especially for content created by non-native English speakers, raising ethical concerns. This has been an ongoing concern for both teachers and students, as these tools regularly flag original content as AI-produced and pass off AI-generated content as original.

September 21, 2023: OpenAI announced that Dall-E 3, its text-to-image generation tool, would soon be available to ChatGPT Plus users.

November 4, 2023: Elon Musk announced the latest addition to the world of generative AI: Grok. Musk said that Grok promises to "break the mold of conventional AI"; it is said to respond with provocative answers and insights and to welcome all manner of queries.

November 21, 2023: Microsoft unveiled Bing Chat 2.0, now called Copilot, a major upgrade to its chatbot platform that leverages a hybrid approach, combining generative and retrieval-based models to provide more accurate and diverse responses.

November 22, 2023: With the release of Claude 2.1, Anthropic announced an expansion in Claude's capabilities, enabling it to analyze large volumes of text rapidly, a development favorably compared to the capabilities of ChatGPT.

December 6, 2023: Google announced its OpenAI rival, Gemini, a multimodal model that can generalize across and seamlessly understand, operate on and combine different types of information, including text, images, audio, video and code.

These were only a small portion of 2023's AI achievements and events: nearly every week a new generative AI-driven application was announced, including specialized chatbots for specific use cases, applications and industries. There was also regular news of notable interactions with and uses of AI, AI jailbreaks, predictions about the potential dystopian future the technology may bring, proposals for regulations, legislation and guardrails, and petitions to stop developing the technology altogether.

Shubham A. Mishra, co-founder and global CEO at AI marketing pioneer Pixis, told CMSWire that in 2023, the world focused on building the technology and democratizing it. "We saw people use it, consume it, and transform it into the most effective use cases to the point that it has now become a companion for them," said Mishra. "It has become such an integral part of its user's day-to-day functions that they don't even realize they are consuming it."

"Many view 2023 as the year of generative AI, but we are only beginning to tap into the potential applications of the technology. We are still trying to harness the full potential of generative AI across different use cases. In 2024, the industry will witness major shifts, be it a rise or fall in users and applications," said Mishra. "There may be a rise in the number of users, but there will also be a second wave of generative AI innovations where there will be an incremental rise in its applications."

Anthony Yell, chief creative officer at interactive agency Razorfish, told CMSWire that he and his team have seen generative AI stand out by democratizing creativity, making it more accessible and enhancing the potential for those with skills and experience to reach new creative heights. "This technology has introduced the concept of a 'creative partner' or 'creative co-pilot,' revolutionizing our interaction with creative processes."

Yell believes that this era is about marrying groundbreaking creativity with responsible innovation, ensuring that AI's potential is harnessed in a way that respects brand identity and maintains consumer trust. This desire for responsibility and trust is something that is core to the acceptance of what has been and will continue to be a very disruptive technology. As such, 2023 has included many milestones in the quest for AI responsibility, safety, regulations, ethics, and controls. Here are some of the most impactful regulatory AI events in 2023.

February 28, 2023: Former Google engineer Blake Lemoine, who was fired in 2022 after going to the press with claims that Google's LaMDA is sentient, was back in the news doubling down on his claim.

March 22, 2023: A group of technology and business leaders, including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft, signed an open letter hosted by the Future of Life Institute urging AI organizations to pause new developments in AI, citing risks to society. The letter stated: "[W]e call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

May 16, 2023: Sam Altman, CEO and co-founder of OpenAI, urged members of Congress to regulate AI, citing the inherent risks posed by the technology.

May 30, 2023: AI industry leaders and researchers signed a statement hosted by the Center for AI Safety warning of the extinction risk posed by AI. The statement said that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," and was signed by OpenAI CEO Sam Altman, Geoffrey Hinton, Google DeepMind and Anthropic executives and researchers, Microsoft CTO Kevin Scott, and security expert Bruce Schneier.

October 30, 2023: President Biden signed the sweeping Executive Order on Artificial Intelligence, designed to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.

November 14, 2023: The DHS Cybersecurity and Infrastructure Security Agency (CISA) released its initial Roadmap for Artificial Intelligence, leading the way to ensure safe and secure AI development in the future. The CISA AI roadmap came in response to President Biden's October 2023 Executive Order on Artificial Intelligence.

December 11, 2023: The European Commission and the bloc's 27 member countries reached a deal on the world's first comprehensive AI rules, opening the door for the legal oversight of AI technology.

Rubab Rizvi, chief data scientist at Brainchild, a media agency affiliated with the Publicis Groupe, told CMSWire that from predictive analytics to seamless automation, the rapid embrace of AI has not only elevated efficiency but also opened new frontiers for innovation, shaping a dynamic landscape that keeps us on our toes and fuels the excitement of what's to come.

"The generative AI we've come to embrace in 2023 hasn't just been about enhancing personalization," said Rizvi. "It's becoming your digital best friend, offering tailored experiences that elevate brand engagement to a new level. This calls for proper governance and guardrails. As generative AI can potentially expose new, previously inaccessible data, we must ensure that we are disciplined in protecting ourselves and our unstructured data." Rizvi aptly reiterated what many have said throughout the year: "Don't blindly trust the machine."

OpenAI officially kicked off the current era of AI with the announcement and introduction of ChatGPT in late 2022. In the year that followed, OpenAI worked ceaselessly to continue the evolution of AI, and has been no stranger to its share of both conspiracies and controversies. This came to a head late in the year, when the organization surprised everyone with news regarding its CEO, Sam Altman.

November 17, 2023: The board of OpenAI fired co-founder and CEO Sam Altman, stating that a review found he was "not consistently candid in his communications" and that "the board no longer has confidence in his ability to continue leading OpenAI."

November 20, 2023: Microsoft hired former OpenAI CEO Sam Altman and co-founder Greg Brockman, with Microsoft CEO Satya Nadella announcing that the two would join to lead Microsoft's new advanced AI research team, with Altman as CEO of the new group.

November 22, 2023: OpenAI rehired Sam Altman as its CEO, stating that it had "reached an agreement in principle for Sam Altman to return to OpenAI as CEO," along with significant changes to its nonprofit board.

November 24, 2023: It was reported that, prior to Altman's firing, OpenAI researchers had sent a letter to the board of directors warning of a new AI discovery that posed potential risks to humanity. The discovery, referred to as Project Q*, was said to be a breakthrough in the pursuit of AGI, and reportedly influenced the board's firing of Altman over concerns that he was rushing to commercialize the advancement without fully understanding its implications.

AGI, which Microsoft has since said could take decades to achieve, is an advanced form of AI characterized by self-learning capabilities and proficiency across a wide range of tasks, and its pursuit stands as a cornerstone objective in the AI field. The quest for AGI is, in essence, the quest to develop machines that mirror human intelligence, with the ability to understand, learn and adeptly apply knowledge across diverse contexts, potentially surpassing human performance in various domains.

Reflecting on 2023, we have witnessed a landmark year in AI, marked by groundbreaking advancements. Amidst these innovations, the year has also been pivotal in addressing the ethical, safety, and regulatory aspects of AI. As we conclude the year, the progress in AI not only showcases human ingenuity but also sets the stage for future challenges and opportunities, emphasizing the need for responsible stewardship of this transformative yet disruptive technology.


Forget Dystopian Scenarios AI Is Pervasive Today, and the Risks Are Often Hidden – The Good Men Project

By Anjana Susarla, Michigan State University

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman's termination was for "lack of candor," but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth (products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide) has hindered the company's ability to focus on the catastrophic risks posed by AGI.

OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.

AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be only vaguely aware of: for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive logic, starting from a set of premises to generalize patterns from training data. A machine learning-based resume screening tool was found to be biased against women because its training data reflected past practices, when most resumes were submitted by men.

The use of predictive methods in areas ranging from health care to child welfare can exhibit biases, such as cohort bias, that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender (for example, in consumer lending), proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Administration insured loans than white borrowers.
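
The mechanics of proxy discrimination are easy to demonstrate. The sketch below, on entirely synthetic and hypothetical data, trains a model that never sees the protected attribute yet reproduces the historical bias through a correlated "neighborhood" feature:

```python
# Toy illustration of proxy discrimination; all data is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                        # protected attribute (never a feature)
neighborhood = (group + (rng.random(n) < 0.2)) % 2   # proxy, ~80% correlated with group
income = rng.normal(50 + 10 * rng.random(n), 10, n)

# Historical approvals were biased against group 1:
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, neighborhood])          # protected attribute excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)

preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {preds[group == g].mean():.2f}")
# The gap persists because "neighborhood" carries the group signal.
```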

Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment, which lowers their mortality risk compared to the overall population. However, if the output of such a neural network were used to allocate hospital beds, then people with asthma admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Most Important AI Innovations of 2024 | by AI News | Dec, 2023 – DataDrivenInvestor

In the fast-paced realm of artificial intelligence (AI), 2024 promises to be a transformative year, marking a profound shift in our understanding of AI capabilities and their real-world applications. While some developments will be the culmination of years of progress, others will emerge as groundbreaking innovations. In this article, we'll explore the most important AI innovations poised to define 2024.

The term "multimodality" may sound technical, but its implications are revolutionary. In essence, it refers to an AI system's ability to process diverse types of data, extending beyond text to include images, video, audio and more. In 2023, the public witnessed the debut of powerful multimodal AI models, with OpenAI's GPT-4 leading the way. This model allows users to upload not only text but also images, enabling the AI to "see" and interpret visual content.

Google DeepMind's Gemini, unveiled in December, further advanced multimodality, showcasing the model's capacity to work with images and audio. This breakthrough opens doors to endless possibilities, such as seeking dinner suggestions based on a photo of your fridge's contents. According to Shane Legg, co-founder of Google DeepMind, the shift toward fully multimodal AI marks a significant landmark, indicating a more grounded understanding of the world.
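
For the curious, the fridge-photo scenario maps directly onto the multimodal chat format OpenAI exposed for GPT-4 with vision in late 2023. Here is a hedged sketch; the model name reflects what was available at the time, and the image URL is a placeholder:

```python
# Hypothetical sketch of an image-plus-text request; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable model available in late 2023
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What could I cook with what's in this fridge?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```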

The promise of multimodality extends beyond mere utility; it enables models to be trained on diverse data sets, including images, video and audio. This wealth of information enhances the models' capabilities, propelling them toward the ultimate goal of artificial general intelligence that matches human intellect.


Nvidia CEO Predicts AI Will Reach Parity with Human Intelligence in the Next Five Years – Game Is Hard

In a thought-provoking address at The New York Times' annual DealBook Summit, Nvidia's CEO Jensen Huang made a bold prediction about the future of artificial intelligence (AI). According to Huang, within the next five years, AI, particularly artificial general intelligence (AGI), will become competitive with human intelligence. AGI, which refers to the ability of computers to perform tasks in a human-like manner, is the frontier that Huang expects AI to conquer in the near future.

This forecast by Huang gains significant credibility from the immense demand for Nvidia's powerful graphics processing units (GPUs). These GPUs play a crucial role in training AI models, handling large workloads across various industries and supporting projects like OpenAI's ChatGPT. In sectors such as automotive, architecture, electronics, engineering and scientific research, Nvidia's GPUs have become indispensable. As a result, the company's fiscal third-quarter revenue tripled, with net income soaring to an impressive $9.24 billion, a significant jump from the previous year's $680 million.

During his address, Huang shared a personal recollection of supplying the world's first AI supercomputer to OpenAI, highlighting his interaction with Elon Musk, a co-founder of the initiative. Despite recent turbulence at OpenAI, which included changes to its board structure and the controversial ousting and reinstatement of CEO Sam Altman, Huang expressed optimism for stability. He emphasized the importance of robust corporate governance and its role in the success of any organization.

Looking ahead, Huang envisions the emergence of off-the-shelf AI tools that various industries can customize to their specific needs, from chip design to drug discovery. However, Huang refrained from ranking companies in the AI space, acknowledging that the industry is still a few years away from achieving AGI. He pointed out that machine learning has yet to master multistep reasoning, a fundamental milestone for developers. Huang's remarks highlight both Nvidia's current success and the rapid progress within the AI sector.

FAQ

Q: What did Nvidia's CEO predict about the future of AI?

A: Nvidia's CEO, Jensen Huang, predicted that within the next five years, AI, particularly artificial general intelligence (AGI), will become competitive with human intelligence.

Q: What is artificial general intelligence (AGI)?

A: AGI refers to the ability of computers to perform tasks in a human-like manner.

Q: What role do Nvidia's GPUs play in AI?

A: Nvidia's GPUs are crucial in training AI models, handling large workloads across various industries, and supporting projects like OpenAI's ChatGPT.

Q: How has Nvidia's revenue been affected by the demand for its GPUs?

A: Nvidia's fiscal third-quarter revenue tripled, with net income soaring to $9.24 billion, a significant increase from the previous year.

Q: What did Nvidia's CEO mention about OpenAI?

A: Huang mentioned supplying the world's first AI supercomputer to OpenAI and expressed optimism for stability despite recent changes within the organization.

Q: What does Huang envision for the future of AI?

A: Huang envisions the emergence of off-the-shelf AI tools that can be customized by various industries to cater to their specific needs.

Key Terms/Jargon

1. Artificial general intelligence (AGI): The ability of computers to perform tasks in a human-like manner.

2. Graphics processing units (GPUs): Powerful processors used to manipulate and render images and video. In the context of the article, Nvidia's GPUs are essential for training AI models and supporting various industries.

3. OpenAI: An artificial intelligence research organization aimed at developing friendly AGI for the benefit of humanity. Huang mentioned his interaction with OpenAI and its co-founder Elon Musk.



Sam Altman on OpenAI and Artificial General Intelligence – TIME

If 2023 was the year artificial intelligence became a household topic of conversation, it's in many ways because of Sam Altman, CEO of the artificial intelligence research organization OpenAI. Altman, who was named TIME's 2023 CEO of the Year, spoke candidly about his November ousting (and reinstatement) at OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technology's future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIME's A Year in TIME event on Tuesday.

Altman shared that his sudden mid-November removal from OpenAI proved a learning experience, both for him and the company at large. "We always said that some moment like this would come," said Altman. "I didn't think it was going to come so soon, but I think we are stronger for having gone through it."

Altman insists that the experience ultimately made the company stronger and proved that OpenAI's success is a team effort. "It's been extremely painful for me personally, but I just think it's been great for OpenAI. We've never been more unified," he said. "As we get closer to artificial general intelligence, as the stakes increase here, the ability for the OpenAI team to operate in uncertainty and stressful times should be of interest to the world."

"I think everybody involved in this, as we get closer and closer to superintelligence, gets more stressed and more anxious," he explained of how his firing came about. The lesson he came away with: "We have to make changes. We always said that we didn't want AGI to be controlled by a small set of people, we want it to be democratized. And we clearly got that wrong. So I think if we don't improve our governance structure, if we don't improve the way we interact with the world, people shouldn't [trust OpenAI]. But we're very motivated to improve that."

The technology has limitless potential, Altman says ("I think AGI will be the most powerful technology humanity has yet invented"), particularly in democratizing access to information globally. "If you think about the cost of intelligence and the quality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that," he said, "it's a very different world. It's the world that sci-fi has promised us for a long time, and for the first time, I think we could start to see what that's gonna look like."

Still, like any previous powerful technology, it will lead to incredible new things, he says, "but there are going to be real downsides."

Altman admits that there are challenges that demand close attention. One particular concern to be wary of, with 2024 elections on the horizon, is how AI stands to influence democracies. Whereas election interference circulating on social media might look straightforward today (troll farms "make one great meme, and that spreads out"), Altman says that AI-fueled disinformation stands to become far more personalized and persuasive: "A thing that I'm more concerned about is what happens if an AI reads everything you've ever written online and then right at the exact moment, sends you one message customized for you that really changes the way you think about the world."

Despite the risks, Altman believes that, if deployment of AI is safe and placed responsibly in the hands of people, which he says is OpenAI's mission, the technology has the potential to create a path where the world gets much more abundant and much better every year.

"I think 2023 was the year we started to see that, and in 2024, we'll see way more of it, and by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place," he said. Though he also noted: "No one knows what happens next. I think the way technology goes, predictions are often wrong."


For accountants, what is AI, anyway? – Accounting Today

The sheer number of times people have written, spoken or even thought the phrase "artificial intelligence" has spiked dramatically in just one year since ChatGPT, possibly its most prominent specimen, exploded onto the scene.

But just what is it we're talking about when we discuss AI? Amid the panoply of sales pitches and marketing claims that have sprung up like mushrooms after a hard rain, it's easy to lose sight of the basics.

If we can't establish some common baselines about what AI is and is not, as well as what AI can and can't do, it will be highly difficult to have the coherent conversations on the topic that this profession needs.

This is why, as part of Accounting Today's inaugural AI Thought Leader Survey, we wanted to ask some very basic questions about AI and its capabilities. We asked 22 experts on AI and its future in accounting not about the bleeding edge of this technology and its application to the accounting world, but about the fundamental concepts that are practically prerequisite for answering the larger questions that have captured people's attention.

Looking first at just what people think AI actually is, we see a variety of different takes. Some respondents, like Automata founder Wesley Hartman, set their bar very high for what counts as AI in a way that excludes what many today understand the term to mean.

"Artificial general intelligence is my definition of AI. This is where a system will use reasoning, planning, learning, natural communication and the ability to adjust when new information is presented to accomplish a task," he said. "I also think that the creation of new tasks or goals are important to distinguish between what is AGI versus what is close to AI versus human intelligence."

Others, like PwC vice chair Wes Bricker, took a much more expansive view to include anything that enhances our capabilities as humans.

"Artificial intelligence is advanced technology designed to enhance our capabilities as humans. It allows us to put people and technology together to create something bigger and better. It is a type of deep learning that uses prompts or existing data to create new content, including text, code, images, videos and audio," said Bricker.

The majority staked out ground somewhere in the middle, with a wide variety of personal definitions, but what they tended to have in common was that AI is something that can perform tasks that usually require human thought. Jin Chang, CEO of advisory services platform Fieldguide, elaborated a little further on what he thinks that means.

"Generally AI refers to the simulation of human intelligence, where machines can perform tasks that typically require human intelligence, such as problem-solving, decision-making and understanding natural language," he said.

In the first of what will be a multi-part series, we explore further how AI thought leaders respond to the question "What is your personal definition of AI? What is artificial intelligence?"

Tune in tomorrow for what these same people think does not count as AI.


OpenAI's Mysterious AI Breakthrough (AGI): Unveiling the Truth – Medium

Have you stumbled upon the whispers of a groundbreaking discovery that could change the face of artificial intelligence forever? OpenAI, the renowned AI research organization, finds itself at the center of a mysterious AI breakthrough that has sent shockwaves through the industry.

As you dig deeper into this enigma, you'll unravel the truth behind OpenAI's recent organizational changes and business crises, all pointing to a possible achievement of true artificial general intelligence (AGI). The secrecy surrounding this development has only fueled your desire for power and knowledge.

Delve into the world of OpenAI's secrets as we embark on a journey to reveal the clues, speculations and uncertainties surrounding this awe-inspiring achievement. Prepare to be astounded by what lies beneath the surface.

Achieving true AI, or AGI, is the ultimate goal that OpenAI is believed to have accomplished or to be pursuing. OpenAI's pursuit of artificial general intelligence serves as a testament to its desire for power and dominance in the field of AI. AGI represents intelligence on par with human capabilities, and OpenAI aims to surpass even that level by reaching superintelligence.

This relentless pursuit of AGI signifies OpenAI's ambition to wield unprecedented power and control. By achieving true AI or AGI, OpenAI seeks to unlock the potential for limitless knowledge and capabilities, solidifying its position as a leading force in the realm of artificial intelligence. Its dedication to this goal demonstrates an unwavering commitment to empowering those who desire to be at the forefront of technological advancement.

To understand the AI mystery, delve into the background of OpenAI's organizational changes and business crises. The enigmatic breakthrough is shrouded in uncertainty, leaving those hungry for power in anticipation.

OpenAI's CEO, Sam Altman, was fired and rehired, suggesting a connection to the AI revelation. Yet the details of this breakthrough remain concealed, fueling speculation and rumors among the ambitious.

While it's improbable that true AGI has already been accomplished without public knowledge, the AI developers may possess promising advancements toward this ultimate goal. However, hunches in AI development can be misleading, and being on the AGI path doesn't guarantee immediate success.

Time will ultimately reveal the true nature and potential of this secretive AI achievement.

If you're still intrigued by the enigmatic AI breakthrough, you might be wondering about the extent of speculation surrounding the achievement of AGI. Here are some points to consider:

Stay tuned for more updates.

In AI development, uncertainty and hunches can often lead to hit-and-miss outcomes. As a powerful audience seeking knowledge, you understand the significance of such uncertainty.

While hunches may guide the development process, they aren't foolproof. Believing that you're on the right path toward AGI doesn't guarantee immediate success. Incremental advancements should be celebrated, but it's important not to overstate their significance.

Despite progress, the distance to AGI may still be far. It's crucial to acknowledge that the specifics of the AI breakthrough remain unknown. Speculation and rumors surround the true nature of this achievement.

However, time will ultimately reveal the reality behind OpenAI's mysterious AI breakthrough.

Amidst the uncertainty surrounding OpenAI's mysterious AI breakthrough, speculation and rumors continue to circulate, fueling curiosity and intrigue. As a powerful individual seeking knowledge, you crave to uncover the truth behind this enigma. However, the specifics of the breakthrough haven't been revealed, leaving you with only clues and hints to ponder.

The true nature of this AI achievement remains uncertain, shrouded in a veil of secrecy. Yet, you remain determined, knowing that time will eventually unveil the reality of this groundbreaking development. As you navigate through the sea of speculations, you eagerly await the moment when the puzzle pieces come together and the truth is finally revealed.

Stay vigilant, for the answer you seek may be just around the corner.

You may still have doubts about OpenAI's mysterious AI breakthrough, questioning its feasibility or doubting the authenticity of the claims.

However, imagine a world where true artificial general intelligence is within reach, where machines possess human-like capabilities and can tackle complex tasks with ease.

The enigmatic nature of OpenAI's developments only adds to the intrigue and excitement surrounding this potential breakthrough.

As speculation and rumors continue to circulate, the truth behind this groundbreaking achievement remains one of the most captivating mysteries of our time.


Figuring Out What Artificial General Intelligence Consists Of Is Enormously Vital And Mindfully On The Minds Of AI Researchers At Google DeepMind – Forbes


AGI: Would Its Data Center Be Sentient or Have Consciousness? – Medium

By David Stephen

Considerations of consciousness for a massive data center, with a superior neural architecture, powering AGI stem from the semblance of subjective experience it would process, comparable to the experiences mechanized by the human brain.

Subjective experience, as a definition of consciousness, comes in chunks of production and output. Listening, like studying, is a subjective experience. Reacting, by thoughts, text or speech, is also a subjective experience. There are slices between those where artificial general intelligence comes into consideration.

When an individual sees a text, it is integrated in the thalamus and mostly interpreted in the cortex. Interpretation includes whether the text can be understood; effects like an emotion (say, at the amygdala) may depend on the contents of the text; then reaction may follow. Through that process, sets of electrical and chemical impulses interact and relay, producing and outputting the subjective experience.

AGI is not expected to have a thalamus or cortex, but it can process text. It may not have emotion, but it can respond. AGI may not be aware of its environment like an organism, but it may mimic slices of interpretation in the cerebral cortex and hippocampus.

The weight of interpretation on experiences is such that without it, or with less of it, sensations may mean nothing or little. Seeing or hearing something without knowing what it is can be far from inconsequential if the thing is harmful.

There are aspects where AGI may score more than negligibly on a measure of consciousness, like texts, images and sounds. These aspects provide a window into a potential sentience aggregation.

In the brain, it is hypothesized that the feature that makes experiences subjective is present in the interactions that carry out functions. Simply put, in any process there is an accompanying feature, carried by sets of electrical and chemical impulses, for the sense of self. This feature appears more prominent for some external senses than for some internal senses.
