
Artificial intelligence GPT-4 shows ‘sparks’ of common sense, human-like reasoning, finds Microsoft – Down To Earth Magazine

"); o.document.close(); setTimeout(function() { window.frames.printArticleFrame.focus(); window.frames.printArticleFrame.print(); document.body.removeChild(a); }, 1000); } jQuery(document).bind("keyup keydown", function(e) { if ((e.ctrlKey || e.metaKey) && (e.key == "p" || e.charCode == 16 || e.charCode == 112 || e.keyCode == 80)) { e.preventDefault(); printArticle(); } });

OpenAI's more powerful version of ChatGPT, GPT-4, can be trained to reason and use common sense like humans, a new study by Microsoft has found.

GPT-4 is a significant step towards artificial general intelligence (AGI) and can reason, plan and learn from experience at the same level as humans do, or possibly above them, the analysis found.

The AI is part of a new cohort of large language models (LLMs), including ChatGPT and Google's PaLM. LLMs can be trained on massive amounts of data and fed both images and text to come up with answers.

Microsoft invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. The company recently published a 155-page analysis, Sparks of Artificial General Intelligence: Early experiments with GPT-4.

Read more: "If AI goes wrong, it can go quite wrong": Here's ChatGPT CEO's full testimony in US Congress

GPT-4 is also used to power Microsoft's Bing Chat feature.

The research team discovered that LLMs can be trained to reason and use common sense like humans. They demonstrated GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law and psychology.

The system available to the public is not as powerful as the version they tested, Microsoft said.

The paper gave several examples of how the AI seemed to understand concepts, like what a unicorn is. GPT-4 drew a unicorn in TikZ, a niche graphics language used within LaTeX. Crude as the drawings were, GPT-4 got the concept of a unicorn right.

To demonstrate the difference between true learning and memorisation, researchers asked GPT-4 to "Draw a unicorn in TikZ" three times over the course of one month. The AI showed a clear evolution in the sophistication of the drawings. Source: Microsoft
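
For readers curious what that experiment looks like mechanically, here is a minimal sketch of issuing such a prompt through the pre-1.0 openai Python client; the model name, key handling and prompt wording are illustrative assumptions rather than details taken from the paper.

# Minimal sketch (assumes the pre-1.0 "openai" package and an API key in the
# OPENAI_API_KEY environment variable); the prompt mirrors the paper's
# "Draw a unicorn in TikZ" experiment, everything else is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Draw a unicorn in TikZ."}],
)

# The reply is LaTeX/TikZ source; compiling it with pdflatex shows how well
# the model captured the concept.
print(response["choices"][0]["message"]["content"])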

GPT-4 also exhibited more common sense than previous models, like ChatGPT, the researchers said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle and a nail.

While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.

Read more: Thirsty AIs: ChatGPT drinks half a litre of fresh water to answer 20-50 questions, says study

However, the report acknowledged that the AI still has limitations and biases, and warned users to be careful. GPT-4 is not fully reliable: it still hallucinates facts and makes reasoning and basic arithmetic errors.

The analysis read:

While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.

The paper also cautioned users about its limitations, such as confidence calibration, cognitive fallacies and irrationality, and challenges with sensitivity to inputs.

Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with an additional context or avoiding high-stakes uses altogether) matching the needs of a specific use-case, it said.



The Senate’s hearing on AI regulation was dangerously friendly – The Verge

The most unusual thing about this week's Senate hearing on AI was how affable it was. Industry reps, primarily OpenAI CEO Sam Altman, merrily agreed on the need to regulate new AI technologies, while politicians seemed happy to hand over responsibility for drafting rules to the companies themselves. As Senator Dick Durbin (D-IL) put it in his opening remarks: "I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them."

This sort of chumminess makes people nervous. A number of experts and industry figures say the hearing suggests we may be headed into an era of industry capture in AI. If tech giants are allowed to write the rules governing this technology, they say, it could have a number of harms, from stifling smaller firms to introducing weak regulations.

Industry capture could harm smaller firms and lead to weak regulations

Experts at the hearing included IBM's Christina Montgomery and noted AI critic Gary Marcus, who also raised the specter of regulatory capture. (The peril, said Marcus, is that "we make it appear as if we are doing something, but it's more like greenwashing and nothing really happens, we just keep out the little players.") And although no one from Microsoft or Google was present, the unofficial spokesperson for the tech industry was Altman.

Although Altman's OpenAI is still called a startup by some, it's arguably the most influential AI company in the world. Its launch of image and text generation tools like ChatGPT and deals with Microsoft to remake Bing have sent shockwaves through the entire tech industry. Altman himself is well positioned: able to appeal to both the imaginations of the VC class and hardcore AI boosters with grand promises to build superintelligent AI and, maybe one day, in his own words, "capture the light cone of all future value in the universe."

At the hearing this week, he was not so grandiose. Altman, too, mentioned the problem of regulatory capture but was less clear about his thoughts on licensing smaller entities. "We don't wanna slow down smaller startups. We don't wanna slow down open source efforts," he said, adding, "We still need them to comply with things."

Sarah Myers West, managing director of the AI Now Institute, tells The Verge she was suspicious of the licensing system proposed by many speakers. "I think the harm will be that we end up with some sort of superficial checkbox exercise, where companies say 'yep, we're licensed, we know what the harms are and can proceed with business as usual,' but don't face any real liability when these systems go wrong," she said.

Requiring a license to train models would ... further concentrate power in the hands of a few

Other critics, particularly those running their own AI companies, stressed the potential threat to competition. "Regulation invariably favours incumbents and can stifle innovation," Emad Mostaque, founder and CEO of Stability AI, told The Verge. Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar reaction: "Requiring a license to train models would be like requiring a license to write code. IMO, it would further concentrate power in the hands of a few & drastically slow down progress, fairness & transparency."

But some experts say some form of licensing could be effective. Margaret Mitchell, who was forced out of Google alongside Timnit Gebru after authoring a research paper on the potential harms of AI language models, describes herself as a proponent of some amount of self-regulation, paired with top-down regulation. She told The Verge that she could see the appeal of certification but perhaps for individuals rather than companies.

You could imagine that to train a model (above some thresholds) a developer would need a commercial ML developer license, said Mitchell, who is now chief ethics scientist at Hugging Face. This would be a straightforward way to bring responsible AI into a legal structure.

Mitchell added that good regulation depends on setting standards that firms can't easily bend to their advantage and that this requires a nuanced understanding of the technology being assessed. She gives the example of facial recognition firm Clearview AI, which sold itself to police forces by claiming its algorithms are "100 percent accurate." This sounds reassuring, but experts say the company used skewed tests to produce these figures. Mitchell added that she generally does not trust Big Tech to act in the public interest. "Tech companies [have] demonstrated again and again that they do not see respecting people as a part of running a company," she said.

Even if licensing is introduced, it may not have an immediate effect. At the hearing, industry representatives often drew attention to hypothetical future harms and, in the process, gave scant attention to known problems AI already enables.

For example, researchers like Joy Buolamwini have repeatedly identified problems with bias in facial recognition, which remains inaccurate at identifying Black faces and has produced many cases of wrongful arrest in the US. Despite this, AI-driven surveillance was not mentioned at all during the hearing, while facial recognition and its flaws were only alluded to once in passing.

Industry figures often stress future harms of AI to avoid talking about current problems

AI Now's West says this focus on future harms has become a common rhetorical sleight of hand among AI industry figures. These individuals "position accountability right out into the future," she said, generally by talking about artificial general intelligence, or AGI: a hypothetical AI system smarter than humans across a range of tasks. Some experts suggest we're getting closer to creating such systems, but this conclusion is strongly contested.

This rhetorical feint was obvious at the hearing. Discussing government licensing, OpenAI's Altman quietly suggested that any licenses need only apply to future systems. "Where I think the licensing scheme comes in is not for what these models are capable of today," he said. "But as we head towards artificial general intelligence, that's where I personally think we need such a scheme."

Experts compared Congress' (and Altman's) proposals unfavorably to the EU's forthcoming AI Act. The current draft of this legislation does not include mechanisms comparable to licensing, but it does classify AI systems based on their level of risk and imposes varying requirements for safeguards and data protection. More notable, though, are its clear prohibitions of known and currently harmful AI use cases, like predictive policing algorithms and mass surveillance, which have attracted praise from digital rights experts.

As West says, "That's where the conversation needs to be headed if we're going for any type of meaningful accountability in this industry."


Fear the fire or harness the flame: The future of generative AI – VentureBeat


Generative AI has taken the world by storm, so much so that in the last several months, the technology has twice been a major feature on CBS's 60 Minutes. The rise of startlingly conversant chatbots such as ChatGPT has even prompted warnings of runaway technology from some luminary artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive (perhaps dazzling would be a better adjective), it might be even further advanced than is generally understood.

This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a "stochastic parrot," a system that simply mimics its underlying dataset. Instead, they are seeing "an AI system that is coming up with humanlike answers and ideas that weren't programmed into it." This observation comes from Microsoft and is based on responses to their prompts from OpenAI's ChatGPT.

Their view, as put forward in a research paper published in March, is that the chatbot showed "sparks of artificial general intelligence" (AGI), the term for a machine that attains the resourcefulness of human brains. This would be a significant development, as AGI is thought by most to still be many years, possibly decades, into the future. Not everyone agrees with their interpretation, but Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring this AGI idea.

Separately, Scientific American described several similar research outcomes, including one from philosopher Raphaël Millière of Columbia University. He typed a program into ChatGPT, asking it to calculate the 83rd number in the Fibonacci sequence.


"It's multistep reasoning of a very high degree," he said.

The chatbot nailed it. It shouldn't have been able to do this, since it isn't designed to manage a multistep process. Millière hypothesized that the machine improvised a memory within the layers of its network for interpreting words according to their context, another AGI-style behavior. Millière believes this behavior is much like how nature repurposes existing capacities for new functions, such as the evolution of feathers for insulation before they were used for flight.
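
For context, the underlying task is trivial to express as an ordinary program; a short, illustrative Python version (not the program Millière used, whose exact wording isn't reproduced here) looks like this:

# Illustrative only: computes the 83rd Fibonacci number iteratively,
# using the common 1-indexed convention F(1) = F(2) = 1.
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(83))  # 99194853094755497 under this convention

The point of the demonstration was not the arithmetic itself but that a next-word predictor managed to carry out the multistep procedure such a program describes.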

With large language models (LLMs) arguably already showing early signs of AGI, developers continue to make advances. Late last week, Google announced significant upgrades to their Bard chatbot. This upgrade included moving Bard to the new PaLM 2 large language model. Per a CNBC report, PaLM 2 uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks. Not to be outdone, OpenAI this week started to make plug-ins available for ChatGPT, including the ability to access the Internet in real time instead of relying solely on a dataset with content through 2021.

At the same time, Anthropic announced an expanded context window for their Claude chatbot. Per a LinkedIn post from AI expert Azeem Azhar, a context window is the length of text that an LLM can process and respond to.

In a sense, it is like the memory of the system for a given analysis or conversation, Azhar wrote. Larger context windows allow the systems to have much longer conversations or to analyze much bigger, more complex documents.

According to this post, the window for Claude is now about three times larger than that for ChatGPT.
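
Context windows are measured in tokens rather than characters or pages, so a rough way to gauge whether a document fits is to count its tokens. The sketch below uses OpenAI's tiktoken library as an example tokenizer; the window size and file name are illustrative parameters, not quoted specifications for Claude or ChatGPT.

# Rough sketch: count tokens in a document to see whether it fits in a
# model's context window. Assumes the "tiktoken" package is installed;
# the window size is a hypothetical constant, not a quoted spec.
import tiktoken

CONTEXT_WINDOW = 100_000  # illustrative limit, in tokens

encoding = tiktoken.get_encoding("cl100k_base")
document = open("report.txt").read()  # placeholder file
tokens = encoding.encode(document)

print(f"{len(tokens)} tokens; fits in window: {len(tokens) <= CONTEXT_WINDOW}")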

All of which is to say that if ChatGPT exhibited sparks of AGI in research performed several months ago, the state of the art has already surpassed those capabilities. That said, there remain numerous shortcomings to these models, including occasional hallucinations where they simply make up answers. But it is the speed of advances that has spooked many and led to urgent calls for regulation. However, Axios reports the likelihood that lawmakers in the U.S. will unite and act on AI regulation before the technology rapidly develops remains slim.

Those who see an existential danger from AI worry that AI could destroy democracy or humanity. This group of experts now includes Geoffrey Hinton, the Godfather of AI, along with long-time AI doomsayers such as Eliezer Yudkowsky. The latter said that by building a superhumanly smart AI, literally everyone on Earth will die.

While not nearly as dire in their outlook, even the executives of leading AI companies (including Google, Microsoft, and OpenAI) have said they believe AI regulation is necessary to avoid potentially damaging outcomes.

Amid all of this angst, Casey Newton, author of the Platformer newsletter, recently wrote about how he should approach what is essentially a paradox. Should his coverage emphasize the hope that AI is the best of us and will solve complex problems and save humanity, or should it instead speak to how AI is the worst of us, obfuscating the truth, destroying trust and, ultimately, humanity?

There are those who believe the worries are overblown. Instead, they see this response as a reactionary fear of the unknown, or what amounts to technophobia. For example, essayist and novelist Stephen Marche wrote in the Guardian that tech doomerism is a species of hype.

He blames this in part on the fears of engineers who build the technology but who simply have no idea how their inventions interact with the world. Marche dismisses the worry that AI is about to take over the world as anthropomorphizing and storytelling; "it's a movie playing in the collective mind, nothing more." Demonstrating how in thrall we are to these themes, a new movie expected this fall pits humanity against the forces of AI in a planet-ravaging war for survival.

A common sense approach was expressed in an opinion piece from Professor Ioannis Pitas, chair of the International AI Doctoral Academy. Pitas believes AI is a necessary human response to a global society and physical world of ever-increasing complexity. He sees the positive impact of AI systems greatly outweighing their negative aspects if proper regulatory measures are taken. In his view, AI should continue to be developed, but with regulations to minimize already evident and potential negative effects.

This is not to say there are no dangers ahead with AI. Alphabet CEO Sundar Pichai has said, AI is one of the most important things humanity is working on. It is more profound than electricity or fire.

Perhaps fire provides a good analogy. There have been many mishaps in handling fire, and these still occasionally occur. Fortunately, society has learned to harness the benefits of fire while mitigating its dangers through standards and common sense. The hope is that we can do the same thing with AI before we are burned by the sparks of AGI.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.


Programmed Values: The Role of Intention in Developing AI – Psychology Today

The advent of artificial intelligence (AI) seems to have sent shockwaves through the world. Not a day passes without my clients and students contemplating its effects. James, a young journalist, told me, "Maybe AIs will be wonderful assistants when I do research," while Ravi, an electrical engineering student, is less optimistic. "In five years, our society will be completely changed by the influence of AI." He adds, "The problem is that AI will develop in ways we can't foresee, with exponential speed." He looks downcast. "I find that deeply depressing, and it leaves me sleepless at night."

AI is a widespread source of anxiety. Many worry that their jobs will become obsolete since AI can do some tasks faster and better than humans. Tristan Harris, from the Center for Humane Technology, warns that privacy can easily be violated in many areas of our lives. He also suggests that AI systems can aggravate existing societal biases and discrimination. Malicious actors can exploit vulnerabilities in AI systems, leading to the manipulation of public opinion. This could lead to negative psychological impacts on marginalized individuals or communities. He is especially concerned about the potential for AI to increase polarization. Intelligent algorithms could radicalize moderate consumers to capture attention.

Juan, a young scientist from our local university interested in existential and spiritual concerns, has an exciting view. I wish we could program a longing for wisdom and goodness into our AI. Then AI could influence our society in a positive, compassionate way.

Leike, Schulman, and Wu speak about the AI alignment problem, which describes the degree of correspondence between the values and motivations of humans and AI. They tell us, Unaligned AGI could pose significant risks to humanity, and solving the AGI (Artificial General Intelligence) alignment problem could be so difficult that it will require all humanity to work together.

Philosopher Nick Bostrom notes that to build an AI that acts safely while acting in many domains, with many consequences, including problems engineers never explicitly envisioned, one must specify good behavior, as in "X such that the consequence of X is not harmful to humans." In brief, many AI researchers and thinkers believe good, human-compatible intentions must be explicitly programmed into how AI systems are designed. AI systems carry the intentions of the people who create them, whether coded intentionally or not.

AI reflects the intention of the person who created it. If the intention is to make a profit, then that is what the AI will do. If the intention is to replace a human's work, then that is what the AI will do. If the intention is to mimic a human's voice and expression, then that is what the AI will do. AI has no inherent sense of caring, intuition, or intrinsic conscience.

To tackle the enormous problem of how to ethically and safely achieve the goal of integrating AI into society, we need to build alignment and intention. Alignment refers to the state of congruence between values, beliefs, and goals. In terms of purpose, it involves aligning our actions and plans with a more profound sense of purpose in life. Intentions play a crucial role in shaping our experiences and outcomes and help us to stay focused, motivated, and in tune with our goals, even when we face challenges. By being fully present with our inner values, we align our intentions with our core values. When alignment and intention work, they create a powerful positive synergy.

That is why we have to clarify our intention, tune into our intuition, and be conscious when programming AI. We need the utmost clarity and self-awareness as an individual, a group of scientists, a society, or an international decision-making body.

Buddhist psychology can help us here. It emphasizes the importance of cultivating wholesome intentions. Wholesome intention originates in our generosity, loving kindness, compassion, and the absence of harmful desires. This leads indirectly to positive outcomes by promoting behaviors aligned with those intentions. Meditation and mindfulness allow individuals to make conscious choices that lead to well-being and spiritual progress.

It is important to note that Buddhist psychology does not solely focus on intentions but also considers the actual consequences of actions. Intention alone is insufficient to determine an action's ethical value, as the outcome and impact on oneself and others are also considered. However, intention serves as a crucial starting point and a significant factor in determining the ethical quality of an action.

I recommend that we lean on positive and Buddhist psychology to develop a playbook for making AI useful for individuals, groups, and societies so we successfully support our health and well-being. The potential threats of unaligned AI are undoubtedly immense, but we are not powerless in their midst. Careful alignment and encoding of positive human values in AI systems require us to understand our values, intentions, and motivations. Here, Buddhist psychology can serve as a guide, offering practices to listen, discover our intentions, and more deeply align them with our best interests.


Henry Kissinger says the U.S. and China are in a classic pre-World War I situation that could lead to conflict, but A.I. makes this not a normal…

U.S.-China tensions have been called a repeat of Cold War-era dynamics. But a more apt comparison may be the years preceding World War I, when much of Europe was sitting on a powder keg of hostilities and overlapping territorial claims that required only two bullets to ignite. The difference now is that the great rivals have access to nuclear weapons, and with artificial intelligence they could be on the cusp of creating another tool that could wipe out mankind, according to former Secretary of State Henry Kissinger.

The U.S. and China represent the greatest dangers to peace right now if their animosities escalate to a military confrontation, Kissinger said in an interview with The Economist published Wednesday. The two countries are heading toward a major confrontation, he said, as both nations have convinced themselves that the other represents a strategic danger.

Tensions over Taiwan are likely to be a major flashpoint for future conflict, as President Joe Biden has signaled the U.S. would aid the island nation if China invades to reconquer what it considers a breakaway state. But much like with nuclear weapons during the Cold War, the world is now perfecting a new technology, A.I., that may be too dangerous to even consider deploying militarily.

[China and America] are two powers of the type where, historically, a military confrontation was inevitable. But this is not a normal circumstance, because of mutually assured destruction and artificial intelligence, Kissinger said.

U.S.-China relations are now at a low point. They started declining after the Trump administration imposed strict trade tariffs on China that have continued under the Biden administration. More recently, the two sides have clashed over China's intellectual property theft from U.S. companies, Chinese President Xi Jinping's close relationship with Russian counterpart Vladimir Putin, and the rising risk of China invading Taiwan.

"We're in the classic pre-World War I situation, where neither side has much margin of political concession and in which any disturbance of the equilibrium can lead to catastrophic consequences," Kissinger said.

The world, in general, is becoming more dangerous, Kissinger said. He reluctantly supports Ukraine joining the military alliance NATO once the war ends, after years spent pushing back on the idea. India is another rising power, and Kissinger advocated strengthening its military in case occasional border skirmishes with China escalate to an all-out invasion.

Meanwhile, Taiwan isn't the only East Asian country concerned about China's rise, as Japanese officials have long warned that China poses an existential threat to the region. Last year, Japan announced a $320 billion military build-up plan, ending decades of pacifism. The plan includes missiles capable of reaching China, but the build-up could go even further, as Kissinger predicted that Japan is heading towards becoming a nuclear power in five years.

Just as superpowers settled into an unspoken detente from mutual destruction during the Cold War, A.I. might be weaponized with similar implications.

"We are at the very beginning of a capability where machines could impose global pestilence or other pandemics, not just nuclear, but any field of human destruction," Kissinger said. The circumstances require responsible leaders, who at least make an attempt to avoid conflict.

Current A.I. systems like OpenAI's ChatGPT have threatened economic disruption and widespread layoffs by making certain jobs obsolete, but some computer science experts have warned that the technology could still be in its infancy, and keeping up with it will likely get a lot harder in the future.

The U.S. and China are racing to dominate A.I. because of its potential economic importance, although former Google CEO Eric Schmidt, who earlier this year compared A.I.'s impact on warfare to that of nuclear weapons, warned this week that the U.S. is only slightly ahead of China on A.I.

Some A.I. experts have cautioned that the world is simply not ready for A.I. being refined to a more sophisticated level known as artificial general intelligence. AGI, as it's also called, describes technology that can match human capabilities and reasoning, and it may be achieved in only a few years. If computers become capable of that level of intelligence, humans might lose control and the technology could threaten the world, Kissinger said.

"We may well wind up destroying ourselves. And a point is now quite reachable where the machines can refuse to be shut off. I mean, once the machines recognized this possibility, they can build it into their advice prior to the contingency. Many genius scientists believe that, and they know more than I do."

Kissinger called for restraint in developing new technologies with civilization-ending potential, adding that as the world returns to being a more dangerous place, it's essential that responsible governance prevails over hot heads.

"When we're in an adversarial world with mutually assured destruction, you owe it morally to your society to avoid it," he said.


AI glossary: words and terms to know about the booming industry – NBC News

The artificial intelligence (AI) boom has brought with it a cornucopia of jargon, from "generative AI" to "synthetic data", that can be hard to parse. And as hard as it is to really understand what AI is (see our explainer for that), having a working knowledge of AI terms can help make sense of this technology.

As part of our series explaining the basics of AI, here is a short glossary of terms that will hopefully help you navigate the rapidly developing field.

Artificial Intelligence: Technology that aims to replicate human-like thinking within machines. Some examples of abilities that fall into this category include identifying people in pictures, working in factories and even doing taxes.

Generative AI: Generative AI is an AI that can create things like text, images, sound and video. Traditional applications of AI largely classify content, while generative AI models create it. For instance, a voice recognition model can identify your voice, while a generative voice model can use your voice to create audiobooks. Almost all models that have recently captured the public's attention have been generative, including chatbots like OpenAI's ChatGPT, image creators like Stable Diffusion and Midjourney, and voice-cloning programs like Resemble.

Training Data: A collection of information (text, images, sound) curated to help AI models accomplish tasks. In language models, training datasets focus on text-based materials like books, comments from social media, and even code. Because AI models learn from training data, ethical questions have been raised around its sourcing and curation. Low-quality training data can introduce bias, leading to unfair models that make racist or sexist decisions.

Algorithmic Bias: An error resulting from bad training data and poor programming that causes models to make prejudiced decisions. Such models may draw inappropriate assumptions based on gender, ability or race. In practice, these errors can cause serious harm by affecting decision-making from mortgage applications to organ-transplant approvals. Many critics of the speedy rollout of AI have invoked the potential for algorithmic bias.

Artificial General Intelligence (AGI): A description of programs that are as capable or even more capable than a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Autonomous Agents: An AI model that has both an objective and enough tools to achieve it. For instance, self-driving cars are autonomous agents that use sensory input, GPS data and driving algorithms to make independent decisions about how to navigate and reach destinations. A group of autonomous agents can even develop cultures, traditions and shared language, as researchers from Stanford have demonstrated.

Prompt Chaining: The process of using previous interactions with an AI model to create new, more finely tuned responses, specifically in prompt-driven language modeling. For example, when you ask ChatGPT to send your friend a text, you expect it to remember things like the tone you use to talk to her, inside jokes and other content from previous conversations. These techniques help incorporate this context.
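
In practice, chat APIs typically implement this by resending prior turns alongside the new prompt so the model can draw on them as context. A minimal sketch, again assuming the pre-1.0 openai Python client and a hypothetical conversation:

# Prompt chaining sketch: earlier turns are passed back in with each request.
# The client, model name and message content are assumptions for illustration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = [
    {"role": "user", "content": "Draft a text to Sam about Friday's dinner."},
    {"role": "assistant", "content": "Hey Sam! Still on for dinner Friday at 7?"},
]

# The follow-up prompt relies on the earlier turns for tone and context.
history.append({"role": "user", "content": "Make it more casual and shorter."})

reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
print(reply["choices"][0]["message"]["content"])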

Large Language Models (LLM): An application of AI, usually generative, that aims to understand, engage and communicate with language in a human-like way. These models are distinguished by their large size: The biggest version of GPT-3, a direct predecessor to ChatGPT, contained 175 billion different variables, called parameters, that were trained on 570 gigabytes of data. Google's PaLM model is even larger, having 540 billion parameters. As hardware and software continue to advance, this scale is expected to increase.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that are not yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze, or make up facts about events that aren't in its training data. It's not fully understood why this happens, but it can arise from sparse data, information gaps and misclassification.

Emergent Behavior: Skills that AI might demonstrate that it was not explicitly built for. Some examples include emoji interpretation, sarcasm and using gender-inclusive language. A research team at Google Brain identified over 100 of these behaviors, noting that more are likely to emerge as models continue to scale.

Alignment: Efforts to ensure AI systems share the same values and goals as their human operators. To bring motives into agreement, alignment research seeks to train and calibrate models, often using functions to reward or penalize models. If the model does a good job, you give it positive feedback. If not, you give it negative feedback.

Multimodal AI: A form of AI that can understand and work with multiple types of information, including text, image, speech and more. This is powerful because it allows AI to understand and express itself in multiple dimensions, giving both a broader and more nuanced understanding of tasks. One application of multimodal AI is this translator, which can convert Japanese comics into English.

Prompt Engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAIs ChatGPT, describing the tasks users feed into the algorithm (e.g. Give me five popular baby names).

Training: Training is the process of refining AI using data so it's better suited for a task. An AI can be trained by feeding in data based on what you want it to learn from, like feeding Shakespearean sonnets to a poetry bot. You can do this multiple times in iterations called epochs, until your model's performance is consistent and reliable.
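
As a toy illustration of epochs, the sketch below repeatedly passes the same tiny dataset through a one-parameter model until its output stabilizes; the numbers are made up and the model is deliberately trivial.

# Toy illustration of epochs: each full pass over the training data is one
# epoch, and repeated passes gradually refine the model's single weight.
data = [(1, 2), (2, 4), (3, 6)]   # (input, target) pairs for y = 2x
w = 0.0                           # the model's only parameter
lr = 0.01                         # learning rate

for epoch in range(100):          # 100 epochs over the same data
    for x, y in data:
        error = w * x - y
        w -= lr * error * x       # nudge the weight to shrink the error

print(round(w, 3))                # approaches 2.0 as training converges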

Neural Networks: Neural networks are computer systems constructed to approximate the structure of human thought, specifically via the structure of your brain. They're built like this because they allow a model to build up from the abstract to the concrete. In an image model, initial layers might form concepts like color or position, building up to firmer, more familiar forms like fruit or animals.
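
A bare-bones sketch of that layered idea, with made-up weights, is shown below: raw inputs pass through one layer of simple feature detectors, whose outputs feed a second layer that combines them into something more abstract.

# Bare-bones forward pass through two layers (illustrative weights only).
# Early layers compute simple features; later layers combine them,
# mirroring the "color/position -> fruit/animal" progression above.
import math

def layer(inputs, weights):
    # each neuron: weighted sum of its inputs, squashed into the range 0..1
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
            for ws in weights]

pixels = [0.2, 0.9, 0.4]                                        # raw input
features = layer(pixels, [[1.0, -1.0, 0.5], [0.3, 0.8, -0.2]])  # simple features
abstract = layer(features, [[1.5, -0.7]])                       # combined concept
print(abstract)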

Narrow AI: Some AI algorithms have a one-track mind. Literally. They're designed to do one thing and nothing more. If a narrow AI algorithm can play checkers, it can't play chess. Examples include algorithms that only detect NSFW images and recommendation engines designed to tell you what Amazon or Etsy product to buy next.

Jasmine Cui is a reporter for NBC News.

Jason Abbruzzese is the senior editor for technology, science and climate for NBC News Digital.


A glimpse of AI technologies at the WIC in N China’s Tianjin – CGTN


The seventh World Intelligence Congress (WIC), a major artificial intelligence (AI) event in China, kicked off on Thursday in north China's Tianjin Municipality, focusing on topics such as intelligent connected vehicles, artificial general intelligence and brain-computer interfaces.

China's AI industry is making steady progress in technological innovation, industrial ecology and integrated application, with the scale of its core sectors reaching 508 billion yuan ($72.5 billion) in 2022, an increase of 18 percent year on year, according to the China Academy of Information and Communications Technology.

A new generation of intelligent electric vehicle technology at the seventh World Intelligence Congress in north China's Tianjin Municipality, May 18, 2023. /CFP


The WIC exhibition featured technologies and products related to generative AI and 5G plus industrial internet.

Generative AI on display included SparkDesk, developed by iFLYTEK, a homegrown intelligent speech and AI company, as well as a generative language model from the National Supercomputing Center of Tianjin.

Also on show were examples of how the integration of 5G plus industrial internet has deepened in supporting multiple scenarios of the manufacturing industry, including those used in inspection and transport.

A visitor plays chess with a robot at the seventh WIC in Tianjin, May 18, 2023. /CFP


Nearly 500 enterprises participated in the exhibition, including 350 intelligent technology enterprises and 51 research institutions and universities, according to the WIC.

The exhibition presented music, literature, art and other fields through AI, 3D, metaverse and other technologies to break time and space restrictions, allowing participants an immersive experience.

An intelligent driving challenge and four other competitions were also held during the congress.

(Cover image via CFP, designed by Xing Cheng)


UK schools bewildered by AI and do not trust tech firms, headteachers say – The Guardian


School leaders announce launch of body to protect students from the risks of artificial intelligence

Sat 20 May 2023 05.50 EDT

Schools are bewildered by the fast pace of development in artificial intelligence and do not trust tech firms to protect the interests of students and educational establishments, headteachers have written.

A group of UK school leaders have announced the launch of a body to advise and protect schools from the risks of AI, with their fears not limited to the capacity of chatbots such as ChatGPT to aid cheating. There are also concerns about the impact on children's mental and physical health, as well as on the teaching profession itself, according to the Times.

The headteachers' fears were outlined in a letter to the Times in which they warned of the "very real and present hazards and dangers" being presented by AI, which has gripped the public imagination in recent months through the emergence of breakthroughs in generative AI, where tools can produce plausible text, images and even voice impersonations on command.

The group of school leaders is led by Sir Anthony Seldon, the head of Epsom College, a fee-paying school, while the AI body is supported by the heads of dozens of private and state schools.

The letter to the Times says: Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust? We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools and in the past the government has not shown itself capable or willing to do so.

Signatories to the letter include Seldon, Chris Goodall, the deputy head of Epsom & Ewell High School, and Geoff Barton, general secretary of the Association of School and College Leaders.

It adds that the group is pleased the government is now grasping the nettle on the issue. This week Rishi Sunak said guardrails would have to be put around AI as Downing Street indicated support for a global framework for regulating the technology. However, the letter adds that educational leaders are forming their own advisory body because AI is moving too quickly for politicians to cope.

AI is moving far too quickly for the government or parliament alone to provide the real-time advice schools need. We are thus announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts.

Supporters include James Dahl, the head of Wellington College in Berkshire, and Alex Russell, chief executive of the Bourne Education Trust, which runs about two dozen state schools.

The Times reported that the group would create a website led by the heads of science or digital at 15 state and private schools, offering guidance on developments in AI and what technology to avoid or embrace.

Seldon told the Times: Learning is at its best, human beings are at their best, when they are challenged and overcome those challenges. AI will make life easy and strip away learning and teaching unless we get ahead of it.

The Department for Education said: "The education secretary has been clear about the government's appetite to pursue the opportunities and manage the risks that exist in this space, and we have already published information to help schools do this. We continue to work with experts, including in education, to share and identify best practice."



The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

Curiosity, conversation, and investment into artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren't yet capable of, as well as an assessment of security and performance risks, according to industry experts.

With the tax world exploring how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one's individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with important fundamentals and key technological differences under the broad-stroke term of AI.

An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).

LLMs, according to PricewaterhouseCoopers Principal Chris Kontaridis, are text-based and use statistical methodologies to create a relationship between your question and patterns of data and text. In other words, the more data an LLM like ChatGPT, which is currently learning from users across the entire internet, absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT is not a knowledge model, Kontaridis said. Calling ChatGPT a knowledge model would insinuate that it is going to give you the correct answer every time you put in a question. Because it is not artificial general intelligence, something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not self-reasoning, he said.

"We're not even close to having real AGI out there," Kontaridis added.

Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal said at the ABA conference that the really important thing when you're using a tool like [ChatGPT] is recognizing its limitations. He explained that it is not providing source material for legal or tax advice. What it's doing, and this is very important, is simply making a probabilistic determination about the next likely word. For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it might give you a slightly different answer with different words because it's responding to a different prompt.
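
Aidid's "next likely word" description can be made concrete with a toy example. The sketch below builds a deliberately tiny bigram counter over a made-up corpus; it is nothing like how GPT-class models are actually built, but it shows the same probabilistic flavor: the output is a distribution over what word comes next, not a lookup of facts.

# Toy "next likely word" model: count which word follows which in a tiny
# corpus, then convert counts to probabilities. Deliberately simplistic;
# real LLMs use neural networks trained on vast datasets, but they too
# ultimately produce a probability distribution over the next token.
from collections import Counter, defaultdict

corpus = "the tax code is complex the tax return is due".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

word = "tax"
total = sum(following[word].values())
for nxt, count in following[word].items():
    print(f"P({nxt!r} | {word!r}) = {count / total:.2f}")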

At a separate panel, Ken Crutchfield, vice president and general manager of Legal Markets, said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact his father Bryant Crutchfield is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: "I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said yes, Bryant Crutchfield did invent the Trapper Keeper." Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father's name, but listed his own alma mater. "So it's getting better and kind of learns through these back-and-forths with people that are interacting."

Aidid explained that these instances are referred to as hallucinations: when an AI does not know the answer, it essentially makes something up on the spot based on the data and patterns it has up to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because it is currently limited to knowledge no more recent than September 2021. Generative AI like ChatGPT is still more sophisticated than more base-level tools that work off of decision trees, such as when a taxpayer interacts with the IRS Tax Assistant Tool, Aidid said. The Tax Assistant Tool, Aidid said, is not generative AI.

Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that it is especially problematic because the [Tax Assistant Tool] is implying that it has all that information and is generating responses based on the world of information, but it's really not doing that, so it's misleading.

The most potential for the application of generative AI is with so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning can work with unstructured data. Such technology can not only synthesize and review information, but review new information for us. "It's starting to take all that and generate things, not simple predictions, but actually generate things that are in the style and mode of human communication, and that's where we're seeing significant investment today."

Herzfeld said that machine learning is already being used in tax on a daily basis, but it is harder to see where deep learning fits in tax law. These more advanced tools will likely be developed in-house at firms, in partnership with AI researchers.

PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. Freeing up staff to focus efforts on other things while AI sifts through mountains of data is a boon, he said.

However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that it's really important to make sure, before you deploy something like this to your staff or use it yourself, that you're doing it in a safe environment where you are protecting the confidentiality of your personal IP and privilege that you have with your clients.

Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance on, or lack of oversight of, AI, which she called a very broad societal risk. Kontaridis assured the audience that he is not worried about generative AI replacing the role of the tax professional: "this is a tool that will help us do our work better."

Referring to the myth that CPA bots will take over the industry, he said: "What I'm worried about is the impact it has on our profession at the university level of it, discouraging bright young minds from pursuing careers in tax and accounting consulting."



Zoom Invests in and Partners With Anthropic to Improve Its AI … – PYMNTS.com

Zoom has become the latest tech company riding this year's wave of artificial intelligence (AI) integrations.

The video conferencing platform announced in a Tuesday (May 16) press release that it has teamed with and is investing in AI firm Anthropic.

The collaboration will integrate Anthropic's AI assistant, Claude, with Zoom's platform, beginning with Zoom Contact Center.

"With Claude guiding agents toward trustworthy resolutions and powering self-service for end-users, companies will be able to take customer relationships to another level," said Smita Hashim, chief product officer for Zoom, in the release.

Working with Anthropic, Hashim said, furthers the company's goal of a federated approach to AI while also advancing leading-edge companies like Anthropic and helping to drive innovation in the Zoom ecosystem and beyond.

"As the next step in evolving the Zoom Contact Center portfolio (Zoom Virtual Agent, Zoom Contact Center, Zoom Workforce Management), Zoom plans to incorporate Anthropic AI throughout its suite, improving end-user outcomes and enabling superior agent experiences," the news release said.

Zoom said in the release it eventually plans to incorporate Anthropic AI throughout its suite, including products like Team Chat, Meetings, Phone, Whiteboard and Zoom IQ.

Last year, Zoom debuted Zoom Virtual Agent, an intelligent conversational AI and chatbot tool that employs natural language processing and machine learning to understand and solve customer issues.

The company did not reveal the amount of its investment in Anthropic, which is backed by Google to the tune of $300 million.

Zoom's announcement came amid a flurry of AI-related news Tuesday, with fraud prevention firm ComplyAdvantage launching an AI tool and the New York Times digging into Microsoft's claims that it had made a breakthrough in the realm of artificial general intelligence.

Perhaps the biggest news is OpenAI CEO Sam Altman's testimony before a U.S. Senate subcommittee, in which he warned: "I think if this technology goes wrong, it can go quite wrong."

Altman's testimony happened as regulators and governments around the world step up their examination of AI in a race to mitigate fears about its transformative powers, which have spread in step with the future-fit technology's ongoing integration into the broader business landscape.
