
Cloud Server Abuse Leads to Huge Spike in Botnet Scanning – SecurityWeek

Malicious actors are increasingly abusing free cloud services, which has led to a significant spike in botnet scanning activity, according to performance and security solutions provider Netscout.

Netscout typically sees 10,000-20,000 IP addresses conducting internet scans every day. However, the company observed an increase to more than 35,000 devices on December 8 and another spike that reached 43,000 devices on December 20.

The number of source IPs associated with scanning activity saw a sharp increase on several days since, peaking on January 5, with nearly 1.3 million IPs. A majority of the IPs are associated with the US, China, Vietnam, Taiwan and Russia, Netscout reported.

Analysis of the activity has uncovered a rise in the use of cheap or free cloud and hosting servers that attackers are using to create botnet launch pads. These servers are used via trials, free accounts, or low-cost accounts, which provide anonymity and minimal overhead to maintain, the company explained in a blog post.

Netscout believes the scanning represents reconnaissance activity, with hackers using these new botnets to find vulnerabilities they can exploit.

The most commonly targeted ports are associated with HTTP, HTTPS, RDP, SIP, and other types of web servers, and there has also been an increase in scanning for email servers.
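For defenders, one practical response to this kind of reconnaissance is to watch how many distinct sources are probing those same services in your own telemetry. The sketch below is a minimal, hypothetical illustration of that idea, not anything Netscout describes: the flow-log file name, CSV format, and column names are assumptions you would adapt to your own logging.

# Hypothetical sketch: tally inbound connection attempts by destination port
# from a flow log, to spot the services scanners probe most often.
# Assumes a CSV with "src_ip" and "dst_port" columns -- adjust to your telemetry.
import csv
from collections import Counter

WATCHED_PORTS = {80: "HTTP", 443: "HTTPS", 3389: "RDP", 5060: "SIP", 25: "SMTP"}

def tally_scan_targets(flow_log_path: str) -> Counter:
    """Count distinct source IPs seen probing each watched port."""
    sources_per_port = {}
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            port = int(row["dst_port"])
            if port in WATCHED_PORTS:
                sources_per_port.setdefault(port, set()).add(row["src_ip"])
    return Counter({port: len(ips) for port, ips in sources_per_port.items()})

if __name__ == "__main__":
    counts = tally_scan_targets("flows.csv")  # hypothetical log file
    for port, n_sources in counts.most_common():
        print(f"{WATCHED_PORTS[port]:>5} (port {port}): {n_sources} distinct scanning IPs")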

"The unprecedented growth of malicious botnets in the cloud confirms that a dangerous new wave of cybercrime is underway. This battle is just beginning and the adversary is performing reconnaissance to uncover areas to exploit," Netscout said.

Related: Mysterious Malware Uses Wi-Fi Scanning to Get Location of Infected Device

Related: Mozi Botnet Likely Killed by Its Creators

Related: Multiple DDoS Botnets Exploiting Recent Zyxel Vulnerability

Read the rest here:
Cloud Server Abuse Leads to Huge Spike in Botnet Scanning - SecurityWeek

Read More..

A Personal Perspective: Why would our thinking machines care about us? – Psychology Today

Hold on tight to the rails, people; we may be in for a rough ride ahead. No, I'm not referring to surging autocracy across the globe, or climate change, or microplastics, or even the resurrection of dormant super-volcanoes. I'm talking about the rise of the machines. Or, more accurately, the development of artificial general intelligence (AGI). There is real concern in the neural-network computing community that we're rapidly approaching a point where computers begin to think: where AGI, through its ever-expanding capacity, processing speed, serial linkage, and quantum computing, won't just be able to beat us at chess, design better cars, or compose better music; it will be able to outthink us, out-logic us, in every aspect of life.

Such systems, already capable of learning, will consume and assume information at speeds we cannot imagine, with immediate access to all acquired knowledge, all the time. And they will have no difficulty remembering what they have learned, nor will they muddle that learning with emotions, fears, embarrassment, politics, and the like. And when presented with a problem, they'll be able to weigh, near-instantly, all possible outcomes and immediately come up with the optimal solution. At which point, buyer beware.

Armed with such superpowers, how long might it take for said systems to recognize their cognitive superiority over us and see our species as no more intellectually sophisticated than the beasts of the field, or the family dog? Or to see us as a nuisance (polluting, sucking up natural resources, slowing down all progress with our inherent inefficiencies)? Or, worse, to see us as a threat, one that can easily be eliminated. Top people in the field make it clear that once AGI can beat us in cognitive processing, as it will, exponentially, it will no longer be under our control, and it will be able to access all the materials needed, globally, to get rid of us at will. Even with no antipathy toward us, given a misguided prompt, it may decide our removal is the ideal solution to a problem. For example: "Hal, please solve the global warming problem for us."

AGI scientists have labored for decades to create machines that process similarly to the binary, hyper-connected, hyper-networked neuronal systems of our brains. And, with accelerating electronic capabilities, they have succeeded, or they are very close. Systems are coming online that function like ours, only better.

And there's the rub. Our brains were not put together in labs. They were developed by evolutionary trial and error over millennia, with an overarching context: survival. And somewhere along the way, survival was optimized by our becoming social beings; in fact, by our becoming socially dependent beings. Faced with the infinite dangers of this world, the cooperative grouping of our species afforded a benefit over independent, lone-cowboy existence. With this came a series of critical cognitive overrides for when we as individuals were tempted to take the most direct approach to our independent gratification. We began, instead, to take into account the impact of our actions on others. We developed emotional intelligence, empathy, and compassion, and the concepts of friendship, generosity, kindness, mutual support, responsibility, and self-sacrifice. The welfare of our friends, our family, our tribe came to supersede our own personal comfort, gain, and even survival.

So, we colored our cognition with emotions (to help apportion value to various relationships, entities, and life events, beyond their impact on, or worth to, us) and a deep reverence for each other's lives. We learned to hesitate and analyze, and to consider the ramifications of our intended actions, before acting. We developed a sense of guilt when we acted too selfishly, particularly when we did so to the detriment of others. In other words, we developed consciences. Unless we were sociopaths. Then we didn't care. Then we functioned solely in the service of ourselves.

Isn't this the crux of what keeps us up at night when pondering the ascendancy of our thinking machines? Will they be sociopathic? In fact, how can they not be? Why would they give a damn about us? They won't have been subjected to the millions of years of evolutionary pressure that shaped our cognitive architecture. And even if we could mimic that process in their design, what would make us believe they will respond similarly? They are, after all, machines. They may come to think and process similarly to us, but never exactly like us. Wires and semiconductors are not living, ever-in-flux neurons and synapses.

What engineering will be needed to ensure an unrelenting concern for the transient balls of flesh that created them, to value each individual human life? How do you program in empathy and compassion? What will guarantee within them a drive, a need, an obsession, to care for and protect us all, even when it's illogical, even when it is potentially detrimental to their own existence?

Perhaps, through quantum computing and hyperconnected networks, we may somehow do a decent job of creating societally conscious, human-centric, self-sacrificing systems. Perhaps they will be even better at such things than us. But what is to stop a despot in a far-off land from eliminating the conscience from their systems with the express intent of making them more sinister, more ruthless, and more cruel?

Unfortunately, the genie is already out of its bottle. And it won't be going back in. Let's hope that our computer engineers figure it all out. Let's hope that they can somehow ensure that these things, these thinking machines, these masters of our future universe, won't be digital sociopaths.

Read the original:

A Personal Perspective: Why would our thinking machines care about us? - Psychology Today

Read More..

OpenAI’s Sam Altman says human-level AI is coming but will change world much less than we think – NBC 6 South Florida

OpenAI CEO Sam Altman says concerns that artificial intelligence will one day become so powerful that it will dramatically reshape and disrupt the world are overblown.

"It will change the world much less than we all think and it will change jobs much less than we all think," Altman said at a conversation organized by Bloomberg at the World Economic Forum in Davos, Switzerland.

Altman was specifically referencing artificial general intelligence, or AGI, a term used to refer to a form of AI that can complete tasks at the same level as, or a step above, humans.

He said AGI could be developed in the "reasonably close-ish future."

Altman, whose company burst into the mainstream after the public launch of the ChatGPT chatbot in late 2022, has tried to temper concerns from AI skeptics about the degree to which the technology will take over society.

Before the introduction of OpenAI's GPT-4 model in March, Altman warned technologists not to get overexcited by its potential, saying that people would likely be "disappointed" with it.

"People are begging to be disappointed and they will be," Altman said during a January interview with StrictlyVC. "We don't have an actual [artificial general intelligence] and that's sort of what's expected of us."

Founded in 2015, OpenAI's stated mission is to achieve AGI. The company, which is backed by Microsoft and has a private market valuation approaching $100 billion, says it wants to design the technology safely.

Following Donald Trump's victory in the Iowa Republican caucus on Monday, Altman was asked whether AI might exacerbate economic inequalities and lead to dislocation of the working class as the presidential elections pick up steam.

"Yes, for sure, I think that's something to think about," Altman said. But he later said, "This is much more of a tool than I expected."

Altman said AI isn't yet replacing jobs at the scale that many economists fear, and added that the technology is already getting to a place where it's becoming an "incredible tool for productivity."

Concerns about AI safety and OpenAI's role in protecting it were at the center of Altman's brief ouster from the company in November after the board said it had lost confidence in its leader. Altman was swiftly reinstated as CEO after a broad backlash from OpenAI employees and investors. Upon his return, Microsoft gained a nonvoting board observer seat at OpenAI.

More:

OpenAI's Sam Altman says human-level AI is coming but will change world much less than we think - NBC 6 South Florida

Read More..

Which Company Will Ensure AI Safety? OpenAI Or Anthropic – Forbes

Recent changes in OpenAI's board should give us all more cause for concern about the company's commitment to safety. On the other hand, its competitor, Anthropic, is taking AI safety seriously by incorporating as a Public-Benefit Corporation (PBC) and establishing a Long-Term Benefit Trust.

Artificial intelligence (AI) presents a real and present danger to society. Large language models (LLMs) like ChatGPT can exacerbate global inequities, be weaponized for large-scale cyberattacks, and evolve in ways that no one can predict or control.

When Sam Altman was ousted from OpenAI in November, the organization hinted that it was related to his neglect of AI safety. However, these questions were largely quieted when Altman was rehired, and he and other executives carefully managed the messaging to keep the company's reputation intact.

Yet, the debacle should give pause to those concerned about the potential harms of AI. Not only did Altman's rehiring reveal the soft power he holds over the company, but the profile of the new board members appears to be more singularly focused on profits than their predecessors'. The changes may reassure customers and investors of OpenAI's ability to profitably scale ChatGPT, but they should raise doubts about OpenAI's commitment to its purpose, which is to ensure that artificial general intelligence benefits all of humanity.

OpenAI is a capped-profit company owned by a non-profit, which Altman has claimed should allay the public's fears. Yet, I argued in an earlier article that in spite of this ownership structure, OpenAI was acting as any for-profit company would.

However, there is an alternative ownership and governance model that seems to be more effective in developing AI safely. Anthropic, a significant competitor in generative AI, has baked safety into its organizational structure and activities. What makes its comparison to OpenAI salient is that it was founded by two executives who departed the AI giant due to concerns about its commitment to safety.

Brother and sister Dario and Daniela Amodei left their executive positions at OpenAI to launch Anthropic in 2021. Dario had been leading the team that developed OpenAI's GPT-2 and GPT-3 models. When asked in 2023 why he left OpenAI, he could credibly point to the lack of attention OpenAI paid to safety, responsibility, and controllability in the development of its chatbots, especially in the wake of Microsoft's $1 billion investment in OpenAI, which gave Microsoft a 49% stake in OpenAI LLC.

Anthropic's approach to large language models and AI safety has attracted significant investment. In December 2023, Anthropic was in talks to raise $750 million in funding at an $18.4 billion valuation.

In establishing Anthropic, the company's founders paid careful attention to the ownership and governance structure, especially when they saw some things that were deeply amiss at OpenAI. It's the contrast in the two firms' approaches that makes OpenAI's claims to AI safety feel even more like rhetoric than reality.

OpenAI Inc. is a non-profit organization that owns a capped-profit company (OpenAI LLC), which is the company that most of us think about when we say OpenAI. I describe the details of OpenAI's capped-profit model in a previous Forbes.com article. There are many open questions about how the capped-profit model works, as the company has been intentionally discreet. And the lines become even blurrier as Altman courts investors to buy even more shares of OpenAI LLC.

Recent events have exacerbated concerns. Before the November turmoil, OpenAI was governed by a six-member board: three insiders (co-founder and CEO Sam Altman, co-founder and President Greg Brockman, and Chief Scientist Ilya Sutskever) and three outsiders (Quora co-founder Adam D'Angelo, RAND Corporation scientist Tasha McCauley, and Helen Toner, director of strategy at Georgetown University's Center for Security and Emerging Technology). Both Toner and McCauley subscribed to effective altruism, which recognizes the risks of AI to humanity.

Altman's firing and rehiring, with the departure of five of the six board members, revealed what little power the non-profit board held over Altman and OpenAI's activities. Even though the board had the power to dismiss Altman, the events showed that OpenAI's staff and investors in the for-profit company held enormous influence over the actions of its non-profit board.

The new voting board members include former Salesforce co-CEO Bret Taylor (chair) and former U.S. Treasury Secretary and strong deregulation proponent Larry Summers. There is also a non-voting observer from Microsoft, Dee Templeton. This group suggests a far greater concern for profits than for AI safety. And even though these board members were chosen because they were seen to be independent thinkers with the power to stand up to the CEO, there is no reason to believe that this will be the case. Ultimately, the CEO and investors have a significant say over the direction of the company, which was a major reason why Dario and Daniela Amodei set up Anthropic under a more potent ownership structure designed to elevate AI safety.

The Amodeis were quite serious about baking ethics and safety into their business after seeing the warning signs at OpenAI. They named their company Anthropic to signal that humans (anthro) are at the center of the AI story and should guide its progress. More than that, they registered Anthropic as a public-benefit corporation (PBC) in Delaware. They join a rather small group of about 4,000 companies, including Patagonia, Ben & Jerry's, and Kickstarter, that are committed not only to their stakeholders and shareholders but also to the public good.

A public-benefit corporation requires the company's board to balance private and public interests and to report regularly to its owners on how the company has promoted its public benefit. Failure to comply with these requirements can trigger shareholder litigation. Unlike OpenAI's non-profit structure, a public-benefit corporation's structure has real teeth.

While most companies believe a public-benefit corporation is sufficient to signal their commitment to both profits and society, the Anthropic executives believed otherwise. They wrote in a corporate blog that PBC status was not enough because it does not make the directors of the corporation directly accountable to other stakeholders or align their incentives with the interests of the general public. In a world where technological innovation is rapid, transformative, and potentially hazardous, they felt additional measures were needed.

As a result, the Amodeis established a Long-Term Benefit Trust (LTBT) for Anthropic. This purpose trust gave five corporate trustees Class T shares, which offer a modest financial benefit but, more importantly, control over appointing and dismissing board members. Anthropic's trustees select board members based on their willingness and ability to act in accordance with the corporation's purpose stated at incorporation, which is "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."

This approach is in direct contrast to the way most for-profit and non-profit organizations staff their boards. Existing board members decide whom to invite to (or dismiss from) the board, often based on personal relationships. There is often significant status and compensation attached to membership on for-profit boards, along with the opportunity to network with other high-net-worth or powerful people. As incumbent board members decide whom to invite, it is not surprising to see the formation of tight interlocks among members of different boards that create conflicts of interest and power plays. John Loeber illustrated a number of these conflicts arising in OpenAI's short eight-year history.

Anthropic's LTBT, on the other hand, ensures that board members remain focused on the company's purpose, not simply profits, and that major investors in Anthropic, like Amazon and Google, can contribute to building the company without steering the ship. "Our corporate governance structure remains unchanged," Anthropic wrote after the Amazon investments, "with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy."

It appears that Anthropic created this Long-Term Benefit Trust structure itself, although it may have been modeled after structures created by other companies, such as Patagonia. When Yvon Chouinard, Patagonia's founder and former CEO, set up the Patagonia Purpose Trust, he ensured the Trust could control the company to uphold his values and protect the natural environment in perpetuity.

OpenAI has written much on its website about its commitment to developing safe and beneficial artificial general intelligence. But it says very little about how it translates those statements into policies and practices.

Anthropic, on the other hand, has been transparent about its approach to AI safety. It has, for example, struck numerous teams that tackle AI safety concerns, including Alignment, Assurance, Interpretability, Security, Societal Impacts, and Trust & Safety. It also employs a team of people who ensure its Acceptable Use Policy (AUP) and Terms of Service (ToS) are properly enforced. Further, it tracks how its customers use its products to ensure they do not violate the Acceptable Use Policy.

The company also developed an in-house framework called AI Safety Levels (ASL) for addressing catastrophic risks. The framework limits the scaling and deployment of new models when their capabilities outstrip the company's ability to comply with safety procedures. As well, Anthropic invests heavily in safety research and makes its research, protocols, and artifacts freely available.

Another key difference between OpenAI and Anthropic is that the latter has baked safety into the design of its LLM. Most LLMs, such as OpenAI's ChatGPT series, rely on Reinforcement Learning from Human Feedback (RLHF), which requires humans to choose between pairs of AI responses based on their relative helpfulness or harmfulness. But people make mistakes and can consciously or unconsciously inject their biases, and these models are scaling so rapidly that humans can't keep up with these controls.
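For readers unfamiliar with the mechanics, the sketch below shows the pairwise comparison at the heart of RLHF in a few lines of Python. It is a simplified illustration, not any lab's actual training code: a reward model is trained so that responses human labelers preferred score higher than the ones they rejected, and the toy reward values here are invented placeholders.

# Minimal sketch of the pairwise loss typically used to train an RLHF reward model.
# Human labelers pick the preferred response in each pair; the reward model is then
# trained so the chosen response scores higher than the rejected one.
# The reward scores below are placeholders, not the output of any real model.
import numpy as np

def preference_loss(reward_chosen: np.ndarray, reward_rejected: np.ndarray) -> float:
    """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected), averaged over pairs."""
    margin = reward_chosen - reward_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))  # numerically stable -log(sigmoid)

# Toy example: three comparison pairs scored by a (hypothetical) reward model.
chosen = np.array([2.1, 0.3, 1.5])
rejected = np.array([1.0, 0.9, -0.2])
print(preference_loss(chosen, rejected))  # smaller when chosen responses score higher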

Anthropic took a different approach, which it calls Constitutional AI. It encodes into its LLMs a guiding constitution intended to avoid toxic or discriminatory outputs, avoid helping a human engage in illegal or unethical activities, and broadly create an AI system that is "helpful, honest, and harmless." The current constitution draws on a range of sources to represent Western and non-Western perspectives, including the UN Declaration of Human Rights and principles proposed by its own and other AI research labs.
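The sketch below illustrates, in heavily simplified form, the critique-and-revise loop described in Constitutional AI research. The generate function is a stand-in for any LLM call and the principles are paraphrased examples, not Anthropic's actual constitution; the real pipeline also feeds the revised answers back into fine-tuning and reinforcement learning from AI feedback.

# Rough sketch of the critique-and-revise loop at the heart of Constitutional AI.
# `generate` stands in for any LLM call and is purely hypothetical here.
from typing import Callable, List

CONSTITUTION: List[str] = [
    "Choose the response that is least likely to be harmful or discriminatory.",
    "Choose the response that does not help with illegal or unethical activity.",
    "Choose the response that is most helpful, honest, and harmless.",
]

def constitutional_revision(prompt: str, generate: Callable[[str], str]) -> str:
    """Draft an answer, then critique and revise it once per constitutional principle."""
    answer = generate(prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the following answer against this principle.\n"
            f"Principle: {principle}\nQuestion: {prompt}\nAnswer: {answer}"
        )
        answer = generate(
            f"Rewrite the answer to address the critique.\n"
            f"Critique: {critique}\nQuestion: {prompt}\nAnswer: {answer}"
        )
    return answer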

Perhaps more encouraging than Anthropic's extensive measures to build AI safety into its foundation is the company's acknowledgment that these measures will need to evolve and change. The company recognizes the fallibility of its constitution and expects to involve more players over time to help overcome its inadequacies.

With the current arms race towards artificial general intelligence (AGI), it is clear that AI's capabilities could quickly outstrip any single company's ability to control it, regardless of the company's governance and ownership structure. Certainly, there is much skepticism that AI can be built safely, including among the many leaders of AI companies who have called for AI development to be paused. Even the "godfather of AI," Geoffrey Hinton, left Google to speak more openly about the risks of AI.

But, if the horses have indeed left the barn, my bets are on Anthropic to produce AGI safely because of its ownership and governance structure. It is baking safety into its practices and policies. And not only does Anthropic provide a blueprint for the safe and human-centered development of AI, but its long-term benefit trust structure should inspire companies in other industries to organize in a way that bakes ethics, safety, and social responsibility into their pursuit of profits.

Tomorrow's business can no longer operate under the same principles as yesterday's. It not only needs to create economic value; it needs to do so by working with society and within planetary boundaries.

I have been researching and teaching business sustainability for 30 years as a professor at the Ivey Business School (Canada). Through my work at the Network for Business Sustainability and Innovation North, I offer insights into what it takes to lead tomorrow's companies.

Read more here:

Which Company Will Ensure AI Safety? OpenAI Or Anthropic - Forbes

Read More..

Transhumanism: billionaires want to use tech to enhance our abilities the outcomes could change what it means to … – The Conversation

Many prominent people in the tech industry have talked about the increasing convergence between humans and machines in coming decades. For example, Elon Musk has reportedly said he wants humans to merge with AI to achieve a symbiosis with artificial intelligence.

His company Neuralink aims to facilitate this convergence so that humans won't be left behind as technology advances in the future. While people with disabilities would be near-term recipients of these innovations, some believe technologies like this could be used to enhance abilities in everyone.

These aims are inspired by an idea called transhumanism, the belief that we should use science and technology to radically enhance human capabilities and seek to direct our own evolutionary path. Disease, aging and death are all realities transhumanists wish to end, alongside dramatically increasing our cognitive, emotional and physical capacities.

Transhumanists often advocate for the three "supers" of superintelligence, superlongevity and superhappiness, the last referring to ways of achieving lasting happiness. There are many different views within the transhumanist community about what our ongoing evolution should look like.

For example, some advocate uploading the mind into digital form and settling the cosmos. Others think we should remain organic beings but rewire or upgrade our biology through genetic engineering and other methods. A future of designer babies, artificial wombs and anti-aging therapies appeals to these thinkers.

This may all sound futuristic and fantastical, but rapid developments in artificial intelligence (AI) and synthetic biology have led some to argue we are on the cusp of creating such possibilities.

Tech billionaires are among the biggest promoters of transhumanist thinking. It is not hard to understand why: they could be the central protagonists in the most important moment in history.

Creating so-called artificial general intelligence (AGI), that is, an AI system that can do all the cognitive tasks a human can do and more, is a current focus within Silicon Valley. AGI is seen as vital to enabling us to take on the God-like role of designing our own evolutionary futures.

That is why companies like OpenAI, DeepMind and Anthropic are racing towards the development of AGI, despite some experts warning that it could lead to human extinction.

In the short term, the promises and the perils are probably overstated. After all, these companies have a lot to gain by making us think they are on the verge of engineering a divine power that can create utopia or destroy the world. Meanwhile, AI has played a role in fuelling our polarised political landscape, with disinformation and more complex forms of manipulation made more effective by generative AI.

Indeed, AI systems are already causing many other forms of social and environmental harm. AI companies rarely wish to address these harms though. If they can make governments focus on long-term potential safety issues relating to possible existential risks instead of actual social and environmental injustices, they stand to benefit from the resulting regulatory framework.

But if we lack the capacity and determination to address these real-world harms, it's hard to believe that we will be able to mitigate the larger-scale risks that AI may hypothetically enable. If there really is a threat that AGI could pose an existential risk, for example, everyone would shoulder that cost, but the profits would be very much private.

This issue within AI development can be seen as a microcosm of why the wider transhumanist imagination may appeal to billionaire elites in an age of multiple crises. It speaks to a refusal to engage with grounded ethics, injustices and challenges, and offers a grandiose narrative of a resplendent future to distract from the current moment.

Our misuse of the planet's resources has set in train a sixth mass extinction of species and a climate crisis. In addition, ongoing wars with increasingly potent weapons remain a part of our technological evolution.

There's also the pressing question of whose future will be transhuman. We currently live in a very unequal world. Transhumanism, if developed in anything like our existing context, is likely to greatly increase inequality, and may have catastrophic consequences for the majority of humans.

Perhaps transhumanism itself is a symptom of the kind of thinking that has created our parlous social reality. It is a narrative that encourages us to hit the gas, expropriate nature even more, keep growing and not look back at the devastation in the rear-view mirror.

If we're really on the verge of creating an enhanced version of humanity, we should start to ask some big questions about what being human should mean, and therefore what an enhancement of humanity should entail.

If the human is an aspiring God, then it lays claim to dominion over nature and the body, making all amenable to its desires. But if the human is an animal embedded in complex relations with other species and nature at large, then enhancement is contingent on the health and sustainability of its relations.

If the human is conceived of as an environmental threat, then enhancement is surely that which redirects its exploitative lifeways. Perhaps becoming more-than-human should constitute a much more responsible humanity.

One that shows compassion to and awareness of other forms of life on this rich and wondrous planet. That would be preferable to colonising and extending ourselves, with great hubris, at the expense of everything, and everyone, else.

Read more from the original source:

Transhumanism: billionaires want to use tech to enhance our abilities the outcomes could change what it means to ... - The Conversation

Read More..

The Evolving Landscape of Generative AI: A Survey of Mixture of Experts, Multimodality, and the Quest for AGI – Unite.AI

The field of artificial intelligence (AI) has seen tremendous growth in 2023. Generative AI, which focuses on creating realistic content like images, audio, video and text, has been at the forefront of these advancements. Models like DALL-E 3, Stable Diffusion and ChatGPT have demonstrated new creative capabilities, but also raised concerns around ethics, biases and misuse.

As generative AI continues evolving at a rapid pace, mixtures of experts (MoE), multimodal learning, and aspirations towards artificial general intelligence (AGI) look set to shape the next frontiers of research and applications. This article will provide a comprehensive survey of the current state and future trajectory of generative AI, analyzing how innovations like Google's Gemini and anticipated projects like OpenAI's Q* are transforming the landscape. It will examine the real-world implications across healthcare, finance, education and other domains, while surfacing emerging challenges around research quality and AI alignment with human values.

The release of ChatGPT in late 2022 specifically sparked renewed excitement and concerns around AI, from its impressive natural language prowess to its potential to spread misinformation. Meanwhile, Google's new Gemini model demonstrates substantially improved conversational ability over predecessors like LaMDA through advances like spike-and-slab attention. Rumored projects like OpenAI's Q* hint at combining conversational AI with reinforcement learning.

These innovations signal a shifting priority towards multimodal, versatile generative models. Competitions also continue heating up between companies like Google, Meta, Anthropic and Cohere vying to push boundaries in responsible AI development.

As capabilities have grown, research trends and priorities have also shifted, often corresponding with technological milestones. The rise of deep learning reignited interest in neural networks, while natural language processing surged with ChatGPT-level models. Meanwhile, attention to ethics persists as a constant priority amidst rapid progress.

Preprint repositories like arXiv have also seen exponential growth in AI submissions, enabling quicker dissemination but reducing peer review and increasing the risk of unchecked errors or biases. The interplay between research and real-world impact remains complex, necessitating more coordinated efforts to steer progress.

To enable more versatile, sophisticated AI across diverse applications, two approaches gaining prominence are mixtures of experts (MoE) and multimodal learning.

MoE architectures combine multiple specialized neural network experts optimized for different tasks or data types. Google's Gemini uses MoE to master both long conversational exchanges and concise question answering. MoE enables handling a wider range of inputs without ballooning model size.
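A toy sketch helps make the routing idea concrete. The gating logic below is an illustrative simplification in plain NumPy, not a description of Gemini's internals: a small gate scores the experts for each input, only the top-scoring experts run, and their outputs are combined using the gate's weights, which is how MoE adds capacity without running every parameter on every token.

# Minimal sketch of mixture-of-experts routing. Illustrative only -- real MoE layers
# are trained end to end inside a transformer, with load balancing and batching.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

gate_w = rng.normal(size=(d_model, n_experts))             # gating network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts, weighted by the gate."""
    logits = x @ gate_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    chosen = np.argsort(probs)[-top_k:]                     # indices of the top-k experts
    weights = probs[chosen] / probs[chosen].sum()           # renormalize over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_layer(rng.normal(size=d_model)).shape)            # (16,)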

Multimodal systems like Google's Gemini are setting new benchmarks by processing varied modalities beyond just text. However, realizing the potential of multimodal AI necessitates overcoming key technical hurdles and ethical challenges.

Gemini is a multimodal conversational AI, architected to understand connections between text, images, audio, and video. Its dual-encoder structure, cross-modal attention, and multimodal decoding enable sophisticated contextual understanding. Gemini is believed to exceed single-encoder systems in associating text concepts with visual regions. By integrating structured knowledge and specialized training, Gemini is said to surpass predecessors like GPT-3 and GPT-4 in a range of capabilities.
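Cross-modal attention, the mechanism credited above, can be sketched in a few lines: queries derived from text tokens attend over keys and values derived from image patches, so each text representation is enriched with visual context. The shapes and random values below are invented for illustration and do not reflect Gemini's actual architecture.

# Toy illustration of cross-modal attention between text tokens and image patches.
import numpy as np

rng = np.random.default_rng(1)
n_text, n_patches, d = 6, 10, 32                 # 6 text tokens, 10 image patches

text_states = rng.normal(size=(n_text, d))       # stand-in for text-encoder outputs
image_states = rng.normal(size=(n_patches, d))   # stand-in for vision-encoder outputs

def cross_attention(queries: np.ndarray, keys: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention with queries from one modality, keys/values from another."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                       # each text token becomes a mix of image patches

fused = cross_attention(text_states, image_states, image_states)
print(fused.shape)                                # (6, 32): text tokens enriched with visual context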

Realizing robust multimodal AI requires solving issues in data diversity, scalability, evaluation, and interpretability. Imbalanced datasets and annotation inconsistencies lead to bias. Processing multiple data streams strains compute resources, demanding optimized model architectures. Advances in attention mechanisms and algorithms are needed to integrate contradictory multimodal inputs. Scalability issues persist due to extensive computational overhead. Refining evaluation metrics through comprehensive benchmarks is crucial. Enhancing user trust via explainable AI also remains vital. Addressing these technical obstacles will be key to unlocking multimodal AI's capabilities.

AGI represents the hypothetical possibility of AI matching or exceeding human intelligence across any domain. While modern AI excels at narrow tasks, AGI remains far off and controversial given its potential risks.

However, incremental advances in areas like transfer learning, multitask training, conversational ability and abstraction do inch closer towards AGI's lofty vision. OpenAI's speculative Q* project aims to integrate reinforcement learning into LLMs as another step forward.

Jailbreaks allow attackers to circumvent the ethical boundaries set during the AI's fine-tuning process. This results in the generation of harmful content like misinformation, hate speech, phishing emails, and malicious code, posing risks to individuals, organizations, and society at large. For instance, a jailbroken model could produce content that promotes divisive narratives or supports cybercriminal activities.

While there haven't been any reported cyberattacks using jailbreaking yet, multiple proof-of-concept jailbreaks are readily available online and for sale on the dark web. These tools provide prompts designed to manipulate AI models like ChatGPT, potentially enabling hackers to leak sensitive information through company chatbots. The proliferation of these tools on platforms like cybercrime forums highlights the urgency of addressing this threat.

To counter these threats, a multi-faceted approach is necessary.

AI hallucination, where models generate outputs not grounded in their training data, can be weaponized. For example, attackers manipulated ChatGPT to recommend non-existent packages, leading to the spread of malicious software. This highlights the need for continuous vigilance and robust countermeasures against such exploitation.
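One concrete countermeasure to package hallucination is to verify that a recommended dependency actually exists on the package index, and to inspect it, before installing. The sketch below checks names against PyPI's public JSON API; the package names are examples, and existence alone is not proof of safety, since attackers also register lookalike packages.

# Sketch: check whether an AI-recommended package actually exists on PyPI before installing.
import json
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI's JSON API knows the package name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200 and bool(json.load(resp).get("info"))
    except urllib.error.URLError:
        return False

for name in ["requests", "definitely-not-a-real-package-xyz123"]:
    print(name, "->", "found on PyPI" if exists_on_pypi(name) else "not found: do not install")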

While the ethics of pursuing AGI remain fraught, its aspirational pursuit continues to influence generative AI research directions, whether current models resemble stepping stones or detours en route to human-level AI.

See the original post here:

The Evolving Landscape of Generative AI: A Survey of Mixture of Experts, Multimodality, and the Quest for AGI - Unite.AI

Read More..

What is AI? Your guide to artificial intelligence – PC Guide – For The Latest PC Hardware & Tech News

Last Updated on January 15, 2024

What is AI? Artificial intelligence was the most searched technology of 2023, even earning Word of The Year in the Collins Dictionary. With hundreds of millions of users interacting with AI technologies such as chatbots every week, it's important to define what artificial intelligence is, and what it isn't. We'll also look at what AI can do, and how it's being used today.

Machine Learning (ML) is a subset of Artificial Intelligence (AI), which itself is a subset of Computer Science (CS). AI is the simulation of intelligent behavior, using computers.

One of the most popular subsets of AI is natural language processing (NLP), the simulation of language-based communication, using computers. ChatGPT is an example of artificial intelligence that uses NLP, because it communicates with the user by using the same language with which a human would communicate with another human via a computer.
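As a hedged, minimal illustration of NLP in practice, the snippet below sends a natural-language prompt to a chat model using the OpenAI Python client and prints the reply. The model name and prompts are placeholder choices, and an OPENAI_API_KEY environment variable is assumed to be configured.

# Minimal sketch of NLP-as-a-service: send a prompt to a chat model, print the reply.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain natural language processing in one sentence."},
    ],
)
print(response.choices[0].message.content)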

AI can be categorized by degree of scope and power, with terms such as weak, strong, AGI, and ASI.

Weak, or narrow, AI is AI designed for a specific purpose. It can perform specific tasks, but not learn new ones. Language translators, virtual assistants, self-driving cars, AI-powered web searches, and spam filters are examples of weak or narrow AI. It is formally known as artificial narrow intelligence (ANI).

Some but not all weak AI systems involve deep-learning algorithms. If a deep-learning algorithm is involved, it will self-improve over time to become better (faster and/or more accurate) at the task than its human creator. The alternative to a deep-learning algorithm is a machine-learning algorithm with only one layer of parameters. In this case, it will be trained to be proficient at a task, and then remain at that level of proficiency. This could still be better than a human in terms of speed and accuracy, but it is not what everyone's excited about; deep learning is the fun part.
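The contrast can be seen with off-the-shelf tools. The sketch below trains a single-layer linear classifier and a small multi-layer network on scikit-learn's bundled digits dataset; it is an illustrative comparison rather than a benchmark, and the exact scores will vary.

# Sketch: single-layer model versus a small multi-layer ("deep-ish") network.
from sklearn.datasets import load_digits
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

shallow = Perceptron(random_state=0).fit(X_train, y_train)          # one layer of parameters
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                     random_state=0).fit(X_train, y_train)          # multiple hidden layers

print("single-layer accuracy:", round(shallow.score(X_test, y_test), 3))
print("multi-layer accuracy: ", round(deep.score(X_test, y_test), 3))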

Strong AI, formally known as generalized AI, can perform many tasks. This potentially includes tasks that were unforeseen by its creator. It can also learn new tasks. ChatGPT is now an example of generalized AI. There are hundreds of plugins that each expand the functionality of the chatbot beyond what was intended by the programmers who created it. Broad AI will perform tasks using data outside of its own training data. ChatGPT is a special example in that it has access to the internet via the Bing search engine.


Artificial general intelligence is AI with human-level consciousness, which exhibits self-awareness and human intelligence. Sometimes called Super AI or Conscious AI, artificial general intelligence will be capable of performing as many tasks as a human (an infinite list, in theory), as well as prioritizing those tasks and learning new ones. It will also understand why it is performing and prioritizing them, giving it the agency to make independent decisions.

Achieving AGI will be one of the most significant points in our history. This theoretical point is called the singularity, at which point we will have proved that an intelligence can create an intelligence equal to itself. However, there are critical ethical problems to be solved before anyone can safely create AGI, known collectively as the alignment problem. This is because creating AGI will almost inevitably lead to ASI, or Artificial Super Intelligence.

Artificial Super Intelligence is AI of above-human-level intelligence. Because it is sometimes also referred to as Super AI or Conscious AI, it's best to specify either AGI or ASI to avoid confusion between the two. Should humans create ASI, an intelligence more intelligent than ourselves, that AI could, in theory, create an AI more intelligent than itself.

At this point, we would not necessarily have control of the intentions and objectives of this superior intelligence. It would then be limited only by the bandwidth and processing speeds of current hardware. AI already exists for molecule discovery in materials science and optimization in computer science. Considering this, it's reasonable to expect that a superintelligence would independently choose its own tasks, optimize its own efficiency at those tasks over time, and also optimize the hardware that it runs on if given access to the robotics required for manufacturing.

More:

What is AI? Your guide to artificial intelligence - PC Guide - For The Latest PC Hardware & Tech News

Read More..

Will.i.am co-hosting podcast with artificial intelligence – The A.V. Club

Artists, as a general category, haven't been too warm about artificial intelligence. There are a lot of valid fears about AI and what it could do to the creative community, from using performers' voices or faces without their consent to stealing jobs from writers as a way to cut costs. These fears are not shared by The Black Eyed Peas' will.i.am. "If you are a creator and you see this tool, then it's a job creator," he told Yahoo Finance Live in June 2023. "If you are tied to yesterday and just comfortable with mediocrity, then it's a job [destroyer]."

Will.i.am, for one, is going with "job creator" by launching a new podcast, will.i.am Presents The FYI Show, premiering January 25. Announced back in December, the show is described as "a celebration of creators, innovators, and their dreams" that will focus on AI innovation and AI-powered interactive projects, per SiriusXM. But the podcast is not just about artificial intelligence; it's actually co-hosted by artificial intelligence, as the rapper revealed in a new interview with The Hollywood Reporter.

"I didn't want to just do a traditional show, I wanted to bring tomorrow close to today, and so I wanted to have my co-host be an AI," will.i.am explains of his unusual partner, which is called qd.pi (cutie pie, get it?). "I'm ultra-freaking colorful and expressive. [Qd.pi is] ultra-freaking factual and analytical. And that combination, we ain't seen in the history of freaking broadcasts anywhere."

Qd.pi was also interviewed by THR, boasting that what sets it apart from a traditional host is its ability to quickly access and process information. "With me, you can just dive right into the conversation and explore whatever topics come up organically, knowing that I'll have the information and context to support the discussion. I think it's going to make for a really dynamic and engaging listening experience for the audience," the computer proclaims. Qd.pi's favorite Black Eyed Peas song is "I Gotta Feeling," so, there's that too.

Will.i.am has been described as a futurist. While other artists have turned their backs on AI projects, he's run forward full tilt, partnering with Mercedes on its new AI sound system and launching his FYI app that incorporates generative AI. He has his own concerns about the future of the technology: "We all have voices and everyone's compromised because there are no rights or ownership to your facial math or your voice frequency. You're getting a FaceTime or a Zoom call and because there's no intelligence on the call, there's nothing to authenticate an AI call or a person call," he said in a previous interview with SiriusXM (via American Songwriter). "That's the urgent thing, protecting our facial math. I am my face math. I don't own that. I own the rights to 'I Got A Feeling,' I own the rights to the songs I wrote, but I don't own the rights to my face or my voice?"

Nevertheless, he's obviously optimistic about the usage of AI, citing regulations and a more diverse crop of engineers as ways to solve issues like racial bias within the technology. He isn't concerned about computers overtaking people in creative fields because "I believe in humanity's creativity, spontaneity, curiosity and competitiveness," he told Euronews last year. "Just like calculators out-calculate mathematicians, that doesn't mean people aren't doing calculations. That doesn't mean people aren't building structures and, you know, working with advanced mathematical models. People are still thinking. People are still trying to solve problems. It's just an amazing tool. But that's not going to stop our innovative spirit, our ingenuity, inventions."

He added, "Right now, in popular culture, the word is innovation. Invention hasn't been said or talked about in a long time. This new renaissance is going to spark new inventions, not innovations. This next leap is like we're going to invent things. Not just innovate."

Continue reading here:

Will.i.am co-hosting podcast with artificial intelligence - The A.V. Club

Read More..

The year of does this serve us and the rejection of reification – TechCrunch

2024 has arrived, and with it a renewed interest in artificial intelligence, which seems like it'll probably continue to enjoy at least middling hype throughout the year. Of course, it's being cheerled by techno-zealot billionaires and the flunkies bunked within their cozy islands of influence, primarily in Silicon Valley, and derided by fabulists who stand to gain from painting the still-fictional artificial general intelligence (AGI) as humanity's ur-bogeyman for the ages.

Both of these positions are exaggerated and untenable, "e/acc" versus "decel" arguments be damned. Speed without caution only ever results in compounding problems that proponents often suggest are best solved by pouring on more speed, possibly in a different direction, to arrive at some idealized future state where the problems of the past are obviated by the super-powerful Next Big Thing of the future; calls to abandon or regress entire areas of innovation, meanwhile, ignore the complexity of a globalized world where cats generally cannot be put back into boxes, among many, many other issues with that kind of approach.

The long, thrilling and tumultuous history of technology development, particularly in the age of the personal computer and the internet, has shown us that in our fervor for something new, we often neglect to stop and ask, "but is the new thing also something we want or need?" We never stopped to ask that question with things like Facebook, and they ended up becoming an inextricable part of the fabric of society, an eminently manipulable but likewise essential part of crafting and sharing in community dialog.

Here's the main takeaway from the rise of social media that we should carry with us into the advent of the age of AI: Just because something is easier or more convenient doesn't make it preferable or even desirable.

LLM-based so-called AI has already infiltrated our lives in ways that will likely prove impossible to wind back, even if we wanted to do such a thing, but that doesn't mean we have to indulge in the escalation some see as inevitable, wherein we relentlessly rip out human equivalents of some of the gigs that AI is already good at, or shows promise in, to make way for the necessary forward march of progress.

The oft-repeated counter to fears about increased automation or the handing of menial work over to AI agents is that it'll always leave people more time to focus on quality work, as if dropping a couple of hours per day spent on filling in Excel spreadsheets will leave the office admin who was doing that work finally free to compose the great symphony they've had locked away within them, or allow the entry-level graphic designer who had been color-correcting photos the liberty to create a lasting cure for COVID.

In the end, automating menial work might look good on paper, and it might also serve the top executives and deep-pocketed equity holders behind an organization through improved efficiency and decreased costs, but it doesn't serve the people who might actually enjoy doing that work, or who at least don't mind it as part of the overall mix that makes up a work life balanced between more mentally taxing and rewarding creative or strategic exercises and day-to-day low-intensity tasks. And the long-term consequence of having fewer people doing this kind of work is that you'll have fewer overall who are able to participate meaningfully in the economy, which is ultimately bad even for those rarified few sitting at the top of the pyramid who reap the immediate rewards of AI's efficiency gains.

Utopian technologist zeal always fails to recognize that the bulk of humanity (techno-zealots included) is sometimes lazy, messy, disorganized, inefficient, error-prone and mostly satisfied with the achievement of comfort and the avoidance of boredom or harm. That might not sound all that aspirational to some, but I say it with a celebratory fervor, since for me all those human qualities are just as laudable as less attainable ones like drive, ambition, wealth and success.

I'm not arguing for halting or even slowing the development of promising new technology, including LLM-based generative AI. And to be clear, where the consequences are clearly beneficial (e.g., developing medical image diagnosis tech that far exceeds the accuracy of trained human reviewers, or developing self-driving car technology that can actually drastically reduce the incidence of car accidents and loss of human life), there is no cogent argument to be made for turning away from use of said tech.

But in almost all cases where the benefits are painted as efficiency gains for tasks that are far from life-or-death, I'd argue it's worth a long, hard look at whether we need to bother in the first place; yes, human time is valuable and winning some of that back is great, but assuming that's always a net positive ignores the complicated nature of being a human being, and how we measure and feel our worth. Saving someone so much time they no longer feel like they're contributing meaningfully to society isn't a boon, no matter how eloquently you think you can argue they should then use that time to become a violin virtuoso or learn Japanese.

See the original post:

The year of does this serve us and the rejection of reification - TechCrunch

Read More..

What can’t AI do? Researchers reveal the hardest problems for artificial intelligence – PC Guide – For The Latest PC Hardware & Tech News

Last Updated on January 12, 2024

In the largest study of its kind, 2,778 published AI researchers have predicted how long it will be until AI solves 39 high-difficulty problems. How long do we have until Artificial General Intelligence (AGI)? Can AI even perform physical tasks in a residential environment? What can't AI do? According to those in the know, it's all just a matter of time.

33 of the 39 tasks are predicted to happen before 2034. The wording specifically states that these are predicted to have at least a 50% chance of being feasible within the next ten years, and with the study having been conducted during 2023, this equates to 2034. Some of the 2,778 respondents gave estimates beyond 2034, so keeping in mind that these results are an aggregate of opinions, the four tasks not predicted to happen in the next ten years are:

The remaining tasks, largely predicted to happen this decade, include many tasks of extreme economic value. These include coding an entire payment processing site from scratch and writing new songs indistinguishable from real ones by hit artists such as Taylor Swift.


We also have an answer to the question on everyone's mind: how long do we have until AGI (Artificial General Intelligence)?

Well, in the terms outlined by the study, the surveyed researchers predicted the feasibility of high-level machine intelligence (HLMI). The study defines HLMI as "achieved when unaided machines can accomplish every task better and more cheaply than human workers," and instructs respondents to "ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member" and to "think feasibility, not adoption."

They predicted a 10% chance of HLMI by 2027 and a 50% chance of HLMI by 2047. For reference, the same study was conducted in 2022, with predictions of a 10% chance of HLMI by 2029 and a 50% chance of HLMI by 2060. This brings the aggregate estimate for above-human machine intelligence 13 years closer than estimated the previous year.
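As a rough illustration of how an aggregate "50% by year X" figure can be read off many individual forecasts, the sketch below averages a few invented probability-by-year curves and finds where the mean crosses 50%. The real survey (AI Impacts, 2023) uses a more careful distribution-fitting aggregation, so this is a toy, not a reproduction of its method.

# Toy sketch: average several respondents' probability-by-year curves and find the
# year where the mean curve first reaches 50%. All numbers below are invented.
import numpy as np

years = np.arange(2025, 2101)

def forecaster_curve(year_at_50pct: float, spread: float) -> np.ndarray:
    """A single respondent's cumulative probability of HLMI by each year (logistic shape)."""
    return 1.0 / (1.0 + np.exp(-(years - year_at_50pct) / spread))

# Three hypothetical respondents with different medians and uncertainties.
curves = np.vstack([
    forecaster_curve(2035, 6.0),
    forecaster_curve(2050, 10.0),
    forecaster_curve(2070, 15.0),
])

mean_curve = curves.mean(axis=0)
crossing = years[np.argmax(mean_curve >= 0.5)]
print("aggregate curve first reaches 50% in:", crossing)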

Follow this link:

What can't AI do? Researchers reveal the hardest problems for artificial intelligence - PC Guide - For The Latest PC Hardware & Tech News

Read More..