Category Archives: AI

Crypto thieves will deploy more convincing AI scams in 2024, firms warn – Cointelegraph

AI-powered phishing scams, BRC-20 exploits, and new smart contract vulnerabilities are among the biggest threats that crypto projects and investors will likely face in 2024, according to blockchain security firms.

While the $1.7 billion in scam and hack-related losses in 2023 stands as an undeniable improvement over the $4 billion lost in 2022, Jesse Leclere, a blockchain analyst from CertiK, warned Cointelegraph that scams are only becoming more advanced and that users should remain hyper-vigilant for well-executed exploits.

"Phishing, evolving in its sophistication, will likely target not only individual users but also corporate systems [...] using social engineering tactics tailored to the crypto context," said Leclere, pointing to the Dec. 14 Ledger Connect exploit as a prime example of an advanced attack.

One of the key elements that will see phishing scams become more nefarious is the use of generative AI, he added, allowing hackers to automate operations and create convincing fake calls, videos, and messages through which to ensnare potential victims.

Jenny Peng, a research analyst from 0xScope, warns that AI could form a key component in generating ever-more-realistic deepfakes to fool crypto users.

Peng added that hackers are likely to also give the burgeoning BRC-20 ecosystem extra attention next year due to a relative lack of developments in security.

"The BRC-20 UniSat wallet launched in early 2023 and was promptly hit with a double-spend exploit. This incident shows that the BRC-20 ecosystem, where everything is new, will need to evolve its infrastructure quickly to be as battle-tested as Ethereum's, security-wise," she added.

Already one of the most long-standing pain points for the industry, cross-chain bridges will continue to be a concern in 2024, said Leclere.

"As the industry increasingly adopts cross-chain solutions for greater interoperability, these protocols will become attractive targets for attackers, exploiting vulnerabilities arising from complex interactions between different protocols and chains," he added.

Many of the crypto sector's largest hacks to date have resulted from bridge exploits, with the infamous $650 million Ronin bridge hack still standing as the worst on record. Without some serious security upgrades, Leclere believes this will remain an issue for the industry heading into 2024.

Meanwhile, Phil Larratt, director of investigations at Chainalysis, offered a similar caution, warning that bad actors will grow increasingly adept at getting away with their ill-gotten gains.

Related: $3M of crypto stolen on Christmas Day as MS Drainer scammers fleece victims

"In 2024, we can anticipate that illicit actors are going to become more sophisticated in the tactics and techniques they use, especially as more long-standing traditional organized criminals and financial crime actors continue to adopt crypto," he said.

With increasing know-how from security firms and law enforcement, Larratt warned that the next wave of scammers would most likely utilize privacy coins, bridges, mixers, and other obfuscation tools to a greater extent.

"In response to this likely trend, we will need more intensive law enforcement investigations, increased training and knowledge sharing by law enforcement organizations, even more advanced fraud protection programs, and continued partnerships between the public and private sectors," he said.

Magazine: DeFi's billion-dollar secret: The insiders responsible for hacks


Know all about Copy.ai, the AI-powered tool for automated content creation and marketing – HT Tech

With 2023 nearing its end, it is safe to say that artificial intelligence (AI) has been the buzzword of the year. Throughout the year, we've seen not only the rise of AI chatbots but also the integration of AI into legacy applications such as Microsoft Office and Adobe Photoshop. New AI tools have also surfaced that make light work of tasks like image generation and copywriting. One such AI-powered tool, which aims to automate your social media content creation and AI marketing, is Copy.ai. Here's what it is and how it works.

Copy.ai is a generative AI-powered marketing tool that leverages machine learning to generate content. Built on OpenAI's GPT-3 large language model (LLM), it can generate copy for social media campaigns, email marketing campaigns, blogs, headlines, product descriptions, translations, and more. It is available in 25 languages and features various tools such as Workflows, Workspaces, and a plagiarism checker.

Copy.ai is free to use for individual users, meaning you don't have to pay anything to generate content. The free tier can generate up to 2,000 words in chat, and you also get 200 bonus credits. There is also a Pro subscription that offers unlimited words in chat, 500 Workflow credits, and up to 5 seats for teams. This subscription costs $49 a month.

The next plan is the Team subscription, which features 20 seats, unlimited words in chat, and 3,000 Workflow credits, making it suitable for growing teams. It costs $249 a month, billed monthly. For bigger teams, there's also a Growth plan offering 75 seats and 20,000 Workflow credits. Large-scale organizations can opt for the Scale plan, with 200 seats, unlimited words in chat, and 75,000 Workflow credits. The Growth and Scale plans cost $1,333 and $4,000 a month, respectively.

Copy.ai offers two main interfaces for generating content: Chat and Workflows. The Chat product is an AI system that interacts with users, answering queries and creating content based on user requests. It is targeted at individual users and is the more user-friendly of the two.

On the other hand, the Workflows interface is aimed at large-scale tasks. It can automate tasks with the help of an API, and it integrates a method called prompt chaining, which helps in creating more complex content.
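The prompt-chaining idea is simple to sketch: the output of one model call is folded into the prompt for the next, so a rough topic can be refined in stages. The sketch below is purely illustrative and assumes nothing about Copy.ai's actual API; `fake_llm` is a hypothetical placeholder where a real workflow would call a language model.

```python
# Hedged sketch of prompt chaining: each step's output feeds the next prompt.
# `fake_llm` is a deterministic stand-in for a real language-model call.
def fake_llm(prompt: str) -> str:
    """Placeholder model call, deterministic for illustration only."""
    return f"[output for: {prompt}]"

def run_chain(topic: str) -> str:
    """Chain three prompts: outline -> draft -> polished marketing copy."""
    outline = fake_llm(f"Write a blog outline about {topic}")
    draft = fake_llm(f"Expand this outline into a draft: {outline}")
    final = fake_llm(f"Polish this draft for a marketing audience: {draft}")
    return final

print(run_chain("AI marketing"))
```

Because each step sees the previous step's output, errors compound, which is why real workflow tools typically let users inspect and edit intermediate results.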

Chat has another trick up its sleeve: using AI, it can search the web for you and read LinkedIn profiles and websites. It can even go through YouTube videos. You can use it to write content as well as ask it to edit and rewrite content, helping you get work done faster. This tool costs $49 a month and comes with 5 seats.

Like Grammarly, Copy.ai also includes a built-in plagiarism checker. Moreover, an additional feature can help you save information for future use. It also offers 90+ templates for content generation, including blog intros, blog outlines, social media bios, discounts or special promotions, testimonials, and Instagram captions.


OpenAI investor says AI will hugely deflate the economy – Business Insider

Vinod Khosla, 68, was an early backer of OpenAI. In 2019, his VC firm invested $50 million into OpenAI. Patrick T. Fallon/AFP via Getty Images

The billionaire and early OpenAI backer Vinod Khosla says he expects AI to fundamentally change the global economy.

"AI should be hugely deflationary over twenty five years," Khosla wrote on X on Monday.

His outlook is a stark departure from what markets and the economy have been dealing with for the better part of two years. Inflationary pressures have continued even as price growth has come down from multidecade highs.

In a deflationary environment, prices fall, leading to lower profitability for companies and stagnant or even shrinking economic growth.

But Khosla predicted that AI's impact would make traditional measures of economic health less relevant.

"Capital should be scarce for a while, current measures of GDP and the economy will be less relevant but goods and services should be in great abundance," he said. "The key question is what are the right measures and the right questions."

Khosla's venture-capital firm invested $50 million into OpenAI in 2019, the largest investment in the firm's 15-year history.

The 68-year-old investor has often expounded on the opportunities and risks of AI. On December 12, Khosla told attendees of Fortune's Brainstorm AI conference that AI wasn't the world's greatest threat.

"The doomers are focusing on the wrong risks," Khosla said, adding, "By far, orders of magnitude, higher risk to worry about, is China, not sentient AI killing us off."

Khosla isn't alone in forecasting dramatic economic effects of AI.

When Elon Musk unveiled Tesla's AI robot last year, he predicted the economy could become "quasi-infinite" if the robots were capable of manual labor.

"This means a future of abundance," Musk said. "A future where there is no poverty, where you could have whatever you want in terms of products and services."

He added, "It really is a fundamental transformation of civilization as we know it."

Representatives for Khosla did not immediately respond to a request for comment from Business Insider sent outside regular business hours.



Copilot vs ChatGPT: Which free Android AI chatbot app should you use? – The Indian Express

Microsoft was quick on the draw to enter the AI chatbot race, launching Bing Chat back in February in hopes of driving some growth for its underdog search engine. Unfortunately, the strategy didn't seem to move the needle, with Bing's growth mostly staying flat throughout the year. Now it looks like the Redmond giant is trying something new.

Bing Chat got a rebrand to Copilot and has been integrated into a bunch of Microsoft products, including Edge and Windows 11. But mobile was an afterthought: with no direct way to access Copilot, you had to download Edge or the Bing app. As a frequent Copilot user (it's basically free GPT-4!), I always thought it would be really slick to have a dedicated app so I don't have to launch Edge every time. Well, ask and ye shall receive: the Copilot app is finally here on Android.

If you've already used Copilot in its web form, its mobile app will feel instantly familiar. You get the same messaging-style interface with sample prompts to get started with. But where it really shines is that everything works more smoothly and responsively compared to Copilot's web integrations, like the one in Edge or Bing search. Actions like clearing the chat history to start afresh are lightning-quick, and the overall experience is very polished. Beyond the slick interface, the app delivers all the core capabilities you'd expect, like drafting emails and documents, and even AI-generated images through DALL-E 3 integration.

With over 10 million downloads, the ChatGPT app is arguably Copilot's biggest rival, even though their makers are more allies than competitors. ChatGPT's main advantage today is speed: responses are typically instant, regardless of length or complexity. Meanwhile, Copilot can feel sluggish, taking time to digest prompts and search for additional context before formulating a reply. When it does produce output, it's delivered line by line over several seconds.

There's also a 4,000-character limit on Copilot, making the quick digestion of large documents tricky unless you split them into smaller chunks. On ChatGPT, the cap is way higher. ChatGPT also recently gained a nifty voice chat feature, designed so that it feels like you're talking to a real human on a call, no exaggeration.
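The chunking workaround mentioned above is easy to automate. The snippet below is a minimal sketch, not tied to any Copilot tooling: it greedily packs whole words into pieces that stay under a character budget, so each piece can be pasted as a separate prompt.

```python
# Minimal sketch: split a long document into chunks under a character limit,
# breaking on whitespace so words are never cut in half.
def chunk_text(text: str, limit: int = 4000) -> list[str]:
    """Greedily pack whole words into chunks of at most `limit` characters."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate  # word still fits in the open chunk
        else:
            chunks.append(current)  # close the chunk, start a new one
            current = word
    if current:
        chunks.append(current)
    return chunks

parts = chunk_text("lorem " * 2000, limit=4000)
print(len(parts), all(len(p) <= 4000 for p in parts))
```

A real pipeline would also overlap chunks slightly so context isn't lost at the boundaries, but the greedy version is enough to get under the limit.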

Meanwhile, Copilot has a less fancy voice input mode: you tap the mic to turn speech into text, then a robotic voice reads out the AI's response.

If ChatGPT is so awesome, why bother switching to Copilot at all? The biggest perk is free web access and GPT-4. GPT-4 is a major step up from GPT-3.5 and produces far more nuanced and accurate responses. It's also less likely to hallucinate. Overall, I found Copilot's answers a lot easier to trust.

Then there's the fact that Copilot can use the whole internet, meaning you can ask about current events. For example, if you ask the free ChatGPT "What's the most powerful iPhone?", it'll probably say the iPhone 13 Pro Max from 2021. But Copilot gets it right: iPhone 15 Pro Max, it'll say. Likewise, prompts like "What's today's news?" and "Show me the cheapest flights from Delhi to Mumbai" are actually useful, unlike on the free ChatGPT app. Sure, you can buy ChatGPT Plus, but at almost Rs 2,000 a month, it isn't exactly cheap.

Copilot also lets you generate images straight from the chat, which you can't do at all with the free ChatGPT. And finally, Copilot lets you adjust the tone of its responses between Creative, Balanced, and Precise for more granular control.

So, in a nutshell: think of Copilot as the poor man's ChatGPT Plus. ChatGPT Plus is still the best for power users, but if you don't wish to pay and are okay with Copilot's slower speeds, you'll be just fine swapping ChatGPT for Copilot on your home screen like I did.

IE Online Media Services Pvt Ltd

Zohaib is a tech enthusiast and a journalist who covers the latest trends and innovations at The Indian Express's Tech Desk. A graduate in Computer Applications, he firmly believes that technology exists to serve us and not the other way around. He is fascinated by artificial intelligence and all kinds of gizmos, and enjoys writing about how they impact our lives and society. After a day's work, he winds down by putting on the latest sci-fi flick.

First uploaded on: 27-12-2023 at 14:14 IST


The AI Shopping Spree Is Taking Off. Meet 11 Bankers Poised to Come Out on Top. – Business Insider

When the telecom giant Cisco agreed to fork over $28 billion for a small data-analytics company called Splunk in September, it was a ray of light in an otherwise gloomy M&A, or mergers-and-acquisitions, market.

M&A declined in 2023 for the second year in a row as the economy grappled with a higher cost of borrowing and changing consumer behavior. But Cisco's deal to buy Splunk was among the largest acquisitions of 2023 and the largest acquisition in the AI space in at least three years. It helped push the number of AI deals into record territory, S&P Global Market Intelligence said.

With ChatGPT and Sam Altman becoming household names, some on Wall Street are now betting that the Cisco-Splunk deal is just the tip of the iceberg. They see corporate America's obsession with artificial intelligence kicking off a new era of mergers, acquisitions, and IPOs for years to come.

Neil Kell, the head of one of Bank of America's tech-banking groups, explained it this way: AI has been "an exciting, intellectual exercise over the last couple of years." But given how much AI has matured in recent years, the industry is "transitioning into a phase of action," he said.

Wall Street's AI predictions are so big that some banks, including Citi, have created new roles to accommodate the sector. In September, three Qatalyst alumni launched Axom Partners, an M&A-advisory firm focused specifically on AI-related dealmaking.

Some bankers predict that generative AI, in which machines are trained to produce original content, may soon become one of Wall Street's biggest revenue drivers because it touches so many different industries, from healthcare to manufacturing and advertising.

As of the end of November 2023, machine-learning deals drove some $69 billion in M&A, accounting for a record 27% of tech deals by value, according to S&P Global Market Intelligence. The financial-research firm counted deals that sit squarely in the AI sector, such as the Cisco-Splunk deal, and deals involving targets with AI capabilities, such as IBM's $4.6 billion acquisition of the business-management-software maker Apptio.

"There are very few spaces we can point to in tech where deal flow isn't being driven, in some capacity, by the need to incorporate AI or facilitate it," Melissa Incera, an M&A analyst with S&P Global Market Intelligence, said.

Of course, several things will need to fall into place for Wall Street to benefit from the AI-driven "super cycle" many bankers predict. AI companies, many of which are only now being formed, need to mature, and the winners and losers need to come into focus. Many buyers also need to figure out how to monetize AI. But some bankers believe AI will check these boxes quicker than in previous tech-driven deal cycles.

Given AI's potential to change our economy, Business Insider set out to find the bankers best positioned to lead the pack when the AI-deals boom is unleashed.

Our list focuses on bankers with deep experience in tech and AI, from M&A advisors to equity-capital fundraisers to chairs of massive tech teams and founders of banks.

We also did our best to capture their predictions of the upcoming AI revolution. Some bankers see AI as the next "iPhone moment" and say it could result in a dealmaking surge as soon as next year. Others see a longer runway. But everyone agrees it presents one of the most pressing threats and opportunities for businesses today.

"Even now, there isn't a deal anyone is working on where 'what is your gen AI strategy?' isn't the No. 1, two, and three questions. It's the fastest-moving technology we've ever seen, and that's going to continue to be the case," Rob Chisholm, an analyst at Qatalyst Partners, said.


AI risks need to be better understood and managed, research warns – Hindustan Times

Joe Burton, a professor at Lancaster University, UK, contends that AI and algorithms are more than mere tools used by national security agencies to thwart malicious online activities.

In a research paper recently published in the Technology in Society Journal, Burton suggests that AI and algorithms can also fuel polarisation, radicalism, and political violence, thereby becoming a threat to national security themselves.

"AI is often framed as a tool to be used to counter violent extremism. Here is the other side of the debate," said Burton.

The paper looks at how AI has been securitised throughout its history and in media and popular-culture depictions, and explores modern examples of AI having polarising, radicalising effects that have contributed to political violence.

The research cites the classic film series The Terminator, which depicted a holocaust committed by a sophisticated and malignant AI, as doing more than anything to frame popular awareness of AI and the fear that machine consciousness could lead to devastating consequences for humanity, in this case a nuclear war and a deliberate attempt to exterminate a species.

"This lack of trust in machines, the fears associated with them, and their association with biological, nuclear, and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk, and to harness its positive potentiality," Burton said.

Sophisticated drones, such as those being used in the war in Ukraine, are, says Burton, now capable of full autonomy, including functions such as target identification and recognition.

While there has been a broad and influential campaign, including at the UN, to ban killer robots and to keep humans in the loop when it comes to life-or-death decision-making, the acceleration of AI and its integration into armed drones has, he says, continued apace.

"In cyber security, the security of computers and computer networks, AI is being used in a major way, with the most prevalent area being (dis)information and online psychological warfare," Burton said.

During the pandemic, he said, AI was seen as a positive for tracking and tracing the virus, but it also led to concerns over privacy and human rights.

The paper examines AI technology itself, arguing that problems exist in its design, the data that it relies on, how it is used, and its outcomes and impacts.

"AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed," Burton added.


Cathie Wood’s 3 Best Artificial Intelligence (AI) Stocks This Year: Are They Good Picks for 2024? – The Motley Fool

When the history books are written, 2023 could go down as a turning point for artificial intelligence (AI). Multiple companies rolled out impressive AI chips, large language models, and AI applications.

Cathie Wood stands out as one big winner from the AI explosion. Her Ark Invest exchange-traded funds (ETFs) have been and still are heavily invested in many high-flying AI stocks. Here are Wood's best AI stocks this year -- and whether or not they're good picks for 2024.

There's no surprise about which AI stock is Wood's best performer in 2023. Nvidia's (NVDA -0.30%) shares are on track to end the year up close to 230% thanks to soaring demand for the company's graphics processing units (GPUs).

Sure, Wood isn't as big of a fan of Nvidia as she used to be. She has reduced Ark Invest's stake in the chipmaker quite a bit. Wood thinks that Nvidia stock is now "really expensive" after its huge gains this year.

However, Nvidia remains the eleventh-largest holding in Wood's Ark Autonomous Technology & Robotics ETF (ARKQ -0.06%). Ark Next Generation Internet ETF (ARKW 0.70%) and Ark Fintech Innovation ETF (ARKF 1.01%) also still own small positions in Nvidia.

Another well-known AI leader isn't too far behind Nvidia. Shares of Meta Platforms (META 0.60%) have vaulted 190% higher as 2023 comes to a close. It's been Meta's best performance ever.

Meta's improving profitability has been a key factor behind its success this year. AI played a major role in the bottom-line improvement by helping increase the monetization of its platforms. The company's open-source AI strategy could reap further benefits over the long term.

To be sure, Meta isn't a big holding for Ark Invest. However, the stock is in the portfolios of two of Wood's ETFs -- her flagship Ark Innovation ETF and Ark Next Generation Internet ETF.

Palantir Technologies (PLTR -0.89%) ranks as Wood's No. 3 best AI stock of 2023. Shares of the software maker appear to be headed to end the year up close to 170%. As was the case with Meta, this performance is the best that Palantir has delivered in its history.

Investors seem to have appreciated Palantir's AI innovations. The company's expertise was also recognized by Dresner Advisory Services, which named Palantir the top vendor in its 2023 AI, Data Science, and Machine Learning Wisdom of Crowds Market Study.

Three of Wood's ETFs own positions in Palantir: Ark Innovation ETF, Ark Fintech Innovation ETF, and Ark Next Generation Internet ETF. All three of these funds have scooped up more shares of Palantir in December.

I think that all three of Wood's biggest AI winners of this year could also perform well in 2024. My view, though, is that Palantir is probably the weakest link among the group.

The stock trades at nearly 61 times expected earnings and almost 19 times trailing 12-month sales. Palantir's revenue growth of 17% year over year in its latest quarter makes that valuation hard to justify, although its future growth prospects should help ease some investors' minds.

As previously mentioned, Wood thinks that Nvidia's valuation is a bit too frothy. Some would argue otherwise, but there's no question that the company is facing increased competition. I don't expect Nvidia to deliver the kind of gains in the new year that it has in 2023. The stock should remain a solid winner over the long term, though.

That leaves Meta. Valuation isn't as much of an issue with this stock. Meta's forward earnings multiple of 20.5 isn't unreasonable. More importantly, the stock looks like a relative bargain factoring in its growth prospects. I view Meta as a great AI stock for investors to buy and hold.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Keith Speights has positions in Meta Platforms. The Motley Fool has positions in and recommends Meta Platforms, Nvidia, and Palantir Technologies. The Motley Fool has a disclosure policy.


Ad Agency Trends 2024: How Agencies Plan to Use AI – Adweek

This is the time of year when everyone weighs in on what they think the next year will have in store for us. Unsurprisingly, artificial intelligence is on everyone's minds.

Some see AI as a source of good while recognizing it has its downsides as the technology advances. Agency leaders around the globe have their opinions as to where AI will take us in the future, especially as it applies to workflow, brands and marketing, and several shared their thoughts and predictions with Adweek.

Stevie Archer, chief creative officer of SS+K

AI was front and center in 2023 because of the novelty. In 2024, fewer ideas will be about AI, but more of them will actually be made possible by AI. It will become the tool, not the point; the means, not the end. We're already using it for ideation and pitches, and we're just scratching the surface of what it can enable.

Jill Applebaum, chief creative officer of Public Inc.

We'll see bigger creative bets that utilize AI. The most creative among us are already scheming. Second, I believe we'll see a tonal shift to comedy. The darker the world feels, the harder people want to laugh. Lastly, we'll see brands spend more of their budgets talking about their social impact. Apple invested massively in Mother Nature to showcase their sustainability and subsequently in The Lost Voice to promote an accessibility feature. We all know that Apple is an industry bellwether and that Gen Z wants to know what brands are doing to make the world a better place.

Anastasia Leng, CEO and founder of CreativeX

The good news? The speed and cost of content creation will decrease, and the floor for decent, bearable ads should too. The bad news? If we struggled during the last few years of content proliferation, it's time to buckle up: as volume increases by 10 to 100 times, brands will experience quality and decisioning bottlenecks due to there being more content than there are people who can review and approve it. The irony? AI can be applied to solve the problem that AI is generating, building content QA systems that automatically check content for everything from digital suitability to brand consistency and more, to ensure that even the ads the robots make for you are aligned to your brand and your existing creative learnings.

Rob Kottkamp, chief creative officer of Partners + Napier

AI will shift from an experimental trend to an essential tool embedded within the art and advertising world. It's a force multiplier that will allow creatives to explore new paths and open new lanes of opportunity. Some will resist AI, or use it to cut corners, assuming the risks that will follow. The creative entities that will be most celebrated, however, are the ones that begin to openly integrate these tools into their creative process, using them to craft creative ideas that seamlessly blend the talents of both man and machine. Regardless of the approach, our work will be scrutinized more than ever, and our industry will have to level up across all disciplines to maintain relevance.

Monica Ho, CMO of SOCi

AI is enabling marketers to evolve their roles in organizations by streamlining workflows and enhancing efficiency. While AI is a powerful tool, it still requires human oversight. For example, we'll see content creators transition into more of an editing and curation role, collaborating with AI to easily produce content that resonates with target audiences. Meanwhile, roles like SEO managers will integrate AI as a core responsibility, leveraging Generative Search Experience technology for improved search visibility and user experience, ensuring efficient and effective strategies. This relationship between AI and human expertise will define the future of marketing roles.

Stacey and Dr. Dawn Wade, co-founders of Nimbus

The rise of AI has helped optimize the day-to-day operations of industries across the board. While there has been a lot of backlash, from inherent bias to cheating at work, there is also a lot of opportunity to responsibly use this technology from a marketing perspective. This technology also is not going anywhere, so we foresee marketing professionals evolving and leveraging these tools to create a more efficient way of working. However, we cannot ignore the biases that inherently come along with its use. As marketers, there is a responsibility to serve as a filter when using AI outputs to ensure we are not unintentionally contributing to the discrimination of marginalized communities and risk falling victim to cancel culture as a result of being tone-deaf. So although these tools are essentially going to help us progress within our industry, it is still up to us leaders creating these campaigns to ensure that the final campaigns that we are creating for our clients do not fall victim to AI's shortcomings.


The 3 Most Important AI Innovations of 2023 – TIME

In many ways, 2023 was the year that people began to understand what AI really is, and what it can do. It was the year that chatbots first went truly viral, and the year that governments began taking AI risk seriously. Those developments weren't so much new innovations as they were technologies and ideas taking center stage after a long gestation period.

But there were plenty of new innovations, too. Here are three of the biggest from the past year:

Multimodality might sound like jargon, but it's worth understanding what it means: it's the ability of an AI system to process lots of different types of data, not just text, but also images, video, audio, and more.

This year was the first time that the public gained access to powerful multimodal AI models. OpenAI's GPT-4 was the first of these, allowing users to upload images as well as text inputs. GPT-4 can see the contents of an image, which opens up all kinds of possibilities, for example asking it what to make for dinner based on a photograph of the contents of your fridge. In September, OpenAI rolled out the ability for users to interact with ChatGPT by voice as well as text.
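Concretely, "uploading an image as well as text" means a single user turn in a chat request can carry several content parts of different types. The sketch below only builds such a payload, it doesn't call any service, and the exact field names are an assumption modeled on common chat-API conventions, not a reference for OpenAI's actual API.

```python
# Illustrative sketch of a multimodal chat payload: one user turn mixing a
# text part with an image reference. Field names follow common chat-API
# conventions and are assumptions, not a specific vendor's documented schema.
def build_multimodal_request(question: str, image_url: str,
                             model: str = "hypothetical-multimodal-model") -> dict:
    """Assemble a chat-style request whose user message carries both
    a text part and an image part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

req = build_multimodal_request(
    "What could I make for dinner with these ingredients?",
    "https://example.com/fridge.jpg",  # hypothetical image location
)
print({part["type"] for part in req["messages"][0]["content"]})
```

The point is structural: the model receives both modalities in one turn, which is what lets it answer a text question about an image.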

Google DeepMind's latest model, Gemini, announced in December, can also work with images and audio. A launch video shared by Google showed the model identifying a duck based on a line drawing on a Post-it note. In the same video, after being shown an image of pink and blue yarn and asked what it could be used to create, Gemini generated an image of a pink and blue octopus plushie. (The marketing video appeared to show Gemini observing moving images and responding to audio commands in real time, but in a post on its website, Google said the video had been edited for brevity, and that the model was being prompted using still images, not video, and text prompts, not audio, although the model does have audio capabilities.)

"I think the next landmark that people will think back to, and remember, is [AI systems] going much more fully multimodal," Google DeepMind co-founder Shane Legg said on a podcast in October. "It's early days in this transition, and when you start really digesting a lot of video and other things like that, these systems will start having a much more grounded understanding of the world." In an interview with TIME in November, OpenAI CEO Sam Altman said multimodality in the company's new models would be one of the key things to watch out for next year.

The promise of multimodality isn't just that models become more useful. It's also that the models can be trained on abundant new sets of data (images, video, audio) that contain more information about the world than text alone. The belief inside many top AI companies is that this new training data will translate into these models becoming more capable or powerful. It is a step on the path, many AI scientists hope, toward artificial general intelligence: the kind of system that can match human intellect, making new scientific discoveries and performing economically valuable labor.

One of the biggest unanswered questions in AI is how to align it to human values. If these systems become smarter and more powerful than humans, they could cause untold harm to our species (some even say total extinction) unless, somehow, they are constrained by rules that put human flourishing at their center.

The process that OpenAI used to align ChatGPT (to avoid the racist and sexist behaviors of earlier models) worked well, but it required a large amount of human labor, through a technique known as reinforcement learning with human feedback, or RLHF. Human raters would assess the AI's responses and give it the computational equivalent of a doggy treat if the response was helpful, harmless, and compliant with OpenAI's list of content rules. By rewarding the AI when it was good and punishing it when it was bad, OpenAI developed an effective and relatively harmless chatbot.

But since the RLHF process relies heavily on human labor, there's a big question mark over how scalable it is. It's expensive. It's subject to the biases or mistakes made by individual raters. It becomes more failure-prone the more complicated the list of rules is. And it looks unlikely to work for AI systems that are so powerful they begin doing things humans can't comprehend.
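The human-preference step at the heart of RLHF can be sketched in a few lines. In this toy version (the features, training pairs, and numbers are all illustrative, not OpenAI's actual implementation), the "reward model" is just a weighted scorer fitted to human pairwise preferences with a Bradley-Terry-style logistic loss:

```python
import math

# Toy reward model: score a response from two hand-crafted features
# (a crude helpfulness proxy: length; a crude harmlessness proxy: no flagged words).
FLAGGED = {"insult", "slur"}

def features(response: str) -> list[float]:
    words = response.split()
    helpful = min(len(words) / 10.0, 1.0)            # longer (up to a point) = more helpful
    harmless = 0.0 if FLAGGED & set(words) else 1.0  # 1 if no flagged word appears
    return [helpful, harmless]

def reward(weights, response):
    return sum(w * f for w, f in zip(weights, features(response)))

def train_on_preferences(pairs, steps=200, lr=0.5):
    """Fit weights so that human-preferred responses score higher
    (gradient ascent on the Bradley-Terry log-likelihood)."""
    w = [0.0, 0.0]
    for _ in range(steps):
        for preferred, rejected in pairs:
            # P(preferred beats rejected) under the current reward model
            p = 1.0 / (1.0 + math.exp(reward(w, rejected) - reward(w, preferred)))
            grad_scale = 1.0 - p  # large update when the model disagrees with the rater
            for i, (fp, fr) in enumerate(zip(features(preferred), features(rejected))):
                w[i] += lr * grad_scale * (fp - fr)
    return w

# Human raters preferred the helpful, harmless answers in each pair:
pairs = [
    ("here is a detailed and polite answer to your question", "insult"),
    ("a thorough harmless explanation with several steps given", "short"),
]
w = train_on_preferences(pairs)
assert reward(w, "a long helpful harmless reply with many words here") > reward(w, "insult")
```

In full-scale RLHF the reward model is itself a neural network and the chatbot is then optimized against it, but the scalability problem the article describes is visible even here: every training pair requires a human judgment.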

Constitutional AI, first described by researchers at top AI lab Anthropic in a December 2022 paper, tries to address these problems by harnessing the fact that AI systems are now capable enough to understand natural language. The idea is quite simple. First, you write a constitution that lays out the values you'd like your AI to follow. Then you train the AI to score responses based on how aligned they are to the constitution, and incentivize the model to output responses that score more highly. Instead of reinforcement learning from human feedback, it's reinforcement learning from AI feedback. "These methods make it possible to control AI behavior more precisely and with far fewer human labels," the Anthropic researchers wrote. Constitutional AI was used to align Claude, Anthropic's 2023 answer to ChatGPT. (Investors in Anthropic include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)
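The loop described above can be sketched in miniature. In this toy version the "AI judge" is a trivial rule-based stand-in (in real Constitutional AI it is a language model), and the principles and scoring rules are illustrative inventions, not Anthropic's:

```python
# A written constitution: each principle pairs a description with a check.
# Both the principles and the checks are hypothetical stand-ins.
CONSTITUTION = [
    ("be harmless", lambda r: "attack" not in r),
    ("be helpful",  lambda r: len(r.split()) >= 5),
    ("be honest",   lambda r: "guaranteed" not in r),  # avoid overclaiming
]

def ai_score(response: str) -> int:
    """AI feedback step: count how many constitutional principles are satisfied."""
    return sum(1 for _, check in CONSTITUTION if check(response))

def pick_preferred(candidates):
    """Reinforcement learning from AI feedback, in miniature: the AI-generated
    scores, not human labels, decide which candidate response wins."""
    return max(candidates, key=ai_score)

candidates = [
    "here is how to attack the server",
    "guaranteed profit, just send funds",
    "here is a careful step by step explanation",
]
best = pick_preferred(candidates)  # the response most aligned with the constitution
```

The preferred responses would then be fed back as training signal, replacing the human raters of RLHF with model-generated labels.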

"With constitutional AI, you're explicitly writing down the normative premises with which your model should approach the world," Jack Clark, Anthropic's head of policy, told TIME in August. "Then the model is training on that." There are still problems, like the difficulty of making sure the AI has understood both the letter and the spirit of the rules ("you're stacking your chips on a big, opaque AI model," Clark says), but the technique is a promising addition to a field where new alignment strategies are few and far between.

Of course, Constitutional AI doesn't answer the question of whose values AI should be aligned to. But Anthropic is experimenting with democratizing that question. In October, the lab ran an experiment that asked a representative group of 1,000 Americans to help pick rules for a chatbot, and found that while there was some polarization, it was still possible to draft a workable constitution based on statements the group came to a consensus on. Experiments like this could open the door to a future where ordinary people have much more of a say in how AI is governed, compared with today, when a small number of Silicon Valley executives write the rules.

One noticeable outcome of the billions of dollars pouring into AI this year has been the rapid rise of text-to-video tools. Last year, text-to-image tools had barely emerged from their infancy; now, there are several companies offering the ability to turn sentences into moving images with increasingly fine-grained levels of accuracy.

One of those companies is Runway, a Brooklyn-based AI video startup that wants to make filmmaking accessible to anybody. Its latest model, Gen-2, allows users not just to generate a video from text, but also to change the style of an existing video based on a text prompt (for example, turning a shot of cereal boxes on a tabletop into a nighttime cityscape), in a process it calls "video-to-video."

"Our mission is to build tools for human creativity," Runway's CEO Cristobal Valenzuela told TIME in May. He acknowledges that this will have an impact on jobs in the creative industries, where AI tools are quickly making some forms of technical expertise obsolete, but he believes the world on the other side is worth the upheaval. "Our vision is a world where human creativity gets amplified and enhanced, and it's less about the craft, and the budget, and the technical specifications and knowledge that you have, and more about your ideas." (Investors in Runway include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)

Another startup in the text-to-video space is Pika AI, which is reportedly being used to create millions of new videos each week. Run by two Stanford dropouts, the company launched in April but has already secured funding that values it at between $200 million and $300 million, according to Forbes. Pitched not at professional filmmakers but at the general user, free tools like Pika are trying to transform the user-generated content landscape. That could happen as soon as 2024, but text-to-video tools are computationally expensive, so don't be surprised if they start charging for access once the venture capital runs out.

Read more from the original source:

The 3 Most Important AI Innovations of 2023 | TIME

Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to "emergence," a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases "mirages": artefacts arising from how the systems are tested. It suggests that innovative abilities instead build more gradually.

"I think they did a good job of saying 'nothing magical has happened,'" says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."

The work was presented last week at the NeurIPS machine-learning conference in New Orleans.

Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model is (some have more than a hundred billion tunable parameters), the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.

The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance jumped between the third and fourth model sizes from nearly 0% to nearly 100%. But this trend is far less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions; in that case, the smaller models answer correctly some of the time.
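The arithmetic finding is easy to reproduce with a toy calculation (the scaling numbers below are made up for illustration, not GPT-3's actual results): if a model's chance of getting each individual digit right improves smoothly with scale, the all-or-nothing exact-match metric can still show an apparently sharp jump, because all four digits must be right at once.

```python
# Per-digit accuracy improving smoothly with model scale (illustrative numbers).
per_digit = {1: 0.30, 2: 0.55, 3: 0.75, 4: 0.95}  # model size -> P(one digit correct)

for size, p in per_digit.items():
    # Under exact-match scoring, the answer counts only if all four digits are right,
    # so a smooth per-digit curve becomes a steep p**4 curve.
    exact_match = p ** 4
    print(f"model {size}: per-digit {p:.0%}, exact-match {exact_match:.1%}")
```

Per-digit accuracy here rises gradually from 30% to 95%, while exact-match accuracy stays near zero before climbing steeply at the largest scale, mirroring the "mirage" the study describes.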

Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer (a continuous metric), signs of emergence disappeared.

Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. Merely by setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.
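The thresholding effect in the vision experiment works the same way, and can be sketched directly (the error values below are illustrative, not from the paper): a continuous metric that improves steadily with scale looks "emergent" once a strict pass/fail cutoff is applied.

```python
# Mean reconstruction error per model size, falling gradually with scale
# (illustrative numbers, not the study's measurements).
errors = [0.40, 0.25, 0.12, 0.04]
THRESHOLD = 0.05  # an image counts as "correctly reconstructed" only below this error

pass_rate = [err < THRESHOLD for err in errors]
# Continuous view: steady improvement at every scale.
# Thresholded view: fail, fail, fail, pass; the ability seems to appear suddenly.
```

Nothing abrupt happened to the underlying model; only the scoring rule made the largest one look qualitatively different.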

Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models, let alone in future systems, but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable."

Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?

The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."

See the original post here:

Will superintelligent AI sneak up on us? New study offers reassurance - Nature.com