
Google’s Responsive Search Ad Guide: Navigating AI In Advertising – Search Engine Journal

Google recently released a comprehensive guide to help marketers better understand and utilize Responsive Search Ads (RSAs).

The guide provides an in-depth look at how Google leverages AI technology to optimize RSA performance for each search query. It aims to give marketers the knowledge to take full advantage of this adaptive ad format.

This article summarizes the key information presented in Google's RSA guide.

Whether starting out with RSAs or looking to improve existing campaigns, this summary highlights the core advice from Google that can help advertisers succeed.

The guide begins by discussing advertisers' difficulty in targeting the appropriate ads, recognizing how search queries and user behavior constantly evolve.

Google's internal data indicates that 15% of searches are queries that have never been entered before. This constant change makes it challenging for companies to predict relevant trends and search patterns.

The guide points out that Google's Responsive Search Ads can be used in Search campaigns to deal with the challenge of finding the optimal combination of headlines and descriptions for different queries.

RSAs automatically test different headline and description variants to determine which combinations will likely perform best for any search query.

Responsive search ads use Google's artificial intelligence to pick the most relevant headline and description pairings for each user.

The more varied the headlines and descriptions provided, the more likely the AI is to deliver ads tailored to potential customers.
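The guide does not spell out the asset limits, but under the standard RSA limits (up to 15 headlines and 4 descriptions per ad, with a served ad showing up to 3 headlines and 2 descriptions, an assumption not stated in the article), a quick back-of-the-envelope count shows why supplying more varied assets gives the system more room to work with:

```python
from math import perm

# Assumed standard RSA limits (not stated in the article): up to 15 headlines
# and 4 descriptions per ad, with a served ad showing up to 3 headlines and
# 2 descriptions, where position (order) matters.
headlines, descriptions = 15, 4
shown_headlines, shown_descriptions = 3, 2

combinations = perm(headlines, shown_headlines) * perm(descriptions, shown_descriptions)
print(combinations)  # 2730 * 12 = 32760 candidate renderings to choose from
```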

The guide discusses the Pinning feature, which lets you choose a specific asset to always be included in ads.

This can be useful for complying with local regulations. However, the guide also notes that pinning limits the ability to generate unique ad combinations, which could negatively impact performance.

Google's guide gives a thorough explanation of how RSAs create search ads.

The process starts by understanding the context behind each search query and the keywords used for matching.

It then combines available assets based on their relevance to the query and predicted performance.

These creative combinations are scored, and the top-ranking ones continue to the auction.

Once new assets begin serving, a continuously learning AI model evaluates which assets and combinations lead to the best performance for each search query.

This evaluation process usually begins within a few hours of when a new asset is initially served.

The goal is to maximize performance for advertisers by determining the optimal assets and asset combinations to show for each query.
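Google describes this pipeline only at a high level, and its actual models are not public. As a rough illustration of the select, score, and rank flow described above, here is a toy sketch in which `relevance` and `predicted_performance` are placeholder functions standing in for Google's query-understanding and performance-prediction models:

```python
from __future__ import annotations

from dataclasses import dataclass
from itertools import permutations
from typing import Callable


@dataclass
class AdCombination:
    headlines: tuple[str, ...]
    descriptions: tuple[str, ...]
    score: float


def rank_combinations(
    query: str,
    headlines: list[str],
    descriptions: list[str],
    relevance: Callable[[str, str], float],            # placeholder query/asset relevance model
    predicted_performance: Callable[[tuple, tuple], float],  # placeholder performance model
    top_k: int = 5,
) -> list[AdCombination]:
    # Assemble candidate combinations (up to 3 headlines and 2 descriptions per
    # rendered ad), score each one for this query, and keep the top-ranking
    # candidates, which in Google's system would proceed to the ad auction.
    scored = []
    for hs in permutations(headlines, 3):
        for ds in permutations(descriptions, 2):
            score = sum(relevance(query, asset) for asset in hs + ds)
            score *= predicted_performance(hs, ds)
            scored.append(AdCombination(hs, ds, score))
    scored.sort(key=lambda c: c.score, reverse=True)
    return scored[:top_k]
```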

Google highlights its Ad Strength feature, which gives advertisers forward-looking feedback on how well their responsive search ad assets align with attributes that tend to boost performance.

Ad Strength offers real-time ratings of Poor, Average, Good, or Excellent that update dynamically as changes are made to the ad copy and assets.

This allows advertisers to optimize their ads by iterating based on the Ad Strength feedback provided by Google.

Google's guide outlines several tools to help users create high-quality assets, including asset suggestions, recommendations for improving Ad Strength, and the option to use automatically created assets.

Asset suggestions are headline and description options shown when creating or editing a responsive search ad. They are generated based on the final URL and are relevant to the ad's context.

Recommendations for improving Ad Strength are shown to help optimize responsive search ads at scale. These appear for ads with Poor or Average ratings and include asset suggestions.

The automatically created assets option is enabled at the campaign level. When turned on, the system will generate headlines and descriptions tailored to each responsive ad's unique context.

Google's guide provides suggestions on how to assess the success of RSAs. It recommends that users prioritize boosting the business results of their ads and use those as benchmarks for performance.

The guide also emphasizes the value of analyzing asset performance ratings, which give insight into how well individual ad components have worked in the past.

The guide suggests utilizing AI-driven tools for bidding, keywords, and ad copy to get the best outcomes.

Implementing Smart Bidding, broad match keywords, and responsive search ads in combination can assist with showing the most relevant ad to each searcher at an optimal cost.

The guide wraps up by recapping its main points.

As Google keeps integrating the newest AI advancements into responsive search ads, the company hopes to streamline the process of generating ads that accomplish business goals.

Featured Image: IB Photography/Shutterstock

See the original post:

Google's Responsive Search Ad Guide: Navigating AI In Advertising - Search Engine Journal


Google reshuffles Assistant unit, lays off some staffers, to ‘supercharge’ products with A.I. – CNBC

Google CEO Sundar Pichai speaks on-stage during the Google I/O keynote session at the Google Developers Conference in Mountain View, California, on May 10, 2023.

Josh Edelson | AFP | Getty Images

Google wants to "supercharge" its seven-year-old smart assistant using new advancements in generative artificial intelligence, as part of the latest major reorganization of the Assistant unit.

In an email to employees Monday, Peeyush Ranjan, Google's vice president of engineering at Assistant, said the latest reshuffle will include a small number of layoffs. Ranjan said the company will look to push large language model, or LLM, technology into Assistant, Google's voice-powered software that's similar to Apple's Siri or Amazon's Alexa.

"As a team, we need to focus on delivering high quality, critical product experiences for our users," Ranjan wrote in the email, which was viewed by CNBC. "We've also seen the profound potential of generative AI to transform people's lives and see a huge opportunity to explore what a surpercharged Assistant, powered by the LLM technology, would look like."

A portion of the Assistant team has already started working on the efforts, Ranjan added. Employees who are being laid off will be given 60 days to search for other jobs within Google.

Axios first reported some of the unit's changes.

As a part of the reorganization, executives announced a dozen changes to the company's "Speech" team, which oversees voice commands. Françoise Beaufays, who had been the head of Speech, is moving to work under Sissie Hsiao, who oversees Bard and Assistant.

"This is an exciting moment for AI, with nearly every product demanding world-class AI-driven Speech," Beaufays wrote in a separate email announcing changes to the unit. Some members of the Speech team will focus their efforts specifically on Bard, she wrote.

Assistant executives said the changes will allow the division to move with "speed and focus."

Jennifer Rodstrom, a Google spokesperson, said in an email to CNBC that the company is "excited to explore how LLMs can help us supercharge Assistant and make it even better."

"Hundreds of millions of people use the Assistant every month and we're committed to giving them high quality experiences," she wrote.

The rapid development of generative AI, which responds to text-based queries with intelligent and creative answers and can convert text to images, is pushing Google to embed the technology in as many products as possible.

For the older Assistant organization, that's meant frequent refinements. Assistant is used in Google's mobile and home devices, including its Pixel smartphone and in Nest smart speakers and devices. It's also used in smart watches, smart displays, TVs and in vehicles through the Android Auto platform.

In March, Hsiao announced changes to the organization, underscoring a prioritizing of Bard. Ranjan, who had been vice president of commerce, stepped in as engineering lead for the unit and oversees more than 1,700 full-time employees, according to an internal document.

Since the launch late last year of OpenAI's ChatGPT, Amazon has also emphasized the emerging importance of generative AI, adding it into Alexa products.

For Google, which has dominated internet search for the better part of two decades, there's more at stake, as ChatGPT and Microsoft Bing, which uses OpenAI's model, give people alternative ways to search for answers.

Google has been rolling out updates to Bard after launching it publicly in March. Last month, the company said it had expanded the chatbot to over 40 languages in more countries and would add features like audio responses, thanks to its newest LLM, PaLM 2.

WATCH: Google kicks off I/O event

More:

Google reshuffles Assistant unit, lays off some staffers, to 'supercharge' products with A.I. - CNBC


BCSO using AI to track retail crime in the metro – KOB 4

ALBUQUERQUE, N.M. - Whether it's gunshot detection or car break-ins, it seems like artificial intelligence is everywhere, and now the Bernalillo County Sheriff's Office is using it to track retail crime.

The department's most recent retail crime initiative was just rolled out on Tuesday, and it focuses on partnering with business owners.

AI technology is used to gather information on cars that are driving into business parking lots and to see if they are associated with any crime.

"We want to make sure that we are partnering with businesses with the technology we are bringing into the office and in Bernalillo County, to make sure that we can expand what we are doing with our Axon body-worn cameras and our license plate readers," said Bernalillo County Sheriff John Allen.

The Department is using Flock Safety but is also looking at other companies they can partner with to expand how they use AI.

"To use an example, Coronado Mall: let's say you had somebody that did retail crime numerous times in the same vehicle. It will alert us, saying that vehicle has a criminal trespass warning with that company and they need to be off of that property, so the business and us are alerted," said Allen.

He said the program expands on their current license plate system and scans vehicles as they come into business parking lots.
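Neither BCSO nor Flock Safety has published implementation details, but the alerting flow Allen describes amounts to checking each plate read against a business-supplied watchlist and notifying both the business and deputies on a match. A hypothetical sketch (the plate, camera ID, and watchlist entry below are invented for illustration; a real deployment would pull reads and flags from the vendor's system):

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime


@dataclass
class PlateRead:
    plate: str
    camera_id: str
    seen_at: datetime


# Hypothetical watchlist keyed by plate; entries are invented for illustration.
WATCHLIST = {"ABC123": "criminal trespass warning - Coronado Mall"}


def check_read(read: PlateRead) -> str | None:
    """Return an alert message if the detected plate is flagged, so both the
    business and deputies can be notified; otherwise return None."""
    reason = WATCHLIST.get(read.plate)
    if reason is None:
        return None
    return (f"{read.seen_at:%Y-%m-%d %H:%M} camera {read.camera_id}: "
            f"{read.plate} flagged ({reason})")


print(check_read(PlateRead("ABC123", "lot-entrance-2", datetime.now())))
```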

"It really emphasizes to have an unbiased investigation. We don't want profiling or things of that nature to happen; we just don't want to focus on one license plate, we want to focus on a description of vehicles, because we know there are some commonalities in the retail crime we are seeing in Bernalillo County," he said.

Four businesses have already signed up for the initiative that was launched this week. It's been a combination of local shops and big box stores.

The Bernalillo County Sheriff said any business owner that's interested in this technology can reach out to the department. They're also looking to partner with other agencies on this initiative.

Continued here:

BCSO using AI to track retail crime in the metro - KOB 4


Using AI to protect against AI image manipulation | MIT News … – MIT News

As we enter a new era where technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made the production of hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocent image alterations to malicious changes. Techniques like watermarking offer a promising solution, but preventing misuse requires a preemptive (as opposed to only post hoc) measure.

In the quest to create such a new measure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed PhotoGuard, a technique that uses perturbations (minuscule alterations in pixel values, invisible to the human eye but detectable by computer models) that effectively disrupt a model's ability to manipulate the image.

PhotoGuard uses two different attack methods to generate these perturbations. The more straightforward encoder attack targets the image's latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated diffusion attack defines a target image and optimizes the perturbations to make the final image resemble that target as closely as possible.

"Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale," says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.

In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage, whether reputational, emotional, or financial, has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.

PhotoGuard in practice

An AI model views an image differently from how humans do. It sees an image as a complex set of mathematical data points that describe every pixel's color and position: this is the image's latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, thus preserving the image's visual integrity while ensuring its protection.

The second and decidedly more intricate diffusion attack strategically targets the entire diffusion model end-to-end. This involves determining a desired target image, and then initiating an optimization process with the intention of closely aligning the generated image with this preselected target.

In implementation, the team created perturbations within the input space of the original image. These perturbations are then applied to the images during the inference stage, offering a robust defense against unauthorized manipulation.
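The article does not give the optimization details, but the encoder attack described above follows the familiar adversarial-perturbation recipe. A minimal PyTorch-style sketch, assuming a differentiable image `encoder` (for example, a latent-diffusion VAE encoder) and treating an all-zeros latent as the "random entity" target, might look like this; the actual PhotoGuard code and hyperparameters may differ:

```python
import torch
import torch.nn.functional as F


def encoder_attack(image, encoder, steps=200, eps=8 / 255, step_size=1 / 255):
    """Sketch of an encoder-style attack: find a small perturbation (bounded by
    eps in the L-infinity norm, so it stays imperceptible) that pushes the
    image's latent representation toward an uninformative target, so an
    editing model no longer "sees" the original content.

    `image` is a float tensor in [0, 1]; `encoder` is the editing model's
    differentiable image encoder. Both are stand-ins for illustration.
    """
    with torch.no_grad():
        target_latent = torch.zeros_like(encoder(image))  # the "random entity" target

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = encoder((image + delta).clamp(0, 1))
        loss = F.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # step toward the target latent
            delta.clamp_(-eps, eps)                 # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```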

"The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike," says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. "It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort."

The diffusion attack is more computationally intensive than its simpler sibling, and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, thus making the technique more practical.

To better illustrate the attack, consider an art project. The original image is a drawing, and the target image is another drawing that's completely different. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second drawing. However, to the human eye, the original drawing remains unchanged.

By doing this, any AI model attempting to modify the original image will now inadvertently make changes as if dealing with the target image, thereby protecting the original image from intended manipulation. The result is a picture that remains visually unaltered for human observers, but protects against unauthorized edits by AI models.

For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don't want modified, and then prompt the model with "two men attending a wedding." Upon submission, the system will adjust the image accordingly, creating a plausible depiction of two men participating in a wedding ceremony.

Now, consider safeguarding the image from being edited; adding perturbations to the image before upload can immunize it against modifications. In this case, the final output will lack realism compared to the original, non-immunized image.

All hands on deck

Key allies in the fight against image manipulation are the creators of the image-editing models, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. "Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users' images, providing an added layer of protection against unauthorized edits," says Salman.

Despite PhotoGuard's promise, it's not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, cropping, or rotating the image. However, there is plenty of previous work from the adversarial examples literature that can be utilized here to implement robust perturbations that resist common image manipulations.

"A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today," says Salman. "And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let's strive for potential and protection in equal measures."

"The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling," says Florian Tramèr, an assistant professor at ETH Zürich. "The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it."

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS '18, as well as Andrew Ilyas '18, MEng '18; all three are EECS graduate students and MIT CSAIL affiliates. The team's work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.

More here:

Using AI to protect against AI image manipulation | MIT News ... - MIT News


Artificial Intelligence (AI) Software Revenue Is Zipping Toward $14 … – The Motley Fool

Artificial intelligence (AI) promises to improve productivity in many different industries, potentially doubling the output of the average knowledge worker by the end of the decade. Meanwhile, the falling cost of training AI models is making the technology ever more accessible. The intersection of those trends could trigger a demand boom in the coming years.

Indeed, Cathie Wood's Ark Invest says AI software revenue will hit $14 trillion by 2030, up from $1 trillion in 2021, as enterprises chase efficiency. Many companies will undoubtedly benefit from the boom, but Microsoft (MSFT 0.34%) and Datadog (DDOG 0.40%) are particularly well positioned to capitalize on the growing demand for AI software.

Here's what investors should know about these AI growth stocks.

Microsoft announced solid financial results for the June quarter, topping consensus estimates on the top and bottom lines. Revenue rose 8% to $56.2 billion, driven by double-digit growth in enterprise software (e.g., Microsoft 365, Dynamics 365) and Azure cloud services, and generally accepted accounting principles (GAAP) earnings jumped 21% to $2.69 per diluted share as cost-cutting efforts paid off. But the company may be able to accelerate growth in future quarters.

The investment thesis is simple: Microsoft is the gold standard in enterprise software, and Microsoft Azure is the second-largest cloud services provider in the world. In both segments, the company aims to turbocharge growth by leaning into artificial intelligence (AI), and its exclusive partnership with ChatGPT creator OpenAI should be a significant tailwind. Indeed, Morgan Stanley analyst Keith Weiss says Microsoft is the software company "best positioned" to monetize generative AI.

In enterprise software, Microsoft accounted for 16.4% of global software-as-a-service (SaaS) revenue last year, earning nearly twice as much as its closest competitor, and industry experts have recognized its leadership in several quickly growing SaaS verticals, including office productivity, communications, cybersecurity, and enterprise resource planning (ERP) software. All four markets are expected to grow at a double-digit pace through 2030, according to Grand View Research.

In cloud computing, Microsoft Azure accounted for 23% of cloud infrastructure and platform services revenue in the first quarter of 2023, up from 21% one year ago, 19% two years ago, and 17% three years ago. Those consistent market share gains reflect strength in hybrid computing, AI supercomputing infrastructure, and AI developer services, according to CEO Satya Nadella, and they hint at strong growth in the coming years. The cloud computing market is expected to increase at 14.1% annually through 2030.

In AI software, Microsoft recently announced Microsoft 365 Copilot and Dynamics 365 Copilot, products that lean on generative AI to automate a variety of business processes and workflows. For instance, Microsoft 365 Copilot can draft emails in Outlook, analyze data in Excel, and create presentations in PowerPoint. Similarly, Azure OpenAI Services empowers developers to build cutting-edge generative AI software by connecting them with prebuilt AI models from OpenAI, including the GPT family of large language models.
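As a rough illustration of how developers reach those prebuilt models, and assuming the pre-1.0 `openai` Python package configured for Azure (the endpoint, key, API version, and deployment name below are placeholders, and the SDK surface has changed across versions, so check the current Azure documentation), a call to a GPT chat model through Azure OpenAI Service looks roughly like this:

```python
import os

import openai

# Placeholder Azure OpenAI configuration; values come from your own resource.
openai.api_type = "azure"
openai.api_base = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://my-resource.openai.azure.com/
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]
openai.api_version = "2023-05-15"

response = openai.ChatCompletion.create(
    engine="my-gpt-35-deployment",  # the name you gave your model deployment
    messages=[
        {"role": "system", "content": "You draft short, polite customer emails."},
        {"role": "user", "content": "Write a two-sentence follow-up about a delayed order."},
    ],
)
print(response["choices"][0]["message"]["content"])
```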

Currently, shares trade at 11.9 times sales, a slight premium to the three-year average of 11.3 times sales, but a reasonable price to pay for a high-quality AI growth stock like Microsoft.

Datadog has yet to release results for the June quarter, but the company turned in a solid financial report for the March quarter. Its customer count rose 29%, and the average customer spent over 30% more, despite a broader pullback in business IT investments. In turn, revenue climbed 33% to $482 million, and non-GAAP net income jumped 17% to $0.28 per diluted share.

Going forward, the investment thesis centers on digital transformation: Datadog provides observability and cybersecurity software that helps clients resolve performance issues and security threats across their applications, networks, and infrastructure. Demand for such products should snowball in the years ahead, as IT environments are made more complex by cloud migrations and other digital transformation projects.

Datadog has distinguished itself as a leader in several observability software categories, including application performance monitoring, network monitoring, log monitoring, and AI for IT operations. Industry experts attribute that success to its broad product portfolio, robust innovation pipeline, and data science capabilities. Indeed, Datadog brings together more than two dozen monitoring products on a single platform, and it leans on AI to automate tasks like anomaly detection, incident alerts, and root cause analysis.

Looking ahead, Datadog says its addressable market will reach $62 billion by 2026, and any trend that adds complexity to corporate IT environments should be a tailwind. For instance, Wolfe Research analyst Alex Zukin believes interest in generative AI could help Datadog become "the fastest-growing software company."

Currently, shares trade at 19.8 times sales, a bargain compared to the three-year average of 36.6 times sales. At that price, risk-tolerant investors should feel comfortable buying a few shares of this growth stock.

See the rest here:

Artificial Intelligence (AI) Software Revenue Is Zipping Toward $14 ... - The Motley Fool


AI-enhanced images a threat to democratic processes, experts warn – The Guardian


Call for action comes after Labour MP shared a digitally manipulated image of Rishi Sunak on social media

Experts have warned that action needs to be taken on the use of artificial intelligence-generated or enhanced images in politics after a Labour MP apologised for sharing a manipulated image of Rishi Sunak pouring a pint.

Karl Turner, the MP for Hull East, shared an image on the rebranded Twitter platform, X, showing the prime minister pulling a sub-standard pint at the Great British Beer Festival while a woman looks on with a derisive expression. The image had been manipulated from an original photo in which Sunak appears to have pulled a pub-level pint while the person behind him has a neutral expression.

The image brought criticism from the Conservatives, with the deputy prime minister, Oliver Dowden, calling it "unacceptable".

"I think that the Labour leader should disown this and Labour MPs who have retweeted this or shared this should delete the image, it is clearly misleading," Dowden told LBC on Thursday.

Experts warned the row was an indication of what could happen during what is likely to be a bitterly fought election campaign next year. While it was not clear whether the image of Sunak had been manipulated using an AI tool, such programs have made it easier and quicker to produce convincing fake text, images and audio.

Wendy Hall, a regius professor of computer science at the University of Southampton, said: "I think the use of digital technologies including AI is a threat to our democratic processes. It should be top of the agenda on the AI risk register with two major elections in the UK and the US looming large next year."

Shweta Singh, an assistant professor of information systems and management at the University of Warwick, said: "We need a set of ethical principles which can assure and reassure the users of these new technologies that the news they are reading is trustworthy."

"We need to act on this now, as it is impossible to imagine fair and impartial elections if such regulations don't exist. It's a serious concern and we are running out of time."

Prof Faten Ghosn, the head of the department of government at the University of Essex, said politicians should make it clear to voters when they are using manipulated images. She flagged efforts to regulate the use of AI in politics by the US congresswoman Yvette Clarke, who is proposing a law change that would require political adverts to tell voters if they contain AI-generated material.

"If politicians use AI in any form they need to ensure that it carries some kind of mark that informs the public," said Ghosn.

The warnings contribute to growing political concern over how to regulate AI. Darren Jones, the Labour chair of the business select committee, tweeted on Wednesday: "The real question is: how can anyone know if a photo is a deepfake? I wouldn't criticise @KarlTurnerMP for sharing a photo that looks real to me."

In reply to criticism from the science secretary, Michelle Donelan, he added: "What is your department doing to tackle deepfake photos, especially in advance of the next election?"

The science department is consulting on its AI white paper, which was published earlier this year and advocates general principles to govern technology development, rather than specific curbs or bans on certain products. Since that was published, however, Sunak has shifted his rhetoric on AI from talking mostly about the opportunities it will present to warning that it needs to be developed with guardrails.

Meanwhile, the most powerful AI companies have acknowledged the need for a system to watermark AI-generated content. Last month Amazon, Google, Meta, Microsoft and ChatGPT developer OpenAI agreed to a set of new safeguards in a meeting with Joe Biden that included using watermarking for AI-made visual and audio content.
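The companies have not said which watermarking techniques they will adopt, and production provenance systems are far more robust than this, but a toy least-significant-bit scheme (names and marker string below are invented for illustration) conveys the basic idea of embedding a machine-readable marker that viewers cannot see:

```python
import numpy as np

# Bit pattern of an invented marker string to hide in the image.
MARK = np.unpackbits(np.frombuffer(b"AI-GENERATED", dtype=np.uint8))


def embed(pixels: np.ndarray) -> np.ndarray:
    """Hide the marker in the least significant bits of the first pixels.
    Illustrative only: real provenance watermarks are designed to survive
    compression, cropping, and re-encoding, which this toy scheme does not."""
    flat = pixels.astype(np.uint8).copy().ravel()
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK
    return flat.reshape(pixels.shape)


def detect(pixels: np.ndarray) -> bool:
    flat = pixels.astype(np.uint8).ravel()
    return bool(np.array_equal(flat[: MARK.size] & 1, MARK))


image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
assert detect(embed(image))
```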

In June Microsoft's president, Brad Smith, warned that governments had until the beginning of next year to tackle the issue of AI-generated disinformation. "We do need to sort this out, I would say by the beginning of the year, if we are going to protect our elections in 2024," he said.


View original post here:

AI-enhanced images a threat to democratic processes, experts warn - The Guardian


AI companies aren't afraid of regulation: we want it to be international and inclusive – The Guardian

Opinion

If our industry is to avoid superficial ethics-washing, historically excluded communities must be brought into the conversation

AI is advancing at a rapid pace, bringing with it potentially transformative benefits for society. With discoveries such as AlphaFold, for example, we're starting to improve our understanding of some long-neglected diseases, with 200m protein structures made available at once, a feat that previously would have required four years of doctorate-level research for each protein and prohibitively expensive equipment. If developed responsibly, AI can be a powerful tool to help us deliver a better, more equitable future.

However, AI also presents challenges. From bias in machine learning used for sentencing algorithms to misinformation, the irresponsible development and deployment of AI systems pose the risk of great harm. How can we navigate these incredibly complex issues to ensure AI technology serves our society and not the other way around?

First, it requires all those involved in building AI to adopt and adhere to principles that prioritise safety while also pushing the frontiers of innovation. But it also requires that we build new institutions with the expertise and authority to responsibly steward the development of this technology.

The technology sector often likes straightforward solutions, and institution-building may seem like one of the hardest and most nebulous paths to go down. But if our industry is to avoid superficial ethics-washing, we need concrete solutions that engage with the reality of the problems we face and bring historically excluded communities into the conversation.

To ensure the market seeds responsible innovation, we need the labs building innovative AI systems to establish proper checks and balances to inform their decision-making. When large language models first burst on to the scene, it was Google DeepMind's institutional review committee, an interdisciplinary panel of internal experts tasked with pioneering responsibly, that decided to delay the release of our new paper until we could pair it with a taxonomy of risks that should be used to assess models, despite industry-wide pressure to be on top of the latest developments.

These same principles should extend to investors funding newer entrants. Instead of bankrolling companies that prioritise novelty over safety and ethics, venture capitalists (VCs) and others need to incentivise bold and responsible product development. For example, the VC firm Atomico, at which I am an angel investor, insists on including diversity, equality and inclusion, and environmental, social and governance requirements in the term sheets for every investment it makes. These are the types of behaviours we want those leading the field to set.

We are also starting to see convergence across the industry around important practices such as impact assessments and involving diverse communities in development, evaluation and testing. Of course, there is still a long way to go. As a woman of colour, I'm acutely aware of what this means for a sector where people like me are underrepresented. But we can learn from the cybersecurity community.

Decades ago they started offering bug bounties, a financial reward to researchers who could identify a vulnerability or bug in a product. Once reported, the companies had an agreed time period during which they would address the bug and then publicly disclose it, crediting the bounty hunters. Over time, this has developed into an industry norm called responsible disclosure. AI labs are now borrowing from this playbook to tackle the issue of bias in datasets and model outputs.

Last, advancements in AI present a challenge to multinational governance. Guidance at the local level is one part of the equation, but so too is international policy alignment, given the opportunities and risks of AI won't be limited to any one country. Proliferation and misuse of AI has woken everyone up to the fact that global coordination will play a crucial role in preventing harm and ensuring common accountability.

Laws are only effective, however, if they are future-proof. That's why it's crucial for regulators to consider not only how to regulate chatbots today, but also how to foster an ecosystem where innovation and scientific acceleration can benefit people, providing outcome-driven frameworks for tech companies to work within.

Unlike nuclear power, AI is more general and broadly applicable than other technologies, so building institutions will require access to a broad set of skills, diversity of background and new forms of collaboration including scientific expertise, socio-technical knowledge, and multinational public-private partnerships. The recent Atlantic declaration between the UK and US is a promising start toward ensuring that standards in the industry have a chance of scaling into multinational law.

In a world that is politically trending toward nostalgia and isolationism, multilayered approaches to good governance that involve government, tech companies and civil society will never be the headline-grabbing or popular path to solving the challenges of AI. But the hard, unglamorous work of building institutions is critical for enabling technologists to build toward a better future together.


Go here to read the rest:

AI companies aren't afraid of regulation: we want it to be international and inclusive - The Guardian


Google's AI Search Generative Experience is getting video and … – The Verge

Google's AI-powered Search Generative Experience is getting a big new feature: images and video. If you've enabled the AI-based SGE feature in Search Labs, you'll now start to see more multimedia in the colorful summary box at the top of your search results. Google's also working on making that summary box appear faster and adding more context to the links it puts in the box.

SGE may still be in the experiment phase, but it's very clearly the future of Google Search. "It really gives us a chance to, now, not always be constrained in the way search was working before," CEO Sundar Pichai said on Alphabet's most recent earnings call. "It allows us to think outside the box." He then said that, over time, "this will just be how search works."

The SGE takeover raises huge, thorny questions about the very future of the web, but it's also just a tricky product to get right. Google is no longer simply trying to find good links for you every time you search; it's trying to synthesize and generate relevant, true, helpful information. Video in particular could go a long way here: Google has integrated YouTube more and more into search results over the years, linking to a specific chapter or moment inside a video that might help you with that "why is my dryer making that noise" query.

You can already see the publish dates and images starting to show up in SGE summaries. Image: Google / David Pierce

Surfacing and contextualizing links is also still going to be crucial for Google if SGE is going to work. It's now going to display publish dates next to the three articles in the summary box in an effort to help you "better understand how recent the information is from these web pages," Google said in a blog post announcing the new features. 9to5Google also noticed Google experimenting with adding in-line links to the AI summary, though so far, that appears to have just been a test. Finding the right balance between giving you the information you were looking for and helping you find it yourself, and all the implications of both those outcomes, is forever one of the hardest problems within Google Search.

Making SGE faster is also going to take Google a while. All these large language model-based tools, from SGE and Bing to ChatGPT and Bard, take a few seconds to generate answers to your questions, and in the world of search, every millisecond matters. In June, Google said it had cut the loading time in half, though I've been using SGE for a few months, and I can't say I've noticed a big difference before and after. SGE is still too slow. It's always the last thing to load on the page, by a wide margin.

Still, I've been consistently impressed with how useful SGE is in my searches. It's particularly handy for all the "where should I go" and "what should I watch" types of questions, where there's no right answer but I'm just looking for ideas and options. Armed with more sources, more media, and more context, SGE might start to usurp the 10 blue links even further.

Here is the original post:

Google's AI Search Generative Experience is getting video and ... - The Verge


Meta Will Let You Chat With AI Bots With Personalities, Report Says – CNET

Meta, the parent company of Facebook, Instagram and now Threads, plans to launch AI chatbots with a variety of personalities, the Financial Times reported Tuesday. The chatbots, reportedly called personas, could expand the company's social networks with a range of new online tools and entertainment options.

The company could announce the chatbots as soon as September, the report said. Meta will offer the chatbots to improve search and recommendations, like travel advice in the style of a surfer, and to give people an online personality that's fun to "play with," the report said. One such AI persona the company tried building is a digital incarnation of President Abraham Lincoln.

If successful, the AI chatbots could help keep the 4 billion people who use Meta services each month more engaged, addressing a major Meta challenge as growth becomes harder and rivals such as TikTok draw people's attention elsewhere. Meta consolidated its artificial intelligence efforts earlier this year to "turbocharge" its work and build better "creative and expressive tools," Chief Executive Mark Zuckerberg said at the time.

AI chatbots also could provide the company with a new wealth of personal information useful for targeting advertisements, Meta's main revenue source. Search engines already craft ads based on the information you type into them, but AI chatbots could capture a new dimension of people's interests and attributes for more detailed profiling. Privacy is one of Meta's biggest challenges, and regulators already have begun eyeing AI warily.

Meta declined to comment.

AI chatbots, exemplified by OpenAI's ChatGPT, have become vastly more useful and engaging. Their use of large language models trained on vast swaths of the internet gives them a vastly greater ability to understand human text and offer helpful responses to our questions and conversation.

Chatbots are not without risks. They're prone to fabricating plausible but bogus responses, a phenomenon called hallucination, and can have a hard time with facts. LLM creators often hire "red teams" to try to discover and thwart potential abuses, like people using LLMs for sexual or violent purposes. But the area of AI security and abuse is new, and researchers are finding new ways to evade LLM restrictions as they dig into the area.

Many ChatGPT rivals are available already, including Anthropic's Claude 2, Microsoft's Bing and Google's Bard. Such tools are often available for use by other software and services, letting direct Meta rivals like Snap offer chatbots of their own. So getting ahead simply by offering an AI chatbot doesn't guarantee success.

Facebook has billions of users, though, and deep AI expertise. In July it released its own Llama 2 large language model.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

See more here:

Meta Will Let You Chat With AI Bots With Personalities, Report Says - CNET


AI for all? Google ups the ante with free UK training courses for firms – The Guardian


US tech giant starts charm offensive on artificial intelligence with basic courses to help firms understand and exploit emerging phenomenon

A larger-than-life Michelle Donelan beams on to a screen in Google's London headquarters. The UK science and innovation secretary is appearing via video to praise the US tech behemoth for its plans to equip workers and bosses with basic skills in artificial intelligence (AI).

"The recent explosion in the use of AI tools like ChatGPT and Google's Bard show that we are on the cusp of a new and exciting era in artificial intelligence, and it is one that will dramatically improve people's lives," says Donelan. Google's ambitious training programme is "so important and exceptional in its breadth," she gushes in a five-minute video, filmed in her ministerial office.

Welcome to the AI arms race, where nations are bending over backwards to attract cash and research into the nascent technology. Google's move is a vote of confidence in the UK, supporting the government's aim to make the UK both the intellectual home and the geographical home of AI, says Donelan.

Few countries have been more accommodating than the UK, with Donelan's tone underlining the red-carpet treatment given by Rishi Sunak's government to tech firms and his desire to lure AI companies in particular.

Google's educational courses cover the basics of AI, which it says will help individuals, businesses and organisations to gain skills in the emerging technology.

The tuition consists of 10 modules on a variety of topics, in the form of 45-minute presentations, two of which, covering growing productivity and understanding machine learning, are already available.

The courses are rudimentary: they cover the basics of AI and Google says they do not require any prior technological knowledge.

About 50 people, including small business owners, attended the first course at Google's King's Cross offices in London last week, just across the road from where its monolithic £1bn new UK HQ, complete with rooftop exercise trail and pool, is being built.

The UK, home to Google's AI research subsidiary DeepMind, is the launchpad for its new training, but the company said it expected to roll it out to other countries in the future. Co-founded in 2011 by Demis Hassabis, a child chess prodigy, DeepMind was sold to Google for £400m in 2014 and now leads Google's AI development under the new Google DeepMind title. It has increasingly embedded itself into the machinery of the state, from controversially partnering with the NHS to try to build apps to help doctors monitor kidney infections, to Hassabis advising the government during the Covid-19 pandemic.

The first sessions are the latest addition to the digital skills training offered by the company in the UK since 2015, accessed by 1m people.

"We see a cry for more training in the AI space specifically," Debbie Weinstein, the managing director of Google UK and Ireland, tells the Guardian.

"We are hearing this need from people and at the same time we hear from businesses that they are looking for people with digital skills that can help them."

Google's pitch is that AI could increase productivity for businesses, including by taking care of time-consuming administrative tasks. It cites a recent economic impact report, compiled for Google by the market research firm Public First, which estimated that AI could add £400bn in economic value to the UK by 2030, through harnessing innovation powered by AI.

The company said the report also highlighted a lack of tech skills in the UK, which could hold back growing businesses.

But there is little mention of any of the feared downsides of AI, including the impact on huge swathes of the economy by making roles redundant. Those attending the inaugural presentations appear more keen to know basics, such as whether AI can help with tasks including responding to emails and booking appointments.

The charm offensive by Google may also highlight deep unease about the breakneck pace of AI expansion and its potential to completely upend the world of work, and the Silicon Valley companys nervousness over any backlash.

Google and other tech firms, including Microsoft, Amazon and Meta, are working feverishly to develop AI tools, all hoping to steal a march on rivals in what some believe is a winner-takes-all competition with unlimited earnings potential.

Google launched its Bard chatbot in the US and UK in March, its answer to OpenAI's ChatGPT and Microsoft's Bing Chat, a service which is capable of answering detailed questions, giving creative answers and engaging in conversations. Facebook's parent company Meta has recently released an open-source version of an AI model, Llama 2.

A recent report by the Organisation for Economic Co-operation and Development (OECD) warned that AI-driven automation could trigger mass job losses across skilled professions such as law, medicine and finance, with highly skilled jobs facing the biggest threat of upheaval.

Others are concerned that profit-maximising private tech companies are expanding apace in a fledgling sector where there is currently no regulation, with echoes of the early days of the internet, when the land grab by tech companies left regulators and ministers trailing in their wake, eventually forcing a belated reckoning for social media giants.

Dr Andrew Rogoyski, of the Institute for People-Centred Artificial Intelligence at the University of Surrey, says Google's training drive is unlikely to be motivated by altruism. "Making free training available makes absolute sense," he says. "If you use one company's training material, you're more likely to use their AI platform."

Rogoyski adds that tech firms of all sizes are offering educational courses.

"I think a lot of businesses are struggling at the moment with the feeling that they should be doing something with AI and not knowing where to start," he says.

"I would like to see more warnings, the things that businesses should be aware of when looking at AI, [that] it's not just about technical and coding skills to knock something up that you can push out to your website."

He also wants companies to be aware of potential pitfalls.

"There are much more impactful issues that people need to think about such as privacy, security, data bias, all of the concerns and limitations that you might feel are being glossed over if [tech firms] are pushing us to try AI and start tinkering."

Politicians are waking up to the risks of AI. Labour's digital spokesperson Lucy Powell recently said the UK should bar technology developers from working on advanced AI tools unless they have a licence to do so. Powell suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arm's-length governmental bodies. But both main parties are captivated by the potential prize: Sir Keir Starmer recently held a shadow cabinet meeting at Google's London office, and the Labour leader and Sunak focused on AI in their recent London Tech Week speeches.

Globally, governments, including the UK's, are working out how they can reap the benefits of tech firms like Google upskilling the workforce, at the same time as they are hoping to rein in those very firms.

Sunak has changed his tone on AI in the past couple of months, and is now planning to host a global summit on safety in the nascent technology, as he aims to position the UK as the international hub for its regulation.

The sudden adoption of AI chatbots and other tools are worrying managers in the UK, leaving them fearful about potential job losses triggered by the technology, as well as the associated risks to security and privacy.

Two in five managers (43%) told the Chartered Management Institute (CMI) they were concerned that jobs in their organisations will be at risk from AI technologies, while fewer than one in 10 (7%) managers said employees in their organisation were adequately trained on AI, even on high-profile tools like ChatGPT.

Anthony Painter, the CMI's director of policy, who met a group of Google executives and small business representatives on the sidelines of the training launch, says that AI brings huge opportunity, but also huge risks, and we have to take time to get that right.

"The practical skills necessary to adopt AI aren't where they need to be [among businesses]," he says. "But we don't have the regulatory structure to do that effectively, and it might not be bad to have a bit of a go-slow while we think through regulation, ethics and skills in practical terms."


Read the original:

AI for all? Google ups the ante with free UK training courses for firms - The Guardian
