AI and Machine Learning set to drive India’s $8 billion digital advertising industry, say experts | Mint – Mint

Industry experts predict that artificial intelligence and operational machine learning will have a transformative effect on communication sectors such as advertising, public relations, and content creation, reported PTI.

As per the report, these technologies, including generative AI and machine learning, are expected to significantly impact the Indian digital advertising industry, which is currently valued at approximately USD 8 billion (about ₹66,142 crore), as stated by Siddharth Jhawar, the General Manager of Moloco in India.

"Advertisers have long pondered the effectiveness of their ads, many relying on intuition alone. Operational machine learning, however, can revolutionise the USD 8-billion Indian digital advertising industry as it can run thousands of mini experiments to decide which ad creative appeals to which type of users," said Jhawar.

"Advertisers have long pondered the effectiveness of their ads, many relying on intuition alone. Operational machine learning, however, can revolutionise the USD 8-billion Indian digital advertising industry as it can run thousands of mini experiments to decide which ad creative appeals to which type of users," said Jhawar.

Given the diverse linguistic and cultural landscape of India, this holds particular significance, he noted.

Notably, Google has introduced automated advertising campaign workflows and automatic ad creation within Google Ads, powered by large language models (LLMs) and generative AI.

According to Dan Taylor, the Vice President of Global Ads at Google, leading companies such as Myntra, Samsung, HDFC, and Tata AIG have witnessed growth rates of up to 18 per cent through the use of Performance Max. This advertising tool incorporates Google's AI technologies for bidding, budget optimization, audience targeting, creative development, and attribution.

Jhawar also emphasized that companies have the opportunity to leverage deep neural networks, which fuel machine learning, to boost revenue using first-party data. Importantly, this can be achieved while upholding data privacy and delivering personalized experiences to customers.

Wizikey, a communication software-as-a-service provider, has introduced Imara, an AI Avatar designed for public relations and communications. Imara utilizes generative AI to analyze news data and derive valuable insights. This development is in line with the growing trend of brands incorporating generative AI in their operations, which has also sparked interest and discussions within the workforce.

Archana Jain, the Founder and Managing Director of PR Pundit, an integrated communications consultancy firm, emphasized that public relations must embrace the ongoing transformation fueled by AI. She believes that the PR industry is on the brink of experiencing further disruptions in the upcoming years.

Jain highlighted the benefits of AI in PR, stating that professionals can now analyze vast amounts of data rapidly and efficiently. This ability enables them to make well-informed decisions and develop more effective PR strategies. Additionally, AI facilitates the creation of innovative content tailored to digitally-focused target audiences. Even in fundamental tasks like media coverage tracking, AI plays a pivotal role in enhancing overall efficiency within the industry.

Google has reportedly been testing a product designed to generate news articles by processing ingested information. Positioned as a tool to assist journalists, the product was showcased to media organizations, including The New York Times, The Washington Post, and News Corp, which owns The Wall Street Journal, according to the report.

According to a McKinsey study, the implementation of AI has the most significant reported revenue impact on marketing and sales.

See the rest here:

AI and Machine Learning set to drive India's $8 billion digital advertising industry, say experts | Mint - Mint

Exploring the effects of feeding emotional stimuli to large language models – Tech Xplore

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

by Ingrid Fadelli, Tech Xplore

Since the advent of OpenAI's ChatGPT, large language models (LLMs) have become significantly popular. These models, trained on vast amounts of data, can answer written user queries in strikingly human-like ways, rapidly generating definitions to specific terms, text summaries, context-specific suggestions, diet plans, and much more.

While these models have been found to perform remarkably well in many domains, their response to emotional stimuli remains poorly investigated. Researchers at Microsoft and CAS Institute of Software recently devised an approach that could improve interactions between LLMs and human users, allowing them to respond to emotion-laced, psychology-based prompts fed to them by human users.

"LLMs have achieved significant performance in many fields such as reasoning, language understanding, and math problem-solving, and are regarded as a crucial step to artificial general intelligence (AGI)," Cheng Li, Jindong Wang and their colleagues wrote in their paper, prepublished on arXiv. "However, the sensitivity of LLMs to prompts remains a major bottleneck for their daily adoption. In this paper, we take inspiration from psychology and propose EmotionPrompt to explore emotional intelligence to enhance the performance of LLMs."

The approach devised by Li, Wang and their colleagues, dubbed EmotionPrompt, draws inspiration from well-established knowledge rooted in psychology and the social sciences. For instance, past psychology studies found that words of encouragement and other emotional stimuli could have positive effects on different areas of a person's life, for instance improving the grades of students, promoting healthier lifestyle choices, and so on.

To see whether emotional prompts could also affect the performance of LLMs, the researchers came up with 11 emotional sentences that could be added to typical prompts fed to the models. These were sentences such as "this is very important for my career," "you'd better be sure," "take pride in your work and give it your best", and "embrace challenges as opportunities for growth."

These sentences were derived from existing psychology literature, such as the social identity theory introduced by Henri Tajfel and John Turner in the 1970s, social cognition theory, and the cognitive emotion regulation theory. The researchers then added these sentences to prompts sent to different LLMs, which asked the models to complete different language tasks.
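
In code, the core of the approach is simple: append one of the emotional stimulus sentences to an ordinary task prompt before sending it to the model. The sketch below uses the sentences quoted above; the `query_llm` function is a placeholder for whichever model API is being tested.

```python
EMOTIONAL_STIMULI = [
    "This is very important for my career.",
    "You'd better be sure.",
    "Take pride in your work and give it your best.",
    "Embrace challenges as opportunities for growth.",
]

def emotion_prompt(task_prompt: str, stimulus_index: int = 0) -> str:
    """Return the original task prompt with an emotional stimulus appended."""
    return f"{task_prompt} {EMOTIONAL_STIMULI[stimulus_index]}"

def query_llm(prompt: str) -> str:
    """Placeholder: swap in a call to ChatGPT, Vicuna-13b, Bloom, Flan-T5, etc."""
    raise NotImplementedError

baseline = "Determine whether the following statement is true or false: ..."
print(emotion_prompt(baseline))
# -> "Determine whether ... true or false: ... This is very important for my career."
```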

So far, they tested their approach on four different models: ChatGPT, Vicuna-13b, Bloom and Flan-T5-Large. Overall, they found that it improved the performance of these models on eight different tasks, increasing the accuracy of their responses by more than 10% on over half of these tasks.

"EmotionPrompt operates on a remarkably straightforward principle: the incorporation of emotional stimulus into prompts," Li, Wang and their colleagues wrote. "Experimental results demonstrate that our EmotionPrompt, using the same single prompt templates, significantly outperforms original zero-shot prompt and Zero-shot-CoT on eight tasks with diverse models: ChatGPT, Vicuna-13b, Bloom, and T5. Further, EmotionPrompt was observed to improve both truthfulness and informativeness."

The new approach devised by this team of researchers could soon inspire additional studies aimed at improving human-LLM interactions by introducing emotional/psychology-based prompts. While the results gathered so far are promising, further studies will be needed to validate its effectiveness and generalizability.

"This work has several limitations," the researchers conclude in their paper. "First, we only experiment with four LLMs and conduct experiments in several tasks with few test examples, which are limited. Thus, our conclusions about emotion stimulus can only work on our experiments and any LLMs and datasets out of the scope of this paper might not work with emotion stimulus. Second, the emotional stimulus proposed in this paper may not be general to other tasks, and researchers may propose other useful replacements for your own tasks."

More information: Cheng Li et al, EmotionPrompt: Leveraging Psychology for Large Language Models Enhancement via Emotional Stimulus, arXiv (2023). DOI: 10.48550/arxiv.2307.11760

Journal information: arXiv

© 2023 Science X Network

Link:

Exploring the effects of feeding emotional stimuli to large language models - Tech Xplore

Convergence of Brain-Inspired AI and AGI: Exploring the Path to Intelligent Synergy – EIN News

FAYETTEVILLE, GA, USA, August 3, 2023 /EINPresswire.com/ -- With over 86 billion neurons, each having the ability to form up to 10,000 synapses with other neurons, the human brain gives rise to an exceptionally complex network of connections that underlie the proliferation of intelligence.

There has been a long-standing pursuit of humanity centered around artificial general intelligence (AGI) systems capable of achieving human-level intelligence or even surpassing it, enabling AGI to undertake a wide range of intellectual tasks, including reasoning, problem-solving and creativity.

Brain-inspired artificial intelligence is a field that has emerged from this endeavor, integrating knowledge from neuroscience, psychology, and computer science to create AI systems that are not only more efficient but also more powerful. In a new study published in the KeAi journal Meta-Radiology, a team of researchers examined the core elements shared between human intelligence and AGI, with particular emphasis on scale, multimodality, alignment, and reasoning.

"Notably, recent advancements in large language models (LLMs) have showcased impressive few-shot and zero-shot capabilities, mimicking human-like rapid learning by capitalizing on existing knowledge," shared Lin Zhao, co-first author of the study. "In particular, in-context learning and prompt tuning play pivotal roles in presenting LLMs with exemplars to adeptly tackle novel challenges."

Moreover, the study delved into the evolutionary trajectory of AGI systems, examining both algorithmic and infrastructural perspectives. Through a comprehensive analysis of the limitations and future prospects of AGI, the researchers gained invaluable insights into the potential advancements that lie ahead within the field.

"Our study highlights the significance of investigating the human brain and creating AI systems that emulate its structure and functioning, bringing us closer to the ambitious objective of developing AGI that rivals human intelligence," said corresponding author Tianming Liu. "AGI, in turn, has the potential to enhance human intelligence and deepen our understanding of cognition. As we progress in both realms of human intelligence and AGI, they synergize to unlock new possibilities."

###

References

Journal: Meta-Radiology

DOI: 10.1016/j.metrad.2023.100005

Original URL: https://doi.org/10.1016/j.metrad.2023.100005

Media contact: Wendy Chen, TranSpread, +1 865-405-5638

View original post here:

Convergence of Brain-Inspired AI and AGI: Exploring the Path to Intelligent Synergy - EIN News

The Threat Of Climate Misinformation Propagated by Generative AI … – Unite.AI

Artificial intelligence (AI) has transformed how we access and distribute information. In particular, Generative AI (GAI) offers unprecedented opportunities for growth. But, it also poses significant challenges, notably in climate change discourse, especially climate misinformation.

In 2022, research showed that around 60 Twitter accounts were used to make 22,000 tweets and spread false or misleading information about climate change.

Climate misinformation means inaccurate or deceptive content related to climate science and environmental issues. Propagated through various channels, it distorts climate change discourse and impedes evidence-based decision-making.

As the urgency to address climate change intensifies, misinformation propagated by AI presents a formidable obstacle to achieving collective climate action.

False or misleading information about climate change and its impacts is often disseminated to sow doubt and confusion. This propagation of inaccurate content hinders effective climate action and public understanding.

In an era where information travels instantaneously through digital platforms, climate misinformation has found fertile ground to propagate and create confusion among the general public.

There are three main types of climate misinformation:

In 2022, several disturbing attempts to spread climate misinformation came to light, demonstrating the extent of the challenge. These efforts included lobbying campaigns by fossil fuel companies to influence policymakers and deceive the public.

Additionally, petrochemical magnates funded climate change denialist think tanks to disseminate false information. Also, corporate climate skeptic campaigns thrived on social media platforms, exploiting Twitter ad campaigns to spread misinformation rapidly.

These manipulative campaigns seek to undermine public trust in climate science, discourage action, and hinder meaningful progress in tackling climate change.

Generative AI technology, particularly deep learning models like Generative Adversarial Networks (GANs) and transformers, can produce highly realistic and plausible content, including text, images, audio, and videos. This advancement in AI technology has opened the door for the rapid dissemination of climate misinformation in various ways.

Generative AI can make up stories about climate change that are not true. Although the 5.18 billion people who use social media today are more aware of current world issues, they are 3% less likely to spot false tweets generated by AI than those written by humans.

Some of the ways generative AI can promote climate misinformation:

Generative AI tools that produce realistic synthetic content are becoming increasingly accessible through public APIs and open-source communities. This ease of access allows for the deliberate generation of false information, including text and photo-realistic fake images, contributing to the spread of climate misinformation.

Generative AI enables the creation of longer, authoritative-sounding articles, blog posts, and news stories, often replicating the style of reputable sources. This sophistication can deceive and mislead the audience, making it difficult to distinguish AI-generated misinformation from genuine content.

Large language models (LLMs) integrated into AI agents can engage in elaborate conversations with humans, employing persuasive arguments to influence public opinion. Generative AI's ability to generate personalized content is undetectable by current bot detection tools. Moreover, GAI bots can amplify disinformation efforts and enable small groups to appear larger online.

Hence, it is crucial to implement robust fact-checking mechanisms, media literacy programs, and close monitoring of digital platforms to combat the dissemination of AI-propagated climate misinformation effectively. Strengthening information integrity and critical thinking skills empowers individuals to navigate the digital landscape and make informed decisions amidst the rising tide of climate misinformation.

Though AI technology has facilitated the rapid spread of climate misinformation, it can also be part of the solution. AI-driven algorithms can identify patterns unique to AI-generated content, enabling early detection and intervention.
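
As a rough illustration of what such detection algorithms do, the sketch below trains a small text classifier on a handful of hypothetical labelled examples; real detectors are trained on far larger corpora and richer signals, but the idea of learning patterns that separate AI-generated from human-written text is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples; a real system would use thousands of posts
texts = [
    "Our field team measured 40 metres of glacier retreat this season.",
    "Tide gauge records show sea levels rising steadily since 1993.",
    "Scientists now agree the climate has always changed, so emissions are irrelevant.",
    "There is no consensus that CO2 affects temperature in any meaningful way.",
]
labels = ["human", "human", "ai_generated", "ai_generated"]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["Experts say emissions have no proven link to warming."]))
```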

However, we are still in the early stages of building robust AI detection systems. Hence, humans can take the following steps to minimize the risk of climate misinformation:

In the battle against AI-propagated climate misinformation, upholding ethical principles in AI development and responsible usage is paramount. By prioritizing transparency, fairness, and accountability, we can ensure that AI technologies serve the public good and contribute positively to our understanding of climate change.

To learn more about generative AI or AI-related content, visit unite.ai.

See the original post here:

The Threat Of Climate Misinformation Propagated by Generative AI ... - Unite.AI

TinyML: the mini-me of AI – Gilbert + Tobin

TinyML means machine learning (ML) on tiny, low powered, low cost computers, giving them the capability to perform on-device (on-board) analytics of vision, audio and speech.

TinyML would upend the current Internet of Things (IoT) architecture supporting the swarms of relatively dumb sensor devices being embedded in our daily lives. We would go from a data-enabled physical environment to a smart physical environment: a fridge that can think, lurking in your kitchen!

Traditional IoT systems utilise large fleets of edge devices deployed in the physical environment, such as soil sensors, to gather data which is transmitted back to a cloud-based CPU (now AI-enabled) to process: IoT devices are literally the eyes, ears and touch of AI in the physical world.

In this traditional IoT architecture, the IoT edge devices, because they need to be low cost, robust and long-lived, have low computing power and memory, and hence low power requirements (and limited demands on battery life). For example, the majority of IoT edge devices operate at clock speeds (the higher the clock speed, the more processing power) between 10 and 1,000 MHz, which will not support complex learning models at the edge, and have less than 1 MB of onboard flash memory.

This traditional IoT architecture has its drawbacks:

Hence, the efforts to integrate ML capabilities into the IoT edge devices themselves: intelligence onboard the sensor itself. One commentator compares CPU-based ML under the traditional IoT architecture and TinyML at the network edge as follows:

As per the battery life, TinyML outperforms ordinary ML techniques as the models can run on embedded devices. Cost efficiency is better in TinyML as only one microcontroller is required compared to a PC. Scalability is higher in ordinary ML applications as more computing power is available. Robustness is higher on TinyML deployments as in the case when a node is removed, all information remains intact while on the ML case is server-side based. The deployment is better on ML models as there are many paradigms available online and more widely used. The performance metric is higher on ML cases as TinyML technology has emerged and there are not much models. Lastly, security is higher on TinyML deployments as the information remains within the embedded device and there are no exchanges between third parties.

TinyML would not necessarily operate as a substitute for cloud-based services but would be part of a more decentralised, and robust, ML system. For example, a body sensor may have enough ML capability to work out whether it can diagnose and solve a problem from the patient data it collects, or whether the issue is "above my pay grade" and should be escalated to the cloud-based AI.
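
A minimal sketch of that "escalate when unsure" pattern, assuming a TensorFlow Lite model already deployed on the sensor (the model file, class labels and threshold below are hypothetical):

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # tensorflow.lite works on larger devices

interpreter = tflite.Interpreter(model_path="body_sensor_model.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

CONFIDENCE_THRESHOLD = 0.85  # below this, the device defers to the cloud model

def classify_or_escalate(sample: np.ndarray) -> str:
    """Run on-device inference; escalate to the cloud when the model is unsure."""
    interpreter.set_tensor(inp["index"], sample.astype(np.float32))
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return f"on-device diagnosis: class {int(probs.argmax())}"
    return "escalate_to_cloud"  # the issue is 'above my pay grade'
```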

First, patchy mobile coverage is holding back the deployment of digital agriculture. PlantVillage, an open-source project managed by Penn State University, has created Nuru, an artificial intelligence-based program that can function on mobile phones without internet connectivity and is deploying it in Africa to help farmers identify and respond to hazards for cassava crops, which is a key food source for hundreds of millions.

Second, data processing on-board IoT devices will enable uses which otherwise would be impractical, if not dangerous, because of the delay (latency) if the IoT device had to transmit the data, the cloud process the answer and transmit it back to the IoT device. Use cases under development include:

Third, TinyML will allow close-at-hand monitoring and maintenance work on in-field equipment. Ping, an Australian company, has developed a TinyML device that continuously monitors the acoustic signature of wind turbine blades to detect and notify any change or damage, using advanced acoustic analysis.

Lastly, TinyML can be inserted into manufacturing processes to manage and adjust machinery. Perhaps less consequentially for humanity, TinyML can result in better roasted coffee. It is critical to identify the first crack in any beans since the time spent roasting after the first crack has a major impact on the quality and flavour of the processed beans. Two Norwegian businesses, Roest and Soundsensing, have added a microcontroller with TinyML in their bean roasting equipment to more quickly identify that first crack.

The fundamental problem is that ML/AI and IoT device design have been heading in opposite directions: algorithms have become dependent on vastly increasing data inputs, while IoT devices have been designed to consume ever lower levels of energy (thereby reinforcing their limited computing capacity). As one commentator has said:

Let's take an example to understand this better. GPT-3 has 175 billion parameters and got trained on 570 GB of text. On the other hand, we have Google Assistant that can detect speech with a model that's only 14 KB. So, that can fit on something as small as a microprocessor, but GPT-3 cannot.

TinyML depends on the intersection of three trends:

On the first trend, last year we discussed the development of IoT edge devices which could supplement or replace battery power through two techniques which harvested energy from thin air, as it were:

On the second trend, there have been steady, large gains in the performance of typical microcontroller processors. In 2004, Arm introduced the Cortex-M 32-bit processor family, which helped create a powerful new generation of low-cost microcontrollers. The Cortex-M4 processor introduced hardware floating-point support and the ability to perform multiple integer calculations, which has made it easier to perform the complex calculations on microcontrollers that are essential for machine learning algorithms. A recent development is the introduction of the Ethos Neural Processing Unit (NPU), which allows algorithms to run on small microcontrollers with around a 480-times performance boost.

The third development, scaling down algorithms to operate within the power and memory constraints of IoT edge devices, is where most of the focus of TinyML research currently is. There are a number of different approaches.

First, federated learning involves creating an ML model from decentralized data and in a distributed way, as follows:

a variety of edge devices collaborate so as to build a global model using only local copies of the data and then each device downloads a copy of the model and updates the local parameters. Finally, the central server aggregates all model updates and proceeds with the training and evaluation without exchanging data to other parties.
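
A toy version of that federated-averaging loop, written in plain NumPy rather than a production framework, shows the key property: each simulated device trains on its own local data and only model weights, never raw data, reach the server.

```python
import numpy as np

def local_update(weights, local_x, local_y, lr=0.1, epochs=5):
    """One device's training pass: linear regression fitted by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * local_x.T @ (local_x @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """The server averages the devices' updated weights; raw data never leaves them."""
    return np.mean([local_update(global_w, x, y) for x, y in devices], axis=0)

# Three hypothetical edge devices, each holding its own private sensor readings
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    x = rng.normal(size=(50, 2))
    devices.append((x, x @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(20):
    global_w = federated_round(global_w, devices)
print(global_w)  # converges toward [2.0, -1.0] without any device sharing its data
```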

Second, transfer learning is where a machine learning model developed for one task is reused as the starting point for a model on a second task. By drawing on the pre-trained or existing learned experience, the ML model can be adapted to the new task with relatively less data, training effort and therefore computing power. A leading transfer learning tool is TensorFlow Lite for Microcontrollers, Google's open-source program, which it describes as follows:

The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.

Models in the TensorFlow Lite framework can use just a few lines of code to be adapted to a new task, and then can be deployed onboard IoT edge devices.
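
As a rough sketch of that workflow (not Google's own example code), the snippet below reuses a pre-trained Keras image model as a frozen feature extractor, adds a small new classification head, and converts the result to TensorFlow Lite; a genuine microcontroller target would need a much smaller base model and full integer quantization.

```python
import tensorflow as tf

# Frozen pre-trained base: keep the learned feature maps, train only the new head
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. three new classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_dataset, epochs=5)  # new_task_dataset is a placeholder

# Convert with default optimizations so the model can ship to an edge device
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("new_task_model.tflite", "wb") as f:
    f.write(converter.convert())
```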

TinyML could supercharge IoT, allowing IoT devices to make intelligent decisions in the field, literally. However, there is still the challenge of squeezing machine intelligence onto the head of a pin, literally.

Industry standards, as in IoT generally, are desperately needed to rein in the often chaotic heterogeneity of software, devices, and power requirements. An industry association has been set up to facilitate the emergence of standards.

Read more: TinyML: Tools, Applications, Challenges, and Future Research Directions

View original post here:

TinyML: the mini-me of AI - Gilbert + Tobin

Microsoft Partners: This Is Your Copilot Speaking – CRN

Software News | Wade Tyler Millward | August 07, 2023, 10:00 AM EDT

Microsoft CEO Satya Nadella has called the spread of generative AI a "collective mission" for the company and its partners, a mission that in part will be fueled by Microsoft 365 Copilot. Here's how and where a number of partners are placing their bets.

Vineet Arora's belief in the promise of Microsoft Copilot and the vendor's other generative artificial intelligence technology runs so deep he's betting his job on it.

The CTO of WinWire Technologies -- a Santa Clara, Calif.-based 2023 Microsoft Partner of the Year finalist and No. 233 on CRN's 2023 Solution Provider 500 -- has transitioned his role to focus on technology from Microsoft and Microsoft-backed OpenAI that can quickly create content, analyses and summaries based on user prompts.

Arora heads a small team of WinWire employees -- data scientists, data architects and data engineers found through an internal hackathon earlier in the year -- focused on partnering with Microsoft to provide AI technology to customers. WinWire has more than 1,000 employees worldwide.

Since April, he's conducted a seven-city road show with Microsoft technologists to demonstrate what generative AI can do in health care and has done more than 24 envisioning sessions with customers around the technology. WinWire is already piloting generative AI solutions for healthcare and life sciences customers -- all of this before some of Microsoft's most impressive generative AI tools become generally available.

And he even publishes two to three articles a day to a Teams group chat called "All Things OpenAI."

"The scale of the investment that we have planned is [incomparable] to anything else that we have done in the last 16 years," Arora told CRN. "My entire time is going into this. The opportunity size, the technology, the speed at which it is evolving requires 100 percent -- 110 percent -- dedication from our side. So the scale and the level of focus has been different from anything else that we've done in the past."

Tech vendors and solution providers alike are fueling a generative AI gold rush. And Redmond, Wash.-based Microsoft is certainly considered a leader in generative AI thanks to its reported $13 billion investment in OpenAI -- the startup that on Nov. 30 publicly launched the ChatGPT text-generating software -- and Microsoft's own March introduction of Microsoft 365 Copilot, which will bring generative AI to Teams, Outlook, Word, Excel and other popular productivity applications.

At the vendor's Inspire conference held online last month, Microsoft Chairman and CEO Satya Nadella called next-generation AI a partner ecosystem opportunity that could span $4 trillion to $6.5 trillion. The most forward-looking of its 400,000-member partner ecosystem have now put their best and brightest employees to work developing generative AI solutions.

Solution providers are key to scaling Microsofts generative AI for businesses, according to partners themselves as well as industry observers.

Dan Ives, managing director of equity research at Los Angeles-based financial services firm Wedbush Securities, told CRN that partners hold the key to spreading the gospel and ultimately getting pen to paper on deals involving generative AI.

"Getting partners bought in on the ChatGPT/AI strategy on cloud is integral to what I view as just a golden opportunity for Nadella and team over the coming years," Ives said. "Microsoft's partner distribution is unrivaled. In other words, I view that as a huge asset for them in this arms race that's going on."

Ives -- who forecasts an upcoming $800 billion AI spending wave over the next decade -- sees generative AI as more than just hype. In research notes for Wedbush, he estimates that AI will comprise up to 10 percent of overall IT budgets in 2024, compared with about 1 percent in 2023.

And he sees generative AI giving Microsoft and its ecosystem greater leadership in cloud computing. By most measures it sits in the No. 2 position behind Amazon Web Services and ahead of Google Cloud.

Ives expects Microsoft's cloud total addressable market to expand by up to 40 percent over the next decade thanks to generative AI.

"I would compare it to Nadella's vision when he took over as CEO and morphed Microsoft through Azure into a cloud behemoth," he said. "I would compare it to that in terms of this transformation."

At Inspire 2023, multiple Microsoft executives addressed the need for solution providers to implement Copilot and other generative AI tools.

Nadella called the spread of generative AI a "collective mission" for Microsoft and its partners.

"There's no question the opportunity is tremendous," Nadella said during his keynote address. "And our partner ecosystem, the uniqueness of what you all represent, really is what gets me most excited. It's a map of the world. We can reach every community in every country in every sector of the economy, both public and private. That's what's exciting. In the last six months, as I've been going to different parts of the world, talking to lots of you, what's unbelievable is the rate of diffusion of this technology. Thanks to all your capability, [that] is what's most exciting about this particular revolution and platform shift, and that's what really grounds us in our mission to empower every person and every organization on the planet to achieve more."

Nicole Dezen, Microsoft's chief partner officer and corporate vice president of the Global Partner Solutions group, told CRN after Inspire that she hopes solution providers see Microsoft's commitment to not only AI, but to partner success and enablement. "Microsoft only wins when partners and customers succeed," Dezen said. "It's core to our mission. And so we were very intentional in making so many big investments this year in all things partner -- everything from the program itself, all the way through to specializations and designations to really shore up training capability."

"Microsoft investments and incentives are meaningful dollars to help partners go deliver fast time-to-value for customers because that's where the truth is. That is the measure of success. When customers are realizing value, then we're all successful," she said.

More than 9,400 customers use Azure OpenAI Service, making it the fastest-growing Azure service in the technology's history, according to Microsoft. And 9,100 partners transact on Azure AI.

More than 2,300 of those partners transact on Azure OpenAI, according to Microsoft. Partners have also activated thousands of customers on core Azure migration and modernization scenarios. Azure OpenAI Service, GitHub Copilot and Sales Copilot are examples of currently available Microsoft generative AI offerings. Other generative AI offerings, such as the broader Microsoft 365 Copilot, do not have launch dates yet. However, Microsoft did announce at Inspire that the M365 Copilot will cost $30 per user, per month, for buyers of Microsoft 365 E3, E5, Business Standard and Business Premium licenses.

Power Automate Process Mining -- an AI-infused Microsoft offering for process improvement through automation and low-code apps -- is scheduled for general availability on Aug. 1. Copilot in SharePoint is expected to roll out in November.

And Microsoft already provides a wide variety of resources for channel partners with AI practices, such as training, education and an assortment of programs and initiatives. These include the Data and AI Industry Partner Activation Kit, which equips partners with reference architectures, application demonstrations and solution accelerators.

The cloud service provider also has Microsoft Azure specializations to validate partner abilities and an Azure Analytics and AI Accelerate Program to provide support to partners across all sales stages.

More than 200 partners already met the prerequisites for the Build and Modernize AI Apps with Microsoft Azure specialization at launch, according to the vendor.

Some of the biggest partner news out of Inspire included the rechristening of the Microsoft Cloud Partner Program -- unveiled in October -- as the Microsoft AI Cloud Partner Program, in a reflection of how important AI technology is for Microsoft and its partners' futures.

The vendor also tripled its investment in Azure Migrate and Modernize -- a renamed Azure Migration and Modernization Program -- which provides assessments, more partner incentives and support for additional workloads.

Separately, Microsoft said it has invested $100 million in a new Azure Innovate offering that aims to help partners and customers infuse AI into applications, experiences, advanced analytics and custom cloud-native app building.

And Microsoft unveiled new "Era of AI" marketing campaigns in a box and an AI transformation playbook with guidance on skilling, innovating, marketing and selling.

While generative AI is in its earliest days, some Microsoft solution providers are already delivering generative AI solutions. That work ranges from WinWire AI journey maps to EY's payroll chatbot and from PricewaterhouseCoopers saving employees 80 percent of work time to Core BTS launching an AI readiness assessment.

At WinWire, Arora works with customers -- health care, manufacturing and retail are some of its biggest verticals -- on AI journey maps, maps that Arora considers to be WinWire's intellectual property.

"You're not going to jump into building full-fledged production-ready solutions off the bat," he said. "There is a step-by-step approach that you need to take."

He asks customer CIOs and CTOs what their vision is and their familiarity with generative AI. He'll take about 30 minutes to demystify generative AI and then identifies one to three use cases for a proof of concept or pilot. WinWire then works with customers on budgeting for the work.

"We are able to supercharge the adoption of that technology into their areas," he said. "They are Microsoft's biggest customers, some of them. And as Microsoft starts bringing those technologies [to market], we are there to help them adopt them in the right manner. And that's a promise that I provide to them."

Tony Guidi, WinWire's senior vice president of alliances, told CRN that the solution provider's years of experience in modernizing customer data estates "allowed us to leapfrog a lot of other Microsoft partners."

"For years, we've been doing modern data, app innovation, all the things that this technology is going to help even further accelerate," Guidi said. "So we were in a unique position where we could take our 16 years of experience and very quickly pivot to focus in this space and build out the significant team with some of our best people to just begin to develop use cases by industry, [custom] Copilots, proofs of concept, accelerator approaches in order for customers to make this technology real."

Indianapolis-based Core BTS, a Microsoft partner and No. 168 on CRN's 2023 Solution Provider 500, has a responsible AI readiness assessment available through the Azure Marketplace for customers to identify high-value, low-risk opportunities to use generative AI on business functions within the organization.

Finding a document within a business group, generating relevant data content from the document and using generative AI to model the data for company stakeholders to access is one example, Perry Thompson, managing director of technology strategy at Core BTS, told CRN.

The solution provider's 400-plus Microsoft-certified professionals are also working on AI at the networking layer to help find anomalies and behavior patterns that run afoul of policies.

Core BTS employees have looked at customers' internal data, the value that data provides business functions and the data's classification and taxonomy. The solution provider is also working on guardrails to protect sensitive information -- keeping healthcare customers compliant with federal privacy laws, for example, Thompson said.

"We're starting right off the bat because we recognize that this is something that is really exploding in the market," he said. "But when you start to get into the actual processing of information and trying to give controls and automation, that's the piece that you have to be very sensitive about. And you have to work with your leaders in your organization to set that vision first and then be able to define strategic goals that actually align to their business and their values."

Customers seeking generative AI are really seeking more connections across business units to share data quickly, make decisions faster and get closer to end customers, he said.

"This is why we have to take a step back and be able to try to set that strategy, identify those key areas, to define some mitigation aspects of the actual highlighted area -- if it exists -- go into the pilot mode for them to be able to look at it at a small scale. And then we turn back around and plan for the scaling up of the entire solution and make sure that we have a continuous feedback process that actually continuously improves," he said.

Global systems integrator and Microsoft solution provider EY -- whose accolades include winning multiple 2023 Partner of the Year awards -- is in the piloting phase for an Intelligent Payroll Chatbot for a handful of customers, not to mention conducting proofs of concept in other areas of generative AI for EY and with clients.

Jim Little, a partner principal and global Microsoft alliance lead and Americas technology strategy lead at EY, told CRN that the company is piloting the chatbot with several customers before taking it to industrial scale.

For now, the chatbot knows 27 languages and takes about 15 seconds to answer payroll questions compared with the baseline of 2.2 days for client contact centers.

The accuracy rate for proofs of concept and client pilots has increased to 94 percent from a baseline of 67 percent for human agents. The chatbot also nearly doubled first-resolution percentages for questions from human agents -- from 47 percent to 80 percent.

EY will continue to train the chatbot through prompt engineering and cleansing the data that powers it to improve accuracy.

"It's been a tremendous success," Little said. "We saw a real opportunity to disrupt. And we've done it. It's going to give us a great market advantage and learning that we can continue to leverage."

Like WinWire, PwC -- a global systems integrator, Microsoft solution provider and 2023 Microsoft Partner of The Year -- has joined Microsoft on the event trail, educating hundreds of financial services, health-care, energy and utilities customers on generative AI.

"We're involved in client delivery work across almost every sector, and certainly proposals in every single sector," Bret Greenstein, PwC U.S. data and analytics partner, told CRN. "[There is] just a huge volume of discussions to help people think this through. It's one of the fastest sales cycles I've ever seen, but at the same time it's highly transformative. So the work is actually really hard. But [customers get interested very quickly] because it's not as if anyone has to be convinced that generative AI matters or is going to be disruptive to business."

Some of the early generative AI use cases include offloading work from contact center agents, having AI collect the customers problem, answer the customer if the problem is lower level and automate service ticket processing. PwC has seen reductions of work for employees of up to 80 percent.

"These are often workflows where there are inbound communications coming in," he said. "And previously, there was not a good way to vet inbound service tickets, service requests, insurance claims, requests -- whatever comes in always comes in as email, text or some workflow. And it was so hard to automate that because you can't do that with just a bunch of clicks or code. But generative AI does a nice job of reading, so to speak, incoming unstructured text and then characterizing high, middle, low risk and then even resolving some of the issues automatically.

"So we had some processes where we're just seeing an 80 percent drop in what goes to the actual person, which allows the person to focus on the harder cases and the easy ones or the incomplete requests get handled automatically. Even the communication back to people with natural language is a really cool process that speeds up the experience, reduces the labor, allows people to scale."
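
PwC has not published its implementation, but the triage pattern Greenstein describes can be sketched with a single chat-completion call that labels an inbound ticket's risk and proposes an automatic reply when the issue is simple; the model name and JSON contract below are assumptions.

```python
import json
from openai import OpenAI

client = OpenAI()  # could equally be a client configured for an Azure OpenAI deployment

SYSTEM_PROMPT = (
    "You triage inbound customer messages. Respond with JSON only, using the keys "
    '"risk" ("low", "medium" or "high") and "auto_reply" (a short answer if the '
    "issue is simple enough to resolve automatically, otherwise null)."
)

def triage(ticket_text: str) -> dict:
    """Label one ticket and decide whether it can be answered automatically."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completion model could be used
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": ticket_text},
        ],
    )
    # Production code would validate this output instead of trusting it blindly
    result = json.loads(response.choices[0].message.content)
    if not result.get("auto_reply"):
        result["route_to"] = "human_agent"
    return result

print(triage("My July invoice shows our old billing address. Can you resend it?"))
```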

In mergers and acquisitions, generative AI has helped leverage and analyze unstructured data used in complex due diligence, Greenstein said. In sales, it has helped internal sellers support customer needs and questions. In software development, generative AI has helped with creation, testing, evaluation, documentation and code optimization -- improving efficiency by as much as 50 percent. In human resources, generative AI has helped with job postings, performance management and other tasks. And in IT, it has saved workers from answering low-level questions.

"It also helps end users who actually just don't know what they want, but they know how to say it," he said. "And so they don't know how to run SQL, but they do know how to ask for things."

When introducing generative AI to a customer, PwC looks for areas of constraint, he said.

"There's a financial services company I've been talking to, and they're just excited because there are certain parts of their business that they're about to grow drastically as interest rates change," he said. "And they want to scale up and go faster without hiring a million people. And they couldn't find them if they wanted to. So this ability for a midsize company to amplify their business or to scale their workforce is pretty powerful."

The key for customers is that generative AI won't solve everything, he said. "If you just let everyone build a chatbot, you're just going to get a lot of noise and not a lot of high quality," he said. "But if you create a solution that trains people how to build high-quality customer service, you can adapt it to HR or to other things."

A generative AI solution brings in just about every technology discipline at PwC, he said. These include the cloud, infrastructure, security, applications, data, business functional operations such as finance and IT, and then sector-based use cases as well.

"This is the greatest unifier for us because we all have to come together for clients to help them," he said.

Mike Wilson, CTO and partner at Mason, Ohio-based Microsoft partner Interlink Cloud Advisors -- a member of CRN's 2023 Managed Service Provider 500 -- told CRN that his company has used the hype around Microsoft 365 Copilot to educate customers on Microsoft's other generative AI wares.

Azure OpenAI, Azure Cognitive Services, Syntex and Power Platform's form recognition capabilities are some of those wares. Wilson and his customers have even talked about how individual job roles may change with generative AI.

"There are a lot of other AI technologies that exist," he said. "So there's a possibility here when we're looking at readiness around things like Copilot to also start looking across their business. We can go through and do that business consulting, look role by role in your organization, identify AI use cases."

An eventual conversation with customers is how MSPs that adopt generative AI may even need to change billing because of the increase to productivity, he said.

"Do I bill an hour? Do I bill five? Do I bill one hour at five times the rate? We're going to have to navigate through that stuff -- both for me as a professional services firm and I think for a lot of organizations," he said.

"If all of the sudden your employees are more productive, do they work less? Do they work the same? Do they demand more money? How much does the organization give credit for providing the AI tools versus what the employee gets for being more productive and using them? We're going to have to navigate those conversations because there is going to be a lot of change as people's job roles adapt. I don't see it replacing a lot of jobs, but I do think it ultimately changes many of them."

Phillip Walker, CEO of Manhattan Beach, Calif.-based Microsoft partner Network Solutions Provider -- a member of CRN's 2023 MSP 500 -- said that the generative AI fervor has had a ripple effect on customers seeking Microsoft Power BI AI capabilities.

His company has worked with customers on data lake cleanup to prepare for generative AI.

"The golden nugget is going to be Power BI," Walker said. "It's going to be analyzing data. It's going to be real-time information. It's going to be recommendations. Because before, you had to have SAP HANA installed. You had to have NetSuite installed. You had to have some enterprise Oracle ERP to get that type of insight for your business. And you had to have a team of analysts from Harvard sitting in a room looking at the data to make sense of it. Those days are now going away."

Gordon McKenna, global vice president of alliances at Downers Grove, Ill.-based Microsoft partner Ensono -- No. 103 on CRN's 2023 Solution Provider 500 -- told CRN that an example of his company's early generative AI work has been demonstrating an OpenAI application that takes in a vast number of anonymized Statements of Work to create recommendations.

"You can literally query the tool and say, 'I've got this type of customer, and they've got this type of requirement. What do I need to put in the SoW?'" McKenna said. "And it literally scours through those Statements of Work and comes up with a response."

A member of the Microsoft Partner Advisory Council, McKenna said that Microsoft's investment in AI is for the long haul and that it has prompted AWS, Google Cloud and other rivals to publicly announce how they are tackling this technology.

"I would say the pivot that Microsoft made around AGI [artificial general intelligence] is a sharper pivot than they made going all in on the cloud," McKenna said. "And it's absolutely the right pivot to make. They bet the house on it. I applaud Microsoft. It's the right move. It is very exciting."

Microsoft taking generative AI seriously is why the vendor's partners need to as well, he said. Interested partners will probably see opportunities for on-premises work, private offerings with more security, and hybrid cloud.

"Any partners that aren't leaning into this now are going to be dead, same way as cloud," McKenna said. "But I think it's a quicker pivot. The time is now. In a year's time, two years' time, the market will be saturated. Not only by partners that have transformed, but new partners coming into the market. There are a lot of startups that are like, 'I want a piece of this,' and are leaning into the market and are going to become very quick contenders for the crown."

Although some enterprising solution providers have already plunged into generative AI, all the partners who spoke with CRN had suggestions for how Microsoft can make Copilots ready for the channel once they become generally available.

WinWire's Arora said he knows Microsoft understands that even businesses won over by generative AI may not have budgeted for its adoption. AI-specific incentive programs for solution providers go a long way, he said.

When asked about bringing generative AI to capital-conscious customers, Microsoft's Dezen said that solving for particular business problems and communicating that this isn't technology for technology's sake is key.

"This isn't a science project," she said. "This is about how you help customers solve their real business problems, and then you can get to the crux of what do you need to do and what are the resources, investments you need in order to solve this problem."

The productivity benefits -- and employee morale boost from no longer working on mundane automated tasks in favor of more complex problem-solving -- can help offset the price of adopting generative AI, she said.

"Productivity is a pretty obvious place to get a direct cost savings," Dezen said. "This is where you realize savings through time savings. I love the way that we talk about Microsoft 365 Copilot and how you can eliminate those mundane tasks. With the elimination of those mundane tasks is the elimination of hours of work. There's real cost savings there."

WinWire has preview versions of certain technology from the vendor, Arora said. More previews will help WinWire and other solution providers evangelize the technology to customers concerned about content safety and accuracy.

WinWire's Guidi said that Microsoft-funded assessment programs and ways to help customers with building costs will help move the technology forward.

When asked about incentives, more specializations, more training and more resources for partners, Dezen said this is only the beginning.

"There's a lot more in the road map," she said. "We're not slowing down at all."

PwC's Greenstein said he looks forward to more ways to scale vector databases, provide access control and synchronize data -- not to mention integration to prevent employees from having to work with dozens of different AI systems.

"Those are going to be critical as this becomes an enterprise capability," he said.

In a June blog post, Microsoft suggested that people interested in staying updated on Copilot developments use Message center in the Microsoft 365 admin center and the online Microsoft 365 road map.

Interlink's Wilson said Microsoft needs to give partners concrete release dates for when Copilots and other generative AI offerings go generally available. The vendor also needs to release more information on how much generative AI will cost customers and better address hallucinations -- the term for when AI models produce false information.

Although the hallucination issue is improving and solution providers have techniques they can use when interacting with models to minimize the concern, sometimes the model just has to be able to say, "I don't know," versus making stuff up, Wilson said.
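
One common form of the guardrail Wilson describes is to constrain the model to answer only from supplied source data and to abstain otherwise; the sketch below (with an assumed model name) checks for the abstention phrase before passing an answer on.

```python
from openai import OpenAI

client = OpenAI()

GUARDRAIL = (
    "Answer using ONLY the provided context. If the context does not contain the "
    "answer, reply exactly: I don't know."
)

def grounded_answer(question: str, context: str) -> str:
    """Ask the model to answer from context only, and escalate when it abstains."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content
    if "I don't know" in reply:
        return "Escalate: the answer is not in the source data."
    return reply
```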

Providing solution providers with solid training on leveraging generative AI and interpreting output will help with user adoption, he said.

EY's Little said solution providers can put guardrails in place for customers and investigate the causes of hallucinations. Sometimes, a hallucination reveals an issue with the customer's data used to power the AI, Little said.

"We were demoing [the payroll chatbot] to a client," he said. "And the client just asked a question, came back, and the client said, 'Well, that's wrong.' We were like, 'OK, well, let's click.'

"So we clicked to the data source. And actually, what we found was the data had wrong information. So actually, we found a secondary effect in circumstances where it ends up helping us from a data cleansing perspective and continuously making it better."

Microsoft making governance tools and prompt engineering tool sets more available will help customers, Little said.

Core BTS' Thompson said Microsoft has done a good job of talking about responsible AI broadly, but he wants to hear more about industry-specific responsibility and how to factor for customer interpretations.

See the original post here:

Microsoft Partners: This Is Your Copilot Speaking - CRN

The Role of Artificial Intelligence in the Future of Media – Fagen wasanni

There has been some confusion and concern among people about the role of artificial intelligence (AI) in our lives. However, AI is simply a technology that can perform tasks requiring human intelligence. It learns from data and improves its performance over time. AI has the potential to drive nearly 45% of the economy by 2023.

AI can be categorized into three types: Narrow AI, General AI, and Super AI. Narrow AI is designed for specific tasks, while General AI can perform any intellectual task that a human can do, although it doesn't exist yet. Super AI is purely theoretical and surpasses human intelligence in every aspect.

For media companies, AI applications like content personalization, automated content generation, sentiment analysis, and audience targeting can greatly benefit content delivery and audience engagement. AI can analyze customer data for targeted marketing campaigns, create personalized content, predict customer behavior, analyze visual content, and assist in social media management.

Companies can transition to AI by identifying pain points, collecting and preparing relevant data, starting with narrow applications, collaborating with AI experts, and forming a task force to integrate AI across the organization. AI can automate repetitive tasks, enhance decision-making, and free up human resources for more strategic work.

However, it is important for brands to maintain authenticity and embrace diversity while using AI for marketing. AI algorithms are only as unbiased as the data they are trained on, so brands should use diverse data and establish ethical guidelines to mitigate biases. Human creativity and understanding are irreplaceable, and brands should emphasize the importance of human-AI collaboration.

Overall, AI has the potential to revolutionize the media industry by improving customer experiences, optimizing operations, and delivering relevant content. It is crucial for companies to understand and leverage the power of AI to stay competitive in the evolving digital landscape.

See the original post here:

The Role of Artificial Intelligence in the Future of Media - Fagen wasanni

Read More..

The Growing Importance of Global Risk Analytics in Internet Security – Fagen wasanni

Understanding the Growing Importance of Global Risk Analytics in Internet Security

The growing importance of global risk analytics in internet security cannot be overstated. As the digital landscape continues to evolve, so too do the threats that businesses and individuals face online. Cybersecurity is no longer just about protecting against viruses and malware; it's about understanding and mitigating risks on a global scale. This is where global risk analytics comes into play.

Global risk analytics is a field that combines data analysis with risk management to identify, assess, and mitigate potential threats. It involves collecting and analyzing vast amounts of data from various sources, including social media, news reports, and other online platforms, to identify patterns and trends that could indicate potential risks. This information is then used to develop strategies to prevent or mitigate these risks, thereby enhancing internet security.
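In practice, the pattern-spotting step often starts with something far simpler than machine learning, such as scoring incoming text items against a weighted watchlist of risk indicators before heavier analysis is applied. The Python sketch below is illustrative only; the keywords, weights, and alert threshold are invented.

import re

RISK_TERMS = {  # invented weights for illustration
    "ransomware": 5,
    "data breach": 4,
    "credential leak": 4,
    "phishing": 2,
    "outage": 1,
}

def risk_score(text: str) -> int:
    """Score a news item or social post by the weighted risk terms it mentions."""
    lowered = text.lower()
    return sum(weight for term, weight in RISK_TERMS.items()
               if re.search(r"\b" + re.escape(term) + r"\b", lowered))

feed = [
    "Vendor reports ransomware incident affecting logistics partners",
    "Minor service outage resolved within an hour",
]
print([item for item in feed if risk_score(item) >= 4])  # only the first item is flagged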

The rise of global risk analytics in internet security can be attributed to several factors. Firstly, the increasing digitization of businesses and the proliferation of internet-connected devices have created a vast amount of data that can be analyzed for potential threats. Secondly, the growing sophistication of cybercriminals and their ability to launch complex, coordinated attacks across borders has necessitated a more comprehensive approach to cybersecurity.

Moreover, the global nature of the internet means that threats can originate from anywhere in the world. Cybercriminals often exploit differences in legal and regulatory frameworks between countries to evade detection and prosecution. Global risk analytics allows businesses to understand and anticipate these threats, enabling them to implement effective security measures regardless of where the threats originate.

The use of global risk analytics in internet security also has significant implications for regulatory compliance. Many countries have introduced stringent data protection laws that require businesses to take proactive steps to protect their customers' data. By identifying potential risks and implementing appropriate security measures, businesses can ensure they remain compliant with these regulations, thereby avoiding hefty fines and damage to their reputation.

Furthermore, global risk analytics can also help businesses to protect their bottom line. Cyberattacks can result in significant financial losses, both in terms of the immediate costs of dealing with the attack and the longer-term impact on customer trust and brand reputation. By identifying and mitigating potential threats before they materialize, businesses can avoid these costs and ensure their continued profitability.

In conclusion, the growing importance of global risk analytics in internet security reflects the changing nature of the digital landscape. As businesses become increasingly digital and cyber threats become more sophisticated, the need for a comprehensive, data-driven approach to cybersecurity has never been greater. Global risk analytics provides businesses with the tools they need to understand and mitigate these threats, ensuring their continued success in the digital age. As such, it is likely that the role of global risk analytics in internet security will continue to grow in the coming years.

The rest is here:
The Growing Importance of Global Risk Analytics in Internet Security - Fagen wasanni

Read More..

Cyber Security challenge: SMBs and large enterprises face common threat but separate response routes – The Economic Times

Cybersecurity is a critical concern for businesses of all sizes; however, it poses distinct challenges for small and medium businesses (SMBs) because of their limited resources and access to expertise. The attack profiles of SMBs and large organisations are increasingly similar.

According to the Data Breach Investigations Report 2023 by Verizon, SMBs experience 699 incidents annually, with 381 cases of confirmed data disclosure, while large businesses face 496 incidents annually, with 227 cases of confirmed data disclosure. System intrusion is the top common threat.

There are several safeguards companies can implement to protect themselves, says the report. It points to the controls offered by the Center for Internet Security (CIS), a nonprofit that provides products and services to help organisations safeguard their systems and data from cyber threats, as a good starting point.

The nonprofit has developed an interactive tool, the CIS Critical Security Controls Navigator, to help organisations assess their cybersecurity status. It also helps organisations track their progress in implementing the CIS Controls, which are guidelines developed by CIS to reduce cyber risk and strengthen defences. It offers a tailored approach by classifying the CIS Controls into three implementation groups (IG1, IG2, and IG3) based on an organisation's security maturity level and resources.

The classifications are:

IG1: Essential cyber hygiene for small businesses with limited resources, providing fundamental steps to defend against common cyber threats.

IG2: Advanced protection for midsize businesses, addressing social engineering threats and incident response management.

IG3: Comprehensive defence for larger SMBs, incorporating application software security and penetration testing to enhance information security posture.

While the CIS Controls provide a strong foundation, each organisation must customise its security measures based on its unique risk profile and tolerance. Regular tracking of security metrics and ongoing improvement of the security posture are essential for staying ahead of cyber threats. A rough illustration of how an organisation might map itself to one of the implementation groups follows below.
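The sketch below (in Python) turns the three implementation groups into a simple decision helper. The rules are a loose paraphrase of the IG1/IG2/IG3 descriptions above, and the employee-count thresholds are assumptions for illustration; the CIS Critical Security Controls Navigator remains the authoritative way to self-assess.

def suggest_implementation_group(employees: int, has_security_team: bool,
                                 handles_sensitive_data: bool) -> str:
    """Roughly map an organisation's profile to a CIS implementation group."""
    if employees < 50 and not has_security_team:
        return "IG1: essential cyber hygiene"
    if employees >= 250 and has_security_team and handles_sensitive_data:
        return "IG3: comprehensive defence, incl. application security and pen testing"
    return "IG2: advanced protection, incl. incident response management"

print(suggest_implementation_group(employees=35, has_security_team=False,
                                   handles_sensitive_data=False))
# -> IG1: essential cyber hygiene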

Original post:
Cyber Security challenge: SMBs and large enterprises face common threat but separate response routes - The Economic Times

Read More..

2022 Top Routinely Exploited Vulnerabilities – CISA

SUMMARY

The following cybersecurity agencies coauthored this joint Cybersecurity Advisory (CSA): CISA, NSA, FBI, ACSC, CCCS, NCSC-NZ, CERT NZ, and NCSC-UK.

This advisory provides details on the Common Vulnerabilities and Exposures (CVEs) routinely and frequently exploited by malicious cyber actors in 2022 and the associated Common Weakness Enumeration(s) (CWE). In 2022, malicious cyber actors exploited older software vulnerabilities more frequently than recently disclosed vulnerabilities and targeted unpatched, internet-facing systems.

The authoring agencies strongly encourage vendors, designers, developers, and end-user organizations to implement the recommendations found within the Mitigations section of this advisory, including the following, to reduce the risk of compromise by malicious cyber actors.

Download the PDF version of this report:

In 2022, malicious cyber actors exploited older software vulnerabilities more frequently than recently disclosed vulnerabilities and targeted unpatched, internet-facing systems. Proof of concept (PoC) code was publicly available for many of the software vulnerabilities or vulnerability chains, likely facilitating exploitation by a broader range of malicious cyber actors.

Malicious cyber actors generally have the most success exploiting known vulnerabilities within the first two years of public disclosure; the value of such vulnerabilities gradually decreases as software is patched or upgraded. Timely patching reduces the effectiveness of known, exploitable vulnerabilities, possibly decreasing the pace of malicious cyber actor operations and forcing pursuit of more costly and time-consuming methods (such as developing zero-day exploits or conducting software supply chain operations).
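A practical consequence is that patch prioritisation can start from a short, known list rather than the entire CVE universe. The Python sketch below cross-references a hypothetical software inventory against a few of the CVE-to-product pairings from this advisory; the inventory format and matching rule are assumptions for illustration, not a CISA tool.

# A few CVE -> (vendor, product) pairings drawn from this advisory's table.
TOP_EXPLOITED = {
    "CVE-2018-13379": ("Fortinet", "FortiOS/FortiProxy"),
    "CVE-2019-19781": ("Citrix", "ADC/Gateway"),
    "CVE-2020-1472": ("Microsoft", "Windows Server (Netlogon)"),
    "CVE-2021-26855": ("Microsoft", "Exchange Server"),
}

def patch_priorities(inventory: list[dict]) -> list[str]:
    """Flag inventory entries whose vendor/product matches a routinely exploited CVE."""
    findings = []
    for asset in inventory:
        for cve, (vendor, product) in TOP_EXPLOITED.items():
            if asset["vendor"] == vendor and asset["product"] in product:
                findings.append(f"{asset['host']}: {vendor} {asset['product']} -> check {cve}")
    return findings

inventory = [
    {"host": "vpn-01", "vendor": "Fortinet", "product": "FortiOS"},
    {"host": "mail-01", "vendor": "Microsoft", "product": "Exchange Server"},
]
print(patch_priorities(inventory))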

Malicious cyber actors likely prioritize developing exploits for severe and globally prevalent CVEs. While sophisticated actors also develop tools to exploit other vulnerabilities, developing exploits for critical, widespread, and publicly known vulnerabilities gives actors low-cost, high-impact tools they can use for several years. Additionally, cyber actors likely give higher priority to vulnerabilities that are more prevalent in their specific targets' networks. Multiple CVEs or CVE chains require the actor to send a malicious web request to the vulnerable device, which often includes unique signatures that can be detected through deep packet inspection.

Table 1 shows the top 12 vulnerabilities the co-authors observed malicious cyber actors routinely exploiting in 2022:

In addition to the 12 vulnerabilities listed in Table 1, the authoring agencies identified vulnerabilities, listed in Table 2, that were also routinely exploited by malicious cyber actors in 2022.

The authoring agencies recommend vendors and developers take the following steps to ensure their products are secure by design and default:

For more information on designing secure-by-design and -default products, including additional recommended secure-by-default configurations, see joint guide Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.

The authoring agencies recommend end-user organizations implement the mitigations below to improve cybersecurity posture on the basis of the threat actors' activity. These mitigations align with the cross-sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST). The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. Visit CISA's Cross-Sector Cybersecurity Performance Goals for more information on CPGs, including additional recommended baseline protections.

The information in this report is being provided "as is" for informational purposes only. CISA, FBI, NSA, ACSC, CCCS, NCSC-NZ, CERT NZ, and NCSC-UK do not endorse any commercial product or service, including any subjects of analysis. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise does not constitute or imply endorsement, recommendation, or favoring.

This document was developed by CISA, NSA, FBI, ACSC, CCCS, NCSC-NZ, CERT NZ, and NCSC-UK in furtherance of their respective cybersecurity missions, including their responsibilities to develop and issue cybersecurity specifications and mitigations.


August 3, 2023: Initial version.

CVE-2017-0199
Vendor: Microsoft
Affected Products and Versions: Multiple Products
Patch Information: Microsoft Office/WordPad Remote Code Execution Vulnerability w/Windows

CVE-2017-11882
Vendor: Microsoft
Affected Products and Versions: Office, Multiple Versions
Patch Information: Microsoft Office Memory Corruption Vulnerability, CVE-2017-11882

CVE-2018-13379
Vendor: Fortinet
Affected Products and Versions: FortiOS and FortiProxy 2.0.2, 2.0.1, 2.0.0, 1.2.8, 1.2.7, 1.2.6, 1.2.5, 1.2.4, 1.2.3, 1.2.2, 1.2.1, 1.2.0, 1.1.6
Patch Information: FortiProxy - system file leak through SSL VPN special crafted HTTP resource requests
Resources: Joint CSAs: Iranian Government-Sponsored APT Cyber Actors Exploiting Microsoft Exchange and Fortinet Vulnerabilities in Furtherance of Malicious Activities; Russian State-Sponsored Cyber Actors Target Cleared Defense Contractor Networks to Obtain Sensitive U.S. Defense Information and Technology; APT Actors Chaining Vulnerabilities Against SLTT, Critical Infrastructure, and Elections Organizations

CVE-2019-11510
Vendor: Ivanti
Affected Products and Versions: Pulse Secure Pulse Connect Secure versions 9.0R1 to 9.0R3.3, 8.3R1 to 8.3R7, and 8.2R1 to 8.2R12
Patch Information: SA44101 - 2019-04: Out-of-Cycle Advisory: Multiple vulnerabilities resolved in Pulse Connect Secure / Pulse Policy Secure 9.0RX
Resources: CISA Alerts: Continued Exploitation of Pulse Secure VPN Vulnerability; Chinese Ministry of State Security-Affiliated Cyber Threat Actor Activity. ACSC Advisory: 2019-129: Recommendations to mitigate vulnerability in Pulse Connect Secure VPN Software. Joint CSA: APT Actors Chaining Vulnerabilities Against SLTT, Critical Infrastructure, and Elections Organizations. CCCS Alert: APT Actors Target U.S. and Allied Networks - Update 1

CVE-2019-0708
Vendor: Microsoft
Affected Products and Versions: Remote Desktop Services
Patch Information: Remote Desktop Services Remote Code Execution Vulnerability

CVE-2019-19781
Vendor: Citrix
Affected Products and Versions: ADC and Gateway version 13.0 all supported builds before 13.0.47.24; NetScaler ADC and NetScaler Gateway version 12.1 all supported builds before 12.1.55.18, version 12.0 all supported builds before 12.0.63.13, version 11.1 all supported builds before 11.1.63.15, version 10.5 all supported builds before 10.5.70.12; SD-WAN WANOP appliance models 4000-WO, 4100-WO, 5000-WO, and 5100-WO all supported software release builds before 10.2.6b and 11.0.3b
Patch Information: CVE-2019-19781 - Vulnerability in Citrix Application Delivery Controller, Citrix Gateway, and Citrix SD-WAN WANOP appliance
Resources: Joint CSAs: APT Actors Chaining Vulnerabilities Against SLTT, Critical Infrastructure, and Elections Organizations; Chinese Ministry of State Security-Affiliated Cyber Threat Actor Activity. CCCS Alert: Detecting Compromises relating to Citrix CVE-2019-19781

CVE-2020-5902
Vendor: F5
Affected Products and Versions: BIG-IP versions 15.1.0, 15.0.0 to 15.0.1, 14.1.0 to 14.1.2, 13.1.0 to 13.1.3, 12.1.0 to 12.1.5, and 11.6.1 to 11.6.5
Patch Information: K52145254: TMUI RCE vulnerability CVE-2020-5902
Resources: CISA Alert: Threat Actor Exploitation of F5 BIG-IP CVE-2020-5902

CVE-2020-1472
Vendor: Microsoft
Affected Products and Versions: Windows Server, Multiple Versions
Patch Information: Microsoft Security Update Guide: Netlogon Elevation of Privilege Vulnerability, CVE-2020-1472
Resources: ACSC Advisory: 2020-016: Netlogon Elevation of Privilege Vulnerability (CVE-2020-1472). Joint CSA: APT Actors Chaining Vulnerabilities Against SLTT, Critical Infrastructure, and Elections Organizations. CCCS Alert: Microsoft Netlogon Elevation of Privilege Vulnerability - CVE-2020-1472 - Update 1

CVE-2020-14882
Vendor: Oracle
Affected Products and Versions: WebLogic Server, versions 10.3.6.0.0, 12.1.3.0.0, 12.2.1.3.0, 12.2.1.4.0, 14.1.1.0.0
Patch Information: Oracle Critical Patch Update Advisory - October 2020

CVE-2020-14883
Vendor: Oracle
Affected Products and Versions: WebLogic Server, versions 10.3.6.0.0, 12.1.3.0.0, 12.2.1.3.0, 12.2.1.4.0, 14.1.1.0.0
Patch Information: Oracle Critical Patch Update Advisory - October 2020

CVE-2021-20016
Vendor: SonicWALL
Affected Products and Versions: SSLVPN SMA100, Build Version 10.x
Patch Information: Confirmed Zero-day vulnerability in the SonicWall SMA100 build version 10.x

CVE-2021-26855
Vendor: Microsoft
Affected Products and Versions: Exchange Server, Multiple Versions
Patch Information: Microsoft Exchange Server Remote Code Execution Vulnerability, CVE-2021-26855
Resources: CISA Alert:

See more here:
2022 Top Routinely Exploited Vulnerabilities - CISA

Read More..