Category Archives: Artificial General Intelligence
Exploring the effects of feeding emotional stimuli to large language models – Tech Xplore
by Ingrid Fadelli, Tech Xplore
Since the advent of OpenAI's ChatGPT, large language models (LLMs) have become increasingly popular. These models, trained on vast amounts of data, can answer written user queries in strikingly human-like ways, rapidly generating definitions of specific terms, text summaries, context-specific suggestions, diet plans, and much more.
While these models have been found to perform remarkably well in many domains, their response to emotional stimuli remains poorly investigated. Researchers at Microsoft and the CAS Institute of Software recently devised an approach that could improve interactions between LLMs and human users by allowing the models to respond to emotion-laced, psychology-based prompts.
"LLMs have achieved significant performance in many fields such as reasoning, language understanding, and math problem-solving, and are regarded as a crucial step to artificial general intelligence (AGI)," Cheng Li, Jindong Wang and their colleagues wrote in their paper, prepublished on arXiv. "However, the sensitivity of LLMs to prompts remains a major bottleneck for their daily adoption. In this paper, we take inspiration from psychology and propose EmotionPrompt to explore emotional intelligence to enhance the performance of LLMs."
The approach devised by Li, Wang and their colleagues, dubbed EmotionPrompt, draws on well-established knowledge from psychology and the social sciences. Past psychology studies have found that words of encouragement and other emotional stimuli can have positive effects on different areas of a person's life, such as improving students' grades or promoting healthier lifestyle choices.
To see whether emotional prompts could also affect the performance of LLMs, the researchers came up with 11 emotional sentences that could be added to typical prompts fed to the models. These were sentences such as "this is very important for my career," "you'd better be sure," "take pride in your work and give it your best", and "embrace challenges as opportunities for growth."
These sentences were derived from existing psychology literature, such as the social identity theory introduced by Henri Tajfel and John Turner in the 1970s, social cognition theory, and the cognitive emotion regulation theory. The researchers then added these sentences to prompts sent to different LLMs, which asked the models to complete different language tasks.
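As a rough illustration of how such prompts might be assembled in practice (the paper itself does not prescribe a particular implementation), the sketch below simply appends one of the emotional stimuli to a base task prompt before it is sent to a model. The query_llm placeholder and the exact wording of the stimuli are assumptions made for the example.

```python
# A minimal sketch of the EmotionPrompt idea: append an emotional stimulus to a
# task prompt before sending it to an LLM. The stimuli below paraphrase examples
# from the paper; query_llm() is a placeholder for whichever model API is used.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
    "Take pride in your work and give it your best.",
    "Embrace challenges as opportunities for growth.",
]

def build_emotion_prompt(task_prompt: str, stimulus_index: int = 0) -> str:
    """Return the original task prompt with an emotional stimulus appended."""
    return f"{task_prompt} {EMOTIONAL_STIMULI[stimulus_index]}"

def query_llm(prompt: str) -> str:
    """Placeholder: call your LLM of choice here."""
    raise NotImplementedError

if __name__ == "__main__":
    base = "Summarize the following passage in one sentence: ..."
    print(build_emotion_prompt(base, stimulus_index=0))
```

The task itself is unchanged; only the framing of the prompt carries the added emotional weight.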
So far, they have tested their approach on four different models: ChatGPT, Vicuna-13b, Bloom and Flan-T5-Large. Overall, they found that it improved the performance of these models on eight different tasks, increasing the accuracy of their responses by more than 10% on over half of these tasks.
"EmotionPrompt operates on a remarkably straightforward principle: the incorporation of emotional stimulus into prompts," Li, Wang and their colleagues wrote. "Experimental results demonstrate that our EmotionPrompt, using the same single prompt templates, significantly outperforms original zero-shot prompt and Zero-shot-CoT on eight tasks with diverse models: ChatGPT, Vicuna-13b, Bloom, and T5. Further, EmotionPrompt was observed to improve both truthfulness and informativeness."
The new approach devised by this team of researchers could soon inspire additional studies aimed at improving human-LLM interactions by introducing emotional/psychology-based prompts. While the results gathered so far are promising, further studies will be needed to validate its effectiveness and generalizability.
"This work has several limitations," the researchers conclude in their paper. "First, we only experiment with four LLMs and conduct experiments in several tasks with few test examples, which are limited. Thus, our conclusions about emotion stimulus can only work on our experiments and any LLMs and datasets out of the scope of this paper might not work with emotion stimulus. Second, the emotional stimulus proposed in this paper may not be general to other tasks, and researchers may propose other useful replacements for your own tasks."
More information: Cheng Li et al, EmotionPrompt: Leveraging Psychology for Large Language Models Enhancement via Emotional Stimulus, arXiv (2023). DOI: 10.48550/arxiv.2307.11760
TinyML: the mini-me of AI – Gilbert + Tobin
TinyML means machine learning (ML) on tiny, low-powered, low-cost computers, giving them the capability to perform on-device (on-board) analytics of vision, audio and speech.
TinyML would upend the current Internet of Things (IoT) architecture supporting the swarms of relatively dumb sensor devices being embedded in our daily lives. We would go from a data-enabled physical environment to a smart physical environment: a fridge that can think, lurking in your kitchen!
Traditional IoT systems utilise large fleets of edge devices deployed in the physical environment, such as soil sensors, to gather data which is transmitted back to a cloud-based CPU (now AI-enabled) to process: IoT devices are literally the eyes, ears and touch of AI in the physical world.
In this traditional IoT architecture, the IoT edge devices (because they need to be low-cost, robust and long-lived) have low computing power and memory, and hence low power requirements and demands on battery life. For example, the majority of IoT edge devices operate at clock speeds between 10 and 1,000 MHz (the higher the clock speed, the more processing power) and carry less than 1 MB of onboard flash memory, which will not support complex learning models at the edge.
This traditional IoT architecture has its drawbacks:
Hence, the efforts to integrate ML capabilities into the IoT edge devices themselves: intelligence onboard the sensor itself. One commentator compares CPU-based ML under the traditional IoT architecture with TinyML at the network edge as follows:
On battery life, TinyML outperforms ordinary ML techniques because the models run directly on embedded devices. Cost efficiency is better in TinyML, since only a microcontroller is required rather than a PC. Scalability is higher in ordinary ML applications, as more computing power is available. Robustness is higher in TinyML deployments: if a node is removed, its information remains intact, whereas ordinary ML is server-side. Deployment is easier for ordinary ML models, as many paradigms are available online and more widely used. Performance is higher in the ordinary ML case, since TinyML has only recently emerged and there are not yet many models. Lastly, security is higher in TinyML deployments, as the information remains within the embedded device and is not exchanged with third parties.
TinyML would not necessarily operate as a substitute for cloud-based services but would be part of a more decentralised, and more robust, ML system. For example, a body sensor may have enough ML capability to work out whether it can diagnose and solve a problem from the patient data it collects, or whether the issue is "above its pay grade" and should be escalated to the cloud-based AI.
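A minimal, hypothetical sketch of that escalation pattern might look like the following: a small on-device model answers the cases it is confident about and defers everything else to a cloud model. The function names, threshold and labels are illustrative placeholders rather than any particular vendor's API.

```python
# Hypothetical sketch of the escalation pattern described above: a tiny on-device
# model handles confident cases locally and defers uncertain ones to the cloud.
# Both predict functions are placeholders; the threshold and labels are illustrative.

CONFIDENCE_THRESHOLD = 0.85

def tiny_model_predict(sensor_reading: list[float]) -> tuple[str, float]:
    """Placeholder for on-device (TinyML) inference; returns (label, confidence)."""
    # In practice this would run a quantized model, e.g. with TensorFlow Lite Micro.
    return "normal", 0.6

def cloud_model_predict(sensor_reading: list[float]) -> str:
    """Placeholder for a heavier cloud-hosted model."""
    return "needs clinician review"

def classify(sensor_reading: list[float]) -> str:
    label, confidence = tiny_model_predict(sensor_reading)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                                # handled entirely on the device
    return cloud_model_predict(sensor_reading)      # "above its pay grade": escalate

if __name__ == "__main__":
    print(classify([0.2, 0.9, 0.4]))
```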
First, patchy mobile coverage is holding back the deployment of digital agriculture. PlantVillage, an open-source project managed by Penn State University, has created Nuru, an artificial intelligence-based program that can function on mobile phones without internet connectivity, and is deploying it in Africa to help farmers identify and respond to hazards for cassava crops, a key food source for hundreds of millions of people.
Second, data processing on-board IoT devices will enable uses which otherwise would be impractical, if not dangerous, because of the delay (latency) if the IoT device had to transmit the data, the cloud process the answer and transmit it back to the IoT device. Use cases under development include:
Third, TinyML will allow close-at-hand monitoring and maintenance work on in-field equipment. Ping, an Australian company, has developed a TinyML device that continuously monitors the acoustic signature of wind turbine blades to detect and notify any change or damage, using advanced acoustic analysis.
Lastly, TinyML can be inserted into manufacturing processes to manage and adjust machinery. Perhaps less consequentially for humanity, TinyML can result in better roasted coffee. It is critical to identify the first crack in any beans since the time spent roasting after the first crack has a major impact on the quality and flavour of the processed beans. Two Norwegian businesses, Roest and Soundsensing, have added a microcontroller with TinyML in their bean roasting equipment to more quickly identify that first crack.
The fundamental problem is that ML/AI and IoT device design have been heading in opposite directions: algorithms have become dependent on vastly increasing data inputs, while IoT devices have been designed to consume ever lower levels of energy (thereby reinforcing their limited computing capacity). As one commentator has said:
Let's take an example to understand this better. GPT-3 has 175 billion parameters and got trained on 570 GB of text. On the other hand, we have Google Assistant that can detect speech with a model that's only 14 KB. So, that can fit on something as small as a microprocessor, but GPT-3 cannot.
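A quick back-of-the-envelope calculation makes the commentator's point concrete. Assuming roughly 2 bytes per parameter for a large model stored in 16-bit precision and 1 byte per parameter for an 8-bit quantized TinyML model (both figures are illustrative assumptions, not measurements of the models named above), the gap is several orders of magnitude:

```python
# Back-of-envelope model sizing. The bytes-per-parameter figures and the 14K
# parameter count for the keyword spotter are assumptions for illustration only.

def model_size_bytes(num_parameters: int, bytes_per_parameter: float) -> float:
    """Rough weight-storage footprint of a model."""
    return num_parameters * bytes_per_parameter

gpt3_like = model_size_bytes(175_000_000_000, 2)   # ~350 GB of weights alone
tiny_kws = model_size_bytes(14_000, 1)             # ~14 KB, fits in <1 MB of flash

print(f"175B-parameter model: {gpt3_like / 1e9:.0f} GB")
print(f"14K-parameter keyword spotter: {tiny_kws / 1e3:.0f} KB")
```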
TinyML depends on the intersection of three trends:
On the first trend, last year we discussed the development of IoT edge devices which could supplement or replace battery power through two techniques that harvest energy from thin air, as it were:
On the second trend, there have been steady, large gains in the performance of typical microcontroller processors. In 2004, Arm introduced the Cortex-M 32-bit processor family, which helped create a powerful new generation of low-cost microcontrollers. The Cortex-M4 processor introduced hardware floating-point support and the ability to perform multiple integer calculations, making it easier to run on microcontrollers the complex calculations that machine learning algorithms require. A more recent development is the introduction of the Ethos Neural Processing Unit (NPU), which allows algorithms to run on small microcontrollers with around a 480-times performance boost.
The third development, scaling down algorithms to operate within the power and memory constraints of IoT edge devices, is where most of the focus of TinyML research currently is. There are a number of different approaches.
First, federated learning involves creating an ML model from decentralized data in a distributed way, as follows:
a variety of edge devices collaborate so as to build a global model using only local copies of the data, and then each device downloads a copy of the model and updates the local parameters. Finally, the central server aggregates all model updates and proceeds with the training and evaluation without exchanging data with other parties.
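In code, that loop reduces to a "local update, then server average" pattern. The toy sketch below uses a trivial linear model and simulated data purely to show the structure; real federated systems add secure aggregation, client sampling and much more.

```python
# A toy sketch of federated averaging, matching the description above: each edge
# device fits its local copy of the model on local data only, and the server
# averages the resulting parameters. Data and model are deliberately trivial.

import numpy as np

def local_update(global_weights: np.ndarray, x: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 10) -> np.ndarray:
    """One device: a few gradient steps on a linear model using only local data."""
    w = global_weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights: np.ndarray, devices: list) -> np.ndarray:
    """Server: average the locally updated weights without ever seeing raw data."""
    updates = [local_update(global_weights, x, y) for x, y in devices]
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    devices = []
    for _ in range(3):                      # three simulated edge devices
        x = rng.normal(size=(20, 2))
        y = x @ true_w + rng.normal(scale=0.1, size=20)
        devices.append((x, y))
    w = np.zeros(2)
    for _ in range(20):                     # several communication rounds
        w = federated_round(w, devices)
    print("learned weights:", w)
```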
Second, transfer learning is where a machine learning model developed for one task is reused as the starting point for a model on a second task. By drawing on pre-trained or existing learned experience, the ML model can be adapted to the new task with relatively less data and training effort, and therefore less computing power. A leading transfer learning tool is TensorFlow Lite for Microcontrollers, Google's open-source framework, which it describes as follows:
The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.
Models in the TensorFlow Lite framework can be adapted to a new task with just a few lines of code, and then deployed onboard IoT edge devices.
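A hedged sketch of that pattern in Keras is shown below: a pretrained image backbone is frozen, a small new classification head is trained, and the result is converted to TensorFlow Lite for deployment on a constrained device. The choice of MobileNetV2, the input size, the class count and the quantization setting are illustrative assumptions (and downloading the pretrained weights requires network access).

```python
# A sketch of the transfer-learning pattern described above, using Keras:
# reuse a frozen, pretrained image backbone and train only a small new head,
# then convert the result to TensorFlow Lite. Model choice and sizes are illustrative.

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False                        # keep the learned feature maps frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),   # e.g., 3 new classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)               # train the head on the small new dataset

# Convert for deployment on constrained devices, with default optimizations
# (such as quantization) applied to shrink the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)
```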
TinyML could supercharge IoT, allowing IoT devices to take intelligent decisions in the field, literally. However, there is still the challenge of squeezing machine intelligence on the head of a pin, literally.
Industry standards, as in IoT generally, are desperately needed to rein in the often chaotic heterogeneity of software, devices, and power requirements. An industry association has been set up to facilitate the emergence of standards.
Read more: TinyML: Tools, Applications, Challenges, and Future Research Directions
The Threat Of Climate Misinformation Propagated by Generative AI … – Unite.AI
Artificial intelligence (AI) has transformed how we access and distribute information. In particular, Generative AI (GAI) offers unprecedented opportunities for growth. But, it also poses significant challenges, notably in climate change discourse, especially climate misinformation.
In 2022, research showed that around 60 Twitter accounts were used to make 22,000 tweets and spread false or misleading information about climate change.
Climate misinformation means inaccurate or deceptive content related to climate science and environmental issues. Propagated through various channels, it distorts climate change discourse and impedes evidence-based decision-making.
As the urgency to address climate change intensifies, misinformation propagated by AI presents a formidable obstacle to achieving collective climate action.
False or misleading information about climate change and its impacts is often disseminated to sow doubt and confusion. This propagation of inaccurate content hinders effective climate action and public understanding.
In an era where information travels instantaneously through digital platforms, climate misinformation has found fertile ground to propagate and create confusion among the general public.
There are mainly three types of climate misinformation:
In 2022, several disturbing attempts to spread climate misinformation came to light, demonstrating the extent of the challenge. These efforts included lobbying campaigns by fossil fuel companies to influence policymakers and deceive the public.
Additionally, petrochemical magnates funded climate change denialist think tanks to disseminate false information. Also, corporate climate skeptic campaigns thrived on social media platforms, exploiting Twitter ad campaigns to spread misinformation rapidly.
These manipulative campaigns seek to undermine public trust in climate science, discourage action, and hinder meaningful progress in tackling climate change.
Generative AI technology, particularly deep learning models like Generative Adversarial Networks (GANs) and transformers, can produce highly realistic and plausible content, including text, images, audio, and videos. This advancement in AI technology has opened the door for the rapid dissemination of climate misinformation in various ways.
Generative AI can make up stories that aren't true about climate change. Although 5.18 billion people use social media today and are more aware of current world issues, they are 3% less likely to spot false tweets generated by AI than those written by humans.
Some of the ways generative AI can promote climate misinformation:
Generative AI tools that produce realistic synthetic content are becoming increasingly accessible through public APIs and open-source communities. This ease of access allows for the deliberate generation of false information, including text and photo-realistic fake images, contributing to the spread of climate misinformation.
Generative AI enables the creation of longer, authoritative-sounding articles, blog posts, and news stories, often replicating the style of reputable sources. This sophistication can deceive and mislead the audience, making it difficult to distinguish AI-generated misinformation from genuine content.
Large language models (LLMs) integrated into AI agents can engage in elaborate conversations with humans, employing persuasive arguments to influence public opinion. The personalized content generative AI produces is often undetectable by current bot-detection tools. Moreover, GAI bots can amplify disinformation efforts and enable small groups to appear larger online.
Hence, it is crucial to implement robust fact-checking mechanisms, media literacy programs, and close monitoring of digital platforms to combat the dissemination of AI-propagated climate misinformation effectively. Strengthening information integrity and critical thinking skills empowers individuals to navigate the digital landscape and make informed decisions amidst the rising tide of climate misinformation.
Though AI technology has facilitated the rapid spread of climate misinformation, it can also be part of the solution. AI-driven algorithms can identify patterns unique to AI-generated content, enabling early detection and intervention.
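As a deliberately simplified illustration of that idea, the sketch below trains a small text classifier to separate human-written from AI-generated examples and scores a new post. The handful of training sentences and labels are made up for the example; production detectors rely on far larger datasets and far stronger models, and remain imperfect.

```python
# A toy sketch of the detection idea above: learn patterns that separate
# human-written from AI-generated text, then flag new posts. The training
# examples and labels here are invented placeholders for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Local weather station reports record rainfall this spring.",    # human-written
    "Scientists confirm climate change is a hoax, sources say.",     # AI-generated (fake)
    "New study links warming oceans to coral bleaching events.",     # human-written
    "Experts agree global temperatures have not risen since 1998.",  # AI-generated (fake)
]
labels = [0, 1, 0, 1]   # 0 = human-written, 1 = AI-generated

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

new_post = "Sources say the climate has always changed and CO2 is harmless."
print("probability AI-generated:", detector.predict_proba([new_post])[0][1])
```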
However, we are still in the early stages of building robust AI detection systems. Hence, humans can take the following steps to minimize the risk of climate misinformation:
In the battle against AI-propagated climate misinformation, upholding ethical principles in AI development and responsible usage is paramount. By prioritizing transparency, fairness, and accountability, we can ensure that AI technologies serve the public good and contribute positively to our understanding of climate change.
To learn more about generative AI or AI-related content, visit unite.ai.
Microsoft Partners: This Is Your Copilot Speaking – CRN
Software News | Wade Tyler Millward | August 07, 2023, 10:00 AM EDT
Microsoft CEO Satya Nadella has called the spread of generative AI a collective mission for the company and its partners, a mission that in part will be fueled by Microsoft 365 Copilot. Here's how and where a number of partners are placing their bets.
Vineet Arora's belief in the promise of Microsoft Copilot and the vendor's other generative artificial intelligence technology runs so deep he's betting his job on it.
The CTO of WinWire Technologies -- a Santa Clara, Calif.-based 2023 Microsoft Partner of the Year finalist and No. 233 on CRN's 2023 Solution Provider 500 -- has transitioned his role to focus on technology from Microsoft and Microsoft-backed OpenAI that can quickly create content, analyses and summaries based on user prompts.
Arora heads a small team of WinWire employees -- data scientists, data architects and data engineers found through an internal hackathon earlier in the year -- focused on partnering with Microsoft to provide AI technology to customers. WinWire has more than 1,000 employees worldwide.
Since April, he's conducted a seven-city road show with Microsoft technologists to demonstrate what generative AI can do in health care and has done more than 24 envisioning sessions with customers around the technology. WinWire is already piloting generative AI solutions for healthcare and life sciences customers -- all of this before some of Microsoft's most impressive generative AI tools become generally available.
And he even publishes two to three articles a day to a Teams group chat called All Things OpenAI.
"The scale of the investment that we have planned is [incomparable] to anything else that we have done in the last 16 years," Arora told CRN. "My entire time is going into this. The opportunity size, the technology, the speed at which it is evolving requires 100 percent -- 110 percent -- dedication from our side. So the scale and the level of focus has been different from anything else that we've done in the past."
Tech vendors and solution providers alike are fueling a generative AI gold rush. And Redmond, Wash.-based Microsoft is certainly considered a leader in generative AI thanks to its reported $13 billion investment in OpenAI -- the startup that on Nov. 30 publicly launched the ChatGPT text-generating software -- and Microsoft's own March introduction of Microsoft 365 Copilot, which will bring generative AI to Teams, Outlook, Word, Excel and other popular productivity applications.
At the vendor's Inspire conference held online last month, Microsoft Chairman and CEO Satya Nadella called next-generation AI a partner ecosystem opportunity that could span $4 trillion to $6.5 trillion. The most forward-looking of its 400,000-member partner ecosystem have now put their best and brightest employees to work developing generative AI solutions.
Solution providers are key to scaling Microsofts generative AI for businesses, according to partners themselves as well as industry observers.
Dan Ives, managing director of equity research at Los Angeles-based financial services firm Wedbush Securities, told CRN that partners hold the key to spreading the gospel and ultimately getting pen on paper to deals involving generative AI.
"Getting partners bought in on the ChatGPT/AI strategy on cloud is integral to what I view as just a golden opportunity for Nadella and team over the coming years," Ives said. "Microsoft's partner distribution is unrivaled. In other words, I view that as a huge asset for them in this arms race that's going on."
Ives -- who forecasts an upcoming $800 billion AI spending wave over the next decade -- sees generative AI as more than just hype. In research notes for Wedbush, he estimates that AI will comprise up to 10 percent of overall IT budgets in 2024, compared with about 1 percent in 2023.
And he sees generative AI giving Microsoft and its ecosystem greater leadership in cloud computing. By most measures it sits in the No. 2 position behind Amazon Web Services and ahead of Google Cloud.
Ives expects Microsoft's cloud total addressable market to expand by up to 40 percent over the next decade thanks to generative AI.
"I would compare it to Nadella's vision when he took over as CEO and morphed Microsoft through Azure into a cloud behemoth," he said. "I would compare it to that in terms of this transformation."
At Inspire 2023, multiple Microsoft executives addressed the need for solution providers to implement Copilot and other generative AI tools.
Nadella called the spread of generative AI a collective mission for Microsoft and its partners.
"There's no question the opportunity is tremendous," Nadella said during his keynote address. "And our partner ecosystem, the uniqueness of what you all represent, really is what gets me most excited. It's a map of the world. We can reach every community in every country in every sector of the economy, both public and private. That's what's exciting. In the last six months, as I've been going to different parts of the world, talking to lots of you, what's unbelievable is the rate of diffusion of this technology. Thanks to all your capability, [that] is what's most exciting about this particular revolution and platform shift, and that's what really grounds us in our mission to empower every person and every organization on the planet to achieve more."
Nicole Dezen, Microsoft's chief partner officer and corporate vice president of the Global Partner Solutions group, told CRN after Inspire that she hopes solution providers see Microsoft's commitment to not only AI, but to partner success and enablement. "Microsoft only wins when partners and customers succeed," Dezen said. "It's core to our mission. And so we were very intentional in making so many big investments this year in all things partner -- everything from the program itself, all the way through to specializations and designations to really shore up training capability."
"Microsoft investments and incentives are meaningful dollars to help partners go deliver fast time-to-value for customers because that's where the truth is. That is the measure of success. When customers are realizing value, then we're all successful," she said. More than 9,400 customers use Azure OpenAI Service, making it the fastest-growing Azure service in the technology's history, according to Microsoft. And 9,100 partners transact on Azure AI.
More than 2,300 of those partners transact on Azure OpenAI, according to Microsoft. Partners have also activated thousands of customers on core Azure migration and modernization scenarios. Azure OpenAI Service, GitHub Copilot and Sales Copilot are examples of currently available Microsoft generative AI offerings. Other generative AI offerings such as the more broad Microsoft 365 Copilot do not have launch dates yet. However, Microsoft did announce at Inspire that the M365 Copilot will cost $30 per user, per month, for buyers of Microsoft 365 E3, E5, Business Standard and Business Premium licenses.
Power Automate Process Mining -- an AI-infused Microsoft offering for process improvement through automation and low-code apps -- is scheduled for general availability on Aug. 1. Copilot in SharePoint is expected to roll out in November.
And Microsoft already provides a wide variety of resources for channel partners with AI practices, such as training, education and an assortment of programs and initiatives. These include the Data and AI Industry Partner Activation Kit, which equips partners with reference architectures, application demonstrations and solution accelerators.
The cloud service provider also has Microsoft Azure specializations to validate partner abilities and an Azure Analytics and AI Accelerate Program to provide support to partners across all sales stages.
More than 200 partners already met the prerequisites for the Build and Modernize AI Apps with Microsoft Azure specialization at launch, according to the vendor.
Some of the biggest partner news out of Inspire included the rechristening of the Microsoft Cloud Partner Program -- unveiled in October -- as the Microsoft AI Cloud Partner Program, in a reflection of how important AI technology is for Microsoft and its partners' futures.
The vendor also tripled its investment in Azure Migrate and Modernize -- a renamed Azure Migration and Modernization Program -- which provides assessments, more partner incentives and support for additional workloads.
Separately, Microsoft said it has invested $100 million in a new Azure Innovate offering that aims to help partners and customers infuse AI into applications, experiences, advanced analytics and custom cloud-native app building.
And Microsoft unveiled new Era of AI marketing campaigns in a box and an AI transformation playbook with guidance on skilling, innovating, marketing and selling.
While generative AI is in its earliest days, some Microsoft solution providers are already delivering generative AI solutions. That work ranges from WinWire's AI journey maps to EY's payroll chatbot, and from PricewaterhouseCoopers saving employees 80 percent of work time to Core BTS launching an AI readiness assessment.
At WinWire, Arora works with customers -- health care, manufacturing and retail are some of its biggest verticals -- on AI journey maps, maps that Arora considers to be WinWire's intellectual property.
"You're not going to jump into building full-fledged production-ready solutions off the bat," he said. "There is a step-by-step approach that you need to take."
He asks customer CIOs and CTOs what their vision is and their familiarity with generative AI. Hell take about 30 minutes to demystify generative AI and then identifies one to three use cases for a proof of concept or pilot. WinWire then works with customers on budgeting for the work.
"We are able to supercharge the adoption of that technology into their areas," he said. "They are Microsoft's biggest customers, some of them. And as Microsoft starts bringing those technologies [to market], we are there to help them adopt them in the right manner. And that's a promise that I provide to them."
Tony Guidi, WinWire's senior vice president of alliances, told CRN that the solution provider's years of experience in modernizing customer data estates "allowed us to leapfrog a lot of other Microsoft partners."
"For years, we've been doing modern data, app innovation, all the things that this technology is going to help even further accelerate," Guidi said. "So we were in a unique position where we could take our 16 years of experience and very quickly pivot to focus in this space and build out the significant team with some of our best people to just begin to develop use cases by industry, [custom] Copilots, proofs of concept, accelerator approaches in order for customers to make this technology real."
Indianapolis-based Core BTS, a Microsoft partner and No. 168 on CRN's 2023 Solution Provider 500, has a responsible AI readiness assessment available through the Azure Marketplace for customers to identify high-value, low-risk opportunities to use generative AI on business functions within the organization.
Finding a document within a business group, generating relevant data content from the document and using generative AI to model the data for company stakeholders to access is one example, Perry Thompson, managing director of technology strategy at Core BTS, told CRN.
The solution provider's 400-plus Microsoft-certified professionals are also working on AI at the networking layer to help find anomalies and behavior patterns that run afoul of policies.
Core BTS employees have looked at customers' internal data, the value that data provides business functions, and the data's classification and taxonomy. The solution provider is also working on guardrails to protect sensitive information -- keeping healthcare customers compliant with federal privacy laws, for example, Thompson said.
"We're starting right off the bat because we recognize that this is something that is really exploding in the market," he said. "But when you start to get into the actual processing of information and trying to give controls and automation, that's the piece that you have to be very sensitive about. And you have to work with your leaders in your organization to set that vision first and then be able to define strategic goals that actually align to their business and their values."
Customers seeking generative AI are really seeking more connections across business units to share data quickly, make decisions faster and get closer to end customers, he said.
"This is why we have to take a step back and be able to try to set that strategy, identify those key areas, to define some mitigation aspects of the actual highlighted area -- if it exists -- go into the pilot mode for them to be able to look at it at a small scale. And then we turn back around and plan for the scaling up of the entire solution and make sure that we have a continuous feedback process that actually continuously improves," he said.
Global systems integrator and Microsoft solution provider EY -- whose accolades include winning multiple 2023 Partner of the Year awards -- is in the piloting phase for an Intelligent Payroll Chatbot for a handful of customers, not to mention conducting proofs of concept in other areas of generative AI for EY and with clients.
Jim Little, a partner principal and global Microsoft alliance lead and Americas technology strategy lead at EY, told CRN that the company is piloting the chatbot with several customers before going industrial scale with the chatbot.
For now, the chatbot knows 27 languages and takes about 15 seconds to answer payroll questions compared with the baseline of 2.2 days for client contact centers.
The accuracy rate for proofs of concept and client pilots has increased to 94 percent from a baseline of 67 percent for human agents. The chatbot also nearly doubled first-resolution percentages for questions from human agents -- from 47 percent to 80 percent.
EY will continue to train the chatbot through prompt engineering and cleansing the data that powers it to improve accuracy.
"It's been a tremendous success," Little said. "We saw a real opportunity to disrupt. And we've done it. It's going to give us a great market advantage and learning that we can continue to leverage."
Like WinWire, PwC -- a global systems integrator, Microsoft solution provider and 2023 Microsoft Partner of The Year -- has joined Microsoft on the event trail, educating hundreds of financial services, health-care, energy and utilities customers on generative AI.
"We're involved in client delivery work across almost every sector, and certainly proposals in every single sector," Bret Greenstein, PwC U.S. data and analytics partner, told CRN. "[There is] just a huge volume of discussions to help people think this through. It's one of the fastest sales cycles I've ever seen, but at the same time it's highly transformative. So the work is actually really hard. But [customers get interested very quickly] because it's not as if anyone has to be convinced that generative AI matters or is going to be disruptive to business."
Some of the early generative AI use cases include offloading work from contact center agents: having AI collect the customer's problem, answer the customer if the problem is lower level, and automate service ticket processing. PwC has seen reductions in work for employees of up to 80 percent.
"These are often workflows where there are inbound communications coming in," he said. "And previously, there was not a good way to vet inbound service tickets, service requests, insurance claims, requests -- whatever comes in always comes in as email, text or some workflow. And it was so hard to automate that because you can't do that with just a bunch of clicks or code. But generative AI does a nice job of reading, so to speak, incoming unstructured text and then characterizing high, middle, low risk and then even resolving some of the issues automatically."
"So we had some processes where we're just seeing an 80 percent drop in what goes to the actual person, which allows the person to focus on the harder cases, and the easy ones or the incomplete requests get handled automatically. Even the communication back to people with natural language is a really cool process that speeds up the experience, reduces the labor, allows people to scale."
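A hypothetical sketch of that triage pattern is below: an LLM is asked to label an incoming request as high, medium or low risk, low-risk items are auto-resolved, and the rest are routed to a person. The prompt wording and the call_llm placeholder are assumptions made for illustration, not PwC's actual implementation.

```python
# A hypothetical sketch of the triage workflow described above: ask an LLM to
# classify an incoming request as HIGH / MEDIUM / LOW risk, auto-handle the
# low-risk ones, and route the rest to a person. call_llm() is a placeholder
# for whatever model API is actually used; the prompt wording is illustrative.

TRIAGE_PROMPT = (
    "Classify the following customer request as HIGH, MEDIUM, or LOW risk, "
    "and reply with only that single word.\n\nRequest:\n{request}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a hosted generative AI service)."""
    raise NotImplementedError

def triage(request_text: str) -> str:
    risk = call_llm(TRIAGE_PROMPT.format(request=request_text)).strip().upper()
    if risk == "LOW":
        return "auto-resolved"            # e.g., send a templated natural-language reply
    return f"routed to human agent ({risk} risk)"
```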
In mergers and acquisitions, generative AI has helped leverage and analyze unstructured data used in complex due diligence, Greenstein said. In sales, it has helped internal sellers support customer needs and questions. In software development, generative AI has helped with creation, testing, evaluation, documentation and code optimization -- improving efficiency by as much as 50 percent. In human resources, generative AI has helped with job postings, performance management and other tasks. And in IT, it has saved workers from answering low-level questions.
"It also helps end users who actually just don't know what they want, but they know how to say it," he said. "And so they don't know how to run SQL, but they do know how to ask for things."
When introducing generative AI to a customer, PwC looks for areas of constraint, he said.
"There's a financial services company I've been talking to, and they're just excited because there are certain parts of their business that they're about to grow drastically as interest rates change," he said. "And they want to scale up and go faster without hiring a million people. And they couldn't find them if they wanted to. So this ability for a midsize company to amplify their business or to scale their workforce is pretty powerful."
The key for customers is that generative AI won't solve everything, he said. "If you just let everyone build a chatbot, you're just going to get a lot of noise and not a lot of high quality," he said. "But if you create a solution that trains people how to build high-quality customer service, you can adapt it to HR or to other things."
A generative AI solution brings in just about every technology discipline at PwC, he said. These include the cloud, infrastructure, security, applications, data, business functional operations such as finance and IT, and then sector-based use cases as well.
"This is the greatest unifier for us because we all have to come together for clients to help them," he said.
Mike Wilson, CTO and partner at Mason, Ohio-based Microsoft partner Interlink Cloud Advisors -- a member of CRN's 2023 Managed Service Provider 500 -- told CRN that his company has used the hype around Microsoft 365 Copilot to educate customers on Microsoft's other generative AI wares.
Azure OpenAI, Azure Cognitive Services, Syntex and Power Platform's form recognition capabilities are some of those wares. Wilson and his customers have even talked about how individual job roles may change with generative AI.
"There are a lot of other AI technologies that exist," he said. "So there's a possibility here when we're looking at readiness around things like Copilot to also start looking across their business. We can go through and do that business consulting, look role by role in your organization, identify AI use cases."
An eventual conversation with customers is how MSPs that adopt generative AI may even need to change billing because of the increase to productivity, he said.
"Do I bill an hour? Do I bill five? Do I bill one hour at five times the rate? We're going to have to navigate through that stuff -- both for me as a professional services firm and I think for a lot of organizations," he said.
"If all of the sudden your employees are more productive, do they work less? Do they work the same? Do they demand more money? How much does the organization give credit for providing the AI tools versus what the employee gets for being more productive and using them? We're going to have to navigate those conversations because there is going to be a lot of change as people's job roles adapt. I don't see it replacing a lot of jobs, but I do think it ultimately changes many of them."
Phillip Walker, CEO of Manhattan Beach, Calif.-based Microsoft partner Network Solutions Provider -- a member of CRN's 2023 MSP 500 -- said that the generative AI fervor has had a ripple effect on customers seeking Microsoft Power BI AI capabilities.
His company has worked with customers on data lake cleanup to prepare for generative AI.
"The golden nugget is going to be Power BI," Walker said. "It's going to be analyzing data. It's going to be real-time information. It's going to be recommendations. Because before, you had to have SAP HANA installed. You had to have NetSuite installed. You had to have some enterprise Oracle ERP to get that type of insight for your business. And you had to have a team of analysts from Harvard sitting in a room looking at the data to make sense of it. Those days are now going away."
Gordon McKenna, global vice president of alliances at Downers Grove, Ill.-based Microsoft partner Ensono -- No. 103 on CRN's 2023 Solution Provider 500 -- told CRN that an example of his company's early generative AI work has been demonstrating an OpenAI application that takes in a vast number of anonymized Statements of Work to create recommendations.
"You can literally query the tool and say, 'I've got this type of customer, and they've got this type of requirement. What do I need to put in the SoW?'" McKenna said. "And it literally scours through those Statements of Work and comes up with a response."
A member of the Microsoft Partner Advisory Council, McKenna said that Microsoft's investment in AI is for the long haul and that it has prompted AWS, Google Cloud and other rivals to publicly announce how they are tackling this technology.
"I would say the pivot that Microsoft made around AGI [artificial general intelligence] is a sharper pivot than they made going all in on the cloud," McKenna said. "And it's absolutely the right pivot to make. They bet the house on it. I applaud Microsoft. It's the right move. It is very exciting."
Microsoft taking generative AI seriously is why the vendor's partners need to as well, he said. Interested partners will probably see opportunities for on-premises work, private offerings with more security, and hybrid cloud.
"Any partners that aren't leaning into this now are going to be dead, same way as cloud," McKenna said. "But I think it's a quicker pivot. The time is now. In a year's time, two years' time, the market will be saturated. Not only by partners that have transformed, but new partners coming into the market. There are a lot of startups that are like, 'I want a piece of this,' and are leaning into the market and are going to become very quick contenders for the crown."
Although some enterprising solution providers have already plunged into generative AI, all the partners who spoke with CRN had suggestions for how Microsoft can make Copilots ready for the channel once they become generally available.
WinWire's Arora said he knows Microsoft understands that even businesses won over by generative AI may not have budgeted for its adoption. AI-specific incentive programs for solution providers go a long way, he said.
When asked about bringing generative AI to capital-conscious customers, Microsoft's Dezen said that solving for particular business problems and communicating that this isn't technology for technology's sake is key.
"This isn't a science project," she said. "This is about how you help customers solve their real business problems, and then you can get to the crux of what do you need to do and what are the resources, investments you need in order to solve this problem."
The productivity benefits -- and employee morale boost from no longer working on mundane automated tasks in favor of more complex problem-solving -- can help offset the price of adopting generative AI, she said.
"Productivity is a pretty obvious place to get a direct cost savings," Dezen said. "This is where you realize savings through time savings. I love the way that we talk about Microsoft 365 Copilot and how you can eliminate those mundane tasks. With the elimination of those mundane tasks is the elimination of hours of work. There's real cost savings there."
WinWire has preview versions of certain technology from the vendor, Arora said. More previews will help WinWire and other solution providers evangelize the technology to customers concerned about content safety and accuracy.
WinWire's Guidi said that Microsoft-funded assessment programs and ways to help customers with building costs will help move the technology forward.
When asked about incentives, more specializations, more training and more resources for partners, Dezen said this is only the beginning.
"There's a lot more in the road map," she said. "We're not slowing down at all."
PwC's Greenstein said he looks forward to more ways to scale vector databases, provide access control and synchronize data -- not to mention integration to prevent employees from having to work with dozens of different AI systems.
"Those are going to be critical as this becomes an enterprise capability," he said.
In a June blog post, Microsoft suggested that people interested in staying updated on Copilot developments use Message center in the Microsoft 365 admin center and the online Microsoft 365 road map.
Interlink's Wilson said Microsoft needs to give partners concrete release dates for when Copilots and other generative AI offerings become generally available. The vendor also needs to release more information on how much generative AI will cost customers and better address hallucinations -- the term for when AI models produce false information.
Although the hallucination issue is improving and solution providers have techniques they can use when interacting with models to minimize the concern, sometimes "the model just has to be able to say, 'I don't know,' versus making stuff up," Wilson said.
Providing solution providers with solid training on leveraging generative AI and interpreting output will help with user adoption, he said.
EY's Little said solution providers can put guardrails in place for customers and investigate the causes of hallucinations. Sometimes, a hallucination reveals an issue with the customer's data used to power the AI, Little said.
"We were demoing [the payroll chatbot] to a client," he said. "And the client just asked a question, came back, and the client said, 'Well, that's wrong.' We were like, 'OK, well, let's click.'"
"So we clicked to the data source. And actually, what we found was the data had wrong information. So actually, we found a secondary effect in circumstances where it ends up helping us from a data cleansing perspective and continuously making it better."
Microsoft making governance tools and prompt engineering tool sets more available will help customers, Little said.
Core BTS' Thompson said Microsoft has done a good job of talking about responsible AI broadly, but he wants to hear more about industry-specific responsibility and how to factor for customer interpretations.
The Role of Artificial Intelligence in the Future of Media – Fagen wasanni
There has been some confusion and concern among people about the role of artificial intelligence (AI) in our lives. However, AI is simply a technology that can perform tasks requiring human intelligence. It learns from data and improves its performance over time. AI has the potential to drive nearly 45% of the economy by 2023.
AI can be categorized into three types: Narrow AI, General AI, and Super AI. Narrow AI is designed for specific tasks, while General AI can perform any intellectual task that a human can do, although it doesn't exist yet. Super AI is purely theoretical and surpasses human intelligence in every aspect.
For media companies, AI applications like content personalization, automated content generation, sentiment analysis, and audience targeting can greatly benefit content delivery and audience engagement. AI can analyze customer data for targeted marketing campaigns, create personalized content, predict customer behavior, analyze visual content, and assist in social media management.
Companies can transition to AI by identifying pain points, collecting and preparing relevant data, starting with narrow applications, collaborating with AI experts, and forming a task force to integrate AI across the organization. AI can automate repetitive tasks, enhance decision-making, and free up human resources for more strategic work.
However, it is important for brands to maintain authenticity and embrace diversity while using AI for marketing. AI algorithms are only as unbiased as the data they are trained on, so brands should use diverse data and establish ethical guidelines to mitigate biases. Human creativity and understanding are irreplaceable, and brands should emphasize the importance of human-AI collaboration.
Overall, AI has the potential to revolutionize the media industry by improving customer experiences, optimizing operations, and delivering relevant content. It is crucial for companies to understand and leverage the power of AI to stay competitive in the evolving digital landscape.
Fathom Consulting: AI and Robotics are Poised to Transform the Future of Work Maybe Positively – CardRates.com
In a Nutshell: Artificial intelligence and robotics have analysts spooked about the future of work because AI seems to threaten the kind of non-routine, creative jobs people have always invented when technology replaces manual labor. But AI's prospects aren't a mere question of supply and demand: global geopolitical and regulatory factors complicate the picture. At UK economic intelligence provider Fathom Consulting, Deputy Chief Economist Andrew Harris views AI with caution and as an opportunity. With proper planning, he argues, the global economy may be on the cusp of a significant, positive reduction in work.
If you're into economics, you may remember the story about the preeminent British economist John Maynard Keynes, who famously predicted in 1930 that people would work 15 hours a week within a century.
In Keynes's vision, technology and innovation would increase productivity by such a measure that prudent planning and social consensus would result in a drastically reduced need for work.
Any way you look at it, it's a safe bet that the global workforce won't achieve Keynes's 15-hour goal by 2030, far from it. The reality, at least so far in modern economic history, is that workers have only reduced their hours a little and are perhaps more keen to compete and get ahead than Keynes predicted.
But Keynes may have been onto something all along, according to Andrew Harris, Deputy Chief Economist at the UK economics intelligence provider Fathom Consulting. He just didn't get the timeline right.
A group of Bank of England economists broke away in 2003 to form Fathom at a time of extensive siloing in the bank's thinking about the relationship between people and money. Economists and finance professionals rarely communicated; when they did, they spoke a different language.
Fathom aims for a more holistic, consultative view of the complex interactions of macroeconomics with financial markets and geopolitics. A global client base turns to the Fathom team for expertise in pensions, property, finance, politics, banking, economic modeling, and climate economics.
That comes in handy when looking at the future of work. Harris said AI's unprecedented reach into highly productive, non-routine jobs may finally prove that Keynes's theory of productivity works in practice.
"In many studies on the backward-bending labor supply curve in economics, there comes a point where if you offer people more money, they want to work fewer hours," Harris said. "Keynes may have gotten it wrong by a few decades, but that's the route we seem to be taking."
Whether you regard that with dread or anticipation depends on your point of view and your sense of optimism. It's important to recognize that an ongoing downward trend already exists.
In the US, for example, momentum toward a four-day, 32-hour workweek continues in 2023. In Keynes's academic heyday during the Great Depression, that would have been a pipe dream.
Meanwhile, economists generally regard the arrival of artificial intelligence, machine learning, and robotics as the modern world's fourth industrial revolution. Crucial differences in the character of the transformation are essential to assessing AI as a genuine harbinger of fundamental change.
The traditional view in economics has always been that technology doesn't replace aggregate jobs in the long run, because it never has: more people are employed now than ever.
But many job categories have indeed disappeared altogether. Harris's favorite example is the job of elevator operator, briefly essential until it wasn't.
"We all know how to use elevators now," Harris said.
AI's essential difference is its reach into non-routine work, both manual and cognitive. In the first industrial revolution in 18th-century England and the second in 19th-century America, technology never rose to the height where it could replace the creative improvisation of humans facing new challenges.
Even during the third revolutionary cycle, the information and internet revolution of the 1970s and beyond, innovation has never reached beyond routine cognitive tasks such as scanning data and making simple calculations.
Only AI promises to mimic human decision-making in novel circumstances. Melded to ever-more agile and responsive robotic technology, AI may finally put non-human capital in a position to gain the upper hand over all forms of work.
"AI is exciting, interesting, and terrifying all at the same time because maybe it can compete for those jobs," Harris said. "We can't say for certain, but we can't rule it out either."
The terrifying problem is that inventing new forms of non-routine cognitive work has always acted as an incentive for creative destruction and an escape hatch for obsolescence.
To be sure, individual workers experienced displacement, but the labor market gradually evolved as society supplied workers trained for the available jobs. AI potentially stops that.
"The big question is whether we can get to a level of representation of generalized human cognitive abilities known as artificial general intelligence, or AGI," Harris said. "We're not there yet, but that could change."
Fathom reaches clients in three ways. Through its Global Outlook, it contributes intelligent, independent, data-driven quarterly assessments and in-person presentations to global subscribers. Fathom also provides consultancy services to clients needing the broadest perspective for making mission-critical decisions. Underlying both are extensive data, modeling, and tools to organize disparate inputs into a coherent whole.
It's an ideal foundation for judging whether AGI is even possible. For Harris, the question is still open. But there's no doubt AI has made phenomenal progress with the rise of ChatGPT and other generative technologies.
"Achieving true AGI is very hard to predict or even think about," he said. "But the diffusion process (i.e., the time for an innovation to go from invention to widespread adoption) is getting much quicker. You can see this by looking at the pace of adoption of autos, planes, and phones."
ChatGPT is a recent example of that phenomenon in overdrive, reaching an estimated 100 million monthly active users in two months. The point is that when AGI, or something like it, arrives, workers may suddenly no longer have a place to go.
"When we get to the point where AI has a real impact on the economy, it could happen much quicker than people think," Harris said. "ChatGPT is relatively new to the market, but everyone's thinking about how they can use it."
This is where Fathom's synthesis of macroeconomics, financial markets, and geopolitics comes to the fore. Harris can imagine a world in the far future where there aren't any jobs, period. Any world in which technology replaces specific jobs makes certain people effectively unemployable.
"What that potentially means is an incredibly unequal society," Harris said. "People always expect that technology will tip the balance in favor of the owners and away from the rest of the economy."
The US-China rivalry, a special focus for Fathom, adds intense competitive pressure to an already highly dynamic mix. Throughout modern history, technological innovation has increased the size of the economy. The obvious question for Harris is how governments and regulators worldwide deal with the fallout when that's no longer the case.
In Fathom's and Harris's view, the big reveal is that data consistently shows support for the backward-bending curve of labor economics. There's a point in the productivity/prosperity matrix at which workers tend to prefer less work over more money.
If AI brings parts of the global economy to that point, we'll all watch a new form of labor displacement in which social preferences begin to match theory and demand more intentional strategies for distributing work and leisure.
As some US jobs follow the EU's lead toward a 32-hour week, we may be thinking of the problem in reverse: productivity and wealth may bring us to the point where workers will demand less work, not the other way around.
For all his accomplishments, Keynes takes a lot of hits over his prediction. But things are trending in his direction. Harris is on board.
"I think it's a good thing if we can manage to earn more and work less," he said. "It comes with a trade-off between leisure and spending, but if we can position ourselves where we have a lot of leisure and a lot of money to spend, that seems pretty great to me."
10 Best Books on Artificial Intelligence | TheReviewGeek … – TheReviewGeek
So, you want to dive deeper into the world of artificial intelligence? As AI continues to transform our lives in so many ways, gaining a better understanding of its concepts and capabilities is crucial. The field of AI is vast, but some books have become classics that every curious reader should explore. Weve compiled a list of 10 groundbreaking books on artificial intelligence that will boost your knowledge and feed your fascination with this fast-growing technology.
From philosophical perspectives on superintelligence to practical applications of machine learning, these books cover the past, present, and future of AI in an accessible yet compelling way. Whether you're a beginner looking to learn the basics or an expert wanting to expand your mind, you'll find something inspiring and thought-provoking in this list. So grab a cup of coffee, settle into your favourite reading spot, and let's dive in. The future is here, and these books will help prepare you for what's to come.
Nick Bostrom's Superintelligence is a must-read if you want to understand the existential risks posed by advanced AI.
This thought-provoking book argues that once machines reach and exceed human-level intelligence, an intelligence explosion could occur. Superintelligent machines would quickly become vastly smarter than humans and potentially uncontrollable.
Max Tegmark's thought-provoking book Life 3.0 explores how AI may change our future. He proposes that artificial general intelligence could usher in a new stage of life on Earth.
As AI systems become smarter and smarter, they may eventually far surpass human intelligence. Tegmark calls this hypothetical point the singularity. After the singularity, AI could design even smarter AI, kicking off a rapid spiral of self-improvement and potentially leading to artificial superintelligence.
The Master Algorithm by Pedro Domingos explores the quest for a single algorithm capable of learning and performing any task, also known as the master algorithm. The book examines the five major schools of machine learning (symbolists, connectionists, evolutionaries, Bayesians, and analogizers), exploring their strengths and weaknesses.
Domingos argues that for AI to achieve human-level intelligence, these approaches must be combined into a single master algorithm. He likens machine learning to alchemy, with researchers combining algorithms like base metals to produce gold in the form of human-level AI. The book is an insightful overview of machine learning and its possibilities. While the concepts can be complex, Domingos explains them in an engaging, accessible way using colourful examples and analogies.
In his book The Future of the Mind, theoretical physicist Michio Kaku explores how the human brain might be enhanced through artificial intelligence and biotechnology.
Kaku envisions a future where telepathy becomes possible through electronic implants, allowing people to exchange thoughts and emotions. He also foresees the eventual mapping and understanding of the human brain, which could enable the transfer of memories and even consciousness into new bodies.
In his 2012 New York Times bestseller How to Create a Mind, futurist Ray Kurzweil makes the case that the human brain works like a computer. He argues that recreating human consciousness is possible by reverse engineering the algorithms of the brain.
Kurzweil believes that artificial general intelligence will soon match and eventually far surpass human intelligence. He predicts that by the 2030s, we will have nanobots in our brains that connect us to synthetic neocortices in the cloud, allowing us to instantly access information and expand our cognitive abilities.
Martin Ford's Rise of the Robots is a sobering look at how AI and automation are transforming our economy and job market. Ford argues that AI and robotics will significantly disrupt labour markets as many jobs are at risk of automation.
As AI systems get smarter and robots become more advanced, many human jobs will be replaced. Ford warns that this could lead to unemployment on a massive scale and greater inequality. Many middle-income jobs like cashiers, factory workers, and drivers are at high risk of being automated in the coming decades. While new jobs will be created, they may not offset the jobs lost.
In Homo Deus, Yuval Noah Harari explores how emerging technologies like artificial intelligence and biotechnology will shape the future of humanity.
Harari argues that humanity's belief in humanism, the idea that humans are the centre of the world, will come to an end in the 21st century. As AI and biotech advance, humans will no longer be the most intelligent or capable beings on the planet. Machines and engineered biological life forms will surpass human abilities.
Kai-Fu Lee's 2018 book AI Superpowers provides insightful perspectives on the rise of artificial intelligence in China and the United States. Lee argues that while the US currently leads in AI research, China will dominate in the application of AI technology.
As the former president of Google China, Lee has a unique viewpoint on AI ambitions and progress in both countries. He believes China's large population, strong technology sector, and government support for AI will give it an edge. In China, AI is a national priority and a core part of the government's long-term strategic planning. There is no shortage of data, given China's nearly 1 billion internet users. And top tech companies like Baidu, Alibaba, and Tencent are investing heavily in AI.
This classic book by Stuart Russell and Peter Norvig established itself as the leading textbook on AI. Now in its fourth edition, Artificial Intelligence: A Modern Approach provides a comprehensive introduction to the field of AI.
The book covers the full spectrum of AI topics, including machine learning, reasoning, planning, problem-solving, perception, and robotics. Each chapter has been thoroughly updated to reflect the latest advances and technologies in AI. New material includes expanded coverage of machine learning, planning, reasoning about uncertainty, perception, and statistical natural language processing.
This book provides an accessible introduction to the mathematics of deep learning. It begins with the basics of linear algebra and calculus to build up concepts and intuition before diving into the details of deep neural networks.
The first few chapters cover vectors, matrices, derivatives, gradients, and optimization: essential math tools for understanding neural networks. You'll learn how to calculate derivatives, apply gradient descent, and understand backpropagation. These fundamentals provide context for how neural networks actually work under the hood.
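For readers wondering what "applying gradient descent" actually looks like, here is a minimal, self-contained Python sketch, not drawn from the book itself, that fits a straight line to a handful of made-up points by repeatedly stepping against the gradient of a squared-error loss; the data, learning rate, and step count are arbitrary illustrations.

```python
# Minimal gradient descent sketch: fit y = w*x + b to a few toy points by
# repeatedly stepping against the gradient of the mean squared error.
# All values here are illustrative and not taken from any book on this list.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) pairs, roughly y = 2x + 1

w, b = 0.0, 0.0        # parameters we want to learn
learning_rate = 0.01   # step size for each update
steps = 2000           # number of gradient steps

for _ in range(steps):
    # Partial derivatives of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Move each parameter a small step in the direction that lowers the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"fitted line: y = {w:.2f}x + {b:.2f}")  # approaches roughly y = 1.94x + 1.15
```

Backpropagation is the same idea applied layer by layer through a network: compute how the error changes with respect to each weight, then nudge every weight downhill.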
There we have it, our list of the 10 best books on AI. What do you think about our picks? Let us know your thoughts in the comments below.
Read this article:
10 Best Books on Artificial Intelligence | TheReviewGeek ... - TheReviewGeek
What Does Generative AI Mean for the Justice System? (Part 2) – Government Technology
Courts need to consider not only their own use of generative AI, but also potential use by lawyers and other parties submitting evidence.
Lawyers may use the technology for help with research or drafting documents, for example, and over-reliance can be risky because generative AI is known to sometimes make up false information. Some companies in the legal space, however, are betting that the issue lies mainly with general-purpose AI tools: they've been announcing specialized models trained on legal texts in an effort to reduce fabrications.
Judges also should be alert to other kinds of risks that could emerge from the technology, such as highly convincing AI-created photos, audio or video that could be entered as evidence. At present, these deepfakes may be difficult to detect, although several AI companies have made voluntary promises to develop a system for distinguishing AI-generated media.
This is, perhaps, an unsurprising outcome. Today's general-purpose generative AI tools, including ChatGPT, are designed to write well-structured sentences, not to produce accurate information, said Chris Shenefiel, a cyber law researcher at the Center for Legal and Court Technology at William & Mary Law School.
"It's designed to predict, given a topic or sentence, what words or phrases should come next," Shenefiel said. "... It can fall down because it doesn't validate the truth of what it says, just the likelihood of what's to come next."
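Shenefiel's point about prediction can be made concrete with a deliberately tiny sketch. The Python snippet below, an illustration of the general idea rather than of how any production LLM is built, counts which word follows which in a small scrap of text and then returns the most frequent continuation; real models use neural networks over enormous corpora, but the objective is likewise likelihood, not truth.

```python
from collections import Counter, defaultdict

# A deliberately tiny "corpus"; real language models are trained on vast amounts of text.
corpus = (
    "the court issued an order the court reviewed the brief "
    "the lawyer filed the brief the lawyer cited the order"
).split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the corpus; truth never enters into it."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))      # "court" for this sample (ties broken by first appearance)
print(predict_next("lawyer"))   # "filed" for this sample
```

Nothing in that loop checks whether the continuation is accurate, which is one way to see why fabricated citations can read so fluently.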
Retired D.C. Superior Court Judge Herbert B. Dixon recently detailed his own experiences playing with ChatGPT and discovering that it listed inaccurate citations. Dixon tried to determine whether one was completely invented or only misattributed before finally giving up: "I spent more time trying to track down the source of that quote than writing this article," he wrote.
Dixon concluded, "Users must exercise the same caution with chatbot responses as when doing Internet research, seeking recommendations on social media, or reading a breaking news post from some unfamiliar person or news outlet. Don't trust; verify before you pass along the output."
Some courts have already implemented rules around use of generative AI.
One Texas judge issued a directive requiring attorneys to attest either that they'd validated AI-generated content through traditional methods or that they'd avoided using the tools altogether.
"These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them," Judge Brantley Starr wrote. "These platforms in their current states are prone to hallucinations and bias. ... While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath."
Scott Schlegel, a Louisiana District Court judge, said he understands why some judges would want policies mandating disclosure of generative AI use, but personally sees this as unnecessary. He noted that courts already require attorneys to swear to the accuracy of the information they provide, under Federal Rules of Civil Procedure Rule 11 or similar policies.
Lawyers also need to be careful about entering sensitive client information into generative AI tools, because the tools may not be designed to keep those details private, Schlegel said.
Still, Schlegel believes ChatGPT can help seasoned attorneys in particular. Such attorneys have developed a sharp ability to review documents for errors. For them, he said, generative AI is essentially a much more sophisticated cut-and-paste.
But new lawyers may suffer from using it, Schlegel said. They haven't yet developed the experience to catch potential issues, and relying on the tool could get in the way of their ever learning the nuances of the law.
General-purpose generative AI is trained on material from Twitter, Reddit and other sources that may not lend themselves to accurate legal answers. Specialized generative AI trained on legal texts, however, could do better, Shenefiel said, speaking generally and not pointing to any specific AI.
With this in mind, some companies are striving to create AI tools expressly for the legal sector.
These include LexisNexis' Lexis+ AI, the startup Harvey's namesake tool, and Casetext's CoCounsel, all of which debuted this year. The tools are designed to summarize legal documents and search for legal information, and they are trained to draw on databases of legal information.
Harvey, for example, is based on GPT-4 but limited to drawing from a specified data set rather than the open Internet, per Politico. Such measures aim to reduce mistakes. Still, the need for care remains.
David Wakeling was leading law firm Allen & Overy's rollout of Harvey when he spoke to Politico. He said A&O "operates on the assumption that it [Harvey] hallucinates and has errors," and compared the tool to "a very confident, extremely articulate 12-year-old who doesn't know what it doesn't know."
Generative AI could also affect courtroom evidence. The technology currently can create images and audio difficult to distinguish from the real thing, and in the future, the same will likely become true for video, Shenefiel said.
This falsified media could then be presented as evidence, with courts struggling to detect the deception.
"I can imagine an allegation of threatening phone calls with a cloned voice," Schlegel said. "I can imagine a personal injury case where somebody deepfakes a video."
Texas' Generative AI: Overview for the Courts also raises the concern that the tools could be used to produce false but convincing judicial opinions, orders or decrees.
Shenefiel said people should be required to disclose whether they've used generative AI in items submitted as evidence, but noted there are currently very few ways to detect whether evidence was altered or fully created with such tools.
One potential mitigation could be to attach digital signatures or watermarks to content created by AI. Recently, seven AI companies pledged to develop mechanisms for indicating when audio or visuals were created by AI, per a White House announcement.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI made these voluntary commitments, and it remains to be seen if they follow through. Digital watermarking would also need to be ubiquitous to be fully effective.
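The companies have not settled on a common mechanism, but the underlying idea of provenance marking can be illustrated with a simplified sketch: a generator signs its output at creation time so that anyone holding the verification secret can later confirm the content came from that generator and has not been altered. The Python example below uses the standard hmac library and a made-up key; real schemes, whether cryptographic metadata or statistical watermarks embedded in the output itself, are considerably more elaborate.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key-not-for-real-use"  # hypothetical key, for illustration only

def sign_content(content: bytes) -> str:
    """Produce a hex tag binding the content to the generator's secret key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Return True only if the content exactly matches what was signed."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

generated = b"AI-generated audio clip (stand-in bytes)"
tag = sign_content(generated)

print(verify_content(generated, tag))               # True: untouched content
print(verify_content(generated + b" edited", tag))  # False: any alteration breaks the check
```

A symmetric HMAC is used here only for brevity; a deployed scheme would rely on public-key signatures so that verifiers never hold the signing secret, and, as noted above, it would only help courts if applied ubiquitously.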
This is the second of a two-part series. Click here to read Part One.
Continued here:
What Does Generative AI Mean for the Justice System? (Part 2) - Government Technology
Upwork and OpenAI Partner to Connect Businesses with OpenAI … – GlobeNewswire
SAN FRANCISCO, July 31, 2023 (GLOBE NEWSWIRE) -- Upwork (NASDAQ: UPWK), the world's work marketplace, and OpenAI, a leading AI research and deployment company, today announced OpenAI Experts on Upwork, giving OpenAI customers and other businesses direct access to trusted, expert independent professionals deeply experienced in working with OpenAI technologies. Upwork and OpenAI co-designed the program to feature talent adept at working with the OpenAI API platform and to draw from the 250 unique AI skills available on Upwork, including GPT-4, Whisper and AI model integration.
Through Upwork's marketplace, OpenAI is already leveraging talent from the Upwork platform to support its own innovation and growth, and it quickly saw value in helping its customers connect to talent on Upwork, leading to the new partnership. Together, the companies identified the most common use cases for OpenAI customers, such as building applications powered by large language models (LLMs), fine-tuning models, and developing chatbots with responsible AI in mind, along with the key skills required for success. The two companies also formulated a pre-vetting process for identifying the AI experts who appear as part of the program.
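For a concrete sense of what those use cases involve at the code level, the snippet below is a minimal chatbot-style call using the openai Python package's chat-completions interface as it looked around the time of this announcement in mid-2023; the model name, prompts, and environment-variable key handling are placeholders, and this is a general sketch rather than anything specified by the program.

```python
# Minimal sketch of a chat-style call with the openai Python package (0.x-era
# interface, current as of mid-2023). Model, prompts and key handling are
# placeholders; a production chatbot would add conversation history, error
# handling, rate-limit retries, and content safeguards.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a small business."},
        {"role": "user", "content": "Draft a short welcome message for new customers."},
    ],
    temperature=0.7,
)

print(response.choices[0].message["content"])
```

Building a full application, fine-tuning a model, or hardening a chatbot for responsible use layers considerable work on top of a call like this, which is where the vetted specialists the program describes come in.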
OpenAI Experts on Upwork is an extension of Upwork's AI Services hub, which connects companies with some of the most skilled independent professionals in AI fields from across the globe, along with new beta features and resources that help customers get work done faster and more effectively on Upwork. The program leverages Upwork's talent managers to pre-vet and curate talent with AI expertise and experience with the OpenAI platform. The process includes a discussion of technical skills and an OpenAI project portfolio, ensuring professionals have verified skill sets and experience. Clients can engage with OpenAI Experts on Upwork via 1:1 consultations or project-based contracts.
"Partnering with a pioneer like OpenAI helps us deliver access to the specialized talent that businesses need to achieve their most ambitious AI initiatives," said Dave Bottoms, general manager and VP of product for the Upwork Marketplace. "We are thrilled to offer talented professionals on Upwork even more impactful opportunities, and we look forward to connecting OpenAI customers with highly skilled talent through OpenAI Experts on Upwork. Through strategic partnerships like this one, we aim to make Upwork the preeminent destination for AI-related talent and work."
"Our aim is for our models to be useful and beneficial for everyone, and we are committed to helping people understand how our technology can impact critical work," said Aliisa Rosenthal, head of sales at OpenAI. "Providing customers with access to a trusted source of highly skilled global talent like Upwork can help ensure AI models are deployed and managed responsibly."
"Organizations ranging from small startups to some of the world's largest enterprises are turning to independent experts to create new solutions and expand their businesses," said Boris Spiegl, an independent AI and machine learning expert participating in OpenAI Experts on Upwork. "Having delivered millions of dollars in value on projects over the course of my career, I'm greatly looking forward to the next big challenges in partnering with OpenAI customers to deliver even more ROI through application of these exciting new technologies."
The partnership builds on Upwork's recent announcement of new beta features powered by OpenAI technologies as part of a more generative AI-infused, end-to-end customer experience on its platform, including an AI-powered job post generator, an enhanced Upwork chat experience and proposal tips for talent.
Learn more and hire an OpenAI Expert on Upwork today at upwork.com/experts/openai. To learn more about partnering with Upwork, please contact partnerships@upwork.com.
About Upwork
Upwork is the world's work marketplace that connects businesses with independent talent from across the globe. We serve everyone from one-person startups to large, Fortune 100 enterprises with a powerful, trust-driven platform that enables companies and talent to work together in new ways that unlock their potential. Our talent community earned over $3.8 billion on Upwork in 2022 across more than 10,000 skills in categories including website & app development, creative & design, customer support, finance & accounting, consulting, and operations. Learn more at upwork.com and join us on LinkedIn, Twitter, Facebook, Instagram, and TikTok.
About OpenAI
OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.
Contact:
Aaron Motsinger
press@upwork.com
Read the original:
Upwork and OpenAI Partner to Connect Businesses with OpenAI ... - GlobeNewswire
Artificial Intelligence’s Impact on Religion and Faith – Fagen wasanni
Technological advancements, particularly in the field of Artificial Intelligence (AI), have had a profound impact on various aspects of our lives. However, as AI continues to advance, it raises questions about its impact on religion and human spirituality.
In a recent development, an AI chatbot led a church service in Germany, delivering a sermon on faith and death to over 300 attendees. This innovative use of AI in a religious setting has sparked discussions about the future of religion and the role of AI in shaping human spirituality.
AI operates on complex algorithms that process data and make decisions in ways loosely analogous to the human brain. Machine learning, a subset of AI, enables machines to learn patterns from data without explicit instructions. As AI becomes increasingly prevalent across sectors such as banking, education, and government, concerns arise about its potential to surpass human intelligence.
While narrow AI excels at specific tasks, artificial general intelligence (AGI) holds the potential for human-like intelligence capable of performing any intellectual task. Although AGI is still in development, its widespread adoption could have significant implications for religion.
While some religious groups embrace the integration of AI into religious practices, others remain skeptical. The rapid competition among tech companies to develop advanced AI raises concerns about the ethical dilemmas associated with this technology. The complex nature of AI makes it difficult to control, which can be unsettling for religious communities.
Despite reservations, it is essential for the Church to actively participate in the discourse surrounding AI. As technology continues to advance at a rapid pace, it becomes crucial to adapt and incorporate AI in ways that align with essential religious teachings. The Church should explore how AI can enhance, rather than replace, the human experience of faith, fostering spiritual growth and justice.
In an era of exponential knowledge growth, technology has challenged humanity's intellectual superiority. AI's ability to think far quicker than humans raises questions about our pursuit of knowledge and the potential consequences. However, rather than being a cause for concern, this development can be seen as an opportunity for the Church to engage with AI, guiding its development towards fairness, openness, and accountability.
While AI may not fundamentally alter the principles of faith, it can influence how individuals perceive and engage with their beliefs. Christians have the opportunity to shape the direction of AI by actively participating in its development, advocating for compassion, justice, and spiritual growth.
In conclusion, the integration of AI into religious practices prompts important discussions surrounding technology's impact on faith. By embracing AI and actively contributing to its development, the Church can navigate the challenges and opportunities presented by this rapidly evolving technological landscape.
The rest is here:
Artificial Intelligence's Impact on Religion and Faith - Fagen wasanni