
Governor Newsom Signs Executive Order to Prepare California for … – Office of Governor Gavin Newsom

WHAT YOU NEED TO KNOW: California is the global hub for generative artificial intelligence (GenAI); we are the natural leader in this emerging field of technology tools that could very well change the world. To capture its benefits for the good of society, but also to protect against its potential harms, Governor Newsom issued an executive order today laying out how California's measured approach will focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world's AI leader.

SACRAMENTO – With GenAI's wide-ranging potential for Californians and the state's economy, Governor Gavin Newsom today signed an executive order to study the development, use, and risks of artificial intelligence (AI) technology throughout the state and to develop a deliberate and responsible process for evaluation and deployment of AI within state government.

WHAT GOVERNOR NEWSOM SAID: "This is a potentially transformative technology comparable to the advent of the internet, and we're only scratching the surface of understanding what GenAI is capable of. We recognize both the potential benefits and risks these tools enable. We're neither frozen by the fears nor hypnotized by the upside. We're taking a clear-eyed, humble approach to this world-changing technology. Asking questions. Seeking answers from experts. Focused on shaping the future of ethical, transparent, and trustworthy AI. Doing what California always does: leading the world in technological progress."

AI IN CALIFORNIA: For decades, California has been a global leader in education, innovation, research, development, talent, entrepreneurship, and new technologies. As these technologies continue to grow and develop, California has established itself as the world leader in GenAI innovation, with 35 of the world's top 50 AI companies and a quarter of all AI patents, conference papers, and companies globally.

California is also home to world-leading GenAI research institutions, including the University of California, Berkeley's College of Computing, Data Science, and Society and Stanford University's Institute for Human-Centered Artificial Intelligence, providing a unique opportunity for academic research and government collaboration.

WHAT'S IN THE EXECUTIVE ORDER

To deploy GenAI ethically and responsibly throughout state government, protect and prepare for potential harms, and remain the world's AI leader, the Governor's executive order includes a number of provisions:

Risk-Analysis Report: Direct state agencies and departments to perform a joint risk analysis of potential threats to, and vulnerabilities of, California's critical energy infrastructure from the use of GenAI.

Procurement Blueprint: To support a safe, ethical, and responsible innovation ecosystem inside state government, agencies will issue general guidelines for public-sector procurement, uses, and required training for the application of GenAI, building on the White House's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology's AI Risk Management Framework. State agencies and departments will consider procurement and enterprise use opportunities where GenAI can improve the efficiency, effectiveness, accessibility, and equity of government operations.

Beneficial Uses of GenAI Report: Direct state agencies and departments to develop a report examining the most significant and beneficial uses of GenAI in the state. The report will also explain the potential harms and risks for communities, government, and state government workers.

Deployment and Analysis Framework: Develop guidelines for agencies and departments to analyze the impact that adopting GenAI tools may have on vulnerable communities. The state will establish the infrastructure needed to conduct pilots of GenAI projects, including California Department of Technology-approved environments, or "sandboxes," to test such projects.

State Employee Training: To support California's state government workforce and prepare for the next generation of skills needed to thrive in the GenAI economy, agencies will provide training for state government workers to use state-approved GenAI to achieve equitable outcomes and will establish criteria to evaluate the impact of GenAI on the state government workforce.

GenAI Partnership and Symposium: Establish a formal partnership with the University of California, Berkeley and Stanford University to consider and evaluate the impacts of GenAI on California and what efforts the state should undertake to advance its leadership in this industry. The state and the institutions will develop and host a joint summit in 2024 to engage in meaningful discussions about the impacts of GenAI on California and its workforce.

Legislative Engagement: Engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for responsible use of AI, including any guidelines, criteria, reports, and/or training.

Evaluate Impacts of AI on an Ongoing Basis: Periodically evaluate the potential impact of GenAI on regulatory issues under the respective agency's, department's, or board's authority and recommend necessary updates as a result of this evolving technology.

Read the Executive Order Here

The Administration will work throughout the next year, in collaboration with our state's workforce, to implement the provisions of the executive order and engage the Legislature and stakeholders to develop policy recommendations.

###

Excerpt from:
Governor Newsom Signs Executive Order to Prepare California for ... - Office of Governor Gavin Newsom

Read More..

Artificial Intelligence and Education: A Reading List – JSTOR Daily

How should education change to address, incorporate, or challenge today's AI systems, especially powerful large language models? What role should educators and scholars play in shaping the future of generative AI? The release of ChatGPT in November 2022 triggered an explosion of news, opinion pieces, and social media posts addressing these questions. Yet many are not aware of the current and historical body of academic work that offers clarity, substance, and nuance to enrich the discourse.

Linking the terms AI and education invites a constellation of discussions. This selection of articles is hardly comprehensive, but it includes explanations of AI concepts and provides historical context for today's systems. It describes a range of possible educational applications as well as adverse impacts, such as learning loss and increased inequity. Some articles touch on philosophical questions about AI in relation to learning, thinking, and human communication. Others will help educators prepare students for civic participation around concerns including information integrity, impacts on jobs, and energy consumption. Yet others outline educator and student rights in relation to AI and exhort educators to share their expertise in societal and industry discussions on the future of AI.

Whether we're aware of it or not, AI was already widespread in education before ChatGPT. Nabeel Gillani et al. describe AI applications such as learning analytics and adaptive learning systems, automated communications with students, early warning systems, and automated writing assessment. They seek to help educators develop literacy around the capacities and risks of these systems by providing an accessible introduction to machine learning and deep learning as well as rule-based AI. They present a cautious view, calling for scrutiny of bias in such systems and inequitable distribution of risks and benefits. They hope that engineers will collaborate deeply with educators on the development of such systems.

Jürgen Rudolph et al. give a practically oriented overview of ChatGPT's implications for higher education. They explain the statistical nature of large language models as they tell the history of OpenAI and its attempts to mitigate bias and risk in the development of ChatGPT. They illustrate ways ChatGPT can be used with examples and screenshots. Their literature review shows the state of artificial intelligence in education (AIEd) as of January 2023. An extensive list of challenges and opportunities culminates in a set of recommendations that emphasizes explicit policy as well as expanding digital literacy education to include AI.

Student and faculty understanding of the risks and impacts of large language models is central to AI literacy and civic participation around AI policy. This hugely influential paper details documented and likely adverse impacts of the current data-and-resource-intensive, non-transparent mode of development of these models. Bender et al. emphasize the ways in which these costs will likely be borne disproportionately by marginalized groups. They call for transparency around the energy use and cost of these models as well as transparency around the data used to train them. They warn that models perpetuate and even amplify human biases and that the seeming coherence of these systems' outputs can be used for malicious purposes even though it doesn't reflect real understanding.

The authors argue that inclusive participation in development can encourage alternate development paths that are less resource intensive. They further argue that beneficial applications for marginalized groups, such as improved automatic speech recognition systems, must be accompanied by plans to mitigate harm.

Erik Brynjolfsson argues that when we think of artificial intelligence as aiming to substitute for human intelligence, we miss the opportunity to focus on how it can complement and extend human capabilities. Brynjolfsson calls for policy that shifts AI development incentives away from automation toward augmentation. Automation is more likely to result in the elimination of lower-level jobs and in growing inequality. He points educators toward augmentation as a framework for thinking about AI applications that assist learning and teaching. How can we create incentives for AI to support and extend what teachers do rather than substituting for teachers? And how can we encourage students to use AI to extend their thinking and learning rather than using AI to skip learning?

Brynjolfsson's focus on AI as augmentation converges with Microsoft computer scientist Kevin Scott's focus on cognitive assistance. Steering discussion of AI away from visions of autonomous systems with their own goals, Scott argues that near-term AI will serve to help humans with cognitive work. Scott situates this assistance in relation to evolving historical definitions of work and the way in which tools for work embody generalized knowledge about specific domains. He's intrigued by the way deep neural networks can represent domain knowledge in new ways, as seen in the unexpected coding capabilities offered by OpenAI's GPT-3 language model, which have enabled people with less technical knowledge to code. His article can help educators frame discussions of how students should build knowledge and what knowledge is still relevant in contexts where AI assistance is nearly ubiquitous.

How can educators prepare students for future work environments integrated with AI and advise students on how majors and career paths may be affected by AI automation? And how can educators prepare students to participate in discussions of government policy around AI and work? Laura Tyson and John Zysman emphasize the importance of policy in determining how economic gains due to AI are distributed and how well workers weather disruptions due to AI. They observe that recent trends in automation and gig work have exacerbated inequality and reduced the supply of good jobs for low- and middle-income workers. They predict that AI will intensify these effects, but they point to the way collective bargaining, social insurance, and protections for gig workers have mitigated such impacts in countries like Germany. They argue that such interventions can serve as models to help frame discussions of intelligent labor policies for an inclusive AI era.

Educators' considerations of academic integrity and AI-generated text can draw on parallel discussions of authenticity and labeling of AI content in other societal contexts. Artificial intelligence has made deepfake audio, video, and images as well as generated text much more difficult to detect as such. Here, Todd Helmus considers the consequences to political systems and individuals as he offers a review of the ways in which these can and have been used to promote disinformation. He considers ways to identify deepfakes and ways to authenticate provenance of videos and images. Helmus advocates for regulatory action, tools for journalistic scrutiny, and widespread efforts to promote media literacy. As well as informing discussions of authenticity in educational contexts, this report might help us shape curricula to teach students about the risks of deepfakes and unlabeled AI.

Students, by definition, are engaged in developing their cognitive capacities; their understanding of their own intelligence is in flux and may be influenced by their interactions with AI systems and by AI hype. In his review of The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do by Erik J. Larson, William Hasselberger warns that in overestimating AI's ability to mimic human intelligence we devalue the human and overlook human capacities that are integral to everyday life: decision making, understanding, and reasoning. Hasselberger provides examples of both academic and everyday common-sense reasoning that continue to be out of reach for AI. He provides a historical overview of debates around the limits of artificial intelligence and its implications for our understanding of human intelligence, citing the likes of Alan Turing and Marvin Minsky as well as contemporary discussions of data-driven language models.

Gwo-Jen Hwang and Nian-Shing Chen are enthusiastic about the potential benefits of incorporating generative AI into education. They outline a variety of roles a large language model like ChatGPT might play, from student to tutor to peer to domain expert to administrator. For example, educators might assign students to teach ChatGPT on a subject. Hwang and Chen provide sample ChatGPT session transcripts to illustrate their suggestions. They share prompting techniques to help educators better design AI-based teaching strategies. At the same time, they are concerned about student overreliance on generative AI. They urge educators to guide students to use it critically and to reflect on their interactions with AI. Hwang and Chen don't touch on concerns about bias, inaccuracy, or fabrication, but they call for further research into the impact of integrating generative AI on learning outcomes.

Lauren Goodlad and Samuel Baker situate both academic integrity concerns and the pressures on educators to embrace AI in the context of market forces. They ground their discussion of AI risks in a deep technical understanding of the limits of predictive models at mimicking human intelligence. Goodlad and Baker urge educators to communicate the purpose and value of teaching with writing to help students engage with the plurality of the world and communicate with others. Beyond the classroom, they argue, educators should question tech industry narratives and participate in public discussion on regulation and the future of AI. They see higher education as resilient: academic skepticism about former waves of hype around MOOCs, for example, suggests that educators will not likely be dazzled or terrified into submission to AI. Goodlad and Baker hope we will instead take up our place as experts who should help shape the future of the role of machines in human thought and communication.

How can the field of education put the needs of students and scholars first as we shape our response to AI, the way we teach about it, and the way we might incorporate it into pedagogy? Kathryn Conrad's manifesto builds on and extends the Biden administration's Office of Science and Technology Policy 2022 Blueprint for an AI Bill of Rights. Conrad argues that educators should have input into institutional policies on AI and access to professional development around AI. Instructors should be able to decide whether and how to incorporate AI into pedagogy, basing their decisions on expert recommendations and peer-reviewed research. Conrad outlines student rights around AI systems, including the right to know when AI is being used to evaluate them and the right to request alternate human evaluation. They deserve detailed instructor guidance on policies around AI use without fear of reprisals. Conrad maintains that students should be able to appeal any charges of academic misconduct involving AI, and they should be offered alternatives to any AI-based assignments that might put their creative work at risk of exposure or use without compensation. Both students' and educators' legal rights must be respected in any educational application of automated generative systems.


Read more from the original source:
Artificial Intelligence and Education: A Reading List - JSTOR Daily

Read More..

TargetRecruit Unveils Copilot: Revolutionising Artificial Intelligence for the Recruitment Industry – Yahoo Finance

SYDNEY, Sept. 11, 2023 /PRNewswire/ -- TargetRecruit is thrilled to announce Copilot, its first Generative AI feature and an incredible leap forward in establishing the foundation for diverse native AI functionality within the TargetRecruit platform.


Copilot is a feature that elevates user interaction with GPT-based models through seamless text generation capabilities, based on prompt input and context. Copilot leverages automated prompts to craft comprehensive, tailored job descriptions that perfectly match recruitment needs, save time, and streamline recruiter workflows with just a few clicks. Copilot's user-friendly configuration empowers customisation, with the initial integration including OpenAI's ChatGPT.

Underpinning Copilot is an advanced AI Integration Framework designed to seamlessly integrate with any REST API-based Generative AI API, allowing the flexibility to connect with a wide array of AI models in the future. Enabling plug-and-play capabilities with preferred AI services will pave the way for a series of upcoming AI capabilities that will accelerate sales and recruiting productivity and efficiency.
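
The release doesn't publish the framework's actual schema, but the shape of such a REST-based integration is familiar. Below is a minimal sketch, in Python, of how a job-description generator might call a generic REST generative AI endpoint; the URL, payload fields, and response format are hypothetical assumptions, not TargetRecruit's API:

```python
# Illustrative only: calling a hypothetical REST-based generative AI endpoint
# to draft a job description. Endpoint, payload shape, and response fields
# are assumptions for the sake of the sketch.
import os
import requests

API_URL = "https://api.example-genai.com/v1/completions"  # hypothetical endpoint

def draft_job_description(title: str, skills: list[str]) -> str:
    """Send a prompt with job context and return the generated description."""
    prompt = (
        f"Write a concise job description for a '{title}' role. "
        f"Required skills: {', '.join(skills)}."
    )
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['GENAI_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 400},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # hypothetical response field

print(draft_job_description("Senior Data Engineer", ["Python", "SQL", "Airflow"]))
```

Because only the HTTP layer is assumed, swapping in a different provider means changing the URL, auth header, and payload mapping, which is the plug-and-play flexibility the framework describes.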

Copilot represents a significant milestone in TargetRecruit's commitment to excellence where the power of Artificial Intelligence is propelling recruitment software into an era of unparalleled efficiency and innovation. As we move forward, we are excited to continue leading the way in recruitment software and artificial intelligence.

About TargetRecruit

TargetRecruit provides a powerful CRM/ATS, sales, and middle office solution built on Salesforce, the world's #1 platform. Headquartered in Houston, with offices in London, Sydney, and Bangalore, TargetRecruit employs over 100 people globally. To learn more, visit https://au.targetrecruit.com/.

Media contact: marketing@targetrecruit.com, +61 (0) 2 8365 3160


View original content:https://www.prnewswire.com/apac/news-releases/targetrecruit-unveils-copilot-revolutionising-artificial-intelligence-for-the-recruitment-industry-301922299.html

SOURCE TargetRecruit

Follow this link:
TargetRecruit Unveils Copilot: Revolutionising Artificial Intelligence for the Recruitment Industry - Yahoo Finance

Read More..

What’s Next in AI? Predicting the Trends for the Upcoming Decade – DataDrivenInvestor


Hey Futurists and AI Aficionados! Buckle up, because we're about to jump into a time machine and peek into the next decade of AI and Data Science. If you think what we've seen so far is mind-blowing, you ain't seen nothin' yet! So, let's put on our prediction hats and delve into the trends that will shape the AI landscape in the upcoming years.

First off, a reality check. AI has been the buzzword for years, and while we've made some amazing strides (Hello, self-driving cars and personalized medicine!), we're not living in a sci-fi movie yet. No, robots aren't taking over the world, but they are about to make our lives much more interesting!

Currently, most AI algorithms need robust servers and data centers to function. But what if that could change? We're looking at a future where AI models will run efficiently on your devices: yes, your smartphone could soon be your AI assistant in a much more sophisticated way than Siri or Alexa could ever be!

Here's a big one: using AI to combat climate change. Algorithms are already getting better at predicting weather patterns, analyzing soil health, and tracking endangered species. But in the next decade, AI will play a crucial role in resource optimization and perhaps even in engineering solutions to reverse environmental damage. The planet's heroes may be lines of code!

Currently, most AI falls under narrow or specialized intelligence: good at one thing but pretty useless otherwise. However, we're inching closer to Artificial General Intelligence (AGI), where machines can understand, learn, and apply knowledge across different domains. Imagine an AI that can compose music, diagnose diseases, and manage city traffic while teaching itself quantum physics!

Read the original:

What's Next in AI? Predicting the Trends for the Upcoming Decade - DataDrivenInvestor

Read More..

U. community discusses integration of AI into academic, points to … – The Brown Daily Herald

Provost Francis J. Doyle III identified the intersection of artificial intelligence and higher education as a University priority in an Aug. 31 letter to the community titled "Potential impact of AI on our academic mission." Doyle's address comes at a time of uncertainty as educational institutions struggle to make sense of the roles and regulations of artificial intelligence tools in academia.

Doyle's letter begins by zooming in on generative AI tools such as ChatGPT, which soared in popularity after its debut in late November of last year. The program, an open-access online chatbot, raked in over 100 million monthly users within the first two months of its launch, according to data from Pew Research Center.

"There is no shortage of public analysis regarding the ways in which the use of generative artificial intelligence tools (open-access tools that can generate realistic text, computer code and other content in response to prompts from the user) provide both challenges and opportunities in higher education," Doyle wrote in the letter.

"Exploring the use of AI in ways that align with Brown's values has been a topic of discussion among our senior academic leaders for several months," he continued.

Doyle did not prescribe University-wide AI policies in the letter but encouraged instructors to offer clear, unambiguous guidelines about AI usage in their courses. He also provided a variety of resources for students seeking guidelines on citing AI-generated content, as well as how to use AI as a research tool.

"As we identify the ways in which AI can enhance academic activities, we must also ensure these tools are understood and used appropriately and ethically," Doyle wrote.

The contention presented by Doyle is one mirrored by educators and administrators nationwide: How can academic institutions strike a balance between using AI as a learning tool and regulating it enough to avoid misuse?

"The upsides to AI tools such as ChatGPT that are often touted include improved student success, the ability to tailor lessons to individual needs, immediate feedback for students and better student engagement," Doyle wrote in a message to The Herald. "But it is important for students to understand the inherent risks associated with any open-access technology, in terms of privacy, intellectual property ownership and more."

Doyle told The Herald that he anticipates prolonged discussions with academic leadership, faculty and students as the University continues to monitor the evolution of AI tools and discovers innovative applications to improve learning outcomes and inform research directions.

Michael Vorenberg, associate professor of history, is finding creative ways to bring AI into the classroom. On the first day of his weekly seminar, HIST 1972A: American Legal History, 1760-1920, Vorenberg spoke candidly with his students about general attitudes regarding AI in education and the opportunities for exploration these developments afford.

"Most of what educators are hearing about are the negative sides of generative AI programs," Vorenberg wrote in a message to The Herald. "I am also interested in how generative AI might be used as a teaching tool."

Vorenberg outlined two broad potential uses for AI in his class: the examination of sources generated by ChatGPT, allowing students to probe the appropriateness of the retrieved documents from a historian's perspective, and the intentional criticism of said generated sources, understanding how a historian's perspective could have produced a stronger source.

"The underlying assumption behind the exercise is that even a moderately skilled historian can do better at this sort of task than a generative AI program," Vorenberg explained. "Until (this) situation changes, we who teach history have an opportunity to use generative AI to give concrete examples of the ways that well-trained human historians can do history better than AI historians."

Given the University's large pool of students interested in pursuing computer science (The Herald's recent first-year poll shows computer science as the top indicated concentration for the class of 2027), Brown has the potential to shape the future of AI.

Doyle told The Herald that the University is well-situated to "contribute our creativity (and) our entrepreneurial spirit" to making an impact as researchers continue to strengthen these tools.

Jerry Lu '25, who is concentrating in both computer science and economics, obsessively followed the growing momentum behind OpenAI, ChatGPT and developments in automation.

Lu believes there are two ways the University can best support its students in navigating artificial intelligence: one from an educational perspective, and another from a more career-oriented view.

In terms of education, Lu said he hopes that the University will approach AI not just through computer science classes but from a sociology approach or humanities lens as well, to equip all students with the necessary skills to address how AI will undoubtedly affect society.


Lu also pointed to the restructured Center for Career Exploration as a potential resource for preparing students to enter a workforce heavily influenced by AI.

"The new Career LAB should be cognizant of how these new technologies are going to impact careers," Lu said. "Offering guidance on how students should think about AI and how they can navigate (it) or use (it) to their advantage, I think that that would be really key."

When asked how universities should engage with AI, ChatGPT focused on the pursuit of a common good.

"Universities have a critical role to play in the responsible development and application of artificial intelligence," it replied. "They should focus on research, education, ethics, collaboration and societal impact to ensure that AI technologies benefit humanity as a whole while minimizing potential harms."

Sofia Barnett is a University News editor overseeing the faculty and higher education beat. She is a sophomore from Texas studying history, politics and nonfiction writing.

Read more:

U. community discusses integration of AI into academic, points to ... - The Brown Daily Herald

Read More..

AI expert is a hot new position in the freelance jobs market – CNBC


Vlad Hu began his career as a software engineer and eventually founded his own software agency, but over the past year, the big work opportunity has been freelance artificial intelligence expert gigs. Hu isn't alone. The rise of generative AI is rapidly reshaping the freelance tech job market, with AI-related job posts from employers and searches among job seekers surging across career and freelance job platforms, including LinkedIn, Upwork and Fiverr.

Three years ago, becoming an AI expert would involve "deep knowledge in machine learning algorithms, deep learning AI in general, and a lot of technical things," said Hu, who works through Fiverr on chatbot implementation projects.

According to data from Indeed, generative AI-related job posts have increased on its platform nearly 250% from July 2021 to July 2023.

According to LinkedIn (owned by Microsoft, OpenAI's primary investment backer), member searches on gen AI terms have continued to grow since the large language model first broke through with the public in November 2022. Since early April, the number of U.S. LinkedIn member posts mentioning gen AI keywords has increased 25% month over month. By June, AI keywords like "ChatGPT," "prompt engineering," and "prompt crafting" were being added to profiles 15 times more frequently than at the beginning of the year.

"Many companies are exploring ways to integrate AI into their business platforms and working with skilled freelance developers," said a Fiverr spokeswoman.

Hu said businesses interested in introducing a ChatGPT or similar AI bot to an app often contact him to understand the technology. Fiverr also has seen an explosion of interest in AI-related video creation over the past six months, according to the spokeswoman, as well as hiring firms searching for AI app development experts.

Demand for AI freelance experts should continue to grow, according to LinkedIn, with a June survey it conducted among executives finding that 44% in the U.S. intend to expand their use of AI technologies in the next year; 47% say they expect it will increase productivity.

"AI is already driving changes in the workforce," wrote Dr. Karin Kimbrough, chief economist at LinkedIn, in a recent report which found just under half of executives say AI will increase productivity. "In the past year, we've seen professionals globally adopting AI skills at a rapid rate; this is happening in parallel with employers increasingly looking for talent with knowledge of how to effectively use new AI technologies to enhance productivity in their organizations."

There is opportunity for freelancers with AI expertise to take advantage of the lack of AI skills among existing industry professionals across sectors of the economy. In the U.S. job market, for example, what LinkedIn classifies as the technology, information, and media sector has the most members proficient in AI, at just 2.2%. Other industries are experiencing rapid adoption of AI core competencies, including retail and financial services, but off a very low base percentage of current employees who are proficient.

Freelance job platform Upwork, which recently signed a deal with OpenAI to connect businesses with experts familiar with its large language models, says the total number of AI skills being marketed by experts is upwards of 250. According to Margaret Lilani, Upwork's vice president of talent solutions, although there are multiple pathways to AI consulting, a strong foundation in computer science, knowledge of machine learning algorithms, proficiency in programming languages like Python, or experience in data management and analysis are often needed across job tasks.

Many AI experts also have related college degrees or experience, such as a bachelor's or master's degree in fields including computer science or engineering. Even so, "ultimately landing work within the AI space comes down to showcasing that you have the skills, ability and expertise to take on a particular project," Lilani said.

At online learning company Udacity, there has been a 33% increase over the past year in interest in AI-based courses, with deep learning, AI programming with Python, AI for trading, machine learning DevOps engineer, computer vision, and natural language processing among the in-demand courses. "To meet this demand, roughly 20% of our current content development roadmap includes Generative AI and Generative AI-related content," said Victoria Papalian, general manager of Udacity's consumer division.

For those not yet in the job market and interested in the AI field, Lilani suggests getting an early start by taking classes in computer science. She says a foundation will be built in the programming languages needed for AI expertise, especially for high school students looking to become familiar with the building blocks of many AI fields. She added that independent methods of education, including YouTube videos or blogs focused on AI skills, are becoming more sought after in the workforce. Learning new concepts and tools like ChatGPT will become important as all types of professionals across industries advance in their careers.

Hu said to start with the basics, including use of OpenAI tools like ChatGPT, the ChatGPT API, Dall-E, and davinci. But he added that proficiency in these areas of AI is just the start. Spending time determining how to use these tools in business is critical. AI's value is limited by a user's application of the technology, so knowledge needs to be supplemented with intention for its use. "It's how you bridge the gap with the real world problem that really matters," Hu said.
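
For readers starting with those basics, here is a minimal sketch of a single ChatGPT API call using the openai Python package as it existed in 2023 (the pre-1.0 ChatCompletion interface); the prompt and model choice are illustrative:

```python
# A minimal sketch of one ChatGPT API call via the openai package (v0.x, 2023).
# Assumes an OPENAI_API_KEY environment variable; the prompt is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful business assistant."},
        {"role": "user", "content": "Summarize three ways a retailer could use a chatbot."},
    ],
)

# The reply text lives in the first choice's message.
print(response["choices"][0]["message"]["content"])
```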

Follow this link:

AI expert is a hot new position in the freelance jobs market - CNBC

Read More..

The Race to Lead the AI Revolution: Tech Giants, Cloud Titans and … – Medium

Artificial intelligence promises to transform industries and generate immense economic value over the coming decades. Tech giants, cloud computing leaders and semiconductor firms are fiercely competing to provide the foundational AI infrastructure and services fueling this revolution. In this high-stakes battle to dominate the AI sphere, these companies are rapidly advancing hardware, software, cloud platforms, developer tools and applications. For investors, understanding the dynamic competitive landscape is key to identifying leaders well-positioned to capitalize on surging AI demand.

The world's largest technology companies view leadership in artificial intelligence as vital to their futures. AI permeates offerings from Amazon, Microsoft, Google, Facebook and Apple as they fight for market share. The cloud has become the primary arena for delivering AI capabilities to enterprise customers. Amazon Web Services, Microsoft Azure and Google Cloud Platform offer integrated machine learning, data analytics and AI services through their cloud platforms.

The tech titans are also racing to advance AI assistant technologies like Alexa, Siri and Cortana for consumer and business use. IoT ecosystems that accumulate data to train AI depend on cloud infrastructure. Tech firms battle to attract top AI engineering talent and acquire promising startups. Government scrutiny of their AI competitive tactics is growing. But the tech giants continue aggressively investing in R&D and new partnerships to expand their AI footprints.

The major cloud providers have emerged as gatekeepers for enterprise AI adoption. AWS, Microsoft Azure, Google Cloud and IBM Cloud aggressively market integrated machine learning toolkits, neural network APIs, automated ML and other services that remove AI complexities. This strategy drives more customers to their clouds to access convenient AI building blocks.

Cloud platforms also offer vast on-demand computing power and storage for AI workloads. Firms like AWS and Google Cloud tout specialized AI accelerators on their servers. The cloud battleground has expanded to wearable, mobile and edge devices with AI capabilities. Cloud leaders aim to keep customers within their ecosystems as AI proliferates.

Graphics processing units (GPUs) from Nvidia, AMD and Intel currently dominate AI computing. But rising challengers like Cerebras, Graphcore and Tenstorrent are designing specialized processing chips just for deep learning. Known as AI accelerators, these chips promise faster training and inference than repurposed GPUs. Startups have attracted huge investments to develop new accelerator architectures targeted at AI workloads.

Big tech companies are also muscling into the AI chip space. Google's Tensor Processing Units power many internal workloads. Amazon has designed AI inference chips for Alexa and AWS. Microsoft relies on FPGA chips from Xilinx but is also developing dedicated AI silicon. As AI proliferates, intense competition in AI-optimized semiconductors will shape the future landscape.

Much AI innovation comes from open source projects like TensorFlow, PyTorch, MXNet and Keras. Tech giants liberally adopt each other's frameworks into their own stacks. This open ecosystem drives rapid advances through collaboration between intense competitors. But tech firms then differentiate by offering proprietary development environments, optimized runtimes and additional services around the open source cores.

Leading corporate sponsors behind frameworks like Facebook's PyTorch and AWS's Gluon intend to benefit by steering standards and features. However, generous licensing enables wide adoption and growth. The symbiotic relationship between open source and proprietary AI has greatly accelerated overall progress.

Beyond core technology purveyors, many other players want a slice of the AI market. Consulting firms sell AI strategy and implementation services. Cloud data warehouse vendors feature AI-driven analytics. Low-code platforms incorporate AI-powered automation. Cybersecurity companies inject AI into threat detection. AI success will ultimately require an entire ecosystem integrating hardware, software, infrastructure, tools and expertise into multi-layered technology stacks.

Current AI capabilities remain narrow and require extensive human guidance. But rapid advances in foundational machine learning approaches, computing power and neural network design point to a future Artificial General Intelligence that mimics human-level capacities. Tech giants are investing today in moonshot projects like robotics, quantum computing and neuro-symbolic AI to fuel the next paradigm shifts.

Government regulation will also shape AI's evolution, balancing innovation with ethics. Despite uncertainties, AI will undoubtedly transform business and society over the next decade through visionary efforts underway today across the technology landscape.

For investors, AI represents an enormously valuable mega-trend with a long runway for growth. While hype exceeds reality today, practical AI adoption is accelerating. The tech giants have tremendous balance sheet resources to sustain investment. But they also face anti-trust scrutiny that could advantage smaller players.

Seeking exposure across the AI ecosystem is ideal to benefit from both large established players and potential rising challengers. AI promises outsized returns for those investors savvy enough to identify leaders powering this transformative technology through its period of exponential growth.


Follow this link:

The Race to Lead the AI Revolution: Tech Giants, Cloud Titans and ... - Medium

Read More..

As regulators talk tough, tackling AI bias has never been more urgent – VentureBeat


The rise of powerful generative AI tools like ChatGPT has been described as this generation's "iPhone moment." In March, the OpenAI website, which lets visitors try ChatGPT, reportedly reached 847 million unique monthly visitors. Amid this explosion of popularity, the level of scrutiny placed on gen AI has skyrocketed, with several countries acting swiftly to protect consumers.

In April, Italy became the first Western country to block ChatGPT on privacy grounds, only to reverse the ban four weeks later. Other G7 countries are considering a coordinated approach to regulation.

The UK will host the first global AI regulation summit in the fall, with Prime Minister Rishi Sunak hoping the country can drive the establishment of guardrails on AI. Its stated aim is to ensure AI is developed and adopted safely and responsibly.

Regulation is no doubt well-intentioned. Clearly, many countries are aware of the risks posed by gen AI. Yet all this talk of safety is arguably masking a deeper issue: AI bias.


Although the term "AI bias" can sound nebulous, it's easy to define. Also known as algorithmic bias, AI bias occurs when human biases creep into the datasets on which AI models are trained. This data, and the subsequent AI models, then reflect any sampling bias, confirmation bias and human biases (against gender, age, nationality or race, for example) and cloud the independence and accuracy of any output from the AI technology.

As gen AI becomes more sophisticated, impacting society in ways it hadn't before, dealing with AI bias is more urgent than ever. This technology is increasingly used to inform tasks like face recognition, credit scoring and crime risk assessment. Clearly, accuracy is paramount with such sensitive outcomes at play.

Examples of AI bias have already been observed in numerous cases. When OpenAI's Dall-E 2, a deep learning model used to create artwork, was asked to create an image of a Fortune 500 tech founder, the pictures it supplied were mostly white and male. When asked if well-known blues singer Bessie Smith influenced gospel singer Mahalia Jackson, ChatGPT could not answer the question without further prompts, raising doubts about its knowledge of people of color in popular culture.

A study conducted in 2021 around mortgage loans discovered that AI models designed to determine approval or rejection did not offer reliable suggestions for loans to minority applicants. These instances prove that AI bias can misrepresent race and gender, with potentially serious consequences for users.

AI that produces offensive results can be attributed to the way the AI learns and the dataset it is built upon. If the data over-represents or under-represents a particular population, the AI will repeat that bias, generating even more biased data.
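
As a concrete illustration of that point, here is a minimal sketch (with toy data and hypothetical column names) of one curation step: checking group representation and outcome rates in a training set before a model is fit:

```python
# A minimal sketch of auditing a training set for representation imbalance.
# The column names and toy data are illustrative, not from a real dataset.
import pandas as pd

train = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "A", "B", "A", "A", "B", "A", "A"],
    "approved":        [1,   1,   0,   1,   0,   1,   1,   0,   1,   1],
})

# Share of each group in the data, and the outcome rate per group.
representation = train["applicant_group"].value_counts(normalize=True)
approval_rate = train.groupby("applicant_group")["approved"].mean()

print(representation)  # group B is only 20% of the sample
print(approval_rate)   # group B's approval rate is 0%: a flag to investigate
```

Checks like this don't prove or remove bias on their own, but they surface the over- and under-representation described above before a model can learn and amplify it.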

For this reason, it's important that any regulation enforced by governments doesn't view AI as inherently dangerous. Rather, any danger it possesses is largely a function of the data it's trained on. If businesses want to capitalize on AI's potential, they must ensure the data it is trained on is reliable and inclusive.

To do this, greater access to an organizations data to all stakeholders, both internal and external, should be a priority. Modern databases play a huge role here as they have the ability to manage vast amounts of user data, both structured and semi-structured, and have capabilities to quickly discover, react, redact and remodel the data once any bias is discovered. This greater visibility and manageability over large datasets means biased data is at less risk of creeping in undetected.

Furthermore, organizations must train data scientists to better curate data while implementing best practices for collecting and scrubbing data. Taking this a step further, the data training algorithms must be made open and available to as many data scientists as possible to ensure that more diverse groups of people are sampling it and can point out inherent biases. In the same way modern software is often open source, so too should appropriate data be.

Organizations have to be constantly vigilant and appreciate that this is not a one-time action to complete before going into production with a product or a service. The ongoing challenge of AI bias calls for enterprises to look at incorporating techniques that are used in other industries to ensure general best practices.

Blind tasting tests borrowed from the food and drink industry, red team/blue team tactics from the cybersecurity world or the traceability concept used in nuclear power could all provide valuable frameworks for organizations in tackling AI bias. This work will help enterprises to understand the AI models, evaluate the range of possible future outcomes and gain sufficient trust with these complex and evolving systems.

In previous decades, talk of regulating AI was arguably putting the cart before the horse. How can you regulate something whose impact on society is unclear? A century ago, no one dreamt of regulating smoking because it wasn't known to be dangerous. AI, by the same token, wasn't something under serious threat of regulation; any sense of its danger was reduced to sci-fi films with no basis in reality.

But advances in gen AI and ChatGPT, as well as advances towards artificial general Intelligence (AGI), have changed all that. Some national governments seem to be working in unison to regulate AI, while paradoxically, others are jockeying for position as AI regulators-in-chief.

Amid this hubbub, it's crucial that AI bias doesn't become overly politicized and is instead viewed as a societal issue that transcends political stripes. Across the world, governments, alongside data scientists, businesses and academics, must unite to tackle it.

Ravi Mayuram is CTO of Couchbase.


Read the rest here:

As regulators talk tough, tackling AI bias has never been more urgent - VentureBeat

Read More..

Google looks to make Artificial Intelligence as simple as Search – Times of India

SAN FRANCISCO: Google is now doing to AI what it did to the internet. "We are taking the sophistication of the AI model and putting it behind a simple interface called chat, which then lets you open it up to every department," Google Cloud's CEO Thomas Kurian said. Duet AI in Workspace and Vertex AI, both recently launched products by Google, are expected to revolutionise the market, he added. Kurian was speaking with some members of the press last week on the sidelines of the three-day Google Cloud Next, a mega event at Moscone Center in San Francisco from August 29.

"AI can be used in virtually every department, every business function in a company, and every industry. Retailers are testing it for shopping and commerce. Telecommunication companies are using it for customer service. Banks are using it to synthesise financial statements for their wealth managers. We expect the number of people who can use AI to grow just like when we simplified access to the internet and broadened it," he added.

Vertex AI Search and Conversation, which was made available during the Cloud Next event, allows developers with minimum machine learning knowledge to take data, customise it, build an interactive chatbot or search engine within it, and deploy the apps within a few hours.

Aparna Pappu, VP and general manager of Google Workspace, said Duet AI "has your back." "It can help write emails and make presentations using different sources and summarise what was said in a virtual meeting and even attend the meet on the user's behalf," she said in another media interaction during the event.

Kurian said that generative AI is moving technology out of the IT department to many other functions in companies. "When we look at users of generative AI - marketing departments, HR, supply chain organisations - none of them were talking to us earlier, but at this conference, many are from non-engineering backgrounds... from different business lines because they want to understand how they can use generative AI technology," he added.

Google has provided an AI platform that protects data and ensures that it does not leak out. "We have capability in Vertex so data can be kept and any feedback or changes to the model are private to you," he added. Kurian said they have analysed a million users, understood their behaviour, and found that an average user of Duet can typically write 30-40% more emails, with more than 50% of the content generated by the model.
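
As an illustration of the chat interface Kurian describes, here is a minimal sketch using the Vertex AI Python SDK as it existed in 2023 (the google-cloud-aiplatform package); the project ID, region, and prompts are placeholders, and this is a sketch rather than a deployment recipe:

```python
# A minimal sketch of a Vertex AI chat call (2023-era SDK and PaLM model).
# Project, location, context, and message are illustrative placeholders.
import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="my-gcp-project", location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat(
    context="You are a support assistant for a retail company.",
)
response = chat.send_message("Summarize our return policy for a customer.")
print(response.text)
```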

Read the original here:

Google looks to make Artificial Intelligence as simple as Search - Times of India

Read More..

Decoding Opportunities and Challenges for LLM Agents in … – Unite.AI

We are seeing a progression of Generative AI applications powered by large language models (LLMs), from prompts to retrieval augmented generation (RAG) to agents. Agents are being talked about heavily in industry and research circles, mainly for the power this technology provides to transform Enterprise applications and provide superior customer experiences. There are common patterns for building agents that enable first steps towards artificial general intelligence (AGI).

In my previous article, we saw a ladder of intelligence of patterns for building LLM-powered applications. It starts with prompts that capture the problem domain and use the LLM's internal memory to generate output. With RAG, we augment the prompt with external knowledge searched from a vector database to control the outputs. Next, by chaining LLM calls, we can build workflows to realize complex applications. Agents take this to the next level by auto-determining how these LLM chains are to be formed. Let's look at this in detail.
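
To make the RAG rung of that ladder concrete, here is a minimal sketch; the vector_db and llm clients are stand-ins for any vector database client and LLM API, not a specific library:

```python
# A minimal sketch of the retrieve-augment-generate flow described above.
# `vector_db` and `llm` are assumed generic clients, not a named library.
def answer_with_rag(question: str, vector_db, llm) -> str:
    # 1. Retrieve: find the passages most similar to the question.
    passages = vector_db.search(query=question, top_k=3)

    # 2. Augment: splice the external knowledge into the prompt.
    context = "\n".join(p.text for p in passages)
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. Generate: the output is now grounded in retrieved knowledge
    #    rather than only the LLM's internal memory.
    return llm.generate(prompt)
```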

A key pattern with agents is that they use the language understanding power of LLM to make a plan on how to solve a given problem. The LLM understands the problem and gives us a sequence of steps to solve the problem. However, it doesn't stop there. Agents are not a pure support system that will provide you recommendations on solving the problem and then pass on the baton to you to take the recommended steps. Agents are empowered with tooling to go ahead and take the action. Scary right!?

If we ask an agent a basic question like this:

Human: Which company did the inventor of the telephone start?

Following is a sample of thinking steps that an agent may take.

Agent (THINKING):

Thought: I need to find who invented the telephone. Action: Search["inventor of the telephone"]. Observation: Alexander Graham Bell invented the telephone. Thought: Now I need to find which company Bell started. Action: Search["company founded by Alexander Graham Bell"]. Observation: Bell co-founded the American Telephone and Telegraph Company (AT&T) in 1885. Thought: I now have the answer.

Agent (RESPONSE): Alexander Graham Bell co-founded AT&T in 1885

You can see that the agent follows a methodical way of breaking down the problem into subproblems that can be solved by taking specific Actions. The actions here are recommended by the LLM, and we can map these to specific tools to implement those actions. We could enable a search tool for the agent such that when it realizes that the LLM has provided search as an action, it will call this tool with the parameters provided by the LLM. The search here is on the internet but can as well be redirected to search an internal knowledge base like a vector database. The system now becomes self-sufficient and can figure out how to solve complex problems following a series of steps. Frameworks like LangChain and LlamaIndex give you an easy way to build these agents and connect to tooling and APIs. Amazon recently launched its Bedrock Agents framework, which provides a visual interface for designing agents.
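
To make the pattern concrete, here is a minimal sketch of the loop a ReAct-style agent runs. The llm callable and the toy search tool are stand-ins; frameworks like LangChain implement production versions of this loop, so this illustrates the pattern rather than any framework's API:

```python
# A sketch of a Thought-Action-Observation loop. `llm` is any callable that
# continues a text transcript; `search` is a toy stand-in for a real tool.
import re

def search(query: str) -> str:
    """Stand-in for a real search tool (internet or vector database)."""
    return f"[search results for: {query}]"

TOOLS = {"Search": search}

def react_agent(question: str, llm, max_steps: int = 5) -> str:
    """Loop until the LLM emits an Answer or the step budget runs out."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # The LLM continues the transcript with a Thought and then either an
        # action line, e.g. 'Action: Search[inventor of the telephone]',
        # or a final 'Answer: ...' line.
        step = llm(transcript)
        transcript += step + "\n"
        if "Answer:" in step:
            return step.split("Answer:", 1)[1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.+?)\]", step)
        if match:
            tool_name, argument = match.groups()
            observation = TOOLS[tool_name](argument)  # take the action
            transcript += f"Observation: {observation}\n"
    return "No answer within the step budget."
```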

Under the hood, agents follow a special style of sending prompts to the LLM which make them generate an action plan. The above Thought-Action-Observation pattern is popular in a type of agent called ReAct (Reasoning and Acting). Other types of agents include MRKL and Plan & Execute, which mainly differ in their prompting style.

For more complex agents, the actions may be tied to tools that cause changes in source systems. For example, we could connect the agent to a tool that checks an employee's vacation balance and applies for leave in an ERP system. Now we could build a nice chatbot that interacts with users and, via a chat command, applies for leave in the system. No more complex screens for applying for leave: just a simple, unified chat interface. Sounds exciting!?
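
A sketch of how such a tool might be exposed to an agent follows; the ERP calls and names are hypothetical, and the point is the pairing of a natural-language description (which the LLM reads when planning) with a function (which the agent executes):

```python
# Hypothetical tool definitions for the leave-application example above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str            # read by the LLM when it plans an action
    func: Callable[[str], str]  # executed by the agent when it acts

def check_vacation_balance(employee_id: str) -> str:
    # Hypothetical ERP lookup.
    return f"Employee {employee_id} has 12 vacation days remaining."

def apply_for_leave(request: str) -> str:
    # Hypothetical ERP transaction; a Responsible AI guardrail could
    # require human approval before this call is allowed to run.
    return f"Leave request submitted: {request}"

TOOLS = [
    Tool("VacationBalance",
         "Check an employee's remaining vacation days.",
         check_vacation_balance),
    Tool("ApplyLeave",
         "Submit a leave request to the ERP system.",
         apply_for_leave),
]
```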

Now what if we have a tool that invokes transactions on stock trading using a pre-authorized API? You build an application where the agent studies stock changes (using tools) and makes decisions for you on buying and selling stock. What if the agent sells the wrong stock because it hallucinated and made a wrong decision? Since LLMs are huge models, it is difficult to pinpoint why they make some decisions; hence, hallucinations are common in the absence of proper guardrails.

While agents are all fascinating, you probably will have guessed how dangerous they can be. If they hallucinate and take a wrong action, that could cause huge financial losses or major issues in Enterprise systems. Hence, Responsible AI is becoming of utmost importance in the age of LLM-powered applications. The principles of Responsible AI, around reproducibility, transparency, and accountability, try to put guardrails on decisions taken by agents and suggest risk analysis to decide which actions need a human-in-the-loop. As more complex agents are being designed, they need more scrutiny, transparency, and accountability to make sure we know what they are doing.

The ability of agents to generate a path of logical steps with actions gets them really close to human reasoning. Empowering them with more powerful tools can give them superpowers. Patterns like ReAct try to emulate how humans solve problems, and we will see better agent patterns that are relevant to specific contexts and domains (banking, insurance, healthcare, industrial, etc.). The future is here, and the technology behind agents is ready for us to use. At the same time, we need to pay close attention to Responsible AI guardrails to make sure we are not building Skynet!

Read the rest here:

Decoding Opportunities and Challenges for LLM Agents in ... - Unite.AI

Read More..