
A.I. has a discrimination problem. In banking, the consequences can be severe – CNBC

Artificial intelligence algorithms are increasingly being used in financial services but they come with some serious risks around discrimination.


AMSTERDAM – Artificial intelligence has a racial bias problem.

From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.

And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.

Deloitte notes that AI systems are ultimately only as good as the data they're given: Incomplete or unrepresentative datasets could limit AI's objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.

Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.

"The thing about how good an AI product is, there's kind of two variables," Manji told CNBC in an interview. "One is the data it has access to, and second is how good the large language model is. That's why the data side, you see companies like Reddit and others, they've come out publicly and said we're not going to allow companies to scrape our data, you're going to have to pay us for that."

As for financial services, Manji said a lot of the back-end data systems are fragmented in different languages and formats.

"None of it is consolidated or harmonized," he added. "That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data."

Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.

However, he added that banks, being the heavily regulated, slow-moving institutions that they are, are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.

"You've got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can't keep up with that speed. And then you think about financial services. Banks are not known for being fast," Manji said.

Rumman Chowdhury, Twitter's former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system's bias against marginalized communities can rear its head.

"Algorithmic discrimination is actually very tangible in lending," Chowdhury said on a panel at Money20/20 in Amsterdam. "Chicago had a history of literally denying those [loans] to primarily Black neighborhoods."

In the 1930s, Chicago was known for the discriminatory practice of "redlining," in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.

"There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans," she added.

"Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone's race, it is implicitly picked up."
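Chowdhury's point about race being "implicitly picked up" can be made concrete with a few lines of code. The sketch below uses synthetic data with made-up ZIP codes and group labels: a rule that never sees race still reproduces the disparity when a feature it does use (here, ZIP code) is correlated with race.

```python
# Toy illustration of proxy discrimination (all data synthetic and hypothetical):
# a "race-blind" lending rule that keys on ZIP code reproduces a racial disparity
# because residential segregation makes ZIP code a proxy for race.

import random

random.seed(0)

# Synthetic applicants: in this toy city, ZIP 60601 is 90% group A and
# ZIP 60620 is 90% group B, mirroring a segregated housing pattern.
applicants = []
for _ in range(1000):
    zip_code = random.choice(["60601", "60620"])
    group = "A" if random.random() < (0.9 if zip_code == "60601" else 0.1) else "B"
    applicants.append({"zip": zip_code, "group": group})

def approve(applicant):
    """A 'race-blind' rule that only looks at ZIP code."""
    return applicant["zip"] == "60601"

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

# Race never appears in the rule, yet approval rates diverge sharply.
print(f"group A approval rate: {approval_rate('A'):.2f}")
print(f"group B approval rate: {approval_rate('B'):.2f}")
```

With these synthetic proportions, roughly nine in ten group A applicants are approved versus one in ten for group B, even though the decision rule contains no racial variable at all.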

Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, tells CNBC that when AI systems are specifically used for loan approval decisions, she has found that there is a risk of replicating existing biases present in historical data used to train the algorithms.

"This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities," Bush added.

"It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination," she said.

Frost Li, a developer who has been working in AI and machine learning for more than a decade, told CNBC that the "personalization" dimension of AI integration can also be problematic.

"What's interesting in AI is how we select the 'core features' for training," said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. "Sometimes, we select features unrelated to the results we want to predict."

When AI is applied to banking, Li says, it's harder to identify the "culprit" in biases when everything is convoluted in the calculation.

"A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won't be able to get any credit cards even if he works at Google; yet a person can easily get one from community college credit union because bankers know the local schools better," Li added.

Generative AI is not usually used for creating credit scores or in the risk scoring of consumers.

"That is not what the tool was built for," said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.

Instead, Guske said the most powerful applications lie in pre-processing unstructured data, such as text files, for tasks like classifying transactions.

"Those signals can then be fed into a more traditional underwriting model," said Guske. "Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes."
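The pipeline Guske describes – unstructured text in, labeled signals out, a conventional scoring model on top – can be sketched in a few lines. The classifier here is a keyword stub standing in for a generative model, and the categories, keywords, and score weights are invented for illustration only.

```python
# Minimal sketch: AI-derived signals feeding a traditional underwriting model.
# classify() is a stand-in for an LLM-based text classifier; the categories
# and weights are hypothetical.

CATEGORIES = {
    "salary": ["payroll", "salary"],
    "gambling": ["casino", "betting"],
    "rent": ["rent", "landlord"],
}

def classify(description: str) -> str:
    """Stand-in for a generative-AI classifier over unstructured text."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

def risk_signals(transactions: list[str]) -> dict[str, int]:
    """Aggregate per-category counts from raw transaction descriptions."""
    signals = {c: 0 for c in [*CATEGORIES, "other"]}
    for t in transactions:
        signals[classify(t)] += 1
    return signals

def score(signals: dict[str, int]) -> float:
    """A transparent, traditional scoring rule fed by the AI-derived signals."""
    return 600 + 50 * signals["salary"] - 80 * signals["gambling"]

transactions = ["ACME PAYROLL JUN", "Lucky Casino deposit", "Rent to landlord"]
signals = risk_signals(transactions)
print(signals)        # {'salary': 1, 'gambling': 1, 'rent': 1, 'other': 0}
print(score(signals)) # 570
```

The point of the design is the separation Guske describes: the generative step only improves data quality, while the final decision still comes from a conventional, auditable model.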

But it's also difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But these claims were dismissed by the New York State Department of Financial Services after the regulator found no evidence of discrimination based on sex.

The problem, according to Kim Smouter, director of the group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.

"One of the difficulties in the mass deployment of AI," he said, "is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination."

"Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it's also difficult to detect specific instances where things have gone wrong," he added.

Smouter cited the example of the Dutch child welfare scandal, in which thousands of benefit claimants were wrongfully accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were "treated with an institutional bias."

This, Smouter said, "demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered and in the meantime significant, often irreversible damage is done."

Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.

Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology's moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and "hallucinations" generated by ChatGPT-like tools.

"I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy – not any of the text, not any of the video, not any of the audio. But then how do we get our information? And how do we ensure that information has a high amount of integrity?" Chowdhury said.

Now is the time for meaningful regulation of AI to come into force. But given how long it will take for regulatory proposals like the European Union's AI Act to take effect, some are concerned this won't happen fast enough.

"We call for more transparency and accountability of algorithms and how they operate, a layman's declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, an independent complaints process, periodic audits and reporting, and involvement of racialized communities when tech is being designed and considered for deployment," Smouter said.

The AI Act, the first regulatory framework of its kind, has incorporated a fundamental rights approach and concepts like redress, according to Smouter, who added that the regulation will be enforced in approximately two years.

"It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation," he said.


WEDNESDAY: West Seattle facilitator hosting ‘civic conversation … – West Seattle Blog

Been seeing the seemingly endless headlines about AI (artificial intelligence) but not sure how you feel about it? Or maybe you're already using it, and excited about its possibilities. Or perhaps you're somewhere between worried and terrified of where it might take us. However you feel about AI, if you're interested in a facilitated civic conversation about it, your West Seattle neighbor James Boutin is hosting one this Wednesday evening (June 28th), 5-7 pm, at C & P Coffee (5612 California SW; WSB sponsor). When James sent us the announcement for the WSB West Seattle Event Calendar, we asked why – what's his stake in AI? He replied that "first and foremost, I'm a citizen who cares a great deal about democracy and believes the public is in desperate need of public spaces to talk openly about the speed at which AI technology is advancing (among many other issues important to our world)." He also is an educator and facilitator who is hoping to "get more practice under my belt in facilitating these types of conversations. I just completed a master's program on facilitation and conflict studies at the Processwork Institute of Portland, OR, and I'm dedicated to practicing the skills I learned about holding open forums out in the world." (His website is here.) James suggests a $15 donation "to help me cover the costs of preparation and spreading the word," but folks are also welcome to donate less or come for free.


Why smart AI regulation is vital for innovation and US leadership – TechCrunch


As a teenager, I immersed myself in science fiction. While the visions of many films and novels haven't come to pass, I'm still amazed by legendary writer Isaac Asimov's ability to imagine a future of artificial intelligence and robotics. Now, amid all the hype around generative AI and other AI tools, it's time for us to follow Asimov's lead and write a new set of rules.

Of course, AI rules for the 21st century won't be quite as simple as Asimov's three rules of robotics (popularized in "I, Robot"). But amid anxiety around the rise of AI tools and a misguided push for a moratorium on advanced AI research, industry can and should be pushing for rules for responsible AI development. Certainly, the past century's advances in technology have given us plenty of experience in evaluating both the benefits of technological progress and the potential pitfalls.

Technology itself is neutral. It's how we use it and the guardrails we set up around it that dictate its impact. As humans, harnessing the power of fire allowed us to stay warm and extend food storage time. But fire can still be destructive.

Think of how the recent Canadian wildfires threatened lives and property in Canada and damaged U.S. air quality. Nuclear power in the form of atomic bombs killed thousands in Japan during WWII, but nuclear energy lights up much of France and powers U.S. aircraft carriers.

In the case of AI, new tools and platforms can solve big global problems and create valuable knowledge. At a recent meeting of Detroit-area chief information officers, attendees shared how generative AI is already speeding up time-to-market and making their companies more competitive.

Generative AI will help us listen to different animal species. AI will improve our health by supporting drug discovery and disease diagnosis. Similar tools are providing everything from personalized care for elders to better security for our homes. More, AI will improve our productivity, with a new study by McKinsey showing generative AI could boost the global economy by $4.4 trillion annually.

With all this possibility, can such an amazing technology also be bad? Some of the concerns around AI platforms are legitimate. We should be concerned about the risk of deep fakes, political manipulation, and fraud aimed at vulnerable populations, but we can also use AI to recognize, intercept and block harmful cyber intrusions. Both barriers and solutions may be difficult and complex, and we need to work on them.

Some may also be simple; we already see schools experimenting with oral exams to test a student's knowledge. Addressing those issues head-on, rather than sticking our heads in the sand with a pause on research that would be impossible to enforce and ripe for exploitation by bad actors, will position the United States as a leader on the world stage.

While the U.S. approach to AI has been mixed, other countries seem locked in to a hyper-regulatory stampede. The EU is on the precipice of passing a sweeping AI Act that would require companies to ask permission to innovate. In practice, that would mean that only the government or huge companies with the finances and capacity to afford the certification labyrinth covering privacy, IP, and a host of social protection requirements could develop new AI tools.

A recent study from Stanford University also found that the EU's AI Act would bar all of the currently existing large language models, including OpenAI's GPT-4 and Google's Bard. Canadian lawmakers are moving forward an overly broad AI bill that could similarly stifle innovation. Most concerning, China is rapidly pursuing civil and military AI dominance through massive government support. More, it has a different view of human rights and privacy protection that may help its AI efforts but is antithetical to our values. The U.S. must act to protect citizens and advance AI innovation or we will be left behind.

What would that look like? To start, the U.S. needs a preemptive federal privacy bill. Today's patchwork of state-by-state rules means that data is treated differently each time it crosses an invisible border, causing confusion and compliance hurdles for small businesses. We need a national privacy law with clear guidelines and standards for how companies collect, use, and share data. It would also help create transparency for consumers and ensure that companies can foster trust as the digital economy grows.

We also need a set of principles around responsible AI use. While I prefer less regulation, managing emerging technologies like AI requires clear rules that set out how this technology can be developed and deployed. With new innovations in AI unveiled almost daily, legislators should focus on guardrails and outcomes, rather than attempting to rein in specific technologies.

Rules should also consider the level of risk, focusing on AI systems that could meaningfully hurt Americans' fundamental rights or access to critical services. As our government determines what good policy looks like, industry will have a vital role to play. The Consumer Technology Association is working closely with industry and policymakers to develop unified principles for AI use.

We're at a pivotal moment for the future of an amazing, complex and consequential technology. We can't afford to let other countries take the lead.


MongoDB Embraces AI & Reduces Developer Friction With New Features – Forbes


Even if you haven't heard of MongoDB, odds are good that you touch it in your daily online life. MongoDB has enabled more than 43,000 organizations to build solutions leveraging MongoDB technology, including some of the biggest names in technology, healthcare, telecom, and financial services. The company's horizontal document-oriented (often called NoSQL) database technology underpins a broad swath of workloads that all need modern data services – needs that often don't directly map to the constraints of traditional relational databases.

Servicing the quickly evolving needs of modern application development requires rapid innovation and fast product cycles. MongoDB demonstrated both last week at its MongoDB.local 2023 event in New York City, introducing a compelling set of new features and services.

The announcements cover a wide breadth of territory, with new capabilities to leverage the latest AI technology, features that enable greater developer productivity, ease the burden of enterprise application development, and even a new program to simplify deploying MongoDB technology into a targeted set of verticals. There's a lot to delve into.

It's impossible to talk about application development today without touching on artificial intelligence. Generative AI, typified by large language models (LLMs) such as ChatGPT, captures headlines daily. The question technology companies and IT practitioners alike most often ask me is how AI will affect them. MongoDB this past week illustrated how generative AI impacts the data plane.

Technologies such as generative AI change how we think about managing the data that feeds AI-driven systems. Language processing, for example, utilizes attributes on data called "vectors."

You can think of vector embeddings as tags placed on data by an AI model that define the relationships between words. These vectors are then used as efficient shortcuts when running generative AI models. (This is a simplistic explanation of vectors; interested readers should read this more in-depth explanation.)
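As a rough illustration of that idea, each word can be mapped to a list of numbers, and related words end up pointing in similar directions. The three-dimensional vectors below are invented for the example; real embeddings have hundreds or thousands of dimensions produced by a trained model.

```python
# Toy embeddings (made-up 3-D vectors): semantically related words score
# higher under cosine similarity than unrelated ones.

import math

embeddings = {
    "bank":   [0.9, 0.1, 0.3],
    "loan":   [0.8, 0.2, 0.4],
    "guitar": [0.1, 0.9, 0.7],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "bank" and "loan" point in similar directions; "bank" and "guitar" do not.
print(cosine_similarity(embeddings["bank"], embeddings["loan"]))
print(cosine_similarity(embeddings["bank"], embeddings["guitar"]))
```

Vector search, as described below, is essentially this comparison run efficiently over millions of stored vectors.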

MongoDB's new MongoDB Atlas Vector Search is designed to simplify the development of AI language and generative AI applications. The new capability allows vector embedding directly on data stored by MongoDB, allowing new generative AI applications to be quickly and efficiently developed on MongoDB Atlas.


MongoDB Atlas Vector Search is also integrated with the open-source LangChain and LlamaIndex frameworks with tools for accessing and managing LLMs for various applications.

Building and deploying applications leveraging the latest in AI technology can be daunting. The concepts, tools, and even infrastructure significantly differ from more traditional software development approaches. AI applications can also require multiple iterations of model training as the application evolves, adding significant development costs.

Last week, recognizing the unique challenges of developing AI applications, MongoDB announced its new MongoDB AI Innovators Program, designed to ease the unique burdens of developing AI applications. The new program offers several benefits, including providing eligible organizations with up to $25,000 in credits for MongoDB Atlas.

The AI Innovators Program also includes engagement opportunities with MongoDB to fast-track strategic partnerships and joint go-to-market activities with what the company calls its AI Amplify track. Companies participating in the AI Amplify track have their submissions evaluated by MongoDB to gauge the appropriateness of a potential partnership. MongoDB technical experts are also available for solutions architecture and to help identify compelling use cases for co-marketing opportunities.

Finally, MongoDB is making its partner ecosystem available to program participants. Organizations participating in the MongoDB AI Innovators Program will have prioritized access to opportunities with MongoDB Partners, and eligible organizations can be fast-tracked to join the MongoDB Partner Ecosystem to build seamless, interoperable integrations and joint solutions. MongoDB has over 1,000 partners, making this an attractive benefit of the program.

In addition to the new vector search capabilities already mentioned, four additional capabilities were introduced into MongoDB Atlas.

Keeping with its theme of simplifying the developer experience, these new features should ease the burden of developing applications using MongoDB Atlas as an intelligent data platform.

MongoDB is a foundational component for data modernization, but it is only part of the solution. Mongo recognizes this, calling its technology a Developer Data Platform. The phrase emphasizes the importance of empowering developers to build next-generation AI-enabled applications, often while also using AI. MongoDB empowers developers by delivering a data plane offering the capabilities most needed for modern applications.

Mongo announced new programming language support to facilitate adoption across multiple environments. The company added support for server-side Kotlin applications (Kotlin is a programming language designed for cross-platform application development). There is also new support for data processing and analytics with Python, as MongoDB makes its open-source PyMongoArrow library generally available, allowing developers to efficiently convert data stored in MongoDB for use with some of the most popular Python-based analytics frameworks.

MongoDB is also adding additional support for deploying and managing MongoDB using Amazon AWS infrastructure-as-code (IaC) capabilities. MongoDB released a new integration with the AWS Cloud Development Kit (CDK), allowing developers to manage MongoDB Atlas resources with C#, Go, Java, and Python. This is a significant enabler for developers deploying on AWS.

MongoDB also simplified its Kubernetes integration with improvements to its MongoDB Atlas Kubernetes Operator. The new functionality makes it easier for developers to deploy and manage MongoDB Atlas resources directly from Kubernetes.

Finally, MongoDB announced its new MongoDB Relational Migrator tool. The new tool makes migrating from traditional legacy databases into a MongoDB environment easier and significantly faster. MongoDB Relational Migrator analyzes legacy databases, automatically generates new data schema and code, and then executes a seamless migration to MongoDB Atlas without downtime. This capability will reduce the pain often experienced when moving data into a new environment from a legacy data store.

MongoDB held an investor conference parallel to its developer-focused MongoDB.local event. At the investor event, MongoDB's chief product officer, Sahir Azam, described how the company builds its product strategy and GTM activities around its understanding of the customer's journey.

The features, and new business opportunities, announced by MongoDB make sense to anyone familiar with the development of a modern data-driven application. The new offerings help developers leverage MongoDB technology to create new applications while also implementing the features required to develop next-generation AI-enabled solutions.

There's no question that developers appreciate what the company is delivering. As an enabling technology for other applications, MongoDB's approach not only makes sense, it's also necessary. It's also paying off.

MongoDB has beaten consensus estimates in its earnings for seventeen straight quarters, with its most recent earnings besting EPS estimates by nearly 195%. The most recent quarter also saw Mongo growing its top-line revenue by 29% year-over-year. The company has increased revenue by 8x since 2018. That's a tremendous vote of confidence from its customers, especially in a market that's still hampering growth for nearly every foundational technology company.

MongoDB competes in a crowded segment, and we see innovation coming from its closest competitors, evidenced by recent announcements from competitors such as Elastic. At the same time, MongoDB stands out in this intensely competitive environment with its relentless focus on improving the experience for developers, quickly adapting to new trends in data analysis and AI, and implementing programs that allow its customers to launch new applications quickly. Seventeen straight earnings beats, over a thousand partners, and more than 43,000 customers all show that MongoDB is earning its success.

Disclosure: Steve McDowell is an industry analyst at NAND Research, an industry analyst firm that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. Mr. McDowell does not hold any equity positions with any company mentioned in this article.

Steve McDowell is principal analyst and founding partner at NAND Research. Steve is a technologist with over 25 years of deep industry experience in a variety of strategy, engineering, and strategic marketing roles, all with the unifying theme of delivering innovative technologies into the enterprise infrastructure market.


The Next Token of Progress: 4 Unlocks on the Generative AI Horizon – Andreessen Horowitz

Large language models (LLMs) have taken the tech industry by storm, powering experiences that can only be described as magical – from writing a week's worth of code in seconds to generating conversations that feel even more empathetic than the ones we have with humans. Trained on trillions of tokens of data with clusters of thousands of GPUs, LLMs demonstrate remarkable natural language understanding and have transformed fields like copy and code, propelling us into the new and exciting generative era of AI. As with any emerging technology, generative AI has been met with some criticism. Though some of this criticism does reflect the current limits of LLMs' capabilities, we see these roadblocks not as fundamental flaws in the technology, but as opportunities for further innovation.

To better understand the near-term technological breakthroughs for LLMs and prepare founders and operators for what's around the bend, we spoke to some of the leading generative AI researchers who are actively building and training some of the largest and most cutting-edge models: Dario Amodei, CEO of Anthropic; Aidan Gomez, CEO of Cohere; Noam Shazeer, CEO of Character.AI; and Yoav Shoham of AI21 Labs. These conversations identified 4 key innovations on the horizon: steering, memory, arms and legs, and multimodality. In this piece, we discuss how these key innovations will evolve over the next 6 to 12 months and how founders curious about integrating AI into their own businesses might leverage these new advances.

Many founders are understandably wary of implementing LLMs in their products and workflows because of these models' potential to hallucinate and reproduce bias. To address these concerns, several of the leading model companies are working on improved steering – a way to place better controls on LLM outputs – to focus model outputs and help models better understand and execute on complex user demands. Noam Shazeer draws a parallel between LLMs and children in this regard: "It's a question of how to direct [the model] better ... We have this problem with LLMs that we just need the right ways of telling them to do what we want. Small children are like this as well – they make things up sometimes and don't have a firm grasp of fantasy versus reality." Though there has been notable progress in steerability among the model providers, as well as the emergence of tools like Guardrails and LMQL, researchers are continuing to make advancements, which we believe is key to better productizing LLMs among end users.

Improved steering becomes especially important in enterprise companies where the consequences of unpredictable behavior can be costly. Amodei notes that the unpredictability of LLMs "freaks people out" and, as an API provider, he wants to be able to "look a customer in the eye and say no, the model will not do this, or at least does it rarely." By refining LLM outputs, founders can have greater confidence that the model's performance will align with customer demands. Improved steering will also pave the way for broader adoption in other industries with higher accuracy and reliability requirements, like advertising, where the stakes of ad placement are high. Amodei also sees use cases ranging from "legal use cases, medical use cases, storing financial information and managing financial bets, [to] where you need to preserve the company brand. You don't want the tech you incorporate to be unpredictable or hard to predict or characterize." With better steering, LLMs will also be able to do more complex tasks with less prompt engineering, as they will be able to better understand overall intent.

Advances in LLM steering also have the potential to unlock new possibilities in sensitive consumer applications where users expect tailored and accurate responses. While users might be willing to tolerate less accurate outputs from LLMs when engaging with them for conversational or creative purposes, users want more accurate outputs when using LLMs to assist them in daily tasks, advise them on major decisions, or augment professionals like life coaches, therapists, and doctors. Some have pointed out that LLMs are poised to unseat entrenched consumer applications like search, but we likely need better steering to improve model outputs and build user trust before this becomes a real possibility.

Key unlock: users can better tailor the outputs of LLMs.

Copywriting and ad-generating apps powered by LLMs have already seen great results, leading to quick uptake among marketers, advertisers, and scrappy entrepreneurs. Currently, however, most LLM outputs are relatively generalized, which makes it difficult to leverage them for use cases requiring personalization and contextual understanding. While prompt engineering and fine-tuning can offer some level of personalization, prompt engineering is less scalable and fine-tuning tends to be expensive, since it requires some degree of re-training and often partnering closely with mostly closed-source LLMs. It's often not feasible or desirable to fine-tune a model for every individual user.

In-context learning, where the LLM draws from the content your company has produced, your company's specific jargon, and your specific context, is the holy grail – creating outputs that are more refined and tailored to your particular use case. In order to unlock this, LLMs need enhanced memory capabilities. There are two primary components to LLM memory: context windows and retrieval. Context windows are the text that the model can process and use to inform its outputs, in addition to the data corpus it was trained on. Retrieval refers to retrieving and referencing relevant information and documents from a body of data outside the model's training data corpus ("contextual data"). Currently, most LLMs have limited context windows and aren't able to natively retrieve additional information, and so generate less personalized outputs. With bigger context windows and improved retrieval, however, LLMs can directly offer much more refined outputs tailored to individual use cases.

With expanded context windows in particular, models will be able to process larger amounts of text and better maintain context, including maintaining continuity through a conversation. This will, in turn, significantly enhance models' ability to carry out tasks that require a deeper understanding of longer inputs, such as summarizing lengthy articles or generating coherent and contextually accurate responses in extended conversations. We're already seeing significant improvement with context windows: GPT-4 has both an 8k and 32k token context window, up from 4k and 16k token context windows with GPT-3.5 and ChatGPT, and Claude recently expanded its context window to an astounding 100k tokens.

Expanded context windows alone don't sufficiently improve memory, since cost and time of inference scale quasi-linearly, or even quadratically, with the length of the prompt. Retrieval mechanisms augment and refine the LLM's original training corpus with contextual data that is most relevant to the prompt. Because LLMs are trained on one body of information and are typically difficult to update, there are two primary benefits of retrieval, according to Shoham: "First, it allows you to access information sources you didn't have at training time. Second, it enables you to focus the language model on information you believe is relevant to the task." Vector databases like Pinecone have emerged as the de facto standard for the efficient retrieval of relevant information and serve as the memory layer for LLMs, making it easier for models to search and reference the right data amongst vast amounts of information quickly and accurately.
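Mechanically, that retrieval step is a nearest-neighbour search over embeddings. The following is a minimal, self-contained sketch of the mechanic only: the toy letter-frequency `embed` function stands in for a real embedding model, and a production system would delegate storage and search to a vector database such as Pinecone rather than scanning a Python list:

```python
import math

# Toy embedding: letter frequencies, L2-normalized. A real system would
# use a learned embedding model instead of character counts.
def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

# Retrieval: rank documents by cosine similarity to the query embedding
# and return the top k, to be placed in the model's context window.
def retrieve(query, docs, k=2):
    q = embed(query)
    scores = [sum(a * b for a, b in zip(q, embed(d))) for d in docs]
    ranked = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in ranked[:k]]

docs = ["refund policy for enterprise customers",
        "office dog photos",
        "how refunds are processed"]
top = retrieve("customer refund process", docs)  # the two refund documents
```

The design point is that only the top-k snippets, not the whole corpus, are spent against the limited context window, which is how retrieval and context windows complement each other.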

Together, increased context windows and retrieval will be invaluable for enterprise use cases like navigating large knowledge repositories or complex databases. Companies will be able to better leverage their proprietary data, like internal knowledge, historical customer support tickets, or financial results as inputs to LLMs without fine-tuning. Improving LLMs' memory will lead to improved and deeply customized capabilities in areas like training, reporting, internal search, data analysis and business intelligence, and customer support.

In the consumer space, improved context windows and retrieval will enable powerful personalization features that can revolutionize user experiences. Noam Shazeer believes that one of the big unlocks will be developing a model that both has a very high memory capacity to customize for each user but can still be served cost-effectively at scale: "You want your therapist to know everything about your life; you want your teacher to understand what you know already; you want a life coach who can advise you about things that are going on. They all need context." Aidan Gomez is similarly excited by this development. "By giving the model access to data that's unique to you, like your emails, calendar, or direct messages," he says, "the model will know your relationships with different people and how you like to talk to your friends or your colleagues and can help you within that context to be maximally useful."

Key unlock: LLMs will be able to take into account vast amounts of relevant information and offer more personalized, tailored, and useful outputs.

The real power of LLMs lies in enabling natural language to become the conduit for action. LLMs have a sophisticated understanding of common and well-documented systems, but they can't execute on any information they extract from those systems. For example, OpenAI's ChatGPT, Anthropic's Claude, and Character AI's Lily can describe, in detail, how to book a flight, but they can't natively book that flight themselves (though advancements like ChatGPT's plugins are starting to push this boundary). "There's a brain that has all this knowledge in theory and is just missing the mapping from names to the button you press," says Amodei. "It doesn't take a lot of training to hook those cables together. You have a disembodied brain that knows how to move, but it doesn't have arms or legs attached yet."

We've seen companies steadily improve LLMs' ability to use tools over time. Incumbents like Bing and Google and startups like Perplexity and You.com introduced search APIs. AI21 Labs introduced Jurassic-X, which addressed many of the flaws of standalone LLMs by combining models with a predetermined set of tools, including a calculator, weather API, wiki API, and database. OpenAI released a beta of plugins that allow ChatGPT to interact with tools like Expedia, OpenTable, Wolfram, Instacart, Speak, a web browser, and a code interpreter, an unlock that drew comparisons to Apple's App Store moment. And more recently, OpenAI introduced function calling in GPT-3.5 and GPT-4, which allows developers to link GPT's capabilities to whatever external tools they want.
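The function-calling flow described above reduces to three steps: the developer describes each tool as a JSON schema, the model replies with a structured call, and application code executes it. Here is a hedged sketch of that loop; the `book_flight` tool and the hard-coded model reply are invented for illustration (in a real integration the structured call comes back in the API response rather than being written by hand):

```python
import json

# 1. Tool description the developer would pass to the model (hypothetical).
tools = [{
    "name": "book_flight",
    "description": "Book a flight for the user",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
}]

# 2. The application-side implementation of the tool (a stub here).
def book_flight(origin, destination, date):
    return f"Booked {origin} -> {destination} on {date}"

# 3. A structured call as the model might return it. Hard-coded for the
# sketch; a real integration reads this out of the model's response.
model_call = {
    "name": "book_flight",
    "arguments": json.dumps(
        {"origin": "AMS", "destination": "JFK", "date": "2023-07-01"}),
}

# 4. Application code dispatches the call to the matching function.
dispatch = {"book_flight": book_flight}
result = dispatch[model_call["name"]](**json.loads(model_call["arguments"]))
```

The key design choice is that the model never executes anything itself; it only emits a structured request, and the application stays in control of what actually runs.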

By shifting the paradigm from knowledge excavation to an action orientation, adding "arms and legs" has the potential to unlock a range of use cases across companies and user types. For consumers, LLMs may soon be able to give you recipe ideas, then order the groceries you need, or suggest a brunch spot and book your table. In the enterprise, founders can make their apps easier to use by plugging in LLMs. As Amodei notes, "for features that are very hard to use from a UI perspective, we may be able to make complicated things happen by just describing them in natural language." For instance, for apps like Salesforce, LLM integration should allow users to give an update in natural language and have the model automatically make those changes, significantly cutting down the time required to maintain the CRM. Startups like Cohere and Adept are working on integrations into these kinds of complex tools.

Gomez believes that, while it's increasingly likely that LLMs will be able to use apps like Excel within 2 years, "there's a bunch of refinement that still needs to happen. We'll have a first generation of models that can use tools that will be compelling but brittle. Eventually, we'll get the dream system, where we can give any software to the model with some description of 'here's what the tool does, here's how you use it,' and it'll be able to use it. Once we can augment LLMs with specific and general tools, the sort of automation it unlocks is the crown jewel of our field."

Key unlock: LLMs will be able to interact much more effectively with the tools we use today.

While the chat interface is exciting and intuitive for many users, humans hear and speak language as or more often than they write or read it. As Amodei notes, "there is a limit to what AI systems can do because not everything is text." Models featuring multimodality, or the ability to seamlessly process and generate content across multiple audio or visual formats, extend this interaction beyond language. Models like GPT-4, Character.AI, and Meta's ImageBind already process and generate images, audio, and other modalities, but they do so at a more basic, though quickly improving, level. In Gomez's words, "our models are blind in a literal sense today; that needs to change. We've built a lot of graphical user interfaces (GUIs) that assume [the user] can see."

As LLMs evolve to better understand and interact with multiple modalities, they'll be able to use existing apps that rely on GUIs today, like the browser. They can also offer more engaging, connected, and comprehensive experiences to consumers, who will be able to engage outside of a chat interface. "A lot of great integration with multimodal models can make things a lot more engaging and connected to the user," Shazeer points out. "I believe, for now, most of the core intelligence comes from text, but audio and video can make these things more fun." From video chats with AI tutors to iterating and writing TV pilot scripts with an AI partner, multimodality has the potential to change entertainment, learning and development, and content generation across a variety of consumer and enterprise use cases.

Multimodality is also closely tied to tool use. While LLMs might initially connect with outside software through APIs, multimodality will enable LLMs to use tools designed for humans that don't have custom integrations, like legacy ERPs, desktop applications, medical equipment, or manufacturing machinery. We're already seeing exciting developments on this front: Google's Med-PaLM-2 model, for instance, can synthesize mammograms and X-rays. And as we think longer-term, multimodality, particularly integration with computer vision, can extend LLMs into our own physical reality through robotics, autonomous vehicles, and other applications that require real-time interaction with the physical world.

Key unlock: Multimodal models can reason about images, video, or even physical environments without significant tailoring.

While there are real limitations to LLMs, researchers have made astounding improvements to these models in a short amount of time; in fact, we've had to update this article multiple times since we started writing it, a testament to the lightning-fast progression of this technology in the field. Gomez agrees: "An LLM making up facts 1 in 20 times is obviously still too high. But I really still feel quite confident that it's because this is the first time we've built a system like that. People's expectations are quite high, so the goal post has moved from 'computer is dumb and does only math' to 'a human could've done this better.' We've sufficiently closed the gap so that criticism is around what a human can do."

We're particularly excited about these 4 innovations, which are on the cusp of changing the way founders build products and run their companies. The potential is even greater in the long term. Amodei predicts that, at some point, "we could have a model that will read through all the biological data and say: here's the cure for cancer." Realistically, the best new applications are likely still unknown. At Character.AI, Shazeer lets the users develop those use cases: "We'll see a lot of new applications unlocked. It's hard for me to say what the applications are. There will be millions of them and the users are better at figuring out what to do with the technology than a few engineers." We can't wait for the transformative effect these advancements will have on the way we live and work as founders and companies are empowered with these new tools and capabilities.

Thanks to Matt Bornstein, Guido Appenzeller, and Rajko Radovanović for their input and feedback during the writing process.

* * *

The views expressed here are those of the individual AH Capital Management, L.L.C. (a16z) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

See more here:

The Next Token of Progress: 4 Unlocks on the Generative AI Horizon - Andreessen Horowitz

Generation AI: It is India's time to play chief disruptor – Mint

Rapid advancements in Generative Artificial Intelligence (AI) and in its ability to mimic human intelligence to write computer code and much more have caught us all by surprise. We have seen and lived through many technology shifts, but few as significant as this one. Jensen Huang, CEO of Nvidia, summed it up as the most powerful democratization source we have ever seen. A tool for engineers and coders until now, AI today requires no knowledge of any programming language to use. A simple prompt in English is enough.

As per Goldman Sachs, the right use of Generative AI can add around $7 trillion to the global GDP in the next decade. In a world faced with slowing economic growth, we must invest in technologies that aid growth and productivity, solve complex problems like climate change, and create a more inclusive society. However, this needs to be done with the right guard-rails.

India today has the opportunity to provide much-needed thought leadership globally on Generative AI. Our proven track record of creating inclusive technology for societal impact at scale, be it UPI for financial inclusion or Diksha for education, provides a framework to create open, interoperable and affordable platforms for services and products that can be scaled exponentially. Can we build on these learnings and envision it as a strategic inflection point for India to lead the world into an era of Augmented Intelligence? India sowed the seed for it when we designed our national AI strategy to enable AI for All in 2019. Since then, initiatives across government, industry and startups have focused on the inclusive and responsible adoption of AI.

India now needs to prioritize a comprehensive Generative AI strategy on top of its AI foundation to drive breakthrough productivity gains across all aspects of life, business and society. If leveraged right, we can embed a foundational layer of intelligence in every product, service and process, significantly boosting overall productivity. To achieve this, we recommend a four-pronged approach that brings together diverse stakeholders and is enabled by strong execution.

First, real competitive differentiation will be about talent and skills. While India ranks first in AI skills penetration as per the OECD, we need to move the narrative from AI talent to Generation AI. Can we unlock the full potential of India's demographic dividend and its propensity for technology adoption by creating a generation of AI-literate citizens who know how to use it responsibly? For this, we recommend a tiered strategy:

For few: Aim to become the world leader in data and AI skills by training 1 million world-class AI professionals to meet global demand in areas such as Natural Language Processing, Large Language Models, responsible AI, and data fundamentals.

For many: Empower a workforce of 18-20 million to use AI tools for enhanced productivity across sectors. Provide training in domain fundamentals, AI tool usage and responsible AI.

For all: India needs AI-literate citizens who constitute an entire generation of AI users equipped with the skills and capabilities to unlock its potential across all aspects of life. Provide education in AI application security, AI awareness and responsible AI.

Second, for innovation to thrive, we need to build the infrastructure needed for a globally competitive AI ecosystem. An AI boom depends on three things: large amounts of data, mega computing power to process it, and budgets to afford it. India must invest in building its capabilities, including:

Foundation models: Invest in the development and promotion of large trustworthy AI models that will address our language diversity and our cultural context.

Computing access: Set up a national GPU cloud with at least an exaflop of AI compute capacity and around 25,000 A100 GPUs or better. This could be done by the government in partnership with industry players by setting up such infrastructure under incentive programmes and recovering the running cost from users over, say, a 10-year period, with subsidized pricing for Indian academia and startups.
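To make the recovery model concrete, here is a back-of-the-envelope amortization sketch. Every figure in it is an illustrative assumption, not a sourced estimate of actual GPU or operating costs:

```python
# Illustrative cost-recovery arithmetic for a shared national GPU cloud.
# All numbers below are assumptions made up for the sketch.
GPUS = 25_000                    # fleet size mentioned above
CAPEX_PER_GPU_USD = 15_000       # assumed purchase price per GPU
ANNUAL_OPEX_USD = 50_000_000     # assumed power, cooling, staffing per year
RECOVERY_YEARS = 10              # recovery window suggested above
UTILISATION = 0.60               # assumed fraction of GPU-hours billed

total_cost = GPUS * CAPEX_PER_GPU_USD + ANNUAL_OPEX_USD * RECOVERY_YEARS
billable_hours = GPUS * 24 * 365 * RECOVERY_YEARS * UTILISATION
breakeven_rate = total_cost / billable_hours  # USD per billed GPU-hour

# Under these assumptions the scheme breaks even at roughly $0.67 per
# GPU-hour, leaving headroom to subsidize academia and startups.
```

The point of the exercise is only that the per-hour break-even rate falls directly out of fleet size, utilisation and the recovery window, so subsidized pricing tiers can be set against it.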

Special economic AI clusters: Create virtual AI clusters for core sectors like healthcare, agriculture, energy, manufacturing, defence, etc, to turbocharge innovation and product development. Each cluster could provide access to foundation models and the GPU cloud as well as access to patient capital, fast-track patent approvals, mentorship, and industry links.

Third, India should drive scalable adoption of AI across sectors. We should build a responsible AI stack with our know-how to accelerate adoption across core sectors. India should also set up a sandbox for responsible use cases.

Lastly, we need a pro-innovation policy formulation that catalyses responsible and ethical AI usage and creation. India should develop a tailored approach that aligns existing laws, identifies gaps and establishes a governance framework to manage risk and foster innovation. This will help protect us against potential harm while driving our advancement and economic growth.

While the world worries about what AI could do, we believe India can and must show what AI should do as a force for good. With our young talent, tech know-how and a strong commitment to leverage technology as an equalizer, India must lead a global shift from Artificial Intelligence to Human-Centric Augmented Intelligence that is designed to make the world better.

Debjani Ghosh is president of Nasscom.

Continued here:

Generation AI: It is Indias time to play chief disruptor | Mint - Mint

Bets on A.I. and innovation help this tech-focused T. Rowe Price … – CNBC

While attending Colby College in the late 1990s, Ken Allen spent half his time shooting hoops and the other half devoted to day trading. Torn when the time came to search for a job, the economics major first consulted a mentor who connected him with a former student. That connection led the Maine native to T. Rowe Price in Baltimore. Since joining the investment management firm in 2000, Allen has worked his way up the corporate ladder, taking the helm as portfolio manager of the T. Rowe Price Science & Technology (PRSCX) fund in 2009.

The growth fund, launched in 1987, aims to capitalize on innovation. In the wake of the latest artificial intelligence boom that's driven up last year's beaten-up technology stocks, it's skyrocketed nearly 38% in 2023. Longer-term, the fund, with a 0.84% expense ratio, offers 10- and 15-year trailing returns of more than 16% and about 13%, respectively.

Key to Allen's strategy since he began running the more than $7 billion fund is finding stocks he views as undervalued, hinging his investment framework on an analysis of cash flows. Allen also views solid, independent research as a critical piece of his methodology. This year, Allen has benefited from a spectacular run-up in technology stocks, fueled by a mania for all things tied to artificial intelligence. That said, "I try to stay really disciplined around valuation," Allen maintains. "I think especially in tech investing, valuation tends to get ignored or largely ignored."

Betting on FAANG stocks

PRSCX owns all the major FAANG names (Meta, Apple, Amazon and Alphabet), except for Netflix, and dominant chipmakers benefiting from AI such as Nvidia and Advanced Micro Devices, up 189% and 70%, respectively, this year. Other key holdings include Microsoft, Salesforce and German online retailer Zalando. Just two stocks combined, Microsoft and Alphabet, made up nearly a fifth of the fund's portfolio as of March 31.
Both stocks have gained more than 38% this year as investors bet on the promise of AI and the pair battle to offer the best generative AI model. But the fund has favored Microsoft since 2008 and bought Alphabet last summer, long before their latest gains. Allen expects recent technology trends to continue boosting what he regards as high-quality names, and applies the same philosophy to a position in Amazon, now the fund's fourth-largest holding.

Allen stood by Microsoft even as a PC-fueled down cycle and uncertainty rocked the technology sector in 2022, driving down shares in the Xbox maker by about 29%. Allen opened his position in Alphabet last year, just as the slowing advertising cycle turned many investors away from the Google parent, dropping its multiple to the low teens. Allen viewed its solid fundamentals as unchanged. "It's kind of unusual to get a great business at a lower PE than its likely ... growth rate, and that's why I felt really comfortable having a particularly large position then in the stock," he explained.

The same idea cemented Allen's faith in Salesforce last year, even as shares cratered 48%, its business slowed and the co-CEO announced his departure. The stock proved one of Allen's biggest losers in 2022, but the portfolio manager expects significant upside as IT spending improves. Salesforce has rebounded more than 58% this year. "I looked at this company that I think is a top line grower, well into the double digits, with a ton of margin expansion, trading at 15 times free cash flow, and it just didn't make sense to me that a business this good would be this discounted," Allen said.

Given his devotion to valuation, Allen has lately pulled back his exposure to some technology names that have rallied, significantly reducing stakes in Nvidia, Meta Platforms, Advanced Micro Devices and Amazon.

Unusual plays and new additions

Not every name in Allen's portfolio is widely bought on Wall Street.
Despite a recent selloff in Zalando, Allen said shares look "really cheap" and the company appears well-situated to take market share with its vast item selection. The stock, one of Allen's top 10 as of March 31, is undervalued by at least half on the basis of its long-term cash flow projections and could potentially triple within the next few years, he calculates. Accenture, which this week forecast weaker-than-estimated revenue in the current quarter, remains a "premier" technology services company able to guide businesses as they look to implement AI. It was Allen's seventh-largest position at the end of March.

Recent additions to the fund include Mastercard, Apple and Texas Instruments. While Mastercard lags the stock performance of many technology behemoths, Allen believes it offers similar growth potential, with less cyclicality and fewer risks. A low-risk approach also extends to Texas Instruments. While the analog semiconductor name is up less than 2% this year, Allen said it offers a strong track record of driving shareholder value and returning capital through buybacks and dividends. TXN yields 3%.

Lately, the T. Rowe Price fund has outperformed even when it's slumped. The fund tumbled more than 35% during last year's rout, but other tech funds lost an average of 37.4%, according to Morningstar. For Allen, every investment and every cycle marks another learning opportunity to perfect his craft. "It's really important to learn over time when things go well and especially when things don't go well," Allen said. "One of the things that I focus on a lot, and have for the 23 years I've been doing this, is thoughtfully evaluating what I can do to just get incrementally better while sticking to a process that I believe in."

See the original post here:

Bets on A.I. and innovation help this tech-focused T. Rowe Price ... - CNBC

AWS is investing $100 million in generative A.I. center in race to keep up with Microsoft and Google – CNBC

Amazon Web Services CEO Adam Selipsky speaks at the CERAWeek by S&P Global conference in Houston on March 7, 2023.

F. Carter Smith | Bloomberg | Getty Images

Amazon's cloud unit said Thursday that it's allocating $100 million for a center to help companies use generative artificial intelligence, the technology that's taken off in the months since OpenAI unleashed its ChatGPT chatbot on the public.

It's a small investment for a company with $64 billion in cash and half a trillion dollars a year in operating expenses. But the announcement shows that Amazon Web Services recognizes the significance of the current moment in generative AI and the importance of being in the conversation, alongside rivals Microsoft and Google.

"You ask yourself the question where are the different runners three steps into a 10K race?" AWS CEO Adam Selipsky said in an interview this week with CNBC. "Does it really matter? The point is, you're three steps in, and it's a 10K race."

As part of the latest announcement, Amazon said it will be adding some data scientists, engineers and solutions architects to the payroll. AWS said the center is already working with Highspot, Twilio, Ryanair and Lonely Planet. The company told CNBC that it's a "program" rather than a physical center.

Amazon, which beat Microsoft and Google to the business of renting out servers and data storage to companies and other organizations, enjoys a commanding lead in the cloud infrastructure market. However, those rivals have had splashier entrances into generative AI, even though Amazon has drawn broadly on AI for years to show shopping recommendations and operate its Alexa voice assistant.

Microsoft has been spending billions on a multilayered alliance with OpenAI, and Google is moving quickly to deploy AI tools it's built in-house for consumers and businesses.

Nor does Amazon have the first popular large language model that can enable a chatbot or a tool for summarizing documents.

Selipsky said he isn't concerned. He joined the company in 2005, a year before the launch of AWS' core services for computing and storage. Echoing Amazon founder and longtime CEO Jeff Bezos, Selipsky said the company has succeeded by listening to customers.

"Amazon has had many examples in its history where it said, we're going to focus on customers and have steadfast belief that we're going to work with customers, we're going to build what they want," Selipsky said. "And if people want to perceive us in a certain way, we're misunderstood, that's OK, as long as customers understand where we're going."

One challenge Amazon currently faces is in meeting demand for AI chips. The company chose to start building chips to supplement graphics processing units from Nvidia, the leader in the space. Both companies are racing to get more supply on the market.

"I think the whole world has a shortage in the short term of compute capacity for doing generative AI and machine learning in general right now," Selipsky said. People are impatient, and the situation will improve in the next few months, he added.

Selipsky is also reckoning with a slowdown in customer spending on cloud, as businesses prepare for ongoing economic uncertainty.

"A lot of customers are largely through their cost optimization, but there have been other customers who are still right in the middle of it," he said. "It's hard to predict exactly when that particular trend will be over. But we're still in the middle of it."

Still, the AI trend is real, he insists. For Amazon, that momentum applies to its Bedrock generative AI service and its Titan models as well as the new innovation center.

"AI is going to be this next wave of innovation in the cloud," he said. "It's going to be the next big thing that pushes even more customers to want to be in the cloud. Really, you need the cloud for generative AI."

Also, the way Selipsky sees it, AWS provides a measure of credibility in offering generative AI that eludes others in the space.

"I can't tell you how many Fortune 500 companies I've talked to who banned ChatGPT in the enterprise," Selipsky said. "Because at least the initial versions of it just didn't have that concept of enterprise security."

Original post:

AWS is investing $100 million in generative A.I. center in race to keep up with Microsoft and Google - CNBC

AI watch: from Wimbledon to job losses in journalism – The Guardian

Technology

This week in artificial intelligence

Fri 23 Jun 2023 08.00 EDT

Artificial intelligence is either going to save humanity or finish it off, depending on who you speak to. Either way, every week there are new developments and breakthroughs. Here are just some of the AI stories that have emerged in recent days.

The Wimbledon tennis tournament revealed it will be introducing AI-generated audio and text commentary in its online highlights this year. The All England Club has teamed up with the tech group IBM to provide automatically created voiceovers and captions for its footage. The move, which is separate from the BBC's coverage of the tournament, follows the use of the cloned voice of a British athletics commentator, Hannah England, for online coverage of the European Athletics Championships. Generative AI refers to the creation of text and images from a human prompt (think ChatGPT and Midjourney), but voice is becoming a prominent development in this area as well.

Fears over the existential threat posed by AI have come to the fore in recent months, but the potential impact on jobs is never far behind. A US visual effects company was forced this week to state that the use of AI in the opening sequence of a Disney+ series, Marvel's Secret Invasion, did not mean someone's job had been displaced.

The film industry has been a locus for AI-related job concerns in recent months, which is understandable given that generative AI has obvious implications for workers and artists in fields such as film, TV and music. Fears over the use of AI in scriptwriting have been a factor in the US screenwriters' strike, while the US arts and media union Sag-Aftra is demanding guardrails for replicating actors' images and voices in productions.

Another example of how AI could end up affecting journalism was highlighted when Germany's Bild tabloid, the biggest-selling newspaper in Europe, announced a €100m (£85m) cost-cutting programme that would lead to about 200 redundancies. It warned staff that it expected to make further editorial cuts owing to the opportunities of artificial intelligence. Bild's publisher, Axel Springer SE, said in an email to staff seen by the rival Frankfurter Allgemeine newspaper that it would unfortunately be parting ways with colleagues "who have tasks that in the digital world are performed by AI and/or automated processes".

Advances in AI are exciting, but just as important to the spread of the technology is its productisation: how it gets turned from a promising technology into a real product. Take FabricGenie, from the Millshop Online, a curtain retailer. Enter your design preferences as text, image or sketch, and the company runs a simple AI image generator to spit out unique patterns that you can print on to personalised drapes. It's not going to win any awards for cutting-edge technology, but it's the sort of thing that will become more and more common across society over the coming years.

On Thursday a US judge ordered two lawyers and their law firm to pay a $5,000 (£4,000) fine after ChatGPT generated fake citations in a legal filing. A district judge in Manhattan ordered lawyers Steven Schwartz and Peter LoDuca and their law firm Levidow, Levidow & Oberman to pay the fine after fictitious legal research was used in an aviation injury claim. Schwartz had admitted that ChatGPT, whose responses can appear very plausible, had invented six cases he referred to in a legal brief in a case against the airline Avianca. The legal work sector is a prime candidate for being transformed by generative AI, but this case raises questions over the extent to which AI can replace human work for now.

The UK government is taking warnings about artificial intelligence and safety seriously, ahead of Rishi Sunak hosting a global summit on AI safety in the autumn. Last Sunday it announced that a tech entrepreneur who has warned about an unchecked race to achieve godlike AI will be the head of a new AI advisory body. Ian Hogarth wrote in April that a small number of companies were competing to achieve a breakthrough in computer superintelligence without knowing how to pursue their aim safely and with no oversight. Existential fears about AI include the emergence of a system that evades human intervention, or makes decisions that deviate from human moral values.

Hogarth will now have some influence in moderating the AI arms race as the chair of the UK government's AI Foundation Model taskforce (referring to the underlying technology for AI tools such as text or image generators). Writing in the Times after his appointment was announced, Hogarth said he saw reasons for more optimism, including further calls for action from AI experts and a £100m spending pledge for the UK taskforce, whose role will include identifying and tackling the safety challenges posed by the technology.

Original post:

AI watch: from Wimbledon to job losses in journalism - The Guardian

ChatGPT-maker OpenAI planning to launch marketplace for AI applications – Business Today

OpenAI, the company behind ChatGPT, is reportedly planning to launch a marketplace aimed mainly at enterprise users. The marketplace would enable developers to sell AI models built on top of OpenAI's own AI technology, according to a report by The Information.

The marketplace aims to cater to enterprise customers who often customize OpenAI's ChatGPT technology for specific use cases, such as fraud detection or market research. The models created by these developers could be made available to other businesses through OpenAI's proposed marketplace.

The report suggests the potential plans for the marketplace were revealed by OpenAI CEO Sam Altman during a meeting with developers in London. If realized, this marketplace could potentially compete with existing app stores operated by OpenAI's customers and technology partners, including Salesforce and Microsoft. Additionally, it could help OpenAI expand its customer base and make its technology more accessible to a wider audience.

The Information, which reported on these developments, also mentioned that two OpenAI customers, Aquant (a provider of software for manufacturers) and Khan Academy (an education app maker), may express interest in offering their ChatGPT-powered AI models on OpenAI's marketplace.

Since its launch in late 2022, ChatGPT has gained significant adoption among businesses looking to automate tasks and enhance operational efficiency. As companies race to leverage the capabilities of advanced large language models (LLMs), offering customers new tools based on AI software has become a competitive market.

Continued here:

ChatGPT-maker OpenAI planning to launch marketplace for AI applications - Business Today
