Category Archives: Artificial General Intelligence

Driving the future of mobility with SenseAuto: The AGI Power Behind … – Automotive News Europe

SenseAuto is a leading global provider of artificial general intelligence (AGI) technology for the smart auto era. By integrating intelligent cabin, intelligent driving and AI cloud, SenseAuto empowers next-generation mobility with its full-stack AGI capabilities to create a safer, smarter and more enjoyable third living space experience.

Its product portfolio includes the vision-based Driver Monitoring System, Occupant Monitoring System, Near-Field Monitoring System, Innovative Cabin App, Cabin Brain as well as the ADAS offerings for pilot driving and parking.

SenseAuto is committed to upholding high industry standards to ensure a safe and seamless journey for all users. The Company has obtained the ASPICE L2, ISO 26262 ASIL B and ASIL D, ISO 9001 and ISO/SAE 21434 certificates, along with other certificates for security and quality management.

With extensive experience in mass production, SenseAuto has established successful partnerships with over 30 renowned car manufacturers worldwide including Chery, GAC, Great Wall, HiPhi, NIO, SAIC, and ZEEKR. SenseAuto is the designated supplier for more than 36 million vehicles cumulatively, covering over 160 diverse models. The Company has active R&D presence in China (Shanghai, Beijing, Shenzhen and Guangzhou), Germany and Japan.

For more information, please visit SenseAuto's website and LinkedIn page.

Defence force chief Angus Campbell warns deepfakes and AI will drive era of ‘truth decay’ – ABC News

The defence force chief has warned the world is entering an era of "truth decay" where misinformation will undermine democracy and leave liberal, Western societies like Australia increasingly exposed to enemies.

General Angus Campbell has outlined future challenges where rapidly advancing technology could soon make it impossible for most people to distinguish fact from fiction.

In an overnight speech, the defence chief has predicted artificial intelligence and deepfakes will further damage public confidence in elected officials and pose a serious risk.

"As these technologies quickly mature, there may soon come a time when it is impossible for the average person to distinguish fact from fiction, and although a tech counter response can be anticipated, the first impression is often the most powerful," he told the Australian Strategic Policy Institute (ASPI).

"This tech future may accelerate truth decay, greatly challenging the quality of what we call public 'common sense', seriously damaging public confidence in elected officials, and undermining the trust that binds us."

General Campbell cited a "deep fake" video of Ukrainian President Volodymyr Zelenskyy, which emerged last year falsely portraying the wartime leader urging his military to surrender to invading Russian forces.

"Uncertainty erodes our traditional understanding of deterrence by undermining our calculus of capability, our assurance of credibility, and our clarity of communication," General Campbell declared.

"Uncertainty is the bedfellow of timidity, the perfect foundation from which others may win without fighting," he added.

General Campbell nominated China's People's Liberation Army (PLA) as an expert proponent of psychological, legal and information warfare techniques that could disrupt democratic societies and undermine their will to fight.

"While these operations are, of course, not new phenomena, informatic disruption is exponentially, instantaneously and globally enhancing the prevalence and effectiveness of a three-warfares approach by any reasonably sophisticated practitioner," he said.

"Such an approach may bypass the need for a physical attack and strike directly at the psychological, changing perceptions of reality, with profound implications for deterrence".

During his address, General Campbell also warned of future crises involving food and water security, as well as waves of migration due to climate change.

"This disruption is happening faster and less predictably than we all hoped," General Campbell told his Canberra audience.

"Without the global momentum needed, we may all be humbled by a planet made angry by our collective neglect."

Industrial AI Challenges and the Path Forward – ARC Advisory Group

In the ever-evolving landscape of Artificial Intelligence (AI), generative AI has emerged as a key player, promising to revolutionize industries and business processes. However, it's not without its share of complexities and confusion. To shed light on this innovative technology, I committed to exploring the latest breakthroughs and their applicability to Industrial AI with a set of ARC blogs, podcasts, Insights and Reports. Given the general confusion being created by generative AI myths and misconceptions, I've worked with the team at ARC to lay some AI foundations, including a glossary of terms, in our initial report on The Industrial AI (R)Evolution.

Industrial AI, a subset of the broader field of artificial intelligence (AI), refers to the application of AI technologies (including Generative AI) in industrial settings to augment the workforce in pursuit of growth, profitability, more sustainable products and production processes, enhanced customer service, and business outcomes. Industrial AI leverages machine learning, deep learning, neural networks, and other approaches. Some of these techniques have been used for decades to build AI systems using data from various sources within an industrial environment, such as sensors, machinery, industrial engineers, and frontline workers.

Among these AI techniques and technologies, generative AI has caught the attention of many, particularly within various industries such as Aerospace & Defense, Automotive, Electric Power & Smart Grid, Industrial Equipment, Oil & Gas, Semi-Conductors, and more.

Generative AI is powered by machine learning and neural networks that have been used for decades in various Industrial AI use cases, but genuine new breakthroughs in natural language processing (using GANs, transformers and LLMs) are revolutionizing how we interact with everything around us, whether those technologies are inherently smart, industrialized, or not.

However, this revolutionary technology often leads to confusion, owing to general media hype about AI, extravagant marketing claims from software suppliers (many struggling to get due credit for having invested in AI technologies long before the current wave of generative AI breakthroughs), its technical complexity, and the rapid pace at which AI continues to evolve.

The current state of industrial AI presents a complex picture. On one hand, we have a myriad of AI solutions promising to revolutionize processes and boost efficiency. On the other hand, there is a lack of clarity regarding which technologies truly deliver on these promises.

One of the main challenges faced by organizations is discerning valuable AI breakthroughs from the hype. While many AI technologies have proven their worth, others are still emerging, and their long-term value remains uncertain.

Moreover, there are numerous myths and misconceptions about which of these AI techniques and technologies are relevant to industrial AI use cases. These include the belief that AI implementation requires massive upfront investment or that it will lead to widespread job displacement. Such misconceptions often deter organizations from exploring the potential benefits of AI.

To address this confusion, the ARC Advisory Group has embarked on a mission to simplify the complex, identify relevant breakthroughs, and cut through the hype surrounding industrial AI. Central to this mission is The Industrial AI (R)Evolution that cuts through the Generative AI hype, dispels myths and summarizes the latest developments and trends in the field.

The report covers a wide range of topics, including data governance, cybersecurity risks, high-value industrial AI use cases, and the societal impact of AI. Additionally, it dives into the intricacies of various AI techniques, including unsupervised, semi-supervised, supervised, and reinforcement learning, as well as Large Language Models (LLMs) and Foundation Models (FMs).

The Industrial AI Report also dispels myths and misconceptions. One common myth is that AI implementation requires substantial upfront investment. While initial costs can be high, the long-term benefits often outweigh these costs. Another prevalent misconception is that AI will eliminate jobs. However, while AI may automate certain tasks, it also creates new roles and opportunities.

It discusses the shift in priorities from Industrial Metaverse to Industrial AI, emphasizing the potential of AI to drive efficiency and innovation in industries. For more on this particular topic read my blog on how Industrial AI is paving the way for Industrial Metaverse(s).

Industrial organizations can leverage ARC's Industrial AI Impact Assessment Model used by ARC's own team of Analysts, to guide their own AI evaluation and implementation process. This model offers a structured approach to assess the potential impact of AI on various aspects of the organization, including operations, strategy, and workforce.

As we continue to explore the potential of generative AI and other breakthroughs in industrial AI, collaboration and knowledge sharing become increasingly important. We invite you to join us in this journey, sharing your questions, experiences, learnings, and solutions.

The future of Industrial AI is promising, with its potential to transform industries and societal structures. By deepening our understanding and effectively applying AI technologies, we can unlock their true potential in the industrial realm.

For more information or to contribute to Industrial AI research, please contact Colin Masson at cmasson@arcweb.com.

Here are the Top 3 AI Crypto Coins for 2023: SingularityNET … – Cryptonews

Image by Tara Winstead

Artificial Intelligence (AI) and cryptocurrencies are two technologies that encapsulate the spirit of our times. It was only a matter of time before these two domains intersected, giving rise to AI crypto.

These specialized tokens function as the operational fuel for AI platforms built on blockchain technology. By spending the tokens, users can access and benefit from the integrated AI features of the platforms.

In this article, we will explore three AI crypto projects that show promise in effectively merging these two dynamic fields: SingularityNET, Ocean Protocol, and yPredict.

SingularityNET offers a decentralized AI marketplace, using blockchain technology to give open access to various AI algorithms and tools. The platform aspires to develop artificial general intelligence (AGI), a form of AI that can perform multiple tasks rather than just specialized ones.

SingularityNET was founded in 2017 by Dr. Ben Goertzel and Dr. David Hanson; the latter also runs Hanson Robotics, known for creating the humanoid robot Sophia. SingularityNET operates on both the Ethereum and Cardano blockchains, allowing developers to share their AI services for public or corporate use. The native currency of the platform is AGIX, which is used for internal transactions.

SingularityNET was among the early initiatives to combine AI and cryptocurrency technologies. By utilizing blockchain, it strives to make AI more accessible and foster a community where developers can collaborate and enhance their AI services.

The platform currently hosts over 70 AI services, developed by a global community of creators. These services range from real-time voice cloning to image generation, and they are designed to be user-friendly, so even those not well-versed in AI can easily use them.

Smart contracts play a pivotal role in SingularityNET, making transactions fair and straightforward. These self-executing contracts outline terms and conditions for users who want to access a particular AI service. This not only simplifies the transaction process but also allows developers to focus on what they do best: innovating and refining AI systems.

Being decentralized, SingularityNET ensures that no single party can exert undue control or restrict access to AI services. Prices for these services are automatically set and enforced by smart contracts.

AGIX serves multiple purposes within SingularityNET, including paying for internal transactions on the platform.

Although initially designed for a single blockchain, AGIX has evolved to be compatible with multiple blockchain systems, including Cardano, Polygon, and Binance Smart Chain.

Ocean Protocol is a blockchain-based platform designed to facilitate the exchange between data providers and consumers. Established in 2017, the platform makes data a tradeable asset through a unique tokenization process.

On Ocean Protocol, data sets and services are converted into ERC-20 tokens, which are based on the Ethereum blockchain. This allows data providers to securely sell access to their data assets.

The tokenization model of Ocean Protocol offers a streamlined and secure way for individuals and businesses to monetize their data. As a result, the platform hosts an expansive data marketplace where data analysts, researchers, and scientists can easily acquire valuable data sets. This capability is especially relevant for sectors that rely heavily on data, such as artificial intelligence.

The platform's core team has a strong background in big data and AI, adding to the platform's credibility and focus. Ocean Protocol's native currency, OCEAN, is the primary medium for transactions on the platform and enables community governance. The token also offers staking rewards, making it multi-functional within the Ocean Protocol ecosystem.

The total supply of OCEAN tokens is capped at 1.41 billion, with roughly 613 million in circulation.

Though yPredict is still in the presale phase, it has already gained substantial attention. The platform, which operates on the Polygon Matic chain, has raised more than $3.83 million towards its $4.6 million goal.

yPredict's key offering will be a prediction marketplace. Here, financial data scientists can monetize their predictive models by offering them as subscription services. Traders can then subscribe to these services using YPRED tokens to get valuable market forecasts.

The YPRED tokens will also be used for other functionalities, like cryptocurrency analysis and access to data-driven insights. Token holders will have the option to stake their tokens in high-yield pools.

The total supply of YPRED tokens is capped at 100 million, with 80 million set aside for the presale. The remaining are allocated for liquidity and development. YPRED tokens will also enable holders to participate in governance activities within the yPredict platform.

yPredict is also planning to offer analytical tools, such as pattern recognition, sentiment analysis, and transaction analytics. The platform is even working on an AI-powered backlink estimator. Initially free, this feature has now been priced at $99 per query due to high demand.

In the current technological landscape where AI and cryptocurrency are increasingly becoming central, the fusion of these domains in the form of AI crypto is a development worth noting. These tokens serve as a bridge, allowing users to tap into AI capabilities via blockchain platforms. Each project offers a unique approach to integrating AI and blockchain, expanding the possibilities of what these technologies can achieve when combined.

Disclaimer: Crypto is a high-risk asset class. This article is provided for informational purposes and does not constitute investment advice. You could lose all of your capital.

What will the rise of AI really mean for schools? – TES

Advances in artificial intelligence (AI) are accelerating at breakneck speed. Systems like ChatGPT are approaching, and by some measures exceeding, human-level performance in many domains.

But what does the growth of these systems mean for schools?

We see tremendous potential for these technologies to enhance and augment human capabilities, making us smarter, more efficient and able to solve problems that currently seem impossible to manage.

However, we also see significant downsides. Without thoughtful intervention, AI could diminish human agency, stifle creativity and potentially stunt our collective progress.

Nowhere are the stakes higher than in education. Schools and universities have helped generations climb the ladder of knowledge and skills. But if machines can soon out-think us, what's the point of learning? Why invest time and effort acquiring expertise that could soon be obsolete?

To explore these questions, we recently co-authored a paper analysing the staggering pace of progress in AI and the potential implications for education.

Systems like GPT-4 are already scoring higher than well over 90 per cent of humans on academic tests of literacy and quantitative skills. Many experts predict AI will reach human-level reasoning across all domains in the next decade or two.

Once achieved, these artificial general intelligence systems could quickly exceed the combined brainpower of every person who has ever lived.

Faced with these exponential advances, how might society respond? We foresee four possible scenarios, all of which would have different implications for schools:

One option is that governments recognise the risks and halt further AI development, through regulation or restricting hardware supply. This might slow things down and buy some time.

Bans are hard to enforce, often porous and would mean that we would forfeit many potential benefits that carefully governed AI systems could bring. However, if AI advances get curtailed at, say, GPT-4.5 there is a greater chance that humanity stays in the driving seat and we still benefit from education.

In fact, with suitable guardrails, many of the recent AI advances might greatly accelerate our thinking skills, for example by providing high-quality supplementary AI tuition to all students and by acting as a digital personal assistant to teachers.

A second pathway is that AI takes over most jobs, but legislation forces companies to keep employing humans alongside the machines, in largely ceremonial roles. The danger here is that this fake work risks infantilising people.

As AI thinking accelerates, our stunted contributions could create bottlenecks, leaving us disempowered spectators rather than active participants.

This pathway also requires only a basic level of education - we would simply need to turn up and read out the script displayed in our AI glasses. After all, our own thinking and words would never exceed the abilities of the machines.

Wanting to remain competitive, some might opt to biologically or digitally upgrade their brains through gene editing or neural implants. This might sound like science fiction, but is not beyond the realm of possibility - and such a scenario would have profound implications for education.

We might be able to literally download new knowledge, skills and abilities in milliseconds. No more need for schooling.

But in making ourselves more machine-like, would we risk losing our humanity?

A final scenario is that we accept economic irrelevance and embrace universal basic income - paid for by taxing the fruits of AI labour. Freed from work, people would focus on sports, hobbies, rituals and human connections.

But devoid of productive purpose, might we lose our vital force and struggle to get out of bed in the morning?

All these paths are, in different ways, problematic. So, before we sleepwalk into one, we need urgent debate on the destination we want.

Our paper offers 13 pragmatic proposals to regulate and slow down AI, to buy time for this discussion by, for example: requiring frontier AI models to be government licensed before their release; making it illegal for systems to impersonate humans; implementing guardrails to stop AI systems from giving students the answers; and making system developers accountable for untruths, harms and bad advice generated by their systems.

At the same time, we must also re-examine educations role in society. If humans can add only marginal value working alongside AI, schools may need to pivot from preparation for employment to nurturing distinctly human traits: ethics, empathy, creativity, playfulness and curiosity.

As AI excels at information retrieval and analysis, we must double down on contextual reasoning, wisdom, judgement and morality. However, even here, we must be realistic that (eventually) AI is likely to be able to emulate all these human traits as well.

Some skills like literacy might also become less essential - for example, if we can learn through verbal discourse with AI or by porting into realistic simulations.

Yet foundational knowledge will likely remain crucial, enabling us to meaningfully prompt and critique AI. And direct instruction, whether by teacher or AI, will still help students to grasp concepts more quickly than trial-and-error discovery. We must, therefore, identify the irreducible core of timeless human competencies to pass on.

None of this is preordained. With vigilance, foresight and governance, AI can uplift humanity in the same way that prior innovations have. But we must act decisively. Timelines are highly uncertain. AI capabilities could exceed our own in a decade or two. Either way, the hinge point of history is now.

We hope these proposals stimulate urgent debate on the society and education system we want to build - before the choice is made for us.

Dylan Wiliam is emeritus professor of educational assessment at the UCL Institute of Education. John Hattie is emeritus laureate professor of education at the University of Melbourne. Arran Hamilton is group director, education, at Cognition Learning Group. His contributions had editorial support from Claude AI.

Accubits, Bud Ecosystem open-source Large Language Model, drive it among global top – BusinessLine

Thiruvananthapuram-based Accubits Technologies has open-sourced GenZ 70B, a Large Language Model (LLM), which is now among the top listed on HuggingFace's leaderboard, a global platform that curates, evaluates, and compares AI models.

A 70-billion-parameter fine-tuned model, it is ranked number one on the HuggingFace leaderboard for instruction-tuned LLMs and sixth for open LLMs in all categories. It was open-sourced collaboratively with Bud Ecosystem, a separate Accubits company, says Aharsh MS, Chief Marketing Officer, Accubits Technologies. Bud focuses on fundamental research in artificial general intelligence (AGI) and behavioural science, and is building an ecosystem around multi-modal, multi-task foundational models.

An LLM (for instance, GPT-4 by OpenAI) is a type of machine learning model specifically designed for processing and generating human-like text based on vast amounts of textual data. GPT-4 is the largest model in OpenAI's GPT series, released this year. Its parameter count has not been released to the public, though it is speculated that the model has more than 1.7 trillion parameters.

An LLM from India ranking at the top on a global scale is significant, and can be an inspiration for the local developer community, says Aharsh MS. GenZ is an auto-regressive language model with an optimised transformer architecture. We fine-tuned the model with curated datasets using the Supervised Fine-Tuning (SFT) method, Aharsh explained to businessline.

It used OpenAssistant's instruction fine-tuning dataset and Thought Source for the Chain of Thought (CoT) approach. With extensive fine-tuning, it has acquired additional skills and capabilities beyond what a pre-trained model can offer. Aharsh offered deeper insight into the world of natural language processing computer programmes in an interview.
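In practice, SFT pipelines like the one described serialize each instruction/response pair into a single training string. The sketch below is a generic illustration of that step; the template and function name are assumptions, not GenZ's actual format.

```python
# A generic sketch of serializing one instruction/response pair for
# Supervised Fine-Tuning (SFT). The "### Instruction / ### Response"
# template is an illustrative assumption, not GenZ's actual format.

def format_sft_example(instruction: str, response: str) -> str:
    """Render one instruction-tuning example as a single training string."""
    return (
        "### Instruction:\n" + instruction.strip() + "\n\n"
        + "### Response:\n" + response.strip()
    )

print(format_sft_example(
    "Summarize the Chain of Thought (CoT) approach in one sentence.",
    "CoT prompts the model to write out intermediate reasoning steps "
    "before giving its final answer.",
))
```

Any such template works, provided it is applied consistently at both training and inference time.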

Excerpts:

Are Generative AI and LLMs the same thing?

No. LLMs fall under the umbrella of generative AI, but the reverse isn't true: not every generative AI model is an LLM. The difference primarily hinges on the type of content a model is designed to produce and its specialised applications. Generative AI refers to a broader class of AI models designed to generate new content. This creation capability isn't restricted solely to text; it spans a diverse array of outputs, including images, music compositions, and even videos. On the other hand, LLMs represent a specific subset within the generative AI spectrum. These models are meticulously designed and optimised for tasks related to language. Trained on immense volumes of text data, LLMs excel in generating coherent and contextually apt textual outputs. This might range from crafting detailed paragraphs to answering intricate questions or even extending given textual prompts.

Why did youopen-source the model?

Accubits and Bud Ecosystem worked on the GenZ 70B suite of open-source LLMs to democratise access to Generative AI-based technologies. We believe that Generative AI is one of the most disruptive technologies, perhaps more significant than the invention of fire itself. Such a technology must be freely available for everyone to experiment and innovate.

With this objective in mind, we are open-sourcing models that can be hosted even on a laptop. GenZ's GPTQ- and GGML-based models can be hosted on a personal laptop without a GPU. Bud Ecosystem has its own proprietary multi-modal, multi-task models, which are used to build its own products. Accubits is already helping its customers adopt Generative AI-based technologies at scale, helping them build RoI-driven products and solutions.

How do you look to stay ahead of fine-tuning models being extensively released now?

The training data used and our fundamental research on attention mechanisms, model alignment, consistency, and reliability have enabled us to build GenAI models with good performance. Most fine-tuned models do not offer commercial licenses, which means businesses do not have the freedom to use them to build commercial applications. GenZ 70B stands out mainly for two reasons: one, it offers a commercial license, and two, it offers good performance. Our model is primarily instruct-tuned for better reasoning, role play and writing capabilities, making it more suitable for business applications.

Are there any limitations to the model?

Like any Large Language Model, GenZ also carries risks. We recommend users consider fine-tuning it for a specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 70B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.

Have you looked at how it can replicate the work of models such as ChatGPT?

GenZ 70B's performance is impressive, especially in relation to its size. The GenZ 70B model scored 7.34 on the MT-Bench benchmark, which is close to GPT-3.5's score of 7.94. Considering that GenZ 70B is 2.5 times smaller than GPT-3.5 yet nearly matches its performance, I'd say it's surprisingly efficient for its size. The model size is critical when considering real-world commercial use cases. Smaller models are usually easier to work with, use less computing power, and can be much more budget-friendly. GenZ can offer performance on par with GPT-3.5 in a much smaller package, making it very suitable for content creation.

What is the rate of accuracy of LLMs with respect to low-resource or distant languages (other than English)?

LLMs thrive on quality and quantity of training data. Since English is a dominant language on the internet, many of these are trained extensively on English data. This results in high accuracy when dealing with English language tasks, from simple text generation to more complex problem-solving.

In contrast, accuracy levels might differ for less common or less-studied languages, primarily because of the relative scarcity of quality training data. It's worth noting that the inherent capabilities of LLMs are not restricted to English or any specific language. If provided with extensive and diverse training data, an LLM can achieve better accuracy for a less common language. In essence, performance in any given language reflects the amount and quality of the training data.

Discover the Performance Gain with Retrieval Augmented Generation – The New Stack

Large Language Models (LLMs) are smart enough to understand context. They can answer questions, leveraging their vast training data to provide coherent and contextually relevant responses, no matter whether the topic is astronomy, history or even physics. However, LLMs tend to hallucinate (deliver compelling yet false facts) when asked to answer questions outside the scope of their training data, or when they can't remember the details in the training data.

A new technique, Retrieval Augmented Generation (RAG), fills the knowledge gaps, reducing hallucinations by augmenting prompts with external data. Combined with a vector database (like MyScale), it substantially increases the performance gain in extractive question answering.

To this end, this article focuses on determining the performance gain with RAG on the widely used MMLU dataset. We find that the performance of both commercial and open-source LLMs can be significantly improved when knowledge can be retrieved from Wikipedia using a vector database. More interestingly, this result holds even though Wikipedia is already in the training set of these models.

You can find the code for the benchmark framework and this example here.

But first, let's describe Retrieval Augmented Generation (RAG).

Research projects aim to enhance LLMs like gpt-3.5 by coupling them with external knowledge bases (like Wikipedia), databases, or the internet to create more knowledgeable and contextually aware systems. For example, let's assume a user asks an LLM what Newton's most important result is. To help the LLM retrieve the correct information, we can search for Newton's wiki page and provide it as context to the LLM.
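The augmentation step just described amounts to prepending the retrieved text to the question before it is sent to the LLM. A minimal sketch follows; the prompt template and function name are illustrative assumptions, not any specific system's API.

```python
# A minimal sketch of the prompt-augmentation step in RAG: retrieved
# passages are prepended to the user's question before calling the LLM.
# Template and function name are illustrative assumptions.

def build_rag_prompt(question: str, contexts: list[str]) -> str:
    """Combine retrieved passages and the user question into one prompt."""
    context_block = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is Newton's most important result?",
    ["Isaac Newton formulated the laws of motion and universal gravitation."],
)
print(prompt)
```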

This method is called Retrieval Augmented Generation (RAG). Lewis et al. in Retrieval Augmented Generation for Knowledge-Intensive NLP Tasks define Retrieval Augmented Generation as:

A type of language generation model that combines pre-trained parametric and non-parametric memory for language generation.

Moreover, the authors of this academic paper go on to state that they:

Endow pre-trained, parametric-memory generation models with a non-parametric memory through a general-purpose fine-tuning approach.

Note: Parametric-memory LLMs are massive self-reliant knowledge repositories like ChatGPT and Google's PaLM. Non-parametric memory LLMs leverage external resources that add additional context to parametric-memory LLMs.

Combining external resources with LLMs seems feasible as LLMs are good learners, and referring to specific external knowledge domains can improve truthfulness. But how much of an improvement will this combination be?

Two major factors affect a RAG system: the accuracy and relevance of the retrieved context, and how much knowledge the LLM can absorb from that context.

Both of these factors are hard to evaluate. The knowledge gained by the LLM from the context is implicit, so the most practical way to assess these factors is to examine the LLMs answer. However, the accuracy of the retrieved context is also tricky to evaluate.

Measuring the relevance between paragraphs, especially in question answering or information retrieval, can be a complex task. The relevance assessment is crucial to determine whether a given section contains information directly related to a specific question. This is especially important in tasks that involve extracting information from large datasets or documents, like the WikiHop dataset.

Sometimes, datasets employ multiple annotators to assess the relevance between paragraphs and questions. Using multiple annotators to vote on relevance helps mitigate subjectivity and potential biases that can arise from individual annotators. This method also adds a layer of consistency and ensures that the relevance judgment is more reliable.
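The annotator-voting scheme described above can be sketched as a simple majority vote. This is a toy illustration of the idea, not any dataset's actual aggregation code:

```python
from collections import Counter

def majority_relevance(votes):
    """Aggregate several annotators' relevance judgments (True/False)
    by majority vote, damping any single annotator's subjectivity."""
    counts = Counter(votes)
    return counts[True] > counts[False]

# Three annotators judge whether a paragraph is relevant to a question.
relevant = majority_relevance([True, True, False])
not_relevant = majority_relevance([False, True, False])
```

Real datasets often add refinements such as adjudication of ties or weighting by annotator agreement, but the core idea is the same: pooled judgments are more reliable than any single one.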

As a consequence of all these uncertainties, we developed an open-source, end-to-end evaluation of the RAG system. This evaluation considers different model settings, retrieval pipelines, knowledge-base choices, and search algorithms.

We aim to provide valuable baselines for RAG system designs and hope that more developers and researchers join us in building a comprehensive and systematic benchmark. More results will help us disentangle these two factors and create a dataset closer to real-world RAG systems.

Note: Share your evaluation results at GitHub. PRs are very welcome!

In this article, we focus on a simple baseline evaluated on MMLU (the Massive Multitask Language Understanding dataset), a widely used benchmark for LLMs containing multiple-choice, single-answer questions on many subjects such as history, astronomy, and economics.

We set out to find out if an LLM can learn from extra contexts by letting it answer multiple-choice questions.

To achieve our aim, we chose Wikipedia as our source of truth because it covers many subjects and knowledge domains. We used the version cleaned by Cohere.ai on Hugging Face, which includes 34,879,571 paragraphs belonging to 5,745,033 titles. An exhaustive search of these paragraphs would take quite a long time, so we need appropriate ANNS (Approximate Nearest Neighbor Search) algorithms to retrieve relevant documents. Specifically, we use the MyScale database with the MSTG vector index to retrieve the relevant documents.
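To see why ANNS matters at this scale, consider what exhaustive search looks like. The sketch below scores every paragraph against the query by cosine similarity; fine for a toy corpus of three vectors, but at roughly 35 million paragraphs this linear scan is exactly the cost that indexes such as MSTG are built to avoid (the vectors and corpus here are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def exhaustive_search(query, corpus, top_k=1):
    """Brute-force nearest-neighbor search: scores every vector in the
    corpus, then returns the indices of the top_k most similar ones."""
    scored = sorted(enumerate(corpus), key=lambda p: cosine(query, p[1]), reverse=True)
    return [i for i, _ in scored[:top_k]]

corpus = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
hits = exhaustive_search([1.0, 0.0], corpus, top_k=2)  # indices 0 and 2 are closest
```

An ANNS index trades a small amount of recall for orders-of-magnitude faster lookups by avoiding this full scan.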

Semantic search is a well-researched topic with many models with detailed benchmarks available. When incorporated with vector embeddings, semantic search gains the ability to recognize paraphrased expressions, synonyms, and contextual understanding.

Moreover, embeddings provide dense and continuous vector representations that enable the calculation of meaningful metrics of relevance. These dense metrics capture semantic relationships and context, making them valuable for assessing relevance in LLM information retrieval tasks.

Taking into account the factors mentioned above, we have decided to use the paraphrase-multilingual-mpnet-base-v2 model from Hugging Face to extract features for retrieval tasks. This model is part of the MPNet family, designed to generate high-quality embeddings suitable for various NLP tasks, including semantic similarity and retrieval.

For our LLMs, we chose OpenAI's gpt-3.5-turbo and llama2-13b-chat with 6-bit quantization. These models are among the most popular commercial and open-source options. The LLaMA 2 model is quantized by llama.cpp. We chose the 6-bit quantization setup because it is affordable without sacrificing performance.

Note: You can also try other models to test their RAG performance.

The following image describes how to formulate a simple RAG system:

Figure 1: Simple Benchmarking RAG

Note: Transform can be anything as long as it can be fed into the LLM, returning the correct answer. In our use case, Transform injects context into the question.

Our final LLM prompt is as follows:

```python
template = (
    "The following are multiple choice questions (with answers) with context:"
    "\n\n{context}Question: {question}\n{choices}Answer: "
)
```
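Filling the template is plain string formatting. The question and choices below are hypothetical MMLU-style inputs invented for illustration; in the benchmark they come from the dataset and the context comes from retrieval:

```python
template = (
    "The following are multiple choice questions (with answers) with context:"
    "\n\n{context}Question: {question}\n{choices}Answer: "
)

# Hypothetical inputs standing in for a retrieved paragraph and an MMLU item.
filled = template.format(
    context="Mercury is the closest planet to the Sun.\n",
    question="Which planet is closest to the Sun?",
    choices="A. Venus\nB. Mercury\nC. Mars\nD. Earth\n",
)
```

The prompt ends with "Answer: " so the model's next token is scored directly as its choice.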

Now let's move on to the results.

Our benchmark test results are collated in Table 1 below.

But first, our summarized findings are:

In these benchmarking tests, we compared performance with and without context. The test without context shows how well the model's internal knowledge alone can solve the questions, while the test with context shows how well an LLM can learn from context.
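The with/without-context comparison boils down to an accuracy delta. The sketch below uses made-up predictions for five questions, not the benchmark's actual outputs:

```python
def accuracy(preds, golds):
    """Fraction of predictions matching the gold answers."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

# Hypothetical answers to five multiple-choice questions.
gold         = ["A", "C", "B", "D", "A"]
no_context   = ["A", "B", "B", "A", "A"]  # model alone: 3/5 correct
with_context = ["A", "C", "B", "A", "A"]  # with retrieval: 4/5 correct

gain = accuracy(with_context, gold) - accuracy(no_context, gold)
```

A positive `gain` means the retrieved context taught the model something it could not answer from parametric memory alone; a negative value means the context misled it.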

Note: Both llama2-13b-chat and gpt-3.5-turbo are enhanced by around 3-5% overall, even with only one extra context.

The table shows that some numbers are negative, for example when we insert context for clinical-knowledge questions into gpt-3.5-turbo.

This might be because the knowledge base falls short here, as Wikipedia does not contain much clinical knowledge, or because OpenAI's terms of use and guidelines strongly discourage, and may even prohibit, using their AI models for medical advice. Despite this, the increase is quite evident for both models.

Notably, the gpt-3.5-turbo results suggest that the RAG system might be powerful enough to compete with larger language models. Some of the reported numbers, such as those on prehistory and astronomy, approach the performance of GPT-4 with extra tokens, suggesting that RAG could be an alternative route to specialized Artificial General Intelligence (AGI) compared with fine-tuning.

Note: RAG is more practical than fine-tuning models as it is a plug-in solution and works with both self-hosted and remote models.

Figure 2: Performance Gain vs. the Number of Contexts

The benchmark above might suggest supplying as much context as possible: in most cases, LLMs will learn from all the supplied contexts, and in theory the model should answer better as the number of retrieved documents increases. However, our benchmarking shows that some scores dropped as more contexts were retrieved.

Validating our benchmarking results, a Stanford University paper titled Lost in the Middle: How Language Models Use Long Contexts suggests the LLM only attends to the head and tail of the context. Therefore, choose fewer but more accurate contexts from the retrieval system to augment your LLM.
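That advice, fewer but better contexts, amounts to keeping only the top-k retrieved passages by relevance score. A minimal sketch (the passages and scores are invented for illustration):

```python
def select_contexts(scored_contexts, k=3):
    """Keep only the k highest-scoring (text, score) pairs. Per the
    'Lost in the Middle' finding, a short prompt of highly relevant
    contexts often beats a long, diluted one."""
    best = sorted(scored_contexts, key=lambda c: c[1], reverse=True)[:k]
    return [text for text, _ in best]

# Hypothetical retrieval results with relevance scores.
candidates = [("passage a", 0.2), ("passage b", 0.9),
              ("passage c", 0.5), ("passage d", 0.7)]
chosen = select_contexts(candidates, k=2)
```

Beyond truncation, some systems also reorder the survivors so the strongest passages sit at the head and tail of the prompt, where the model attends most.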

The larger the LLM, the more knowledge it stores. Larger LLMs tend to have a greater capacity to store and understand information, which often translates to a broader knowledge base of generally understood facts. Our benchmarking tests tell the same story: the smaller LLMs lack knowledge and are hungrier for more knowledge.

Our results show that llama2-13b-chat gains more from the added context than gpt-3.5-turbo, suggesting that context injects more knowledge into a smaller LLM. These results also imply that gpt-3.5-turbo was often given information it already knew, while llama2-13b-chat was still learning from the context.

Almost every LLM uses the Wikipedia corpus as a training dataset, meaning both gpt-3.5-turbo and llama2-13b-chat should be familiar with the contexts added to the prompt. This raises the questions:

We currently don't have answers to these questions, so further research is needed.

Contribute to research to help others.

We can only cover a limited set of evaluations in this blog. But we know more is needed. The results of every benchmark test matter, regardless of whether they are replications of existing tests or some new findings based on novel RAGs.

With the aim of helping everyone create benchmark tests for their own RAG systems, we have open-sourced our end-to-end benchmark framework. To fork our repository, check out our GitHub page.

This framework includes the following tools:

It's up to you to create your own benchmark. We believe RAG is a possible path to AGI, so we built this framework for the community to make everything trackable and reproducible.

PRs are welcome.

We have evaluated a small subset of MMLU with a simple RAG system built with different LLMs and vector search algorithms and described our process and results in this article. We also donated the evaluation framework to the community and called for more RAG benchmarks. We will continue to run benchmarking tests and update the latest results to GitHub and the MyScale blog, so follow us on Twitter or join us on Discord to stay updated.

Here is the original post:

Discover the Performance Gain with Retrieval Augmented Generation - The New Stack

IFA Berlin 2023 Showfloor Impressions – Dealerscope

IFA Berlin 2023 showcased a myriad of cutting-edge innovations and captivating showfloor experiences that left a lasting impression on visitors, exhibitors, and industry leaders. Here, we present the highlights and testimonials from those who witnessed the future of consumer electronics at this years event.

My name is Lester, I'm from the Netherlands, and our company, Purpose, is based out of Rotterdam. Our first time at IFA has been amazing. We've been here for three days and have already made some good connections with different smaller companies and startups that we can work with. The feedback from some visitors who already want to buy the product has also been great. We've talked to a few distributors, a few investors also interested in our product, and made some good connections with a lot of people in the industry!

This is our third year at IFA. We've come to the IFA Next stage multiple times because every time we talk to people, they recommend going to IFA Next because it's probably the most interesting intersection of people meeting from all kinds of backgrounds. We always enjoy IFA, in previous years but also this year. It's a fantastic show! We'll be here next year for sure.

Can I say massive? It's my first time here, and we knew it was going to be massive, but this is different. I've been to other expos all over the world, but for the first time we hear talk about sustainability everywhere. Everyone is talking about reducing their impact, reducing energy needs, more efficient products, more sustainable products, a better supply chain. So many people are actually invested and interested in this topic.

The good thing about IFA is the B2C and B2B relationships you can make. You can talk with so many people: colleagues from other companies, regular people visiting, and even kids interested in learning about growing their own balcony gardens. It's huge fun for us!

I think IFA has been fantastic exposure for people to learn about the serious problem of microplastics. People have really seen the way that we've approached it, and they typically say, Well, that's not that much work. Why wouldn't I do that? And I think that's really a great way to show what's possible to combat microplastic pollution and get feedback that we haven't been able to get on a global scale. Being in the sustainability village for the first time, pretty much everybody walks in and comes right to us, which is great. We've really had great conversations, great meetings with washing machine manufacturers, great meetings with our potential partners. And I think it's been an excellent event where we've been able to pull all the right team players together and really start pushing as we get ready to go to market next year. Very cool. Very cool.

I remember visiting IFA for the first time 12 years ago and being amazed then; this year, I was simply astonished. I never would have imagined that change could happen so quickly. IFA 2023 showcased the exponential pace of tech innovation in ways that are hard to describe. Voting for groundbreaking startups who pitched in fields such as augmented reality to a jury at the Next stage made me feel part of this rapid change. Seeing humanoid robots from Yaya Labs and Boston Dynamics' Spot at the House of Robots left me in awe. The intellectual depth added by discussions on Artificial General Intelligence (AGI), coupled with the chance to talk to speakers in person afterward, made the event not just inspiring but also deeply thought-provoking. The advancements in smart home systems confirmed that we're not just moving forward; we're sprinting.

Follow this link:

IFA Berlin 2023 Showfloor Impressions - Dealerscope

What’s Next in AI? Predicting the Trends for the Upcoming Decade – DataDrivenInvestor

Photo by Milad Fakurian on Unsplash

Hey Futurists and AI Aficionados! Buckle up because we're about to jump into a time machine and peek into the next decade of AI and Data Science. If you think what we've seen so far is mind-blowing, you ain't seen nothin' yet! So, let's put on our prediction hats and delve into the trends that will shape the AI landscape in the upcoming years.

First off, a reality check. AI has been the buzzword for years, and while we've made some amazing strides (Hello, self-driving cars and personalized medicine!), we're not living in a sci-fi movie yet. No, robots aren't taking over the world, but they are about to make our lives much more interesting!

Currently, most AI algorithms need robust servers and data centers to function. But what if that could change? We're looking at a future where AI models will run efficiently on your devices; yes, your smartphone could soon be your AI assistant in a much more sophisticated way than Siri or Alexa could ever be!

Here's a big one: using AI to combat climate change. Algorithms are already getting better at predicting weather patterns, analyzing soil health, and tracking endangered species. But in the next decade, AI will play a crucial role in resource optimization and perhaps even in engineering solutions to reverse environmental damage. The planet's heroes may be lines of code!

Currently, most AI falls under narrow or specialized intelligence, good at one thing but pretty useless otherwise. However, we're inching closer to Artificial General Intelligence (AGI), where machines can understand, learn, and apply knowledge across different domains. Imagine an AI that can compose music, diagnose diseases, and manage city traffic while teaching itself quantum physics!

Read the original:

What's Next in AI? Predicting the Trends for the Upcoming Decade - DataDrivenInvestor

AI expert is a hot new position in the freelance jobs market – CNBC

Jakub Porzycki/NurPhoto via Getty Images

Vlad Hu began his career as a software engineer and eventually founded his own software agency, but over the past year, the big work opportunity has been freelance artificial intelligence expert gigs. Hu isn't alone. The rise of generative AI is rapidly reshaping the freelance tech job market, with AI-related job posts from employers and searches among job seekers surging across career and freelance job platforms, including LinkedIn, Upwork and Fiverr.

Three years ago, becoming an AI expert would involve "deep knowledge in machine learning algorithms, deep learning AI in general, and a lot of technical things," said Hu, who works through Fiverr on chatbot implementation projects.

According to data from Indeed, generative AI-related job posts have increased on its platform nearly 250% from July 2021 to July 2023.

According to LinkedIn, which is owned by Microsoft, OpenAI's primary investment backer, member searches on gen AI terms have continued to grow since the large language model first broke through with the public in November 2022. Since early April, the number of U.S. LinkedIn member posts mentioning gen AI keywords has increased 25% month over month. By June, AI keywords like "ChatGPT," "prompt engineering," and "prompt crafting" were being added to profiles 15 times more frequently than at the beginning of the year.

"Many companies are exploring ways to integrate AI into their business platforms and working with skilled freelance developers," said a Fiverr spokeswoman.

Hu said businesses interested in introducing a ChatGPT or similar AI bot to an app often contact him to understand the technology. Fiverr also has seen an explosion of interest in AI-related video creation over the past six months, according to the spokeswoman, as well as hiring firms searching for AI app development experts.

Demand for AI freelance experts should continue to grow, according to LinkedIn, with a June survey it conducted among executives finding that 44% in the U.S. intend to expand their use of AI technologies in the next year; 47% say they expect it will increase productivity.

"AI is already driving changes in the workforce," wrote Dr. Karin Kimbrough, chief economist at LinkedIn, in a recent report which found just under half of executives say AI will increase productivity. "In the past year, we've seen professionals globally adopting AI skills at a rapid rate; this is happening in parallel with employers increasingly looking for talent with knowledge of how to effectively use new AI technologies to enhance productivity in their organizations."

There is opportunity for freelancers with AI expertise to take advantage of the shortage of AI skills among existing industry professionals across sectors of the economy. In the U.S. job market, for example, what LinkedIn classifies as the technology, information, and media sector has the most members proficient in AI, at just 2.2%. Other industries, including retail and financial services, are experiencing rapid adoption of AI core competencies, but off a very low base percentage of current employees who are proficient.

Freelance job platform Upwork, which recently signed a deal with OpenAI to connect businesses with experts familiar with its large language models, says the total number of AI skills being marketed by experts is upwards of 250. According to Margaret Lilani, Upwork's vice president of talent solutions, although there are multiple pathways to AI consulting, a strong foundation in computer science, knowledge of machine learning algorithms, proficiency in programming languages like Python, or experience in data management and analysis are often needed across job tasks.

Many AI experts also have related college degrees or experience, such as a bachelor's or master's degree in fields including computer science or engineering. Even so, "ultimately landing work within the AI space comes down to showcasing that you have the skills, ability and expertise to take on a particular project," Lilani said.

At online learning company Udacity, there has been a 33% increase over the past year in interest in AI-based courses, with deep learning, AI programming with Python, AI for trading, machine learning DevOps engineer, computer vision, and natural language processing among the in-demand courses. "To meet this demand, roughly 20% of our current content development roadmap includes Generative AI and Generative AI-related content," said Victoria Papalian, general manager of Udacity's consumer division.

For those not yet in the job market and interested in the AI field, Lilani suggests getting an early start by taking classes in computer science. She says this builds a foundation in the programming languages needed for AI expertise, especially for high school students looking to become familiar with the building blocks of many AI fields. She added that independent methods of education, including YouTube videos or blogs focused on AI skills, are becoming more sought after in the workforce. Learning new concepts and tools like ChatGPT will become important as all types of professionals across industries advance in their careers.

Hu said to start with the basics, including use of OpenAI tools like ChatGPT, the ChatGPT API, DALL-E, and davinci. But he added that proficiency in these areas of AI is just the start. Spending time determining how to use these tools in business is critical. AI's value is limited by a user's application of the technology, so knowledge needs to be supplemented with intention for its use. "It's how you bridge the gap with the real-world problem that really matters," Hu said.

Follow this link:

AI expert is a hot new position in the freelance jobs market - CNBC