Accubits, Bud Ecosystem open-source Large Language Model, drive it among global top

Thiruvananthapuram-based Accubits Technologies has open-sourced GenZ 70B, a Large Language Model (LLM) that is now among the top-listed models on Hugging Face's leaderboard, a global platform that curates, evaluates, and compares AI models.

A 70-billion-parameter fine-tuned model, it is ranked number one on the Hugging Face leaderboard for instruction-tuned LLMs and sixth for open LLMs across all categories. It was open-sourced collaboratively with Bud Ecosystem, a separate Accubits company, says Aharsh MS, Chief Marketing Officer, Accubits Technologies. Bud focuses on fundamental research in artificial general intelligence (AGI) and behavioural science, and is building an ecosystem around multi-modal, multi-task foundational models.

An LLM (for instance, GPT-4 by OpenAI) is a type of machine learning model specifically designed for processing and generating human-like text based on vast amounts of textual data. GPT-4 is the largest model in OpenAI's GPT series, released this year. Its parameter count has not been released to the public, though it is speculated that the model has more than 1.7 trillion parameters.

An LLM from India ranking at the top on a global scale is significant, and it can be an inspiration for the local developer community, says Aharsh MS. "GenZ is an auto-regressive language model with an optimised transformer architecture. We fine-tuned the model with curated datasets using the Supervised Fine-Tuning (SFT) method," Aharsh explained to businessline.
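
For readers unfamiliar with the term, the sketch below shows the general shape of supervised fine-tuning on instruction-response pairs using the Hugging Face transformers library. It is a minimal illustration only, not the GenZ 70B training pipeline: the placeholder model, toy examples, and hyperparameters are assumptions, and a 70-billion-parameter model would require multi-GPU infrastructure rather than a simple loop like this.

```python
# Minimal, illustrative sketch of Supervised Fine-Tuning (SFT) on
# instruction-response pairs. This is NOT the GenZ 70B training pipeline;
# the model name, examples and hyperparameters are stand-ins for illustration.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; a 70B model needs multi-GPU infrastructure
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy instruction-tuning examples in a prompt/response format.
examples = [
    {"prompt": "Summarise: The cat sat on the mat.", "response": "A cat sat on a mat."},
    {"prompt": "Translate to French: Good morning.", "response": "Bonjour."},
]

def collate(batch):
    # Concatenate prompt and response; the causal LM loss is computed over the
    # whole sequence for simplicity (real pipelines usually mask the prompt and
    # padding positions with -100 so they do not contribute to the loss).
    texts = [ex["prompt"] + "\n" + ex["response"] + tokenizer.eos_token for ex in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True,
                    truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].clone()
    return enc

loader = DataLoader(examples, batch_size=2, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for batch in loader:
    loss = model(**batch).loss  # standard next-token cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```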

It used OpenAssistant's instruction fine-tuning dataset and ThoughtSource for the Chain of Thought (CoT) approach. With extensive fine-tuning, it has acquired additional skills and capabilities beyond what a pre-trained model can offer. Aharsh offered deeper insight into the world of natural language processing computer programmes in an interview.

Excerpts:

Are Generative AI and LLMs the same thing?

No. LLMs fall under the umbrella of generative AI, but the reverse isn't true: not every generative AI model is an LLM. The difference primarily hinges on the type of content a model is designed to produce and its specialised applications. Generative AI refers to a broader class of AI models designed to generate new content. This creation capability isn't restricted solely to text; it spans a diverse array of outputs, including images, music compositions, and even videos. LLMs, on the other hand, represent a specific subset within the generative AI spectrum. These models are meticulously designed and optimised for tasks related to language. Trained on immense volumes of text data, LLMs excel at generating coherent and contextually apt textual outputs. This might range from crafting detailed paragraphs to answering intricate questions or extending given textual prompts.

Why did you open-source the model?

Accubits and Bud Ecosystem worked on the GenZ 70B suite of open-source LLMs to democratise access to Generative AI-based technologies. We believe that Generative AI is one of the most disruptive technologies, perhaps more significant than the invention of fire itself. Such a technology must be freely available for everyone to experiment and innovate.

With this objective in mind, we are open-sourcing models that can be hosted even on a laptop. GenZ's GPTQ- and GGML-based models can be hosted on a personal laptop without a GPU. Bud Ecosystem has its own proprietary multi-modal, multi-task models, which are used to build its own products. Accubits is already helping its customers adopt Generative AI-based technologies at scale, helping them build RoI-driven products and solutions.
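
As a rough illustration of what CPU-only hosting of a quantised checkpoint looks like, here is a minimal sketch using the llama-cpp-python bindings. The file path, prompt template, and generation settings are placeholders and assumptions, not the project's documented defaults; substitute whichever quantised GenZ checkpoint actually fits your machine's memory.

```python
# Minimal sketch of running a GGML/GGUF-quantised checkpoint on a CPU-only
# machine with llama-cpp-python. The model file is a placeholder, and the
# "### User / ### Assistant" prompt format is an assumption about the template.
from llama_cpp import Llama

llm = Llama(
    model_path="genz-quantised.q4_K_M.gguf",  # placeholder path to a downloaded checkpoint
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; no GPU required
)

out = llm(
    "### User:\nExplain supervised fine-tuning in one sentence.\n\n### Assistant:\n",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```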

How do you plan to stay ahead of the many fine-tuned models being released now?

The training data we used and our fundamental research on attention mechanisms, model alignment, consistency, and reliability have enabled us to build GenAI models with good performance. Most fine-tuned models do not offer commercial licenses, which means businesses do not have the freedom to use them to build commercial applications. GenZ 70B stands out mainly for two reasons: one, it offers a commercial license, and two, it offers good performance. Our model is primarily instruct-tuned for better reasoning, role-play and writing capabilities, making it more suitable for business applications.

Are there any limitations to the model?

Like any Large Language Model, GenZ also carries risks. We recommend that users consider fine-tuning it for the specific set of tasks they are interested in. Appropriate precautions and guardrails should be put in place for any production use. Using GenZ 70B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.

Have you looked at how it can replicate the work of models such as ChatGPT?

GenZ 70B's performance is impressive, especially in relation to its size. The GenZ 70B model scored 7.34 on the MT-Bench benchmark, which is close to GPT-3.5's score of 7.94. Considering that GenZ 70B is about 2.5 times smaller than GPT-3.5 yet nearly matches its performance, I'd say it's surprisingly efficient for its size. Model size is critical when considering real-world commercial use cases. Smaller models are usually easier to work with, use less computing power, and can be much more budget-friendly. GenZ can offer performance on par with GPT-3.5 in a much smaller package, making it very suitable for content creation.

What is the accuracy of LLMs with respect to low-resource or less common languages (other than English)?

LLMs thrive on the quality and quantity of their training data. Since English is a dominant language on the internet, many of these models are trained extensively on English data. This results in high accuracy on English-language tasks, from simple text generation to more complex problem-solving.

In contrast, accuracy levels can differ for less common or less-studied languages, primarily because of the relative scarcity of quality training data. It's worth noting that the inherent capabilities of LLMs are not restricted to English or any specific language. If provided with extensive and diverse training data, an LLM can achieve better accuracy in a less common language. In essence, the performance in any given language reflects the amount and quality of the training data.
