
4 Best Automation Tools for Developers in 2023 – TechRepublic

Automation tools help developers streamline their processes to save time, boost productivity and concentrate on their most critical tasks. This guide will break down the following top automation tools in terms of their features, pros, cons and pricing:


Gradle is a fast and flexible open-source build automation tool that accelerates development while improving software quality.

Gradle's list of features is highlighted by:

Gradle helps developers build multiple projects at the same time, making it ideal for large projects with multiple subprojects. Its rich API lets developers customize builds to fit their needs, and the automation tool also manages dependencies so your projects always have the up-to-date versions they need.

You can use Gradle to run tests to ensure your project is working as intended, plus deploy projects to different targets, such as cloud platforms or servers. Gradle also offers seamless integrations with popular IDEs like Android Studio and IntelliJ IDEA.

Gradle's pros include:

Gradle is quite speedy compared to other build automation tools. Its Free plan is ideal for budget-minded individual developers, and the software is quite flexible in terms of supported programming languages and platforms. Gradle is also highly customizable, allowing developers to use it for diverse projects.

Gradle's cons include:

Getting to grips with Gradle can be a daunting task given its extensive documentation, and building tasks with the tool takes some technical know-how. Taken together, this makes the tool not especially beginner-friendly.

Gradle Enterprise has a free trial and is offered in two pricing tiers:

The Free plan offers unlimited build scans, a visual build timeline, performance data, a dependency graph, custom values, tagging, environment data, test behavior details, an enhanced console log and a build performance summary.

The Core plan offers distributed cache node management, access control, cross-build analysis, build failure aggregation, comprehensive failure metrics, Enterprise REST API and more.

The developer tool also offers separate extensions to meet additional productivity needs, such as test distribution, test failure analytics and predictive test selection.

Apache Maven is an open-source build automation tool. Launched over 20 years ago with Java developers in mind, it is now a popular DevOps tool used by Agile development teams, project managers and others for its ability to help them build, publish and deploy several projects simultaneously, its dependency and release management features, and more.

Some of the features that make Maven a popular DevOps automation tool include:

Maven boosts productivity by letting developers get started with new projects and modules quickly and allowing them to work on multiple projects simultaneously. The automation software is easily extensible via plugins written in Java or scripting languages and an extensive repository of libraries and metadata.

When new features are available, you will not have to worry about time-consuming configuration with Maven. It also saves time via model-based builds and quickly generated documentation built from project metadata. Lastly, Maven offers dependency management (transitive dependencies, Ant tasks, automatic updating, etc.) and release management.

Maven's advantages include:

Since Maven is open-source, developers can use it at no cost, which is excellent news if you are on a limited budget. The developer tool is easy to use and configure, consistent since it follows a standardized approach that can easily replicate future processes, and offers fuss-free documentation generation.

Maven's disadvantages include:

While Maven is user-friendly, it may not be beginner-friendly for those unfamiliar with configurations, terminology, etc. Some have complained that the programming tool is lacking in documentation and support. Others, meanwhile, have noted slow performance when dealing with complex projects.

Since Maven is an open-source DevOps tool, developers can enjoy it for free. There may be additional costs for premium extensions or plugins, however.

Travis CI is an easy-to-maintain cloud-based CI/CD tool with time-saving one-command automations that supports over 30 coding languages.

Travis CI's top features that have allowed its popularity to grow in the DevOps community include:

Travis CI's multi-language build matrix supports more than 30 coding languages. Developers can run and test simultaneously in different environments, plus automate tasks for validation, integration and deployment with a single command. The DevOps tool integrates with popular third-party developer tools like Slack, Perforce, Docker, etc., and has a feature that catches code failures and bugs on autopilot.

Travis CI's strengths include:

Developers seeking a fuss-free automation tool get just that with Travis CI. Thanks to its cloud-based options, developers can enjoy Travis CI's time-saving features with minimal setup and maintenance.

Travis CI also employs a lot less code (around one-third less) than competing programmer tools and is quite flexible with its support for over 30 coding languages.

Areas where Travis CI could improve include:

Budget-minded software development teams may be turned off by the lack of a Travis CI free plan. The developer tool's cost can creep up quickly as your need for added concurrent jobs grows, and customer support has drawn complaints for being slow to respond to issues.

Travis CI's pricing is split into cloud and enterprise options. Cloud pricing is as follows:

Each cloud plan comes with unlimited repositories, collaborators and build minutes, plus a free trial. Choose the self-hosted Enterprise plan, and you will pay $34 per user per month and have the option to host Travis CI on-premise or in your private cloud. The Enterprise plan offers premium support and Subversion and Perforce CI/CD.

Katalon Studio is a codeless test automation tool that offers a low-code experience for beginners and advanced testing for experts.

Some of Katalon Studio's best features for testing automation are:

Katalon Studio lets developers automate tests with varying sets of data. This is ideal for applications that handle confidential or sensitive data. The automation software also supports keyword-driven testing, allowing developers to create reusable test scripts for multiple applications.

API testing comes in handy for testing back-end applications and services, and if there are any broken tests, Katalon will fix them automatically. Katalon Studio records and plays back user actions on applications to help create automated tests, and it offers detailed reporting to help troubleshoot.

Katalon Studio's pros include:

The programmer tool's Free version is a plus for those with limited budgets seeking basic test automation capabilities. Katalon Studio's low-code approach makes it beginner-friendly, and the interface is easy to navigate and user-friendly. The fact that the automation software works with multiple environments (Windows, macOS, Google Chrome, Firefox, Android, iOS, etc.) is another plus.

Katalon Studio's cons include:

Some have complained that Katalon Studio's performance is less than stellar, and the automation tool can sometimes lag or freeze. Support is noted for being slow to respond. Since Katalon is relatively new and has a smaller community, you are less likely to find fast help from colleagues. The desktop app can also be a memory hog when booting or running tests.

Katalon Studio has three pricing plans:

The Free plan offers test automation for mobile, web, API and desktop applications. Enterprise adds debugging, custom reports and advanced API testing. And Ultimate adds 24/7 support and a dedicated onboarding manager.

As the need for automation grows, so does the number of automation tools that hit the market.

How can you pick the right one? Besides looking at the price (some offer free plans) to find an automation solution that fits your budget, read reviews regarding user-friendliness. Depending on your team size and goals, you may want something scalable that can grow as you expand.

Customer support and community size are other factors to consider, as are features. Useful features to look for in automation software include configuration management, reporting, workflow management, CI/CD, monitoring, orchestration, version control support and solid security. You should also look for plenty of third-party integrations with popular developer tools for added functionality, if needed, and support for the programming languages you use.

The automation tools listed above can help developers enjoy increased speed and productivity without sacrificing the quality of their releases. Before choosing an automation tool for your software development team, make sure it fits your needs in terms of features, user-friendliness and pricing.

Also See: Top DevOps Monitoring Tools

More:
4 Best Automation Tools for Developers in 2023 - TechRepublic


BT plays it cool with new sustainability solutions – TelecomTV

IPSWICH, UK, BT's Sustainability Festival 2023: As part of its efforts to become a green telco, BT is exploring the use of liquid cooling solutions across its network and IT infrastructure, and, according to the operator, its initial trials with an abundance of partners suggest significant cost reductions could be achieved.

During its first Sustainability Festival, held at its Adastral Park research centre this week, the UK telco demonstrated some of the emerging technologies that, it hopes, will help it become net zero by the end of March 2031. The majority of them are focused on ways to keep datacentre equipment cool, a challenge often cited across the tech sector as a major pain point.

While the fundamental technology is owned by the vendors of the solutions it had on display at Adastral Park, BT is engaged with them in terms of specific design requirements. Particularly noteworthy were two demonstrations of equipment being fully immersed in liquid. Branded by BT's team as revolutionary, the concept is based on taking any piece of network equipment, removing its fans, and immersing it in dielectric fluid that does not conduct electricity.

One of BT's liquid cooling solution partners is Immersion4, which is working on a trial to house servers in an immersion tank (see image, above). The company's founder, Serge Conesa, explained that Immersion4's technology can deliver up to 70% in energy savings as it focuses on cooling only the equipment, rather than the whole building (an approach he likened to chilling a bottle of champagne rather than cooling an entire venue).

In another showcase, BT demonstrated full immersion technology from partner PeaSoup Cloud that can save up to 20% in energy consumption (see image, below). BT explained that with this solution, not only is there no need to cool the whole building, but the equipment doesn't need to deal with excessive heat and is not put under strain, which results in longer-lasting components.

See the original post here:
BT plays it cool with new sustainability solutions - TelecomTV


TCS+ | Altron’s Caryn Vos on why, in security, encryption is king – TechCentral

Caryn Vos

Cryptography and encryption are pivotal in modern cybersecurity operations. Altron Systems Integration senior manager for cryptography Caryn Vos tells TechCentral's TCS+ technology show about key reasons why this is the case.

Vos tells TechCentral editor Duncan McLeod about why organisations need to get encryption right if they're serious about protecting their data assets.

The conversation explores the evolving security threat landscape and why companies need to respond appropriately. Ransomware remains a primary risk to South African organisations and organisations around the world, but other risks are also emerging, says Vos.

In this episode of TCS+, Vos unpacks how encryption can help in the fight against ransomware and other threats; how encryption protects data both in motion and at rest; the impact of corporate espionage; the role of regulations and legislation such as the Protection of Personal Information Act and how encryption can help with regulatory compliance; and how encryption can be used to build trust with customers and other stakeholders.

Don't miss a great discussion with an expert in the field.

Caryn Vos has specialised in information security for over 20 years, during which time she has dealt with all facets of this industry. This has given her a deep and broad understanding of information security as a whole. While she has focused on the financial services sector for many years, she has also worked with most industries during the course of her career. She has built an extensive network throughout the channel and end-user customer base and has extensive experience in dealing with end users as well as through partners. For more information, contact Vos via LinkedIn.

Altron Systems Integration is a specialist provider, leveraging world-class technology and leading practices to architect, implement and support sustainable ICT-based business solutions which are strategically aligned, fit for purpose, cost effective and optimised for performance. We have highly skilled, experienced professionals directed at solving business-critical customer problems through a consultative approach. We also bring our collective knowledge to every deployment engagement. From legacy integration to digital transformation, our solutions catalogue includes the full spectrum of ICT services. Few other organisations can claim the depth and breadth of the capabilities and experience we provide. For more information visit http://www.altronsystemsintegration.co.za or follow us on Facebook, LinkedIn and Twitter.

To subscribe to any of TechCentral's shows, including the TechCentral Show (TCS), TCS+ or Everything PC, please use the links below:

TCS+ episodes are paid for by the party concerned.

Here is the original post:
TCS+ | Altron's Caryn Vos on why, in security, encryption is king - TechCentral


Combating Ransomware: 3 Growing Trends That Security Teams Must Watch – TechSpective

For organizations around the world, ransomware continues to be a growing problem, with many well-known companies and institutions falling victim. Within the last year, the Medusa ransomware group breached Minneapolis Public Schools and leaked the personally identifiable information (PII), including medical records, of children. Additionally, the attack conducted by an individual associated with the REvil ransomware group on the Colonial Pipeline demonstrated the vulnerability of critical infrastructure. Amid increasing risks of ransomware attacks and data theft, global leaders created the Joint Ransomware Task Force in 2022 and recently held a summit at the White House to unveil additional cybersecurity funding for K-12 educational institutions.

Unfortunately, there is no immediate end in sight to the problem. A recent WatchGuard Internet Security Report found that endpoint ransomware detections increased 627% in Q4 last year. Attacks came in various styles, including IcedID infections, phishing campaigns, data exfiltration, pseudo-ransomware, and more.

As ransomware attacks continue to evolve, it's clear that more work is needed from security teams to protect against these threats. Vigilance is key, and security teams must monitor the strategies and tactics of ransomware operations to better defend their organizations. Below, we walk through three emerging ransomware trends that every security team must monitor and tips on how teams should respond to maintain and protect network security.

Ransomware attacks appear to be growing more frequent, as a recent report by Chainalysis, an analysis firm that monitors the blockchain, observed ransomware extortion payments increasing in frequency and amounts. In tracking the inflows to cryptocurrency wallets owned by ransomware groups, Chainalysis found a pattern of increasing payments from victims ranging from thousands of dollars to millions. They also found that ransomware attackers extorted at least $449.1 million in payments in the first half of 2023, an increase of approximately $175 million over the same period in 2022.

However, the known victims and cryptocurrency payments don't cover the total number of victims and extortion cases. Some of the wallets ransomware operators use are unknown and difficult to track, especially if operators use cryptocurrency mixers that make tracking cryptocurrency on the public ledger significantly more difficult. As a result, the total number of payments is likely higher than $449.1 million, and the total number of victims is likely higher as well.

Another recent trend is the emergence of attacks that target VMware ESXi, which is a hypervisor that manages and deploys virtual machines within networks. Many active ransomware groups have a VMware ESXi encryptor, which allows them to target virtual machines as well as endpoints and servers. Some active groups using this tactic include Abyss, Akira, Black Basta, LockBit, RansomExx, and Royal. This trend illustrates how modern ransomware groups adapt and evolve to bypass defenses and target the machines that organizations use. It also shows why ransomware groups began to use programming languages like Rust and GoLang more frequently to avoid defenses.

Attacks on VMware ESXi servers made headlines when ransomware dubbed ESXiArgs breached thousands of servers worldwide in a few days. The servers were unpatched instances of VMware, and the attack was automated. This trend underscores why organizations must keep their systems updated and patched and avoid unnecessary internet exposure, as taking these steps can help mitigate such attacks and keep them from getting out of hand.

Ransomware operators can employ various blackmail and extortion tactics to coerce victims into delivering payment. Recently, there's been an increase in cases of data theft without file encryption. Ransomware operators are foregoing data encryption on a victim's machine, instead choosing to exfiltrate data to perform a double-extortion attack. Presumably, some groups don't want to bother with deploying an encryptor and know they can use sensitive PII as a bargaining chip. Organizations can combat these double-extortion tactics by having a solid data backup and incident response plan.

Some notorious examples of ransomware data theft include the recent acts of the CL0P ransomware group. In early 2023, the GoAnywhere MFT file transfer software contained a zero-day vulnerability that the CL0P group exploited. Researchers found that after exploiting the software, the group exfiltrated data from dozens of companies that used it, subsequently extorting their victims on the group's double-extortion page. Whether the group used an encryptor in its efforts is unknown. Additionally, the group exploited a zero-day vulnerability with MOVEit software, a secure file transfer service. As MOVEit is trusted software for major organizations and governments, hundreds of these entities were exposed to this zero-day vulnerability, and the number continues to grow.

Security teams looking to combat ransomware threats should focus on strengthening their network perimeters, endpoints, and incident response plans. They should also implement regular social engineering training, as more than 90% of all malware attempts begin with a social engineering attack.

Bolstering network perimeters and using technologies such as zero-trust networks are vital steps in ensuring protection. Additionally, ransomware attacks can be thwarted by a well-trained employee or heuristic-based anti-virus that detects abnormal behavior on the endpoint. If those initial security layers fail, an effective incident response plan can stop attacks from becoming too damaging. Combining these layers in a defense-in-depth approach delivers more effective security. Other preemptive steps that organizations should take to protect against ransomware include:

Also, the Joint Ransomware Task Force offers a detailed Blueprint for Ransomware Defense, which provides scores of actionable tips that security professionals can leverage to combat ransomware. Its tips cover a range of categories, including knowing your environment, secure configurations, account and access management, vulnerability management planning, malware defense, security awareness and skills training, and data recovery and incident response.

Ultimately, ransomware is similar to other malware, as threat actors in both instances look to gain unauthorized access to your network. Therefore, many tried-and-true security practices apply. It's important to protect your network perimeter, monitor your endpoints for anomalous behavior, back up your systems regularly, and keep all systems up to date. If your organization adopts a broader focus on stopping malware and security breaches in general, deterrence to ransomware will follow suit.

To stay ahead of ransomware threats, security teams must focus on the tactics, techniques, and procedures (TTPs) employed by threat actors that lead to ransomware. Utilizing a defense-in-depth strategy can deter malware from touching your network. If an attack gets through, security teams must have protections in place to neutralize it as soon as possible. Although ransomware threats continue to increase in complexity, adopting a multi-layered security approach will serve as your best defense and help keep out the bad guys.

Original post:
Combating Ransomware: 3 Growing Trends That Security Teams Must Watch - TechSpective


Discover the Performance Gain with Retrieval Augmented Generation – The New Stack

Large Language Models (LLMs) are smart enough to understand context. They can answer questions, leveraging their vast training data to provide coherent and contextually relevant responses, no matter whether the topic is astronomy, history or even physics. However, LLMs tend to hallucinate (deliver compelling yet false facts) when asked to answer questions outside the scope of their training data, or when they can't remember the details in the training data.

A new technique, Retrieval Augmented Generation (RAG), fills the knowledge gaps, reducing hallucinations by augmenting prompts with external data. Combined with a vector database (like MyScale), it substantially increases the performance gain in extractive question answering.

To this end, this article focuses on determining the performance gain with RAG on the widely used MMLU dataset. We find that the performance of both commercial and open-source LLMs can be significantly improved when knowledge can be retrieved from Wikipedia using a vector database. More interestingly, this result is achieved even when Wikipedia is already in the training set of these models.

You can find the code for the benchmark framework and this example here.

But first, let's describe Retrieval Augmented Generation (RAG).

Research projects aim to enhance LLMs like gpt-3.5 by coupling them with external knowledge bases (like Wikipedia), databases, or the internet to create more knowledgeable and contextually aware systems. For example, let's assume a user asks an LLM what Newton's most important result is. To help the LLM retrieve the correct information, we can search for Newton's wiki and provide the wiki page as context to the LLM.

This method is called Retrieval Augmented Generation (RAG). Lewis et al., in "Retrieval Augmented Generation for Knowledge-Intensive NLP Tasks", define Retrieval Augmented Generation as:

A type of language generation model that combines pre-trained parametric and non-parametric memory for language generation.

Moreover, the authors of this academic paper go on to state that they:

Endow pre-trained, parametric-memory generation models with a non-parametric memory through a general-purpose fine-tuning approach.

Note: Parametric-memory LLMs are massive self-reliant knowledge repositories like ChatGPT and Google's PaLM. Non-parametric memory LLMs leverage external resources that add additional context to parametric-memory LLMs.

Combining external resources with LLMs seems feasible as LLMs are good learners, and referring to specific external knowledge domains can improve truthfulness. But how much of an improvement will this combination be?

Two major factors affect a RAG system: how much the LLM can learn from the retrieved context, and how accurate and relevant that retrieved context is.

Both of these factors are hard to evaluate. The knowledge gained by the LLM from the context is implicit, so the most practical way to assess these factors is to examine the LLM's answer. However, the accuracy of the retrieved context is also tricky to evaluate.

Measuring the relevance between paragraphs, especially in question answering or information retrieval, can be a complex task. The relevance assessment is crucial to determine whether a given section contains information directly related to a specific question. This is especially important in tasks that involve extracting information from large datasets or documents, like the WikiHop dataset.

Sometimes, datasets employ multiple annotators to assess the relevance between paragraphs and questions. Using multiple annotators to vote on relevance helps mitigate subjectivity and potential biases that can arise from individual annotators. This method also adds a layer of consistency and ensures that the relevance judgment is more reliable.

As a consequence of all these uncertainties, we developed an open-sourced end-to-end evaluation of the RAG system. This evaluation considers different model settings, retrieval pipelines, knowledge base choices, and search algorithms.

We aim to provide valuable baselines for RAG system designs and hope that more developers and researchers join us in building a comprehensive and systematic benchmark. More results will help us disentangle these two factors and create a dataset closer to real-world RAG systems.

Note: Share your evaluation results at GitHub. PRs are very welcome!

In this article, we focus on a simple baseline evaluated on MMLU (the Massive Multitask Language Understanding dataset), a widely used benchmark for LLMs containing multiple-choice, single-answer questions on many subjects like history, astronomy and economics.

We set out to find out if an LLM can learn from extra contexts by letting it answer multiple-choice questions.

To achieve our aim, we chose Wikipedia as our source of truth because it covers many subjects and knowledge domains. And we used the version cleaned by Cohere.ai on Hugging Face, which includes 34,879,571 paragraphs belonging to 5,745,033 titles. An exhaustive search of these paragraphs will take quite a long time, so we need to use the appropriate ANNS (Approximate Nearest Neighbor Search) algorithms to retrieve relevant documents. Additionally, we use the MyScale database with the MSTG vector index to retrieve the relevant documents.

Semantic search is a well-researched topic with many models with detailed benchmarks available. When incorporated with vector embeddings, semantic search gains the ability to recognize paraphrased expressions, synonyms, and contextual understanding.

Moreover, embeddings provide dense and continuous vector representations that enable the calculation of meaningful metrics of relevance. These dense metrics capture semantic relationships and context, making them valuable for assessing relevance in LLM information retrieval tasks.

Taking into account the factors mentioned above, we have decided to use the paraphrase-multilingual-mpnet-base-v2 model from Hugging Face to extract features for retrieval tasks. This model is part of the MPNet family, designed to generate high-quality embeddings suitable for various NLP tasks, including semantic similarity and retrieval.
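To make the retrieval step concrete, below is a minimal sketch of how paragraph embeddings and nearest-neighbor lookup fit together. The benchmark itself stores the Wikipedia paragraphs in MyScale behind an MSTG index; this sketch replaces that database with a brute-force cosine search over a tiny made-up corpus, and the retrieve helper is purely illustrative.

```python
# Minimal retrieval sketch. The real benchmark queries MyScale (MSTG index);
# a brute-force cosine search over a toy in-memory corpus stands in for it here.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Hypothetical stand-in corpus; the benchmark uses ~35M Wikipedia paragraphs.
paragraphs = [
    "Isaac Newton formulated the laws of motion and universal gravitation.",
    "The Hertzsprung-Russell diagram plots stellar luminosity against temperature.",
    "The Peace of Westphalia ended the Thirty Years' War in 1648.",
]
corpus_emb = model.encode(paragraphs, normalize_embeddings=True)

def retrieve(question, top_k=1):
    """Return the top_k paragraphs most similar to the question."""
    q_emb = model.encode([question], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb          # cosine similarity (vectors are normalized)
    best = np.argsort(-scores)[:top_k]
    return [paragraphs[i] for i in best]

print(retrieve("What is Newton's most important result?"))
```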

For our LLMs, we chose OpenAI's gpt-3.5-turbo and llama2-13b-chat with six-bit quantization. These models are among the most popular commercial and open-source options. The LLaMA2 model is quantized by llama.cpp. We chose this 6-bit quantization setup because it is affordable without sacrificing performance.

Note: You can also try other models to test their RAG performance.

The following image describes how to formulate a simple RAG system:

Figure 1: Simple Benchmarking RAG

Note: Transform can be anything as long as it can be fed into the LLM, returning the correct answer. In our use case, Transform injects context into the question.

Our final LLM prompt is as follows:

```python
template = (
    "The following are multiple choice questions (with answers) with context:"
    "\n\n{context}Question: {question}\n{choices}Answer: "
)
```
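As a usage illustration (not the benchmark's actual code), the snippet below fills this template with one retrieved paragraph and asks gpt-3.5-turbo for the answer letter, using the openai Python client as it existed in 2023. The question, the choices, and the retrieve helper from the retrieval sketch above are hypothetical placeholders.

```python
# Illustrative prompt assembly and call to gpt-3.5-turbo (2023-era openai client).
# `template` is defined above; `retrieve()` is the toy helper sketched earlier.
import openai

question = "Which scientist formulated the law of universal gravitation?"
choices = "A. Kepler\nB. Newton\nC. Galileo\nD. Copernicus"
context = "\n".join(retrieve(question, top_k=1))

prompt = template.format(context=context, question=question, choices=choices)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,   # deterministic output simplifies multiple-choice scoring
    max_tokens=1,      # we only need the answer letter
)
print(response.choices[0].message["content"])
```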

Now let's move on to the result.

Our benchmark test results are collated in Table 1 below.

But first, our summarized findings are:

In these benchmarking tests, we compared performance with and without context. The test without context represents how internal knowledge can solve questions. Secondly, the test with context shows how an LLM can learn from context.

Note: Both llama2-13b-chat and gpt-3.5-turbo are enhanced by around 3-5% overall, even with only one extra context.

The table shows that some numbers are negative, for example, when we insert context for clinical-knowledge questions into gpt-3.5-turbo.

This might be related to the knowledge base, as Wikipedia does not have much information on clinical knowledge, or because OpenAI's terms of use and guidelines strongly discourage, and may even prohibit, using their AI models for medical advice. Despite this, the increase is quite evident for both models.

Notably, the gpt-3.5-turbo results suggest that the RAG system might be powerful enough to compete with other language models. Some of the reported numbers, such as those on prehistory and astronomy, push towards the performance of gpt-4 with extra tokens, suggesting RAG could be another route to specialized Artificial General Intelligence (AGI) when compared to fine-tuning.

Note: RAG is more practical than fine-tuning models as it is a plug-in solution and works with both self-hosted and remote models.

Figure 2: Performance Gain vs. the Number of Contexts

The benchmark above suggests that you need as much context as possible. In most cases, LLMs will learn from all the supplied contexts. Theoretically, the model provides better answers as the number of retrieved documents is increased. However, our benchmarking shows that some numbers dropped as more contexts were retrieved.

By way of validating our benchmarking results, a paper by Stanford University titled "Lost in the Middle: How Language Models Use Long Contexts" suggests the LLM only looks at the context's head and tail. Therefore, choose fewer but more accurate contexts from the retrieval system to augment your LLM.

The larger the LLM, the more knowledge it stores. Larger LLMs tend to have a greater capacity to store and understand information, which often translates to a broader knowledge base of generally understood facts. Our benchmarking tests tell the same story: the smaller LLMs lack knowledge and are hungrier for more knowledge.

Our results report that llama2-13b-chat shows a more significant increase in knowledge than gpt-3.5-turbo, suggesting context injects more knowledge into an LLM for information retrieval. Additionally, these results imply gpt-3.5-turbo was given information it already knows while llama2-13b-chat is still learning from the context.

Almost every LLM uses the Wikipedia corpus as a training dataset, meaning both gpt-3.5-turbo and llama2-13b-chat should already be familiar with the contexts added to the prompt. This raises questions about why the added context still helps and how much the models actually learn from it.

We currently don't have any answers to these questions. As a result, research is still needed.

Contribute to research to help others.

We can only cover a limited set of evaluations in this blog. But we know more is needed. The results of every benchmark test matter, regardless of whether they are replications of existing tests or some new findings based on novel RAGs.

With the aim of helping everyone create benchmark tests to test their RAG systems, we have open sourced our end-to-end benchmark framework. To fork our repository, check out our GitHub page.

This framework includes the following tools:

It's up to you to create your own benchmark. We believe RAG can be a possible solution to AGI. Therefore, we built this framework for the community to make everything trackable and reproducible.

PRs are welcome.

We have evaluated a small subset of MMLU with a simple RAG system built with different LLMs and vector search algorithms and described our process and results in this article. We also donated the evaluation framework to the community and called for more RAG benchmarks. We will continue to run benchmarking tests and update the latest results to GitHub and the MyScale blog, so follow us on Twitter or join us on Discord to stay updated.

Here is the original post:

Discover the Performance Gain with Retrieval Augmented Generation - The New Stack


IFA Berlin 2023 Showfloor Impressions – Dealerscope

IFA Berlin 2023 showcased a myriad of cutting-edge innovations and captivating showfloor experiences that left a lasting impression on visitors, exhibitors, and industry leaders. Here, we present the highlights and testimonials from those who witnessed the future of consumer electronics at this year's event.

My name is Lester, I'm from the Netherlands and our company, Purpose, is based out of Rotterdam. Our first time at IFA has been amazing. We've been here for three days and already have made some good connections with different smaller companies and startups that we can work together with. The feedback of some visitors who already want to buy the product has also been great. We've talked to a few distributors, a few investors also interested in our product, and made some good connections with a lot of people in the industry!

This is our third year at IFA. We've come to the IFA Next stage multiple times because every time when we talk to people, they recommend going to IFA Next because it's probably the most interesting intersection of people meeting from all kinds of backgrounds. We always enjoy IFA, in previous years, but also this year as well. It's a fantastic show! We'll be here next year for sure.

Can I say massive? It's my first time here, and we knew it was going to be massive, but this is different. I've been to other expos all over the world, but for the first time, we hear talk about sustainability everywhere. Everyone is talking about reducing their impact, reducing energy needs, more efficient kinds of products, more sustainable products, a better supply chain. So many people are actually invested and interested in this topic.

The good thing about IFA is the b2c and b2b relationships you can make. You can talk with so many people: colleagues from other companies, regular people visiting, and even kids interested in learning about growing their own balcony gardens. It's huge fun for us!

I think IFA has been fantastic exposure for people to learn about the serious problem of microplastics. People have really seen the way that we've approached it and they typically say "Well, that's not that much work. Why wouldn't I do that?" And I think that's really a great way to show what's possible to combat microplastic pollution and get feedback that we haven't been able to do on a global scale. Being in the sustainability village for the first time, pretty much everybody walks in and comes right to us, which is great. We've really had great conversations, great meetings with washing machine manufacturers, great meetings with our potential partners. And, I think it's been an excellent event where we've been able to pull all the right team players together and really start pushing as we get ready to go to market next year. Very cool. Very cool.

I remember visiting IFA for the first time 12 years ago and being amazed then; this year, I was simply astonished. I never would have imagined that change could happen so quickly. IFA 2023 showcased the exponential pace of tech innovation in ways that are hard to describe. Voting for groundbreaking startups who pitched in fields such as augmented reality to a jury at the Next stage made me feel part of this rapid change. Seeing humanoid robots from Yaya Labs and Boston Dynamics' Spot at the House of Robots left me in awe. The intellectual depth added by discussions on Artificial General Intelligence (AGI), coupled with the chance to talk to speakers in person afterward, made the event not just inspiring but also deeply thought-provoking. The advancements in smart home systems confirmed that we're not just moving forward; we're sprinting.

Follow this link:

IFA Berlin 2023 Showfloor Impressions - Dealerscope


0.000000000000000005 Seconds – Physicists Generate One of the Shortest Signals Ever Produced by Humans – SciTechDaily

Scientists from the University of Konstanz developed a method using femtosecond light flashes to generate electron pulses with a duration of around five attoseconds. This breakthrough, offering a higher time resolution than light waves, paves the way for observing ultrafast phenomena, such as nuclear reactions.

Molecular or solid-state processes in nature can sometimes take place in time frames as brief as femtoseconds (quadrillionths of a second) or attoseconds (quintillionths of a second). Nuclear reactions are even faster. Now, Maxim Tsarev, Johannes Thurner, and Peter Baum, scientists from the University of Konstanz, are using a new experimental set-up to achieve signals of attosecond duration, i.e., billionths of a nanosecond, which opens up new perspectives in the field of ultrafast phenomena.

Not even light waves can achieve such a time resolution because a single oscillation takes much too long for that. Electrons provide a remedy here, as they enable significantly higher time resolution. In their experimental set-up, the Konstanz researchers use pairs of femtosecond light flashes from a laser to generate their extremely short electron pulses in a free-space beam. The results are reported in the journal Nature Physics.

Similar to water waves, light waves can also superimpose to create standing or traveling wave crests and troughs. The physicists chose the incidence angles and frequencies so that the co-propagating electrons, which fly through a vacuum at half the speed of light, overlap with optical wave crests and troughs of exactly the same speed.

What is known as ponderomotive force then pushes the electrons in the direction of the next wave trough. Thus, after a short interaction, a series of electron pulses is generated which are extremely short in time, especially in the middle of the pulse train, where the electric fields are very strong.

For a short time, the temporal duration of the electron pulses is only about five attoseconds. In order to understand that process, the researchers measure the electrons' velocity distribution that remains after compression. "Instead of a very uniform velocity of the output pulses, you see a very broad distribution that results from the strong deceleration or acceleration of some electrons in the course of compression," explains physicist Johannes Thurner. But not only that: The distribution is not smooth. Instead, it consists of thousands of velocity steps, since only a whole number of light particle pairs can interact with electrons at a time.

Quantum mechanically, the scientist says, this is a temporal superposition (interference) of the electrons with themselves, after experiencing the same acceleration at different times. This effect is relevant for quantum mechanical experiments, for example, on the interaction of electrons and light.

What is also remarkable: Plane electromagnetic waves like a light beam normally cannot cause permanent velocity changes of electrons in a vacuum, because the total energy and the total momentum of the massive electron and a zero-rest-mass light particle (photon) cannot both be conserved. However, having two photons simultaneously in a wave traveling slower than the speed of light solves this problem (the Kapitza-Dirac effect).

For Peter Baum, physics professor and head of the Light and Matter Group at the University of Konstanz, these results are still clearly basic research, but he emphasizes the great potential for future research: "If a material is hit by two of our short pulses at a variable time interval, the first pulse can trigger a change and the second pulse can be used for observation, similar to the flash of a camera."

In his view, the great advantage is that no material is involved in the experimental principle and everything happens in free space. Lasers of any power could in principle be used in the future for ever stronger compression. "Our new two-photon compression allows us to move into new dimensions of time and perhaps even film nuclear reactions," Baum says.

Reference: "Nonlinear-optical quantum control of free-electron matter waves" by Maxim Tsarev, Johannes W. Thurner and Peter Baum, 12 June 2023, Nature Physics. DOI: 10.1038/s41567-023-02092-6

Visit link:

0.000000000000000005 Seconds Physicists Generate One of the Shortest Signals Ever Produced by Humans - SciTechDaily


A physics-based Ising solver based on standard CMOS technology – Phys.org

by Ingrid Fadelli, Phys.org

Quantum computers, systems that perform computations by exploiting quantum mechanics phenomena, could help to efficiently tackle several complex tasks, including so-called combinatorial optimization problems. These are problems that entail identifying the optimal combination of variables among several options and under a series of constraints.

Quantum computers that can tackle these problems should be based on reliable hardware systems, which have an intricate all-to-all node connectivity. This connectivity ultimately allows graphs representing arbitrary dimensions of a problem to be directly mapped onto the computer hardware.
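For intuition about what mapping a problem graph onto such hardware means, the sketch below spells out the Ising objective that solvers of this kind minimize, using a made-up four-node graph. It is a generic brute-force illustration of the formulation only, not the chip's programming interface, which the article does not describe.

```python
# Generic illustration of the Ising formulation behind hardware solvers:
# H(s) = -sum_{i<j} J_ij * s_i * s_j, with spins s_i in {-1, +1}.
# Problem graphs (e.g., MAX-CUT instances) are encoded in the couplings J_ij;
# the solver's job is to find the spin configuration with the lowest energy.
from itertools import product
import numpy as np

# Toy 4-node problem graph encoded as a symmetric coupling matrix J (made up).
J = np.array([
    [ 0, -1, -1,  0],
    [-1,  0, -1, -1],
    [-1, -1,  0, -1],
    [ 0, -1, -1,  0],
])

def ising_energy(spins):
    """Energy of one spin configuration under couplings J."""
    return -0.5 * spins @ J @ spins   # 0.5 corrects for double-counting i<j pairs

best = min((np.array(s) for s in product((-1, 1), repeat=4)), key=ising_energy)
print("ground state:", best, "energy:", ising_energy(best))
```

An oscillator-based solver like the one described here searches for low-energy configurations through its physical dynamics rather than by enumerating states as this toy example does, which is why all-to-all coupling in the hardware matters so much.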

Researchers at the University of Minnesota recently developed a new electronic device based on standard complementary metal oxide semiconductor (CMOS) technology that could support this crucial mapping process. This device, introduced in a paper in Nature Electronics, is a physics-based Ising solver comprising coupled ring oscillators and an all-to-all node-connected architecture.

"Building an all-to-all connected hardware where each node (i.e., oscillator) can 'talk' to all other nodes is extremely challenging; as the number of coupled nodes (N) increases, the number of connections per node increases by ~N2. This results in a quadratically increasing electrical loading and hardware overhead for each node which makes the coupling less efficient and less uniform," Chris Kim, one of the researchers who carried out the study, told Phys.org.

"Previous works, including our own, focused on locally connected architecture where each node could talk to only a handful (e.g., <10) of nearby nodes. An all-to-all architecture is ideal as problems can be directly mapped to the hardware but up until this point, there was no elegant way to achieve this."

The Ising solver created by Kim and his colleagues has an all-to-all architecture containing 48 spins and a highly uniform coupling circuit. Horizontal oscillators in the device are closely coupled to vertical oscillators, creating pairs of horizontal-vertical oscillators that intersect with other pairs to form a crossbar array.

"The basic idea behind our Ising solver is to propagate an oscillating signal in both horizontal and vertical directions in a way that node i and node j intersect each other throughout a crossbar array," Kim explained. "By placing a coupler circuit at each intersection, we can build a circuit array where each node signal talks to all other node signals. Despite the oscillating signals being phase shifted throughout the array, coupling between two nodes occurs in a way that accounts for the shifted phases which is why the proposed design finds a competitive solution. "

The researchers evaluated their Ising solver in a series of tests, where they used it to perform various statistical operations, gathering measurements for problems of varying sizes and with different graph densities. Their results were promising, as graphs representing the dimensions of these problems could be effectively mapped onto their chip.

"With our new approach, we can directly map a problem graph with up to 48 nodes to the solver hardware," Kim said. "This is a significant improvement over previous designs; for instance, a King's graph-based hardware was demonstrated by several groups including ours, but each node could only talk to eight other neighbors."

In the future, the chip introduced by Kim and his colleagues could inform the creation of further Ising solvers and devices that can map intricate problem graphs. This could ultimately help to further improve the ability of quantum computers to solve combinatorial optimization problems, facilitating their real-world deployment.

"Since the problems we want to solve are much larger than a single hardware instance, we will have to find a way to decompose and recompose sub-problems without sacrificing the solution accuracy," Kim added.

"Another topic of interest is comparing the solution quality of our hardware against existing optimization algorithms such as simulated annealing or tabu search. Finally, we will have to find more systematic ways to formulate a problem to coupling weights; we cannot democratize this computing approach if a human expert is required at every step of the computation."

More information: Hao Lo et al, An Ising solver chip based on coupled ring oscillators with a 48-node all-to-all connected array architecture, Nature Electronics (2023). DOI: 10.1038/s41928-023-01021-y

Journal information: Nature Electronics

2023 Science X Network

Original post:

A physics-based Ising solver based on standard CMOS technology - Phys.org


Groundbreaking Quantum Leap: Physicists Turn Schrödinger's Cat on Its Head – SciTechDaily

Researchers have developed a groundbreaking method to perform the fractional Fourier Transform of optical pulses using quantum memory. This unique achievement involved implementing the transformation on a Schrödinger's cat state, having potential applications in telecommunications and spectroscopy.

Researchers from the University of Warsaw's Faculty of Physics, in collaboration with experts from the QOT Centre for Quantum Optical Technologies, have pioneered an innovative technique that allows the fractional Fourier Transform of optical pulses to be performed using quantum memory.

This achievement is unique on the global scale, as the team was the first to present an experimental implementation of the said transformation in this type of system. The results of the research were published in the prestigious journal Physical Review Letters. In their work, the students tested the implementation of the fractional Fourier Transform using a double optical pulse, also known as a Schrödinger's cat state.

Waves, such as light, have their own characteristic properties: pulse duration and frequency (corresponding, in the case of light, to its color). It turns out that these characteristics are related to each other through an operation called the Fourier Transform, which makes it possible to switch from describing a wave in time to describing its spectrum in frequencies.

The fractional Fourier Transform is a generalization of the Fourier Transform that allows a partial transition from a description of a wave in time to a description in frequency. Intuitively, it can be understood as a rotation of a distribution (for example, the chronocyclic Wigner function) of the considered signal by a certain angle in the time-frequency domain.
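For reference, a standard textbook form of the fractional Fourier Transform of angle α is shown below; this expression is general background rather than something quoted from the paper. At α = 0 it reduces to the identity and at α = π/2 to the ordinary Fourier Transform, matching the partial transition described above.

```latex
% Standard textbook (Namias-type) definition, given here as background only:
\mathcal{F}_{\alpha}[f](u) =
  \sqrt{\frac{1 - i\cot\alpha}{2\pi}}
  \int_{-\infty}^{\infty}
  \exp\!\left( i\,\frac{u^{2}+t^{2}}{2}\cot\alpha \;-\; i\,u t\,\csc\alpha \right)
  f(t)\,dt
```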

Students in the laboratory presenting rotation of Schrödinger cat states. No actual cats were hurt during the project. Credit: S. Kurzyna and B. Niewelt, University of Warsaw

It turns out that transforms of this type are exceptionally useful in the design of special spectral-temporal filters to eliminate noise and enable the creation of algorithms that make it possible to use the quantum nature of light to distinguish pulses of different frequencies more precisely than traditional methods. This is especially important in spectroscopy, which helps study the chemical properties of matter, and telecommunications, which requires the transmission and processing of information with high precision and speed.

An ordinary glass lens is capable of focusing a monochromatic beam of light falling on it to almost a single point (focus). Changing the angle of incidence of light on the lens results in a change in the position of the focus. This allows us to convert angles of incidence into positions, obtaining the analogy of the Fourier Transform, in the space of directions and positions. A classical spectrometer based on a diffraction grating uses this effect to convert the wavelength information of light into positions, allowing us to distinguish between spectral lines.

Similarly to the glass lens, time and frequency lenses allow the conversion of a pulse's duration into its spectral distribution, or effectively, perform a Fourier transform in time and frequency space. The right selection of powers of such lenses makes it possible to perform a fractional Fourier Transform. In the case of optical pulses, the action of time and frequency lenses corresponds to applying quadratic phases to the signal.
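As a generic illustration of what applying quadratic phases means (again background notation, not taken from the paper), a time lens multiplies the pulse by a quadratic phase in time and a frequency lens does the same in the spectral domain:

```latex
% Background illustration: K is the time-lens chirp parameter and D the
% quadratic spectral-phase (dispersion-like) parameter of the frequency lens.
E_{\mathrm{out}}(t) = E_{\mathrm{in}}(t)\, e^{\,i K t^{2}/2},
\qquad
\tilde{E}_{\mathrm{out}}(\omega) = \tilde{E}_{\mathrm{in}}(\omega)\, e^{\,i D \omega^{2}/2}
```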

To process the signal, the researchers used a quantum memory, or more precisely a memory equipped with quantum light processing capabilities, based on a cloud of rubidium atoms placed in a magneto-optical trap. The atoms were cooled to a temperature of tens of millionths of a degree above absolute zero. The memory was placed in a changing magnetic field, allowing components of different frequencies to be stored in different parts of the cloud. The pulse was subjected to a time lens during writing and reading, and a frequency lens acted on it during storage.

The device developed at the UW allows the implementation of such lenses over a very wide range of parameters and in a programmable way. A double pulse is very prone to decoherence, hence it is often compared to the famous Schrödinger cat: a macroscopic superposition of being dead and alive, almost impossible to achieve experimentally. Still, the team was able to implement faithful operations on those fragile dual-pulse states.

The publication was the result of work in the Quantum Optical Devices Laboratory and Quantum Memory Laboratory in the Quantum Optical Technologies center, with the participation of two master's students, Stanisław Kurzyna and Marcin Jastrzębski; two undergraduate students, Bartosz Niewelt and Jan Nowosielski; Dr. Mateusz Mazelanik; and lab heads Dr. Michał Parniak and Prof. Wojciech Wasilewski. For the described results, Bartosz Niewelt was also awarded a presentation grant award during the recent DAMOP conference in Spokane, WA.

Before direct application in telecommunications, the method must first be mapped to other wavelengths and parameter ranges. Fractional Fourier transform, however, could prove crucial for optical receivers in state-of-the-art networks, including optical satellite links. A quantum light processor developed at the UW makes it possible to find and test such new protocols in an efficient way.

Reference: "Experimental Implementation of the Optical Fractional Fourier Transform in the Time-Frequency Domain" by Bartosz Niewelt, Marcin Jastrzębski, Stanisław Kurzyna, Jan Nowosielski, Wojciech Wasilewski, Mateusz Mazelanik and Michał Parniak, 12 June 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.130.240801

The Quantum Optical Technologies (MAB/2018/4) project is carried out within the International Research Agendas program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund.

Read the original:

Groundbreaking Quantum Leap: Physicists Turn Schrödinger's Cat on Its Head - SciTechDaily


Researchers make a significant step towards reliably processing quantum information – Phys.org

Using laser light, researchers have developed the most robust method currently known to control individual qubits made of the chemical element barium. The ability to reliably control a qubit is an important achievement for realizing future functional quantum computers.

The paper, "A guided light system for agile individual addressing of Ba+ qubits with 104 level intensity crosstalk," was published in Quantum Science and Technology.

This new method, developed at the University of Waterloo's Institute for Quantum Computing (IQC), uses a small glass waveguide to separate laser beams and focus them four microns apart, about four-hundredths of the width of a single human hair. The precision and extent to which each focused laser beam on its target qubit can be controlled in parallel is unmatched by previous research.

"Our design limits the amount of crosstalkthe amount of light falling on neighboring ionsto the very small relative intensity of 0.01%, which is among the best in the quantum community," said Dr. K. Rajibul Islam, a professor at IQC and Waterloo's Department of Physics and Astronomy. "Unlike previous methods to create agile controls over individual ions, the fiber-based modulators do not affect each other.

"This means we can talk to any ion without affecting its neighbors while also retaining the capability to control each individual ion to the maximum possible extent. This is the most flexible ion qubit control system with this high precision that we know of anywhere, in both academia and industry."

The researchers targeted barium ions, which are becoming increasingly popular in the field of trapped ion quantum computation. Barium ions have convenient energy states that can be used as the zero and one levels of a qubit and be manipulated with visible green light, unlike the higher energy ultraviolet light needed for other atom types for the same manipulation. This allows the researchers to use commercially available optical technologies that are not available for ultraviolet wavelengths.

The researchers created a waveguide chip that divides a single laser beam into 16 different channels of light. Each channel is then directed into individual optical fiber-based modulators which independently provide agile control over each laser beam's intensity, frequency, and phase. The laser beams are then focused down to their small spacing using a series of optical lenses similar to a telescope. The researchers confirmed each laser beam's focus and control by measuring them with precise camera sensors.

"This work is part of our effort at the University of Waterloo to build barium ion quantum processors using atomic systems," said Dr. Crystal Senko, Islam's co-principal investigator and a faculty member at IQC and Waterloo's Department of Physics and Astronomy. "We use ions because they are identical, nature-made qubits, so we don't need to fabricate them. Our task is to find ways to control them."

The new waveguide method demonstrates a simple and precise method of control, showing promise for manipulating ions to encode and process quantum data and for implementation in quantum simulation and computing.

More information: Ali Binai-Motlagh et al, A guided light system for agile individual addressing of Ba+ qubits with 10⁻⁴ level intensity crosstalk, Quantum Science and Technology (2023). DOI: 10.1088/2058-9565/ace6cb

Journal information: Quantum Science and Technology

More:

Researchers make a significant step towards reliably processing quantum information - Phys.org
