
SMT Prospects and Perspectives: AI Opportunities, Challenges, and Possibilities, Part 1 – I-Connect007

April 17, 2024

In this installment of my artificial intelligence (AI) series, I will touch on the key foundational technologies that propel and drive the development and deployment of AI, with special consideration of electronics packaging and assembly.

The objectives of the series:

Leverage AI as a virtual tool to facilitate an individual's job efficiency, effectiveness, and future job prospects, as well as the enterprise's business growth

Breakthroughs and Transformational Technologies

Since the discovery of the electron in 1897 by Joseph John Thomson, striking breakthroughs of the 20th and 21st centuries include:

Introduction of ChatGPT-4 by OpenAI in 2023

Based on these breakthrough technologies, many products and services have been developed that improve the quality of human life and spur global prosperity, and it all came from the discovery of that tiny unit called an electron.

Operating AI demands heavy-load hardware that processes algorithms, runs the models, and keeps data flowing. These bandwidth-hungry applications necessitate higher-speed data transfer, which opens a crucial role for photons: taking advantage of the speed of light delivers greater bandwidth with lower latency and power. Hardware components typically connect via copper interconnects, while the connections between racks in data centers often use optical fiber. CPUs and GPUs also use optical interconnects.

Both electrons and photons will play an increased role. AI will drive the need for near-packaged optics with high-performance PCB substrates (or an interposer) on the host board. Co-packaged optics, a single-package integration of electronic and photonic dies, or photonic integrated circuits (PICs) are expected to play a pivotal role.

AI Market and Hardware

To AI, high-performance hardware is indispensable, particularly computing chips. As AI becomes embedded in all sectors of industry and all aspects of daily life and business, the biggest winners so far are hardware manufacturers: 80% of AI servers use GPUs, and that share is expected to grow to 90%. In addition to GPUs, the required paired memory places high demand on high-bandwidth memory (HBM). The advent of generative AI further thrusts accelerated computing, which pairs GPUs with CPUs to deliver greater performance.

Although estimated forecasts of the future AI market vary, according to PwC,1 AI could contribute more than $15 trillion to the global economy by 2030. Most agree that the impact of AI adoption could be greater than the inventions of the internet, mobile broadband, and the smartphone combined.

AI Historical Milestones

AI is not a new term. John McCarthy coined the term "artificial intelligence" and held the first AI conference in 1956. Shakey the Robot, the first general-purpose mobile robot, was built in 1969.

In the succeeding decades, AI went through a roller-coaster ride of successes and setbacks until the 2010s, when key events, including the introduction of big data and machine learning (ML), created an age in which machines can collect and process volumes of information too cumbersome for a person to handle. Other pace-setting technologies, deep learning and neural networks, were introduced in 2010, with generative adversarial networks (GANs) following in 2014 and transformers in 2017.

The 2020s have been when AI finally gained traction, especially with the introduction of generative AI, the release of ChatGPT on Nov. 30, 2022, and the phenomenal ChatGPT-4 on March 14, 2023. It feels like AI has suddenly become a global phenomenon. The rest is history.

AI Bedrock Technologies

Generally speaking, AI is a digital technology that mimics the intellectual, analytical, and creative ability of humans, largely by absorbing and finding patterns in an enormous amount of information and data. AI covers a multitude of technologies, including machine learning (ML), deep learning (DL), neural networks (NN), natural language processing (NLP), and their closely aligned technologies. One view of the AI hierarchy is shown in Figure 1, exhibiting the interrelations and evolution of these underpinning technologies.

Now I'd like to briefly highlight each technology.

Machine Learning

Machine learning is a technique that collects and analyzes data, looks for patterns, and adjusts its actions accordingly to develop statistical mathematical models. The resulting algorithms allow software applications to predict outcomes without explicit programming and incorporate intelligence into a machine by learning automatically from the data. A learning algorithm then trains a model to generate predictions for new data or the test datasets.

There are three types of ML: supervised, unsupervised, and reinforcement learning.

In addition to these basic ML techniques, more advanced ML approaches continue to emerge.

ML understands patterns and can instantly see anomalies that fall outside those patterns, making it a valuable tool in myriad applications, ranging from fraud detection and cyber threat detection to manufacturing and supply chain operation.
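The train-then-predict loop described above can be made concrete with a deliberately simple sketch (the data and model here are illustrative, not drawn from the article): supervised learning of a straight-line model by ordinary least squares, followed by prediction on unseen input.

```python
# Minimal supervised learning: fit y = a*x + b by ordinary least squares,
# then predict the response for a new (test) input.

def fit_line(xs, ys):
    """Learn slope and intercept from training data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# Training data roughly following y = 2x + 1
train_x = [0, 1, 2, 3, 4]
train_y = [1.1, 2.9, 5.2, 7.0, 9.1]

model = fit_line(train_x, train_y)
print(predict(model, 10))  # close to 21 for this data
```

The same pattern, learn parameters from examples, then generalize, is what scales up to the fraud- and anomaly-detection applications mentioned above.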

Deep Learning

Deep learning is a subset of machine learning based on multi-layered neural networks that learn from vast amounts of data. It comprises a series of algorithms trained and run on deep neural networks that mimic the human brain to incorporate intelligence into a machine. Most deep learning methods use neural network architectures, so they are often referred to as deep neural networks. Software architecture (type, number, and organization of the layers) is built empirically following an intuition-based optimization process, with training data in the loop to tune DL model parameters. Training for DL software occurs atomically and with strong coupling across all layers of the DL software.

The increased accuracy of DL software requires more complex implementations in which the number of layers, their size (number of neurons), and the amount of data used for training increase enormously.
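To see what "layers that learn" means in code, here is a deliberately tiny sketch, assuming nothing from the article beyond the general idea: a one-hidden-layer tanh network trained by hand-derived gradient descent on a toy target. Production deep learning stacks many more layers and computes gradients automatically.

```python
import math

# Toy "deep" network: 1 input -> 3 tanh hidden units -> 1 linear output,
# trained by gradient descent to approximate y = x^2. All numbers here
# are illustrative.

w1 = [0.5, -0.3, 0.8]   # input -> hidden weights (deterministic, asymmetric)
b1 = [0.1, -0.1, 0.2]   # hidden biases
w2 = [0.4, 0.2, -0.5]   # hidden -> output weights
b2 = 0.0                # output bias

data = [(k / 10, (k / 10) ** 2) for k in range(-10, 11)]  # samples of y = x^2

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(3)]
    y = sum(w2[j] * h[j] for j in range(3)) + b2
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

def train_step(lr=0.05):
    """One pass of stochastic gradient descent with hand-derived gradients."""
    global b2
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t)                 # dL/dy for squared error
        for j in range(3):
            dh = dy * w2[j]              # dL/dh_j (read before updating w2)
            dpre = dh * (1 - h[j] ** 2)  # back through tanh
            w2[j] -= lr * dy * h[j]
            w1[j] -= lr * dpre * x
            b1[j] -= lr * dpre
        b2 -= lr * dy

initial = mean_loss()
for _ in range(300):
    train_step()
final = mean_loss()
print(initial, "->", final)  # loss shrinks as the layered model learns
```

Increasing the number of layers, neurons, and training samples in this loop is exactly the scaling the paragraph above describes.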

Generative AI

I tried ChatGPT to see how the bot explains generative AI:

Generative AI refers to a category of artificial intelligence (AI) that focuses on creating new and original content. It uses models and algorithms to generate data, such as text, images, audios, or even videos, that resemble human-created content. Generative AI models are trained on large datasets and can generate creative and coherent outputs based on the patterns and information that have been learned. They have applications in various fields, including art, language, music, and more.

A generative AI model, a mathematical representation implemented as an algorithm, can create something that didn't previously exist by processing a large amount of visual or textual data and then determining which things are most likely to appear near other things, using deep learning or neural networks. Programming work goes into creating algorithms that can recognize texts or prompts. The model creates output by assessing an enormous corpus of data, responding to prompts with something that falls within the realm of probability as determined by that corpus.
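The "what is likely to appear near what" idea can be illustrated at toy scale with a bigram model that counts which word follows which in a small corpus and then samples from those counts. The corpus below is invented for illustration; real generative models learn vastly richer statistics.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count next-word occurrences: a crude 'what appears near what' model."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Sample a short word sequence from the learned next-word counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed successor: stop generating
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = ("the model learns patterns and the model generates text "
          "and the text resembles the training data")
model = train_bigram(corpus)
print(generate(model, "the"))
```

Every generated word is one the model saw following its predecessor in training, which is the probabilistic core the quoted definition describes.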

Generative AI tools offer the ability to create essays, images, and music in response to simple prompts.

My next column will highlight the foundational technologies behind AI, including the large language model (LLM) and foundation model.

References

1. "PwC's Global Artificial Intelligence Study: Exploiting the AI Revolution," pwc.com.

This column originally appeared in the April 2024 issue of SMT007 Magazine.


Why Altcoins Were Struggling to Tread Water on Monday – Yahoo Finance

Is the recent cryptocurrency rout over yet? Probably not, but after booking losses toward the end of last week that were significant at times, the landscape looked a little better for altcoins.

On Monday, quite a few were posting comparatively modest losses, with some even inching cautiously into positive territory. In late afternoon trading, Chainlink (CRYPTO: LINK) was down only marginally, while The Sandbox's (CRYPTO: SAND) price was moving sideways. On the gainer side, VeChain (CRYPTO: VET) was up by 3.5%, and Litecoin (CRYPTO: LTC) posted a 0.5% gain.

Major geopolitical developments usually impact the financial markets to some degree, and cryptocurrency is no exception to this. After a scare that the ever-volatile Middle East dynamic would worsen with Iran's attack on Israel, as of late afternoon Monday, the situation seemed to be cooling off encouragingly.

A more direct source of cautious optimism was the apparent approval of spot crypto exchange-traded funds (ETFs) in Hong Kong, one of the most important financial markets in Asia. Asset managers there said the enclave's Securities and Futures Commission (SFC) gave its first nod to Bitcoin and Ethereum spot ETFs that day, although it was unclear how many or which ones were approved.

The move echoed the U.S. SEC's approval of such securities back in January, which lit quite a fire under the price not only of Bitcoin, but those of a great many altcoins. Crypto bulls were rightly encouraged that if the SEC is favorable to spot Bitcoin ETFs, approvals for altcoin ones are sure to follow.

This occurred during the week widely expected to witness the latest halving of Bitcoin. As the name implies, the halving will see the Bitcoin payouts for mining the cryptocurrency reduced by half (a measure that helps control the ultimately limited supply of the coin). History shows that Bitcoin's price tends to rise after a halving, so in recent weeks investors have piled in, anticipating similar gains.
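The supply-control arithmetic behind the halving is easy to verify: the block subsidy began at 50 BTC and halves every 210,000 blocks, so cumulative issuance converges on just under 21 million BTC. A quick sketch:

```python
# Bitcoin issuance: the subsidy starts at 50 BTC and halves every 210,000
# blocks. Working in satoshis (1 BTC = 100,000,000 sat) with integer
# halving, as the protocol does, total issuance converges on just under
# 21 million BTC.
BLOCKS_PER_ERA = 210_000
subsidy_sat = 50 * 100_000_000
total_sat = 0
while subsidy_sat > 0:
    total_sat += BLOCKS_PER_ERA * subsidy_sat
    subsidy_sat //= 2  # the "halving"

print(total_sat / 100_000_000)  # just under 21,000,000 BTC
```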

So, for the most part, investors were cautious as the trading week kicked into gear. We should bear in mind that on a year-to-date basis, many of the top cryptos have risen sharply in value, and in such situations, people tend to worry that they've soared too high.

Regardless, there is much interest in coins and tokens these days, so perhaps a renewed rally is in store. It would be worthwhile to keep an eye on those Hong Kong spot crypto ETFs; if interest in that market is anywhere near what the U.S. experienced, it could provide a nice driver pushing crypto prices upward again.


Before you buy stock in VeChain Thor, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now, and VeChain Thor wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than tripled the return of the S&P 500 since 2002*.

See the 10 stocks

*Stock Advisor returns as of April 15, 2024

Eric Volkman has positions in Bitcoin and Ethereum. The Motley Fool has positions in and recommends Bitcoin, Chainlink, and Ethereum. The Motley Fool has a disclosure policy.

Why Altcoins Were Struggling to Tread Water on Monday was originally published by The Motley Fool


The 3 Best Altcoins to Buy in April 2024 – InvestorPlace

The Bitcoin (BTC-USD) halving is finally here, and the outlook remains bullish for the cryptocurrency. Since the last halving in 2020, Bitcoin has surged by 650%. If these returns are replicated, the cryptocurrency could touch $435,000 before the 2028 halving. Of course, that's a long-term view. I believe Bitcoin will likely trade above $100,000 in the current bull market. Therefore, it's also a good time to buy some of the best altcoins.

Besides the halving event, there are two more catalysts for a Bitcoin rally. First, multiple rate cuts will probably happen in the next 12 to 18 months. Easy money policies are positive for risky asset classes. Bitcoin and altcoins can, therefore, surge higher.

Further, it's predicted that the number of crypto users will swell to one billion by 2030. With limited supply, Bitcoin will likely remain in an uptrend. At the same time, altcoins with a strong use case can be massive wealth creators. For now, let's discuss the best altcoins to buy for the next 18 months for multibagger returns.


Akash Network (AKT-USD) is among the best altcoins for massive wealth creation. It's worth noting that the Akash token has skyrocketed by 1,000% in the last 12 months. The rally has, however, been from depressed levels, and I expect the positive momentum to be sustained.

As an overview, Akash Network is among the early movers in decentralized cloud computing. Akash is built on a blockchain-based framework that eliminates dependence on centralized cloud providers. However, that's not the only advantage: Akash Network charges significantly lower fees for cloud services compared with centralized providers.

It's worth noting that the AKT token has a strong use case. It's the network's native currency and is integral to securing the network, executing transactions, and increasing user participation through staking. With the rising adoption of cryptocurrency, the decentralized world will likely get bigger. Akash is well-positioned to benefit and establish itself among the leading decentralized cloud service providers.


Zilliqa (ZIL-USD) has not participated in the altcoin rally. In the last 12 months, the ZIL coin has remained largely sideways. In my view, this is a golden opportunity to accumulate. Once the breakout happens, 5x to 10x returns are likely in the blink of an eye.

As an overview, Zilliqa is the world's first blockchain network to use the concept of sharding. In this technology, transactions are grouped into smaller batches and divided among miners for parallel transaction verification.

That translates into faster transaction speeds, and the Zilliqa network has significantly lower costs compared with Bitcoin or Ethereum (ETH-USD). Another problem that Zilliqa solves is scalability: transaction capacity scales as the network size grows.
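The sharding idea can be sketched at a conceptual level (this is a generic illustration, not Zilliqa's actual protocol): transactions are deterministically assigned to groups so that separate sets of miners can verify the groups in parallel.

```python
from hashlib import sha256

def shard_of(tx_id: str, num_shards: int) -> int:
    """Deterministically assign a transaction to a shard by hashing its ID."""
    digest = sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

def partition(tx_ids, num_shards):
    """Group transactions into shards; each group can be verified in parallel."""
    shards = [[] for _ in range(num_shards)]
    for tx in tx_ids:
        shards[shard_of(tx, num_shards)].append(tx)
    return shards

txs = [f"tx-{i}" for i in range(1000)]
shards = partition(txs, 4)
print([len(s) for s in shards])  # roughly even split across 4 shards
```

Because the assignment is a pure function of the transaction ID, every node agrees on which shard verifies which transaction, and throughput grows as shards are added.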

It's also worth noting that the ZIL coin offers an attractive APR of 10.3%, and currently about 29% of the circulating supply is staked. Users can, therefore, secure the network and earn a healthy APR on an undervalued coin.


KuCoin (KCS-USD) is another token that has traded sideways in the last 12 months. At current levels of $8.90, the KCS token looks attractive and poised for multibagger returns.

As an overview, KuCoin is among the largest centralized exchanges in the world in terms of 24-hour trading volumes. The biggest part of the rally for altcoins is due to the current bull market. As Bitcoin and altcoins trend higher, a significant increase in speculative activity is likely. That could benefit all major centralized and decentralized exchanges.

Specific to KuCoin, the exchange has more than 750 listed coins or tokens. Further, KuCoin has 27 million global users. So, the exchange is well-positioned to have healthy growth in the coming quarters.

It's worth noting that, similar to Coinbase (NASDAQ:COIN), the cryptocurrency exchange has a separate platform for institutional and VIP users. That is another segment likely to grow multi-fold in the next few years.

On the date of publication, Faisal Humayun did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Faisal Humayun is a senior research analyst with 12 years of industry experience in credit research, equity research, and financial modeling. Faisal has authored over 1,500 stock-specific articles, focusing on the technology, energy, and commodities sectors.


Open source observability for AWS Inferentia nodes within Amazon EKS clusters | Amazon Web Services – AWS Blog

Recent developments in machine learning (ML) have led to increasingly large models, some of which require hundreds of billions of parameters. Although they are more powerful, training and inference on those models require significant computational resources. Despite the availability of advanced distributed training libraries, it's common for training and inference jobs to need hundreds of accelerators (GPUs or purpose-built ML chips such as AWS Trainium and AWS Inferentia), and therefore tens or hundreds of instances.

In such distributed environments, observability of both instances and ML chips becomes key to model performance fine-tuning and cost optimization. Metrics allow teams to understand workload behavior, optimize resource allocation and utilization, diagnose anomalies, and increase overall infrastructure efficiency. For data scientists, ML chip utilization and saturation are also relevant for capacity planning.

This post walks you through the Open Source Observability pattern for AWS Inferentia, which shows you how to monitor the performance of ML chips, used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, with data plane nodes based on Amazon Elastic Compute Cloud (Amazon EC2) instances of type Inf1 and Inf2.

The pattern is part of the AWS CDK Observability Accelerator, a set of opinionated modules to help you set observability for Amazon EKS clusters. The AWS CDK Observability Accelerator is organized around patterns, which are reusable units for deploying multiple resources. The open source observability set of patterns instruments observability with Amazon Managed Grafana dashboards, an AWS Distro for OpenTelemetry collector to collect metrics, and Amazon Managed Service for Prometheus to store them.

The following diagram illustrates the solution architecture.

This solution deploys an Amazon EKS cluster with a node group that includes Inf1 instances.

The AMI type of the node group is AL2_x86_64_GPU, which uses the Amazon EKS optimized accelerated Amazon Linux AMI. In addition to the standard Amazon EKS-optimized AMI configuration, the accelerated AMI includes the NeuronX runtime.

To access the ML chips from Kubernetes, the pattern deploys the AWS Neuron device plugin.

Metrics are exposed to Amazon Managed Service for Prometheus by the neuron-monitor DaemonSet, which deploys a minimal container with the Neuron tools installed. Specifically, the neuron-monitor DaemonSet runs the neuron-monitor command piped into the neuron-monitor-prometheus.py companion script (both commands are part of the container):

The command uses the following components:

Data is visualized in Amazon Managed Grafana by the corresponding dashboard.
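Conceptually, the companion script's job is to translate the monitor's JSON output into the Prometheus text exposition format that the collector scrapes. The sketch below is not the real neuron-monitor-prometheus.py; the JSON fields and the metric name are hypothetical, purely to illustrate the translation step.

```python
import json

def to_prometheus(json_line: str) -> str:
    """Convert one (hypothetical) monitor JSON record into Prometheus
    text exposition format: metric_name{label="value"} number."""
    record = json.loads(json_line)
    lines = []
    for core in record["neuroncores"]:  # hypothetical field names
        lines.append(
            'neuroncore_utilization{{core="{id}"}} {util}'.format(
                id=core["id"], util=core["utilization"])
        )
    return "\n".join(lines)

# Hypothetical sample record, as if emitted by a monitoring tool:
sample = ('{"neuroncores": [{"id": 0, "utilization": 37.5}, '
          '{"id": 1, "utilization": 12.0}]}')
print(to_prometheus(sample))
```

A real exporter would additionally serve these lines over HTTP on a metrics endpoint for the collector to scrape.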

The rest of the setup to collect and visualize metrics with Amazon Managed Service for Prometheus and Amazon Managed Grafana is similar to that used in other open source based patterns, which are included in the AWS Observability Accelerator for CDK GitHub repository.

You need the following to complete the steps in this post:

Complete the following steps to set up your environment:

The following is our sample output:

COA_AMG_ENDPOINT_URL needs to include https://.

The secret will be accessed by the External Secrets add-on and made available as a native Kubernetes secret in the EKS cluster.

The first step to any AWS CDK deployment is bootstrapping the environment. You use the cdk bootstrap command in the AWS CDK CLI to prepare the environment (a combination of AWS account and AWS Region) with resources required by AWS CDK to perform deployments into that environment. AWS CDK bootstrapping is needed for each account and Region combination, so if you already bootstrapped AWS CDK in a Region, you don't need to repeat the bootstrapping process.

Complete the following steps to deploy the solution:

The actual settings for Grafana dashboard JSON files are expected to be specified in the AWS CDK context. You need to update context in the cdk.json file, located in the current directory. The location of the dashboard is specified by the fluxRepository.values.GRAFANA_NEURON_DASH_URL parameter, and neuronNodeGroup is used to set the instance type, number, and Amazon Elastic Block Store (Amazon EBS) size used for the nodes.

You can replace the Inf1 instance type with Inf2 and change the size as needed. To check availability in your selected Region, run the following command (amend Values as you see fit):

Complete the following steps to validate the solution:

The following screenshot shows our sample output.

The following is our expected output:

The following is our expected output:

The following screenshot shows our expected output.

The following screenshot shows our expected output.

Log in to your Amazon Managed Grafana workspace and navigate to the Dashboards panel. You should see a dashboard named Neuron / Monitor.

To see some interesting metrics on the Grafana dashboard, we apply the following manifest:

This is a sample workload that compiles the torchvision ResNet50 model and runs repetitive inference in a loop to generate telemetry data.

To verify the pod was successfully deployed, run the following code:

You should see a pod named pytorch-inference-resnet50.

After a few minutes, looking into the Neuron / Monitor dashboard, you should see the gathered metrics similar to the following screenshots.

Grafana Operator and Flux always work together to synchronize your dashboards with Git. If you delete your dashboards by accident, they will be re-provisioned automatically.

You can delete the whole AWS CDK stack with the following command:

In this post, we showed you how to introduce observability, with open source tooling, into an EKS cluster featuring a data plane running EC2 Inf1 instances. We started by selecting the Amazon EKS-optimized accelerated AMI for the data plane nodes, which includes the Neuron container runtime, providing access to AWS Inferentia and Trainium Neuron devices. Then, to expose the Neuron cores and devices to Kubernetes, we deployed the Neuron device plugin. The actual collection and mapping of telemetry data into Prometheus-compatible format was achieved via neuron-monitor and neuron-monitor-prometheus.py. Metrics were sourced from Amazon Managed Service for Prometheus and displayed on the Neuron dashboard of Amazon Managed Grafana.

We recommend that you explore additional observability patterns in the AWS Observability Accelerator for CDK GitHub repo. To learn more about Neuron, refer to the AWS Neuron Documentation.

Riccardo Freschi is a Sr. Solutions Architect at AWS, focusing on application modernization. He works closely with partners and customers to help them transform their IT landscapes in their journey to the AWS Cloud by refactoring existing applications and building new ones.


Bitcoin dominance hits 3-year high as BTC price dip pressures altcoins – Cointelegraph

Bitcoin (BTC) market cap dominance has hit its highest level in three years as altcoins feel renewed price pressure.

Data from Cointelegraph Markets Pro and TradingView shows Bitcoin's share of the total crypto market cap spiking to 56.3% on April 12.

BTC price action suffered into the weekend with a liquidation cascade bringing BTC/USD below $65,300.

At the same time, however, altcoins faced much worse conditions; data shows that many of the top 20 cryptocurrencies by market cap fell more than 15%.

In so doing, altcoins relinquished crypto market share to Bitcoin, and the recent highs mark the most Bitcoin-heavy crypto market since April 2021.

"I don't typically look at Bitcoin dominance, but the chart is impressive considering the amount of new altcoins birthed into the market every day," popular trader and social media commentator Bagsy wrote in a response on X.

Fellow trader Daan Crypto Trades was among those noting the difference in drawdown between Bitcoin and altcoins in recent days.

"Yes, the actual hit on $BTC was very minimal and the total downside also wasn't very relevant," he told X followers while discussing Bitcoin open interest.

Historically, Bitcoin bull markets tend to see a dominance breakout in their early stages, with altcoins then catching up once BTC/USD sees a prolonged consolidation period.


So far in 2024, altcoins, while performing well, have not witnessed such conditions for a meaningful length of time.

Forecasting what might come next, however, fellow trader Mikybull Crypto argued that change would soon come.

"Altcoins market cap is perfectly following the previous Alts season step," he wrote in part of an X post.

An accompanying chart compared Bitcoin and altcoin dominance, drawing comparisons with the end of 2020, the point at which BTC price action had just escaped its previous macro trading range below $20,000.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.


Brake Noise And Machine Learning (4 of 4) – The BRAKE Report

Article by: Antonio Rubio, Project Engineer, Braking Systems in Applus IDIADA

Review Part One | Review Part Two | Review Part Three

The field of artificial intelligence (AI) has made significant progress in recent years, with applications ranging from natural language processing to computer vision. Applus IDIADA's Brakes department has presented several studies on applying AI to the detection of brake noises. In this paper, Applus IDIADA presents its research in this area, focusing on the development of an AI model that predicts subjective ratings for squeal brake noises based on objective measurements collected through the instrumentation in a typical Brake Noise Durability programme. Subjective ratings are based on human opinions and can be challenging to quantify. Objective measurements, on the other hand, can be quantified directly and provide a more reliable basis for prediction.

The first part of the article introduced the data processing, whereas the second and third parts focused on the AI model creation and validation, respectively. This fourth part, on the other hand, summarizes the main results and draws the conclusions.

Other drivers' evaluations

Subjective ratings from two different highly skilled drivers (different from the reference driver selected for the trained model) were used. The noises and noise conditions should therefore be similar, but the drivers' evaluations differ. The dataset per rating used to assess the other drivers' evaluations is shown in Table 9.

Using different drivers for validation, we are validating at the same time:

Ideally, model prediction accuracy should be similar to the accuracy obtained when validating the model against the reference driver. Differences between the reference-driver accuracy and the accuracy on the other drivers' datasets can be attributed to differences in subjective criteria between the reference driver and the driver being evaluated.

It can be seen that there are more subjective ratings available in the dataset for high ratings than for low ratings.

Similar to the validation of the model for the reference driver, results for each driver are presented in terms of accuracy. Results can be checked in Table 10, and accuracy per driver/rating in Table 11.

Accuracy is calculated by comparing the subjective rating predictions from the model with the actual ratings of the drivers, where 100% accuracy means the model's prediction matches the driver's rating for all subjective ratings. In addition, the percentage of ratings not correctly assigned, with difference errors of 1, 2, and 3 rating points, is calculated.
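The accuracy and error-distance computation described above can be sketched as follows (the ratings below are invented for illustration, not the study's data):

```python
from collections import Counter

def rating_accuracy(predicted, actual):
    """Exact-match accuracy plus the distribution of absolute rating errors."""
    assert len(predicted) == len(actual)
    errors = Counter(abs(p - a) for p, a in zip(predicted, actual))
    n = len(predicted)
    accuracy = errors[0] / n
    # Share of predictions off by 1, 2, 3... rating points
    error_share = {err: count / n for err, count in errors.items() if err > 0}
    return accuracy, error_share

pred = [8, 7, 9, 6, 8, 7, 5, 9, 8, 7]  # model predictions (illustrative)
true = [8, 7, 8, 6, 8, 6, 5, 9, 7, 7]  # driver ratings (illustrative)
acc, errs = rating_accuracy(pred, true)
print(acc)   # 0.7: 7 of 10 predictions match the driver exactly
print(errs)  # {1: 0.3}: the remaining errors are all 1 rating point
```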

It can be seen that:

It can be seen that:

Summary results

Regarding reference-driver validation, close to 70% of predicted ratings match the reference driver's rating. Discrepancies between the model and the driver are mainly 1-point errors; discrepancies of more than 2 points are minimal. Accuracy for ratings 9, 8, and 7 is around 70%; for rating 6 or lower, it decreases to 50% or less.

Regarding the other drivers' evaluations, accuracy is around 50% for both. The same tendency as in the reference-driver results can be seen, with an increase in discrepancies of mainly 1 rating point. The decrease in accuracy can be explained by the other drivers' subjective criteria differing from the reference driver's.

Conclusion

The goal of the project is to replicate, with a model, the evaluation of brake noise annoyance performed by an expert driver. Data containing noise samples collected over several years of testing at Applus IDIADA from a reference driver, together with their corresponding subjective ratings, are provided for this purpose.

The data analysis revealed a feasible opportunity to clean and preprocess the dataset by removing variables that do not add value to the model. Outliers were removed from the dataset. The data were split into three parts: 70% of noise events for training, 20% for testing, and 10% for validation.

Two artificial intelligence models were trained on the dataset: a classification model and a regression model. The test-phase results show that both models achieve a good knowledge of the dataset. Based on the different trials, the final model combines the classification and regression models: a threshold determines when to rely on the classification model's prediction and when to prioritize the rounded output from the regression model.
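The combination rule can be sketched directly; the threshold value and the model outputs below are illustrative stand-ins, not the values used in the study:

```python
def combined_rating(clf_label, clf_confidence, reg_value, threshold=0.8):
    """Hybrid prediction: use the classifier's label when its confidence
    clears the threshold, otherwise fall back to the rounded regressor."""
    if clf_confidence >= threshold:
        return clf_label
    return round(reg_value)

# Confident classifier wins:
print(combined_rating(clf_label=8, clf_confidence=0.93, reg_value=6.7))  # 8
# Low confidence: fall back to the regression model, rounded:
print(combined_rating(clf_label=8, clf_confidence=0.41, reg_value=6.7))  # 7
```

The threshold trades the classifier's discrete confidence against the regressor's smoother output; tuning it is part of the model-selection trials described above.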

The model was validated by comparing its results with evaluations from the reference driver, using different vehicles under the conditions used for training. An accuracy of 68.5% was achieved, with discrepancies between model and driver being mainly 1 rating point.

In addition, ratings predicted for different drivers using the reference driver's model were compared. Accuracy decreased relative to the reference driver, which can be explained by the other drivers' differing subjective criteria.

The results of the study were promising: the model achieved a notable level of accuracy in predicting subjective ratings from objective measurements, indicating that its predictions were close to the actual subjective ratings. Indeed, during the models' training it can be seen that the characterization of the subjective criteria is learned by the models. The main discrepancies between model and driver are 1-point errors, which could be explained by some uncertainty in the subjective criteria of the reference driver. This uncertainty could stem from a variety of uncontrolled variables that can result in different subjective ratings for the same noise event. These differences appear mainly for low ratings (below 6). In addition, the dataset contained fewer ratings of 6 or below than above 6.

In conclusion, the development of an AI model for predicting subjective ratings from objective measurements is an important step toward understanding the relationship between subjective ratings and objective measurements for brake squeal noise. Predictions from the current model are based on objective measurements of 20 variables that together characterize the most important features of the noise, such as frequency, amplitude, duration, and corner source. Furthermore, the results demonstrate the potential for AI models to be deployed in the near-to-medium term on autonomous vehicles, providing more accurate subjective ratings based on objective data. Future work could involve expanding the model with additional variables or incorporating other machine learning techniques to further improve performance.

About Applus IDIADA

With over 25 years' experience and 2,450 engineers specializing in vehicle development, Applus IDIADA is a leading engineering company providing design, testing, engineering, and homologation services to the automotive industry worldwide.

Applus IDIADA is located in California and Michigan, with further presence in 25 other countries, mainly in Europe and Asia.

http://www.applusidiada.com

See the original post:
Brake Noise And Machine Learning (4 of 4) - The BRAKE Report

Read More..

Why Many Altcoins Were Swooning This Week – Yahoo Finance

Sooner or later, even the most beloved financial assets obey gravity no matter how high they've soared. This week we saw a decline in a clutch of very recently soaring altcoins, as a higher-than-expected inflation readout and profit-taking hit the cryptocurrency market.

Over the course of the five trading weekdays, according to data compiled by S&P Global Market Intelligence, as of late Friday afternoon both Fantom (CRYPTO: FTM) and Theta Network (CRYPTO: THETA) were trading down by nearly 6% week to date. Bittensor (CRYPTO: TAO) was faring worse, with a more than 8% slide over the period.

One big monster in the room over the past few days was inflation, which came in higher than many had anticipated -- this beast might take a while to tame, after all. On Wednesday, the government's Bureau of Labor Statistics reported that inflation in March had risen by 3.5% year over year, which was 0.3 percentage points higher than the February figure and notably above the estimates of many economists.

Suddenly, there's much less talk of the interest rate cuts Federal Reserve (Fed) officials were hoping to start enacting this year.

The prospect of our current, relatively high rates dragging on for longer than expected is sobering to crypto investors. Lower rates make boring-but-safe investments more attractive, as instruments like bonds pay higher interest and become more competitive with the risky stuff. Despite their rising popularity and the belief some hold that they're ideal stores of value, ultimately cryptocurrencies have to be considered high on the risk scale.

At times, it can take a trading day or two for discouraging news to impact the market. The following Friday saw a wide sell-off of many crypto assets. This includes the Pied Piper behind which every altcoin hops, Bitcoin. That day alone, Bitcoin's price was headed south as of very late afternoon trading at a near 5% clip. And when Bitcoin's having a downer, your favorite altcoin is probably headed south, too.

Some economists are speculating darkly that we're in for more inflation "surprises." What isn't helping at the moment is the apparently insatiable demand for housing; the prices in this category rose notably in March.

Where does this leave cryptocurrency? Coins and tokens might be in for a reckoning, and we shouldn't be surprised to see a period of correction as investors get used to the current situation. That might, however, provide a nice entry point for crypto bulls -- if so, we should keep an eye on altcoins, as the more volatile ones could see outsized price gains on a rally.



Eric Volkman has positions in Bitcoin. The Motley Fool has positions in and recommends Bitcoin. The Motley Fool recommends Theta Token. The Motley Fool has a disclosure policy.

Why Many Altcoins Were Swooning This Week was originally published by The Motley Fool

See the original post here:
Why Many Altcoins Were Swooning This Week - Yahoo Finance

Read More..

Artificial Intelligence Tool to Improve Heart Failure Care – UVA Health Newsroom

Heart failure occurs when the heart is unable to pump enough blood. Symptoms can include fatigue, weakness, swollen legs and feet and, ultimately, death.

UVA Health researchers have developed a powerful new risk assessment tool for predicting outcomes in heart failure patients. The researchers have made the tool publicly available for free to clinicians.

The new tool improves on existing risk assessment tools for heart failure by harnessing the power of machine learning (ML) and artificial intelligence (AI) to determine patient-specific risks of developing unfavorable outcomes with heart failure.

"Heart failure is a progressive condition that affects not only quality of life but quantity as well. All heart failure patients are not the same. Each patient is on a spectrum along the continuum of risk of suffering adverse outcomes," said researcher Sula Mazimba, MD, a heart failure expert. "Identifying the degree of risk for each patient promises to help clinicians tailor therapies to improve outcomes."

Heart failure occurs when the heart is unable to pump enough blood for the body's needs. This can lead to fatigue, weakness, swollen legs and feet and, ultimately, death. Heart failure is a progressive condition, so it is extremely important for clinicians to be able to identify patients at risk of adverse outcomes.

Further, heart failure is a growing problem. More than 6 million Americans already have heart failure, and that number is expected to increase to more than 8 million by 2030. The UVA researchers developed their new model, called CARNA, to improve care for these patients. (Finding new ways to improve care for patients across Virginia and beyond is a key component of UVA Health's first-ever 10-year strategic plan.)

The researchers developed their model using anonymized data drawn from thousands of patients enrolled in heart failure clinical trials previously funded by the National Institutes of Health's National Heart, Lung and Blood Institute. Putting the model to the test, they found it outperformed existing predictors for determining how a broad spectrum of patients would fare in areas such as the need for heart surgery or transplant, the risk of rehospitalization and the risk of death.

The researchers attribute the model's success to the use of ML/AI and the inclusion of hemodynamic clinical data, which describe how blood circulates through the heart, lungs and the rest of the body.

"This model presents a breakthrough because it ingests complex sets of data and can make decisions even among missing and conflicting factors," said researcher Josephine Lamp, of the University of Virginia School of Engineering's Department of Computer Science. "It is really exciting because the model intelligently presents and summarizes risk factors, reducing decision burden so clinicians can quickly make treatment decisions."
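One common way a clinical risk model can keep making decisions despite missing factors is to impute missing measurements before scoring. The sketch below is an assumption-laden illustration of that general idea, not CARNA's actual method; the patient data, the five-variable feature set, and the risk weights are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: 100 patients x 5 hemodynamic measurements,
# with some values missing (NaN), as is common in clinical data.
X = rng.normal(size=(100, 5))
X[rng.random(X.shape) < 0.2] = np.nan  # roughly 20% missing at random

# Simple median imputation: fill each missing value with that
# measurement's median across patients, so every patient can be scored.
medians = np.nanmedian(X, axis=0)
X_imputed = np.where(np.isnan(X), medians, X)

# Hypothetical linear risk score squashed into [0, 1] with a sigmoid.
weights = np.array([0.8, -0.5, 0.3, 0.6, -0.2])
risk = 1 / (1 + np.exp(-(X_imputed @ weights)))
print("highest-risk patient index:", int(np.argmax(risk)))
```

Real clinical models typically use more principled handling of missingness (multiple imputation, models that branch on missing indicators), but the pipeline shape, impute then score every patient, is the same.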

By using the model, doctors will be better equipped to personalize care to individual patients, helping them live longer, healthier lives, the researchers hope.

The collaborative research environment at the University of Virginia made this work possible by bringing together experts in heart failure, computer science, data science and statistics, said researcher Kenneth Bilchick, MD, a cardiologist at UVA Health. Multidisciplinary biomedical research that integrates talented computer scientists like Josephine Lamp with experts in clinical medicine will be critical to helping our patients benefit from AI in the coming years and decades.

The researchers have made their new tool available online for free at https://github.com/jozieLamp/CARNA.

In addition, they have published the results of their evaluation of CARNA in the American Heart Journal. The research team consisted of Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Nicholas Ashur, Bilchick, Khadijah Breathett, Younghoon Kwon, Song Li, Nishaki Mehta, Edward Rojas Pena, Lu Feng and Mazimba. The researchers have no financial interest in the work.

The project was based on one of the winning submissions to the National Heart, Lung and Blood Institute's Big Data Analysis Challenge: Creating New Paradigms for Heart Failure Research. The work was supported by the National Science Foundation Graduate Research Fellowship, grant 842490, and NHLBI grants R56HL159216, K01HL142848 and L30HL148881.

To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog.

See the original post here:
Artificial Intelligence Tool to Improve Heart Failure Care - UVA Health Newsroom

Read More..

Top Altcoins With High Recovery Potential To Buy This Week – Coinpedia Fintech News

Amidst the highly volatile and bearish crypto market, the crucial support levels for many altcoins are turning weak. Failing to absorb the overhead supply, many top performers have corrected 30% or more over the last few days.

However, despite such an immense supply wave igniting a domino effect, some top altcoins show high recovery potential. With a high likelihood of a bull run continuation in these coins, they could spearhead the next recovery rally in the crypto market.

So, let's take a closer look at their price analysis for a more confident approach.

ONDO (ONDO)

TradingView

With a sideways trend in motion, the 4H chart of the ONDO altcoin showcases resilience against the market-wide sell-off. However, the chances of recovery are improving with Bitcoin Halving inching closer and the market sentiment normalizing from fear.

Dominance at the 200 EMA in the 4H timeframe prolongs the consolidation move. Further, the bullish divergence in the daily RSI bolsters the chances of a bullish reversal.

A bounce-back in ONDO price could reclaim the psychological mark of $1 and lead to a new breakout rally. In such a case, the bull run could aim for the 1.618 Fibonacci trend-based retracement level at $1.32.
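Fibonacci levels like the 1.618 target mentioned above are computed by projecting standard ratios of a swing's price range from the swing low. The helper below is illustrative; the swing prices are made up, and real charting (as on TradingView) anchors the swing points to actual highs and lows.

```python
# Compute Fibonacci retracement/extension price levels from a swing
# low and swing high.  Each level is swing_low + ratio * (range).
def fib_levels(swing_low, swing_high,
               ratios=(0.236, 0.382, 0.5, 0.618, 0.786, 1.0, 1.618)):
    span = swing_high - swing_low
    return {r: round(swing_low + r * span, 4) for r in ratios}

# Hypothetical swing from $0.80 to $1.00; the 1.618 extension then
# sits above the $1 psychological mark.
levels = fib_levels(0.80, 1.00)
for ratio, price in levels.items():
    print(f"{ratio:>6}: ${price}")
```

Ratios above 1.0 are extensions (targets beyond the swing high); ratios below 1.0 mark potential retracement support inside the swing.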

Read More : Top 3 Undervalued Altcoins Ahead Of Bitcoin Halving

OKB (OKB)

TradingView

With a bullish higher low trend in motion, the OKB weekly price chart showcases a wedge formation. The altcoin takes constant support from the 50W EMA and teases a trend continuation for the overhead trendline breakout.

Currently, the OKB price stands at the baseline and the 50W EMA and projects a new upcycle entry opportunity. Further, the crucial support level coincides with the 23.60% Fibonacci trend-based retracement level. Thus, the solid underlying demand bolsters the chances of an uptrend.

If the uptrend breaks above the 78.60% Fibonacci level at $74.5, the bull run could hike the altcoin prices to $100.

Toncoin (TON)

Tradingview

The remarkable uptrend in the Toncoin price chart displays solid demand at the support trendline. Despite the recent correction visible in the 4H chart, the altcoin is shifting gears to prolong the uptrend with a new bounce back.

The TON price has increased by almost 10% in the last 16 hours and teases a rally beyond the $10 psychological barrier. With such immense demand and the high momentum prevailing uptrend, the bull run could reach the $15 mark in the upcoming altcoin season.

Also Check Out : Top Altcoins to Consider Amid a Fresh Rally Ahead Fueled by Institutional Demand

As the sell-off wave reaches exhaustion in the broader market, the altcoins are picking up pace for a new recovery rally. The above-mentioned altcoins have high potential due to the prevailing uptrend and the underlying demand. Hence, these altcoins could continue the uptrend and spearhead the next bull run if the broader market recovers with Bitcoin Halving.

Read the original post:
Top Altcoins With High Recovery Potential To Buy This Week - Coinpedia Fintech News

Read More..

Machine learning could help reveal undiscovered particles within data from the Large Hadron Collider – Newswise

Newswise Scientists used a neural network, a type of brain-inspired machine learning algorithm, to sift through large volumes of particle collision data.

For over two decades, the ATLAS particle detector has recorded the highest-energy particle collisions in the world within the Large Hadron Collider (LHC) located at CERN, the European Organization for Nuclear Research. Beams of protons are accelerated around the LHC at close to the speed of light, and upon their collision at ATLAS, they produce a cascade of new particles, resulting in over a billion particle interactions per second.

Particle physicists are tasked with mining this massive and growing store of collision data for evidence of undiscovered particles. In particular, they're searching for particles not included in the Standard Model of particle physics, our current understanding of the universe's makeup, which scientists suspect is incomplete.

As part of the ATLAS collaboration, scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory and their colleagues recently used a machine learning approach called anomaly detection to analyze large volumes of ATLAS data. The method has never before been applied to data from a collider experiment. It has the potential to improve the efficiency of the collaboration's search for something new. The collaboration involves scientists from 172 research organizations.

The team leveraged a brain-inspired type of machine learning algorithm called a neural network to search the data for abnormal features, or anomalies. The technique breaks from more traditional methods of searching for new physics. It is independent of and therefore unconstrained by the preconceptions of scientists.


Traditionally, ATLAS scientists have relied on theoretical models to help guide their experiment and analysis in the directions most promising for discovery. This often involves performing complex computer simulations to determine how certain aspects of collision data would look according to the Standard Model. Scientists compare these Standard Model predictions to real data from ATLAS. They also compare them to predictions made by new physics models, like those attempting to explain dark matter and other phenomena unaccounted for by the Standard Model.

But so far, no deviations from the Standard Model have been observed in the billions of billions of collisions recorded at ATLAS. And since the discovery of the Higgs boson in 2012, the ATLAS experiment has yet to find any new particles.

"Anomaly detection is a very different way of approaching this search," said Sergei Chekanov, a physicist in Argonne's High Energy Physics division and a lead author on the study. "Rather than looking for very specific deviations, the goal is to find unusual signatures in the data that are completely unexplored and that may look different from what our theories predict."

To perform this type of analysis, the scientists represented each particle interaction in the data as an image that resembles a QR code. Then, the team trained their neural network by exposing it to 1% of the images.

The network consists of around 2 million interconnected nodes, which are analogous to neurons in the brain. Without human guidance or intervention, it identified and remembered correlations between pixels in the images that characterize Standard Model interactions. In other words, it learned to recognize typical events that fit within Standard Model predictions.

After training, the scientists fed the other 99% of the images through the neural network to detect any anomalies. When given an image as input, the neural network is tasked with recreating the image using its understanding of the data as a whole.

"If the neural network encounters something new or unusual, it gets confused and has a hard time reconstructing the image," said Chekanov. "If there is a large difference between the input image and the output it produces, it lets us know that there might be something interesting to explore in that direction."

Using computational resources at Argonne's Laboratory Computing Resource Center, the neural network analyzed around 160 million events within LHC Run-2 data collected from 2015 to 2018.

Although the neural network didn't find any glaring signs of new physics in this data set, it did spot one anomaly that the scientists think is worth further study. An exotic particle decay at an energy of around 4.8 teraelectronvolts results in a muon (a type of fundamental particle) and a jet of other particles in a way that does not fit with the neural network's understanding of Standard Model interactions.

"We'll have to do more investigation," said Chekanov. "It is likely a statistical fluctuation, but there's a chance this decay could indicate the existence of an undiscovered particle."

The team plans to apply this technique to data collected during the LHC Run-3 period, which began in 2022. ATLAS scientists will continue to explore the potential of machine learning and anomaly detection as tools for charting unknown territory in particle physics.

The results of the study were published in Physical Review Letters. This work was funded in part by the DOE Office of Science's Office of High Energy Physics and the National Science Foundation.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Read more:
Machine learning could help reveal undiscovered particles within data from the Large Hadron Collider - Newswise

Read More..