
Synthetic Lagrangian turbulence by generative diffusion models – Nature.com



Only 6 altcoins in the top 50 have outperformed Bitcoin this year – Cointelegraph

Only six altcoins among the top 50 tokens by market capitalization have managed to outperform Bitcoin (BTC) so far this year, as Bitcoin dominance reached a three-year high over the weekend.

The memecoin Dogecoin (DOGE) stands as the best-performing altcoin in the top 50, having posted year-to-date gains of just over 77%, climbing from $0.09 on Jan. 1 to $0.15 at the time of publication, per TradingView data.

Included in the remaining outperformers are fellow memecoin Shiba Inu (SHIB), Bitcoin smart contract network Stacks (STX), Binance's BNB (BNB), Ethereum layer-2 network Mantle (MNT) and GPU-sharing blockchain network Render (RNDR).

Bitcoin has grown from a price of $44,100 on Jan. 1 to $65,000 at the time of publication, a year-to-date gain of 54%.

Many have pegged the price rise to consistent institutional inflows into the 10 United States-traded spot Bitcoin exchange-traded funds (ETFs) approved in January this year, generating more than $12 billion in cumulative net inflows, per Farside Investors data.

Notably, Bitcoin dominance pushed to a new three-year high of 56.5% on April 13, as the cryptocurrency bounced back sharply from a marketwide sell-off sparked by escalating geopolitical tensions in the Middle East.

The Bitcoin dominance metric refers to the ratio of Bitcoin's market cap to the combined market cap of all cryptocurrencies.
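
As simple arithmetic, this works out as follows (a quick sketch; the figures are illustrative, not live market data):

```python
btc_market_cap = 1.28e12    # Bitcoin market cap in USD (illustrative)
total_market_cap = 2.27e12  # combined crypto market cap in USD (illustrative)

# Dominance is Bitcoin's share of the total crypto market cap.
dominance = btc_market_cap / total_market_cap * 100
print(f"Bitcoin dominance: {dominance:.1f}%")  # ~56.4%
```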

While Bitcoin recovered ground in the following days, the majority of smaller altcoins failed to find their footing and tumbled significantly in price.

Alternative layer-1 network Aptos (APT) and decentralized crypto exchange Uniswap (UNI) led the decline among the top 50 tokens by market cap, posting losses of 35% and 31%, respectively, in the last seven days.

Related: Bitcoin's 'normal' drop leads to $256M longs liquidated - Analysts

In an April 14 investment note viewed by Cointelegraph, IG Markets analyst Tony Sycamore said Bitcoin appears to be on track for its fourth weekly decline, with expectations of no further U.S. Federal Reserve rate cuts weighing on crypto investing sentiment.

Despite the current negative-leaning sentiment toward risk assets, Sycamore predicted that Bitcoin would gradually climb to around $80,000 in the coming months, depending on whether it can hold above its key support mark.

"Providing Bitcoin remains above the [$60,000–$58,000] support zone, we expect the uptrend to resume towards $80,000," Sycamore wrote.

Magazine: 5 dangers to beware when apeing into Solana memecoins


SMT Prospects and Perspectives: AI Opportunities, Challenges, and Possibilities, Part 1 – I-Connect007

April 17, 2024

In this installment of my artificial intelligence (AI) series, I will touch on the key foundational technologies that propel and drive the development and deployment of AI, with special consideration of electronics packaging and assembly.

The objectives of the series:

Leverage AI as a virtual tool to facilitate an individual's job efficiency and effectiveness and future job prospects, as well as enterprise business growth

Breakthroughs and Transformational Technologies

Since the discovery of the electron in 1897 by Joseph John Thomson, striking breakthroughs of the 20th and 21st centuries include:

Introduction of AI: ChatGPT-4 by OpenAI in 2023

Based on these breakthrough technologies, many products and services have been developed that improve the quality of human life and spur global prosperity, and it all came from the discovery of that tiny unit called an electron.

Operating AI demands the use of heavy-load hardware that processes algorithms, runs the models, and keeps data flowing. These bandwidth-hungry applications necessitate higher-speed data transfer, which opens a crucial role for photons by taking advantage of the speed of light to deliver greater bandwidth and lower latency and power. Hardware components typically will connect via copper interconnects, while the connections between the racks in data centers often use optical fiber. CPUs and GPUs also use optical interconnects for optical signals.

Both electrons and photons will play an increased role. AI will drive the need for near-packaged optics with high-performance PCB substrates (or an interposer) on the host board. Co-packaged optics, a single-package integration of electronic and photonic dies, or photonic integrated circuits (PICs) are expected to play a pivotal role.

AI Market and Hardware

For AI, high-performance hardware is indispensable, particularly computing chips. As AI becomes embedded in all sectors of industry and all aspects of daily life and business, the biggest winners so far are hardware manufacturers: 80% of AI servers use GPUs, and that share is expected to grow to 90%. In addition to GPUs, the required paired memory puts high demand on high-bandwidth memory (HBM). The advent of generative AI further propels accelerated computing, which uses GPUs along with CPUs to meet augmented performance requirements.

Although estimated forecasts of the future AI market vary, according to PwC,1 AI could contribute more than $15 trillion to the global economy by 2030. Most agree that the impact of AI adoption could be greater than the inventions of the internet, mobile broadband, and the smartphone combined.

AI Historical Milestones

AI is not a new term. John McCarthy coined the term "artificial intelligence" and held the first AI conference in 1956. Shakey the Robot, the first general-purpose mobile robot, was built in 1969.

In the succeeding decades, AI went through a roller coaster ride of successes and setbacks until the 2010s, when key events, including the introduction of big data and machine learning (ML), created an age in which machines have the capacity to collect and process huge sums of information too cumbersome for a person to process. Other pace-setting technologies, deep learning and neural networks, were introduced in 2010, with GANs in 2014 and transformers in 2017.

The 2020s have been when AI finally gained traction, especially with the introduction of generative AI, the release of ChatGPT on Nov. 30, 2022, and the phenomenal ChatGPT-4 on March 14, 2023. It feels like AI has suddenly become a global phenomenon. The rest is history.

AI Bedrock Technologies

Generally speaking, AI is a digital technology that mimics the intellectual, analytical, and creative abilities of humans, largely by absorbing and finding patterns in an enormous amount of information and data. AI covers a multitude of technologies, including machine learning (ML), deep learning (DL), neural networks (NN), natural language processing (NLP), and their closely aligned technologies. One view of the AI hierarchy is shown in Figure 1, exhibiting the interrelations and evolution of these underpinning technologies.

Now I'd like to briefly highlight each technology:

Machine Learning

Machine learning is a technique that collects and analyzes data, looks for patterns, and adjusts its actions accordingly to develop statistical mathematical models. The resulting algorithms allow software applications to predict outcomes without explicit programming and incorporate intelligence into a machine by automatically learning from the data. A learning algorithm then trains a model to generate a prediction for the response to new data or the test datasets.

There are three types of ML: supervised, unsupervised, and reinforcement.

In addition to these basic ML techniques, more advanced ML approaches continue to emerge.

ML understands patterns and can instantly see anomalies that fall outside those patterns, making it a valuable tool in myriad applications, ranging from fraud detection and cyber threat detection to manufacturing and supply chain operation.
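
As a minimal sketch of the supervised workflow described above (the column names no specific library; scikit-learn and a synthetic dataset are assumptions here):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Collect" data: a synthetic dataset stands in for real measurements.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a model on labeled data (supervised learning) ...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ... then generate predictions for the response to the unseen test dataset.
print("Test accuracy:", model.score(X_test, y_test))
```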

Deep Learning

Deep learning is a subset of machine learning based on multi-layered neural networks that learn from vast amounts of data. It comprises a series of algorithms trained and run on deep neural networks that mimic the human brain to incorporate intelligence into a machine. Most deep learning methods use neural network architectures, so they are often referred to as deep neural networks. Software architecture (type, number, and organization of the layers) is built empirically following an intuition-based optimization process, with training data in the loop to tune DL model parameters. Training for DL software occurs atomically and with strong coupling across all layers of the DL software.

The increased accuracy of DL software requires more complex implementations in which the number of layers, their size (number of neurons), and the amount of data used for training increase enormously.
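
A minimal sketch of that layered structure (PyTorch is an assumption; the column names no framework):

```python
import torch.nn as nn

# Three stacked layers of neurons; "more complex implementations" means
# increasing the number of layers, their size, and the training data.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),  # input -> first hidden layer
    nn.Linear(64, 64), nn.ReLU(),  # second hidden layer
    nn.Linear(64, 1),              # output layer
)
print(model)
```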

Generative AI

I tried ChatGPT to see how the bot explains generative AI:

Generative AI refers to a category of artificial intelligence (AI) that focuses on creating new and original content. It uses models and algorithms to generate data, such as text, images, audios, or even videos, that resemble human-created content. Generative AI models are trained on large datasets and can generate creative and coherent outputs based on the patterns and information that have been learned. They have applications in various fields, including art, language, music, and more.

A generative AI model, in a mathematical representation implemented as an algorithm, can create something that didn't previously exist by processing a large amount of visual or textual data and then determining what things are most likely to appear near other things using deep learning or neural networks. Programming work goes into creating algorithms that can recognize texts or prompts. It creates output by assessing an enormous corpus of data, responding to prompts with something that falls within the realm of probability as determined by that corpus of data.
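
As a toy illustration of output that "falls within the realm of probability" (the vocabulary and probabilities below are invented; a real model learns them from its training corpus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy next-token distribution; in a real generative model, a trained
# neural network produces these probabilities.
vocab = ["the", "cat", "sat", "mat", "ran"]
probs = np.array([0.05, 0.15, 0.35, 0.25, 0.20])

# Sample from the distribution: likely output, not a fixed answer.
print(rng.choice(vocab, p=probs))
```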

Generative AI tools offer the ability to create essays, images, and music in response to simple prompts.

My next column will highlight the foundational technologies behind AI, including the large language model (LLM) and foundation model.

References

1. PwC's Global Artificial Intelligence Study: Exploiting the AI Revolution, pwc.com.

This column originally appeared in the April 2024 issue of SMT007 Magazine.


Why Altcoins Were Struggling to Tread Water on Monday – Yahoo Finance

Is the recent cryptocurrency rout over yet? Probably not, but after booking losses toward the end of last week that were significant at times, the landscape looked a little better for altcoins.

On Monday, quite a few were posting comparatively modest losses, with some even inching cautiously into positive territory. In late afternoon trading, Chainlink (CRYPTO: LINK) was down only marginally, while The Sandbox's (CRYPTO: SAND) price was moving sideways. On the gainer side, VeChain (CRYPTO: VET) was up by 3.5%, and Litecoin (CRYPTO: LTC) posted a 0.5% gain.

Major geopolitical developments usually impact the financial markets to some degree, and cryptocurrency is no exception to this. After a scare that the ever-volatile Middle East dynamic would worsen with Iran's attack on Israel, as of late afternoon Monday, the situation seemed to be cooling off encouragingly.

A more direct source of cautious optimism was the apparent approval of spot crypto exchange-traded funds (ETFs) in Hong Kong, one of the most important financial markets in Asia. Asset managers there said the enclave's Securities and Futures Commission (SFC) gave its first nod to Bitcoin and Ethereum spot ETFs that day, although it was unclear how many or which ones were approved.

The move echoed the U.S. SEC's approval of such securities back in January, which lit quite a fire under the price not only of Bitcoin, but those of a great many altcoins. Crypto bulls were rightly encouraged that if the SEC is favorable to spot Bitcoin ETFs, approvals for altcoin ones are sure to follow.

This occurred on the week widely expected to witness the latest halving of Bitcoin. As the name implies, halving will see the Bitcoin payouts for mining the cryptocurrency reduced by half (a measure that helps control the ultimately limited supply of the coin). History shows that Bitcoin's price tends to rise after halving, so in recent weeks investors have piled into it on anticipation of similar gains.
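
The schedule itself is simple arithmetic: the block subsidy started at 50 BTC and is cut in half every 210,000 blocks, as this sketch shows:

```python
INITIAL_SUBSIDY = 50.0      # BTC per block at launch
BLOCKS_PER_EPOCH = 210_000  # blocks between halvings

def block_subsidy(height: int) -> float:
    """Mining reward (in BTC) at a given block height."""
    return INITIAL_SUBSIDY / 2 ** (height // BLOCKS_PER_EPOCH)

# The April 2024 halving at block 840,000 cut the reward from 6.25 to 3.125 BTC.
print(block_subsidy(839_999), "->", block_subsidy(840_000))
```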

So, for the most part, investors were cautious as the trading week kicked into gear. We should bear in mind that on a year-to-date basis, many of the top cryptos have risen sharply in value, and in such situations, people tend to worry that they've soared too high.

Regardless, there is much interest in coins and tokens these days, so perhaps a renewed rally is in store. It would be worthwhile to keep an eye on those Hong Kong spot crypto ETFs; if interest in that market is anywhere near what the U.S. experienced, it could provide a nice driver pushing crypto prices upward again.


Before you buy stock in VeChain Thor, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now, and VeChain Thor wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than tripled the return of the S&P 500 since 2002*.

See the 10 stocks

*Stock Advisor returns as of April 15, 2024

Eric Volkman has positions in Bitcoin and Ethereum. The Motley Fool has positions in and recommends Bitcoin, Chainlink, and Ethereum. The Motley Fool has a disclosure policy.

Why Altcoins Were Struggling to Tread Water on Monday was originally published by The Motley Fool


How Will Bitcoin Halving Affect ETH & Altcoin Prices? – Techopedia


The 3 Best Altcoins to Buy in April 2024 – InvestorPlace

The Bitcoin (BTC-USD) halving is finally here, and the outlook remains bullish for the cryptocurrency. Since the last halving in 2020, Bitcoin has surged by 650%. If these returns are replicated, the cryptocurrency could touch $435,000 before the 2028 halving. Of course, that's a long-term view. I believe Bitcoin will likely trade above $100,000 in the current bull market. Therefore, it's also a good time to buy some of the best altcoins.

Besides the halving event, there are two more catalysts for a Bitcoin rally. First, multiple rate cuts will probably happen in the next 12 to 18 months. Easy money policies are positive for risky asset classes. Bitcoin and altcoins can, therefore, surge higher.

Further, it's predicted that the number of crypto users will swell to one billion by 2030. With limited supply, Bitcoin will likely remain in an uptrend. At the same time, altcoins with a strong use case can be massive wealth creators. For now, let's discuss the best altcoins to buy for the next 18 months for multibagger returns.


Akash Network (AKT-USD) is among the best altcoins for massive wealth creation. It's worth noting that the Akash token has skyrocketed by 1,000% in the last 12 months. The rally has, however, been from depressed levels, and I expect the positive momentum to sustain.

As an overview, Akash Network is among the early movers in decentralized cloud computing. Akash is built on a blockchain-based framework that eliminates the dependence on centralized cloud providers. However, that's not the only advantage. Akash Network has a significantly lower fee for cloud services as compared to centralized providers.

It's worth noting that the AKT token has a strong use case. It is the network's native currency and is integral to securing the network, executing transactions and increasing user participation through staking. With the rising adoption of cryptocurrency, the decentralized world will likely get bigger. Akash is well-positioned to benefit and establish itself among the leading decentralized cloud service providers.


Zilliqa (ZIL-USD) has not participated in the altcoin rally. In the last 12 months, the ZIL coin has remained largely sideways. In my view, this is a golden opportunity to accumulate. Once the breakout happens, 5x to 10x returns are likely in the blink of an eye.

As an overview, Zilliqa is the world's first blockchain network to use the concept of sharding. In this technology, transactions are grouped into smaller batches and divided among miners for parallel transaction verification.

That translates into faster transaction speeds, and the Zilliqa network has a significantly lower cost when compared to Bitcoin or Ethereum (ETH-USD). Another problem that Zilliqa solves is scalability: transaction capacity scales as the network size grows.

It's also worth noting that the ZIL coin offers an attractive APR of 10.3%, and currently about 29% of the circulating supply is staked. Users can, therefore, secure the network and earn a healthy APR on an undervalued coin.


KuCoin (KCS-USD) is another token that has remained sideways in the last 12 months. At the current level of $8.90, the KCS token looks attractive and poised for multibagger returns.

As an overview, KuCoin is among the largest centralized exchanges in the world in terms of 24-hour trading volumes. The biggest part of the rally for altcoins is due to the current bull market. As Bitcoin and altcoins trend higher, a significant increase in speculative activity is likely. That could benefit all major centralized and decentralized exchanges.

Specific to KuCoin, the exchange has more than 750 listed coins or tokens. Further, KuCoin has 27 million global users. So, the exchange is well-positioned to have healthy growth in the coming quarters.

It's worth noting that, similar to Coinbase (NASDAQ:COIN), the cryptocurrency exchange has a separate platform for institutional and VIP users. That is another segment likely to grow multi-fold in the next few years.

On the date of publication, Faisal Humayun did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Faisal Humayun is a senior research analyst with 12 years of industry experience in the field of credit research, equity research and financial modeling. Faisal has authored over 1,500 stock specific articles with focus on the technology, energy and commodities sector.


Open source observability for AWS Inferentia nodes within Amazon EKS clusters | Amazon Web Services – AWS Blog

Recent developments in machine learning (ML) have led to increasingly large models, some of which require hundreds of billions of parameters. Although they are more powerful, training and inference on those models require significant computational resources. Despite the availability of advanced distributed training libraries, it's common for training and inference jobs to need hundreds of accelerators (GPUs or purpose-built ML chips such as AWS Trainium and AWS Inferentia), and therefore tens or hundreds of instances.

In such distributed environments, observability of both instances and ML chips becomes key to model performance fine-tuning and cost optimization. Metrics allow teams to understand workload behavior and optimize resource allocation and utilization, diagnose anomalies, and increase overall infrastructure efficiency. For data scientists, ML chip utilization and saturation are also relevant for capacity planning.

This post walks you through the Open Source Observability pattern for AWS Inferentia, which shows you how to monitor the performance of ML chips, used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, with data plane nodes based on Amazon Elastic Compute Cloud (Amazon EC2) instances of type Inf1 and Inf2.

The pattern is part of the AWS CDK Observability Accelerator, a set of opinionated modules to help you set observability for Amazon EKS clusters. The AWS CDK Observability Accelerator is organized around patterns, which are reusable units for deploying multiple resources. The open source observability set of patterns instruments observability with Amazon Managed Grafana dashboards, an AWS Distro for OpenTelemetry collector to collect metrics, and Amazon Managed Service for Prometheus to store them.

The following diagram illustrates the solution architecture.

This solution deploys an Amazon EKS cluster with a node group that includes Inf1 instances.

The AMI type of the node group is AL2_x86_64_GPU, which uses the Amazon EKS optimized accelerated Amazon Linux AMI. In addition to the standard Amazon EKS-optimized AMI configuration, the accelerated AMI includes the NeuronX runtime.

To access the ML chips from Kubernetes, the pattern deploys the AWS Neuron device plugin.

Metrics are exposed to Amazon Managed Service for Prometheus by the neuron-monitor DaemonSet, which deploys a minimal container, with the Neuron tools installed. Specifically, the neuron-monitor DaemonSet runs the neuron-monitor command piped into the neuron-monitor-prometheus.py companion script (both commands are part of the container):
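
In rough form, the pipeline looks like this (the exact flags, including the port value, are assumptions):

```sh
# neuron-monitor emits Neuron runtime metrics as JSON; the companion script
# converts them to Prometheus-compatible format and serves them over HTTP.
neuron-monitor | neuron-monitor-prometheus.py --port 8000
```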

The command combines two components: neuron-monitor, which collects metrics from the Neuron runtime, and neuron-monitor-prometheus.py, which maps and exposes them in a Prometheus-compatible format.

Data is visualized in Amazon Managed Grafana by the corresponding dashboard.

The rest of the setup to collect and visualize metrics with Amazon Managed Service for Prometheus and Amazon Managed Grafana is similar to that used in other open source based patterns, which are included in the AWS Observability Accelerator for CDK GitHub repository.

You need the following to complete the steps in this post:

Complete the following steps to set up your environment:


COA_AMG_ENDPOINT_URL needs to include https://.

The secret will be accessed by the External Secrets add-on and made available as a native Kubernetes secret in the EKS cluster.

The first step to any AWS CDK deployment is bootstrapping the environment. You use the cdk bootstrap command in the AWS CDK CLI to prepare the environment (a combination of AWS account and AWS Region) with resources required by AWS CDK to perform deployments into that environment. AWS CDK bootstrapping is needed for each account and Region combination, so if you already bootstrapped AWS CDK in a Region, you don't need to repeat the bootstrapping process.
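
For example (with placeholders for your own account ID and Region):

```sh
cdk bootstrap aws://<ACCOUNT-ID>/<REGION>
```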

Complete the following steps to deploy the solution:

The actual settings for Grafana dashboard JSON files are expected to be specified in the AWS CDK context. You need to update context in the cdk.json file, located in the current directory. The location of the dashboard is specified by the fluxRepository.values.GRAFANA_NEURON_DASH_URL parameter, and neuronNodeGroup is used to set the instance type, number, and Amazon Elastic Block Store (Amazon EBS) size used for the nodes.
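
A sketch of the relevant cdk.json context (the nesting follows the parameter names mentioned above, but the individual neuronNodeGroup keys are assumptions; check the repository for the exact schema):

```json
{
  "context": {
    "fluxRepository": {
      "values": {
        "GRAFANA_NEURON_DASH_URL": "<URL of the Neuron dashboard JSON>"
      }
    },
    "neuronNodeGroup": {
      "instanceClass": "inf1",
      "instanceSize": "2xlarge",
      "desiredSize": 1,
      "ebsSize": 512
    }
  }
}
```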

You can replace the Inf1 instance type with Inf2 and change the size as needed. To check availability in your selected Region, run the following command (amend Values as you see fit):
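
A representative form of that query (amend the Values list and Region as needed):

```sh
aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=inf1.2xlarge,inf2.xlarge \
  --region "$AWS_REGION"
```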

Complete the following steps to validate the solution:


Log in to your Amazon Managed Grafana workspace and navigate to the Dashboards panel. You should see a dashboard named Neuron / Monitor.

To see some interesting metrics on the Grafana dashboard, we apply the following manifest:
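
A representative sketch of such a manifest (the image URI and script name are placeholders; aws.amazon.com/neuron is the resource name the Neuron device plugin exposes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pytorch-inference-resnet50
spec:
  restartPolicy: Never
  containers:
    - name: resnet50
      image: <account>.dkr.ecr.<region>.amazonaws.com/pytorch-neuron:latest  # placeholder
      command: ["python", "inference_resnet50.py"]  # compiles ResNet50, then loops inference
      resources:
        limits:
          aws.amazon.com/neuron: 1  # request one Inferentia Neuron device
```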

This is a sample workload that compiles the torchvision ResNet50 model and runs repetitive inference in a loop to generate telemetry data.

To verify the pod was successfully deployed, run the following code:
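
For example, the standard pod listing:

```sh
kubectl get pods
```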

You should see a pod named pytorch-inference-resnet50.

After a few minutes, the Neuron / Monitor dashboard should display the gathered metrics.

Grafana Operator and Flux always work together to synchronize your dashboards with Git. If you delete your dashboards by accident, they will be re-provisioned automatically.

You can delete the whole AWS CDK stack with the following command:
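
For example, the standard CDK teardown (run from the project directory):

```sh
cdk destroy
```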

In this post, we showed you how to introduce observability, with open source tooling, into an EKS cluster featuring a data plane running EC2 Inf1 instances. We started by selecting the Amazon EKS-optimized accelerated AMI for the data plane nodes, which includes the Neuron container runtime, providing access to AWS Inferentia and Trainium Neuron devices. Then, to expose the Neuron cores and devices to Kubernetes, we deployed the Neuron device plugin. The actual collection and mapping of telemetry data into Prometheus-compatible format was achieved via neuron-monitor and neuron-monitor-prometheus.py. Metrics were sourced from Amazon Managed Service for Prometheus and displayed on the Neuron dashboard of Amazon Managed Grafana.

We recommend that you explore additional observability patterns in the AWS Observability Accelerator for CDK GitHub repo. To learn more about Neuron, refer to the AWS Neuron Documentation.

Riccardo Freschi is a Sr. Solutions Architect at AWS, focusing on application modernization. He works closely with partners and customers to help them transform their IT landscapes in their journey to the AWS Cloud by refactoring existing applications and building new ones.


Bitcoin dominance hits 3-year high as BTC price dip pressures altcoins – Cointelegraph

Bitcoin (BTC) market cap dominance has hit its highest level in three years as altcoins feel renewed price pressure.

Data from Cointelegraph Markets Pro and TradingView shows Bitcoin's share of the total crypto market cap spiking to 56.3% on April 12.

BTC price action suffered into the weekend with a liquidation cascade bringing BTC/USD below $65,300.

At the same time, however, altcoins faced much worse conditions; data shows many of the top 20 cryptocurrencies by market cap fell more than 15%.

In so doing, altcoins relinquished crypto market share to Bitcoin, and the recent highs mark the most Bitcoin-heavy crypto market since April 2021.

"I don't typically look at Bitcoin dominance, but the chart is impressive considering the amount of new altcoins birthed into the market every day," popular trader and social media commentator Bagsy wrote in a response on X.

Fellow trader Daan Crypto Trades was among those noting the difference in drawdown between Bitcoin and altcoins in recent days.

"Yes, the actual hit on $BTC was very minimal and the total downside also wasn't very relevant," he told X followers while discussing Bitcoin open interest.

Historically, Bitcoin bull markets tend to see a dominance breakout in their early stages, with altcoins then catching up once BTC/USD sees a prolonged consolidation period.

Related: Bitcoin is a hedge against 'horrible' gov't fiscal policy - Cathie Wood

So far in 2024, altcoins, while performing well, have not witnessed such conditions for a meaningful length of time.

Forecasting what might come next, however, fellow trader Mikybull Crypto argued that change would soon come.

"Altcoins' market cap is perfectly following the previous 'Alts season' step," he wrote in part of an X post.

An accompanying chart compared Bitcoin and altcoin dominance, drawing comparisons with the end of 2020, the point at which BTC price action had just escaped its previous macro trading range below $20,000.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.


Brake Noise And Machine Learning (4 of 4) – The BRAKE Report

Article by: Antonio Rubio, Project Engineer, Braking Systems in Applus IDIADA

Review Part One | Review Part Two | Review Part Three

The field of artificial intelligence (AI) has made significant progress in recent years, with applications ranging from natural language processing to computer vision. Applus IDIADA's Brakes department has presented several studies on the application of artificial intelligence to the detection of brake noises. In this paper, Applus IDIADA presents its research in this area, focusing on the development of an AI model for predicting subjective ratings for squeal brake noises based on objective measurements collected through the instrumentation in a typical Brake Noise Durability programme. Subjective ratings are based on human opinions and can be challenging to quantify. Objective measurements, on the other hand, can be objectively quantified and provide a more reliable basis for prediction.

The first part of the article introduced the data processing, whereas the second and third parts focused on the AI model creation and validation, respectively. This fourth part, on the other hand, summarizes the main results and draws the conclusions.

Other drivers' evaluations

Subjective ratings from two highly skilled drivers were used (different from the reference driver whose ratings were used to train the model). The noises and noise conditions should therefore be similar, but the drivers' evaluations differ. The dataset per rating used to evaluate the other drivers' evaluations is shown in Table 9.

By using different drivers for validation, we are validating several aspects at the same time.

Ideally, model prediction accuracy should be similar to the accuracy obtained when validating the model against the reference driver. Differences between the two could be attributed to differences in subjective criteria between the reference driver and the driver evaluated.

It can be seen that more subjective ratings are available in the dataset for high ratings than for low ratings.

Similar to the validation of the model for the reference driver, results for each driver are presented in terms of accuracy. Results can be checked in Table 10, and accuracy per driver/rating in Table 11.

Accuracy is calculated by comparing the model's subjective rating predictions with the drivers' actual ratings, where 100% accuracy means the model predicted the same rating as the driver for every sample. In addition, the percentage of ratings not correctly assigned, with an error of 1, 2 or 3 rating points, is calculated.
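
As a sketch of this computation (the rating arrays are invented for illustration):

```python
import numpy as np

# Invented ratings for illustration only.
driver_ratings = np.array([9, 8, 7, 8, 6, 9, 7, 5, 8, 7])
model_ratings = np.array([9, 8, 6, 8, 7, 9, 7, 7, 8, 6])

errors = np.abs(model_ratings - driver_ratings)
print(f"accuracy (exact match): {np.mean(errors == 0):.0%}")
for k in (1, 2, 3):
    print(f"share off by {k} rating(s): {np.mean(errors == k):.0%}")
```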


Summary results

Regarding reference driver validation, close to 70% of predicted ratings match the reference driver's rating. Discrepancies between the model and the driver are mainly 1-point errors, and discrepancies of more than 2 points are minimal. Accuracy for ratings 9, 8 and 7 is around 70%; accuracy for rating 6 or lower decreases to 50% or below.

Regarding the other drivers' evaluations, accuracy is around 50% for both drivers, with the same tendency as in the reference driver results: the additional discrepancies are mainly 1-point errors. The decrease in accuracy can be explained by those drivers' subjective criteria differing from the reference driver's.

Conclusion

The goal of the project is to replicate, with a model, the evaluation of brake noise annoyance performed by an expert driver. Noise samples collected from a reference driver during several years of testing at Applus IDIADA, together with their corresponding subjective ratings, were provided for this purpose.

The data analysis revealed a feasible opportunity to clean and preprocess the dataset by removing variables that do not contribute value to the model. Outliers were removed from the dataset. The data was split into three parts: 70% of noise events for training, 20% for testing and 10% for validation.

Two artificial intelligence models were trained with the dataset: a classification model and a regression model. The test-phase results show that the models achieve a good knowledge of the dataset. Finally, based on the different trials, the final model combines the classification and regression models: a threshold determines when to rely on the classification model's prediction and when to prioritize the rounded output from the regression model.
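
A sketch of that combination logic (the function and attribute names follow scikit-learn conventions as an assumption, and the threshold value is illustrative):

```python
import numpy as np

def predict_rating(features, classifier, regressor, threshold=0.6):
    """Combine the classification and regression models as described."""
    probs = classifier.predict_proba([features])[0]
    if probs.max() >= threshold:
        # The classifier is confident enough: rely on its predicted class.
        return classifier.classes_[np.argmax(probs)]
    # Otherwise, prioritize the rounded output of the regression model.
    return int(round(regressor.predict([features])[0]))
```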

The model underwent validation by comparing its results with evaluations from the reference driver using different vehicles under the conditions used for training. An accuracy of 68.5% was achieved, with discrepancies between model and driver ratings mainly of 1 point.

In addition, ratings predicted by the reference driver's model were compared with evaluations from different drivers. Accuracy decreased in comparison with the reference driver, but this can be explained by differences in the other drivers' subjective criteria.

The results of the study were promising: the model achieved a meaningful level of accuracy in predicting subjective ratings based on objective measurements, indicating that its predictions were close to the actual subjective ratings. Indeed, the training results show that the models learn to characterize the subjective criteria. The main discrepancies between model and driver ratings are 1-point errors, which could be explained by some uncertainty in the reference driver's subjective criteria. This uncertainty could arise from a variety of uncontrolled variables, which can result in different subjective ratings for the same noise event. These differences appear mainly for low ratings, below rating 6. In addition, the dataset contained fewer ratings of 6 or below than above 6.

In conclusion, the development of an AI model for predicting subjective ratings based on objective measurements is an important step towards understanding the relationship between subjective ratings and objective measurements for brake squeal noise. Predictions from the current artificial intelligence model are based on objective measurements of 20 variables that together characterize the most important features of the noise, such as frequency, amplitude, duration and corner source. Furthermore, the results of this study demonstrate the potential of AI models to be implemented in the near-to-medium future on autonomous vehicles, providing more accurate subjective ratings based on objective data. Future work in this area could involve expanding the model to include additional variables or incorporating other machine learning techniques to further improve performance.

About Applus IDIADA

With over 25 years' experience and 2,450 engineers specializing in vehicle development, Applus IDIADA is a leading engineering company providing design, testing, engineering and homologation services to the automotive industry worldwide.

Applus IDIADA is located in California and Michigan, with further presence in 25 other countries, mainly in Europe and Asia.

http://www.applusidiada.com
