
Generative AI: Understand the challenges to realize the opportunities | Amazon Web Services – AWS Blog

Generative artificial intelligence (AI) allows anyone to leverage machine learning (ML) capabilities using natural language, and it is extremely intuitive to use. When users are able to search, analyze, and draw conclusions in seconds from extensive information that exists across their organization or the internet, they can make more informed decisions at speed. This can help them answer customer queries efficiently, pinpoint significant changes to contracts, and assess risks such as fraud more accurately. Organizations can make more effective use of resources and provide better services by gaining useful insights, such as peak use patterns or the likelihood of good outcomes in different scenarios.

Generative AI models are trained on large volumes of data, which gives them the ability to generate answers to a range of questions and summarize findings in a meaningful way for the user. Common use cases in the public sector could be determining the best way to reduce Friday afternoon congestion, or how to manage building utilities more efficiently.

To suggest answers, generative AI systems can combine and cross-analyse a diverse range of data in milliseconds to produce a spoken, graphical, or easy-to-understand written summary.

Generative AI models are only as reliable as the data they're trained on and can access. There is a risk of hallucination, which is when the models make something up that may sound plausible and factual but which may not be correct. Anyone who bases decisions and actions on the results of an AI-based query needs to be able to stand by that choice and articulate how it was reached, to avoid unfair targeting or other forms of bias, resource waste, or other questionable decisions.

Any organization or team that uses generative AI to make decisions or prioritize actions must build responsible AI systems that are fair, explainable, robust, secure, transparent, and that safeguard privacy. Good governance is fundamental for responsible systems. It's important to be able to justify how these process-support systems arrived at choices.

Organizations need to design and use a proven, well-architected AI framework and operating model to provide for continuous monitoring of the system in use. There has to be full awareness of potential issues and what's needed to mitigate them. Those issues could involve limitations with the data (its quality, level of standardization, currency, and completeness) and any risk of bias, data-protection breaches, or other regulatory or legal infringement.

Systems must be transparent: if someone challenges a decision supported by the AI system, it must be possible to track the reasoning behind it. Examples of this could be citing specific sources used in summarisation or tracking the customer data that was used in any ML models.

For a deeper dive, watch our four-part AWS Institute Masterclass series on AI/ML:

See the original post:
Generative AI: Understand the challenges to realize the opportunities | Amazon Web Services - AWS Blog


Why scientists trust AI too much and what to do about it – Nature.com

AI-run labs have arrived, such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty

Scientists of all stripes are embracing artificial intelligence (AI), from developing self-driving laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots [1].

Many downsides of AI systems have been discussed. For example, generative AI such as ChatGPT tends to make things up, or 'hallucinate', and the workings of machine-learning systems are opaque.

Artificial intelligence and illusions of understanding in scientific research

In a Perspective article [2] published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools' limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.

Scientists planning to use AI "must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline", write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.

The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community views AI systems with eyes wide open.


To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.

In one vision, which they call 'AI as Oracle', researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called 'AI as Arbiter', systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, 'AI as Quant', AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, 'AI as Surrogate', AI tools simulate data that are too difficult or complex to obtain.

Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the 'illusion of explanatory depth' [3], in which people relying on another person (or, in this case, an algorithm) for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.


Another risk is that research becomes skewed towards studying the kinds of things that AI systems can test; the researchers call this the 'illusion of exploratory breadth'. For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI, and discourage those on behaviours that cannot, such as anything that requires being embodied physically.

There's also the 'illusion of objectivity', in which researchers see AI systems as representing all possible viewpoints or as not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. "There's a risk that we forget that there are certain questions we just can't answer about human beings using AI tools," says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.

If you're a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use onto one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don't have, says Crockett.

Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.

All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone, including researchers of all kinds, must now listen.

More here:
Why scientists trust AI too much and what to do about it - Nature.com


Detecting cognitive traits and occupational proficiency using EEG and statistical inference | Scientific Reports – Nature.com

Participants

Twenty-seven healthy right-handed volunteers participated in the study. Participants were recruited via social networks after filling in an online screening form for neurological and psychiatric disorders. Data from one individual were excluded from the analysis due to extensive EEG artifacts. The final study sample consisted of 26 participants (age 25.7 ± 4.49, range 19-38; 12 females, 14 males) divided into two groups: the M group (12 students or specialists with professional math education and experience) and the H group (14 students or specialists in humanities). The principle of dividing individuals into groups based on education was as follows: participants in the M group were either students of at least the third year in mathematical specialties at university or working alumni of such programs. The same applies to participants in the H group, but in humanitarian specialties (history, philology, law). This study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Institute of Higher Nervous Activity and Neurophysiology of the Russian Academy of Sciences (protocol No. 3 from 24 August 2017). Volunteers gave written informed consent to their participation in the study after the procedure was explained to them.

Participants were comfortably seated in a sound-shielded room, 1 meter away from a 19-inch square monitor. During EEG recording, participants were presented with tasks and were asked to mentally solve them, giving equal priority to accuracy and the shortest solving time. The tasks were presented in light gray in the center of a black screen, with the sizes of letters and digits being equal for all tasks. We presented three types of tasks (Table 2) in a pseudorandom order: 60 verbal, 60 arithmetic, and 60 logical tasks. The trial sequence consisted of task instructions (2 s), a fixation cross (0.5 s), the task (< 40 s), and a black screen during the participant's response (4 s). Tasks were presented with a limited duration of 40 s. Participants clicked the left PC mouse button when they were ready to provide an answer. If no response was given within 40 s, the task was considered unsolved, disappeared from the screen, and the next trial began. The decision time (DT) was calculated between the task onset and the participant's response. Additional long breaks were provided every 20-30 minutes or upon the participant's request. The entire experiment typically lasted 2-2.5 hours.

The EEG was recorded using a 128-channel Geodesic Sensor Net (Electrical Geodesics, Inc. (EGI), Eugene, OR, USA) system based on the 10-10 electrode montage. The recordings were band-pass filtered with a 0.1-70 Hz analog filter, notch filtered at 50 Hz, and sampled at 1000 Hz with online re-referencing to the average using Net Station software. Impedance was kept below 50 kΩ.

All preprocessing was done using MNE software [50]. We excluded 46 'skirt' channels (defined as channels with EGI polar coordinate r > 0.5) near the periphery of the EEG net that are particularly sensitive to noise and muscle artifacts. The list of the remaining 83 channels retained for analysis is provided in the supplementary materials, labeled with their EGI channel numbers. The number of discarded channels was consistent across conditions and participants.

The remaining data were downsampled to 250 Hz. EOG artifacts were removed using automatic ICA in MNE. ICA components were estimated on the signal high-pass filtered at 1 Hz and then applied to the unfiltered signal. The EEG data were analyzed in epochs of 2 s without overlap, starting 5 s after the presentation of each task. As we were more interested in the recognition of mental operations during task solving rather than in the visual perception of the tasks, we assumed that during the first 5 s of cognitive task presentation, EEG could reflect visual retrieval and early (stimulus-driven bottom-up) cognitive stages related to semantic and lexical numerical representations of the stimuli. EEG recorded during the visual perception of complex stimuli, such as math and verbal tasks, is also strongly affected by eye-movement activity. Epochs with a low signal-to-noise ratio were rejected using minimum and maximum peak-to-peak amplitudes. 94.1 ± 2.2% of epochs remained after preprocessing. Following the epoch rejection, there were no statistically significant differences in the number of epochs between the groups (M and H) and conditions.
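A minimal MNE-Python sketch of this preprocessing flow is shown below. It assumes a `raw` recording already restricted to the retained channels; the ICA component count, the EOG-proxy channel name, the rejection thresholds, and the fixed-length epoching of the whole recording (rather than task-locked windows starting 5 s after task onset) are illustrative assumptions, not the authors' exact parameters.

```python
import mne

# 'raw' is assumed to be an mne.io.Raw object holding the 83 retained EEG channels.
raw.resample(250)                                             # downsample to 250 Hz

# Fit ICA on a 1 Hz high-pass-filtered copy, then apply it to the unfiltered data.
raw_hp = raw.copy().filter(l_freq=1.0, h_freq=None)
ica = mne.preprocessing.ICA(n_components=20, random_state=0)  # component count is illustrative
ica.fit(raw_hp)
eog_inds, _ = ica.find_bads_eog(raw_hp, ch_name="E25")        # hypothetical frontal channel as EOG proxy
ica.exclude = eog_inds
raw_clean = ica.apply(raw.copy())

# 2 s non-overlapping epochs with peak-to-peak amplitude rejection.
epochs = mne.make_fixed_length_epochs(raw_clean, duration=2.0, overlap=0.0, preload=True)
epochs.drop_bad(reject=dict(eeg=150e-6), flat=dict(eeg=1e-6))  # thresholds are illustrative
```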

A mixed-model ANOVA (GROUP, TASK) was applied to compare behavioral performance between the groups and the three types of tasks. As the distribution of behavioral variables deviated from normal, we applied cluster-based permutation tests with the Spearman coefficient to perform a correlation analysis of EEG patterns and behavioral performance. We examined the mean PSD value of the 96-channel data for each frequency range in correlation with behavioral results.

We performed cross-subject group classification to recognize the group type to which the individual EEG data belonged. Additionally, we compared the performance of three pipelines: supervised and unsupervised projections with Logistic Regression and handcrafted power features with LightGBM.

The frequency bands used for feature estimation were as follows: θ1: 4-6 Hz, θ2: 6-8 Hz, α1: 8-10 Hz, α2: 10-12 Hz, β1: 12-16 Hz, β2: 16-20 Hz, β3: 20-24 Hz. For the first two pipelines, we decomposed EEG into these bands using a set of filter banks with Butterworth bandpass filters, and for each epoch in each band, the spatial covariance matrix was estimated. The feature space before projection and vectorization was \(\mathbf{X} \in \mathbb{R}^{K \times N \times N}\), where K is the number of frequency bands and N is the number of EEG channels.
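For illustration only (not the authors' code), the filter-bank covariance features described above could be assembled roughly as follows; the Butterworth filter order and the plain sample-covariance estimator are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = [(4, 6), (6, 8), (8, 10), (10, 12), (12, 16), (16, 20), (20, 24)]  # theta1 .. beta3

def filterbank_covariances(epochs, sfreq=250.0):
    """epochs: array (n_epochs, n_channels, n_times). Returns an array of shape
    (n_epochs, K, N, N): one spatial covariance matrix per epoch and frequency band."""
    n_epochs, n_channels, _ = epochs.shape
    covs = np.empty((n_epochs, len(BANDS), n_channels, n_channels))
    for k, (lo, hi) in enumerate(BANDS):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sfreq, output="sos")  # order 4 is an assumption
        filtered = sosfiltfilt(sos, epochs, axis=-1)
        for i in range(n_epochs):
            covs[i, k] = np.cov(filtered[i])   # spatial covariance of the band-limited epoch
    return covs
```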

For the third pipeline with handcrafted features, the power spectral density (PSD) for each epoch and frequency band was estimated using the multitaper method [51], which calculates spectral density for orthogonal tapers and then averages them together for each channel. Relative power, equal to the power in each frequency band divided by the total power, was used to form the feature space for classification. Thus, the feature space in this pipeline was \(\mathbf{X} \in \mathbb{R}^{N \times K}\), where K is the number of frequency bands and N is the number of EEG channels.
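A hedged sketch of these relative-power features, using MNE's multitaper PSD estimator, is shown below; band edges follow the list above, and treating "total power" as the sum over these bands is an assumption.

```python
import numpy as np
from mne.time_frequency import psd_array_multitaper

BANDS = {"theta1": (4, 6), "theta2": (6, 8), "alpha1": (8, 10), "alpha2": (10, 12),
         "beta1": (12, 16), "beta2": (16, 20), "beta3": (20, 24)}

def relative_band_power(epochs, sfreq=250.0):
    """epochs: (n_epochs, n_channels, n_times). Returns (n_epochs, n_channels, K)
    relative power: power in each band divided by the summed power over all bands."""
    psd, freqs = psd_array_multitaper(epochs, sfreq=sfreq, fmin=4.0, fmax=24.0, verbose=False)
    band_power = np.stack(
        [psd[..., (freqs >= lo) & (freqs < hi)].sum(axis=-1) for lo, hi in BANDS.values()],
        axis=-1)
    return band_power / band_power.sum(axis=-1, keepdims=True)
```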

In the space of symmetric positive-definite (SPD) matrices [29], the covariance matrices of the CSP-filtered signal take the form of (1), where \(\mathbf{W} \in \mathbb{R}^{N \times J}\) and \(J \le N\) is the number of CSP filters sorted by decreasing eigenvalues. Thus, the feature vector takes the form of (2), with the number of components equal to J, where \(\boldsymbol{\Sigma}_{1}\) is the mean covariance matrix of the first class.

$$\boldsymbol{\Sigma}_{\mathbf{Z}_i} = \mathbf{W}^{T}\,\boldsymbol{\Sigma}_{\mathbf{X}_i}\,\mathbf{W} \tag{1}$$

$$\mathbf{F}_i = \begin{bmatrix} \log(\boldsymbol{\Sigma}_{\mathbf{Z}_i}[1,1]) - \log(\boldsymbol{\Sigma}_{1}[1,1]) \\ \vdots \\ \log(\boldsymbol{\Sigma}_{\mathbf{Z}_i}[J,J]) - \log(\boldsymbol{\Sigma}_{1}[J,J]) \end{bmatrix} \tag{2}$$

In the space of SPD matrices [52], the covariance matrix of the signal projected with Principal Component Analysis (PCA) takes the form of (3), where \(\mathbf{W} \in \mathbb{R}^{N \times J}\) and \(J \le N\) is the number of PCA components sorted by decreasing eigenvalues. Therefore, the minimal representation of a matrix in the Riemannian space takes the form of (4), where \(\mathbf{F}_i\) is a feature vector with the number of components equal to \(J(J+1)/2\), and \(\overline{\boldsymbol{\Sigma}}_{Z}^{-1/2}\) is the mean of the matrices in (3) according to the Riemannian metric.

$$\boldsymbol{\Sigma}_{Z_i} = \mathbf{W}_{\mathrm{UNSUP}}^{T}\,\boldsymbol{\Sigma}_{X_i}\,\mathbf{W}_{\mathrm{UNSUP}} \tag{3}$$

$$\mathbf{F}_i = \mathrm{Upper}\!\left(\log\!\left(\overline{\boldsymbol{\Sigma}}_{Z}^{-1/2}\,\boldsymbol{\Sigma}_{Z_i}\,\overline{\boldsymbol{\Sigma}}_{Z}^{-1/2}\right)\right) \tag{4}$$

Logistic regression is a linear model employed to estimate the likelihood of a specific class, and it can serve as a supervised binary classification algorithm. To avoid overfitting, logistic regression was trained with \(L_2\) regularization, and the regularization parameter for \(L_2\) was specified within the range \([10^{-10}, 10^{9}]\).
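The overall flow of a Riemannian tangent-space projection (cf. Eq. (4)) feeding an L2-regularized logistic regression could look like the sketch below, built with pyRiemann and scikit-learn. The OAS covariance estimator, single-band input, and the C grid are assumptions, not the authors' exact settings.

```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

# X: (n_epochs, n_channels, n_times) band-filtered EEG epochs; y: group labels (0 = H, 1 = M)
clf = make_pipeline(
    Covariances(estimator="oas"),          # SPD covariance matrix per epoch
    TangentSpace(metric="riemann"),        # tangent-space vectorization, as in Eq. (4)
    LogisticRegression(penalty="l2", max_iter=1000),
)
# regularization strength searched on a coarse log grid (the paper reports 1e-10 .. 1e9)
grid = GridSearchCV(clf, {"logisticregression__C": np.logspace(-10, 9, 20)}, cv=5)
grid.fit(X, y)
```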

LightGBM is a gradient boosting framework that uses a decision tree algorithm with leaf-wise split, and is known for its high performance. To optimize its performance, a cross-validation grid search was used to tune its hyperparameters.
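A minimal grid-search sketch for the LightGBM pipeline on the hand-crafted power features follows; the hyperparameter grid shown is illustrative, since the paper does not list the exact values searched.

```python
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV

# X_power: (n_epochs, n_channels * n_bands) relative-power features; y: group labels
model = lgb.LGBMClassifier(objective="binary")
param_grid = {                      # illustrative grid, not the authors' exact values
    "num_leaves": [15, 31, 63],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [100, 300, 500],
}
search = GridSearchCV(model, param_grid, cv=5, scoring="balanced_accuracy")
search.fit(X_power, y)
```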

To perform subject-independent classification of the participant group (M versus H), EEG epochs were labeled according to the group to which the subject belonged. For testing, epochs from two randomly selected participants from different groups were chosen, while epochs from two other randomly selected participants (one from each group) were chosen for validation. The remaining participants' epochs were used for training. This process was repeated ten times, resulting in ten different folds for each type of task.

Balanced accuracy (BA), receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to assess the performance of the models.
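The subject-wise fold construction and evaluation metrics described above might be sketched like this; variable names are hypothetical.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

def make_fold(subjects_m, subjects_h):
    """One fold: two held-out test subjects (one per group), two validation subjects
    (one per group), and the rest for training; repeated ten times in the paper."""
    m, h = rng.permutation(subjects_m), rng.permutation(subjects_h)
    test = [m[0], h[0]]
    val = [m[1], h[1]]
    train = list(m[2:]) + list(h[2:])
    return train, val, test

# After training a classifier on the training subjects' epochs:
# ba  = balanced_accuracy_score(y_test, y_pred)
# auc = roc_auc_score(y_test, y_score)   # y_score: predicted probability of the M class
```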

One advantage of linear models is their interpretability, which allows for the identification of the strength and direction of specific effects in the features [30].

In classification tasks, the backward model transforms the feature space \(\mathbf{X}_i \in \mathbb{R}^{N \times 1}\) into a new representation that maximizes the discriminability between the two classes using the filter \(\mathbf{W} \in \mathbb{R}^{N \times 1}\), as shown in Eq. (5). On the other hand, the forward model in Eq. (6) describes sample generation as a multiplication of the activation pattern \(\mathbf{A} \in \mathbb{R}^{N \times 1}\) by the factor \(s_i\). The activation pattern can be obtained using Eq. (7), where the covariance matrix is \(\boldsymbol{\Sigma}_X = \mathbb{E}\left[\mathbf{X}_i \mathbf{X}_i^{T}\right]_i\).

$$\mathbf{W}^{T}\mathbf{X}_i = \hat{s}_i \tag{5}$$

$$\mathbf{x}_i = s_i\,\mathbf{A} + \boldsymbol{\varepsilon}_i \tag{6}$$

$$\mathbf{A} = \boldsymbol{\Sigma}_{\mathbf{X}}\,\mathbf{W}\,\boldsymbol{\Sigma}_{\hat{s}}^{-1} = \boldsymbol{\Sigma}_{\mathbf{X}}\,\mathbf{W} = \mathrm{Cov}\left[\mathbf{X}_i, s_i\right] \tag{7}$$
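The forward-model transformation in Eq. (7) can be computed directly from a trained linear filter and the feature covariance. The sketch below is an illustration with hypothetical variable names, not the authors' implementation.

```python
import numpy as np

def activation_pattern(X, w):
    """Transform a backward-model filter into a forward-model activation pattern,
    A = Cov(X) @ w / Var(s) with s = X @ w (cf. Eq. (7)).
    X: (n_samples, n_features) feature matrix; w: (n_features,) filter weights,
    e.g. the coefficients of a fitted LogisticRegression."""
    Xc = X - X.mean(axis=0)                 # centre the features
    s = Xc @ w                              # latent source estimated by the filter
    sigma_x = np.cov(Xc, rowvar=False)      # feature covariance matrix
    return (sigma_x @ w) / np.var(s)        # activation pattern A
```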

When using CSP, the feature space takes the form \(\mathbf{X}_i \in \mathbb{R}^{NK \times 1}\) and the filter takes the form \(\mathbf{W} \in \mathbb{R}^{NK \times 1}\). This filter has full column rank, which is proven using the Sylvester rank inequality in Eq. (8).

$$\mathrm{rank}(\mathbf{W}_1) + \mathrm{rank}(\mathbf{W}_2) - M \le \mathrm{rank}(\mathbf{W}) \le \min\left(\mathrm{rank}(\mathbf{W}_1), \mathrm{rank}(\mathbf{W}_2)\right) \tag{8}$$

For examining the feature importance in the case of LightGBM, we estimated the SHapley Additive exPlanations (SHAP) values [32]. SHAP values assign an importance value to each feature in a model. Features with positive SHAP values positively impact the prediction, while those with negative values have a negative impact. The magnitude is a measure of how strong the effect is.
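A brief sketch of how SHAP values might be obtained for a fitted LightGBM model (here called `model`, an assumption) using the shap package's tree explainer:

```python
import shap

# 'model' is assumed to be the fitted lgb.LGBMClassifier; X_power the relative-power features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_power)   # per-feature contribution for every epoch
shap.summary_plot(shap_values, X_power)        # global view of feature importance and direction
```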

A cluster-based permutation test [53] was used to investigate differences in EEG power spectral density (PSD) between the groups. A two-sided T-statistic with a threshold of 6 was applied and corrected for multiple comparisons using N = 1024 permutations. Cluster-level correction based on spatial adjacency was also performed.
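The group comparison of band-limited PSD could be run with MNE's cluster-level permutation test along the lines below; the channel-adjacency construction and the use of an independent-samples T statistic as the stat function are assumptions consistent with the description above, not the authors' exact code.

```python
import mne
from mne.stats import permutation_cluster_test, ttest_ind_no_p

# psd_m, psd_h: (n_subjects, n_channels) mean PSD per group for one frequency band;
# 'epochs' is the MNE Epochs object holding the retained channels.
adjacency, ch_names = mne.channels.find_ch_adjacency(epochs.info, ch_type="eeg")
t_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [psd_m, psd_h],
    stat_fun=ttest_ind_no_p,   # independent-samples T statistic
    threshold=6.0,             # cluster-forming threshold reported in the paper
    tail=0,                    # two-sided test
    n_permutations=1024,
    adjacency=adjacency,       # spatial (channel) adjacency for cluster formation
)
```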

See the article here:
Detecting cognitive traits and occupational proficiency using EEG and statistical inference | Scientific Reports - Nature.com


Exploring the Potential of Transfer Learning in Small Data Scenarios – KDnuggets

When it comes to machine learning, where the appetite for data is insatiable, not everyone has the luxury of accessing vast datasets to learn from at a whim. That's where transfer learning comes to the rescue, especially when you're stuck with limited data or the cost of acquiring more is just too high.

This article is going to take a closer look at the magic of transfer learning, showing how it cleverly uses models that have already learned from massive datasets to give your own machine learning projects a significant boost, even when your data is on the slim side.

I'm going to tackle the hurdles that come with working in data-scarce environments, peek into what the future might hold, and celebrate the versatility and effectiveness of transfer learning across all kinds of different fields.

Transfer learning is a technique used in machine learning that takes a model developed for one task and repurposes it for a second, related task, evolving it further.

At its core, this approach hinges on the idea that knowledge gained while learning one problem can assist in solving another, somewhat similar problem.

For instance, a model trained to recognize objects within images can be adapted to recognize specific types of animals in photos, leveraging its pre-existing knowledge of shapes, textures, and patterns.

It actively accelerates the training process while also significantly reducing the amount of data that's required. In small data scenarios, this is particularly beneficial, as it circumvents the traditional need for vast datasets to achieve high model accuracy.

Utilizing pre-trained models lets practitioners bypass many of the initial hurdles that are commonly associated with model development, such as feature selection and model architecture design.

Pre-trained models serve as the true foundation for transfer learning, and these models, often developed and trained on large-scale datasets by research institutions or tech giants, are made available for public use.

The versatility of pre-trained models is remarkable, with applications ranging from image and speech recognition to natural language processing. Adopting these models for new tasks can drastically cut down on development time and the resources you need.

For example, models trained on the ImageNet database, which contains millions of labeled images across thousands of categories, provide a rich feature set for a wide range of image recognition tasks.

The adaptability of these models to new, smaller datasets underscores their value, allowing for the extraction of complex features without the need for extensive computational resources.
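As a generic illustration of this reuse (not code from the article), a short Keras sketch of adapting an ImageNet-pre-trained backbone to a new, small image dataset might look like this; the backbone choice, class count, and learning rates are assumptions.

```python
import tensorflow as tf

# Freeze an ImageNet-pre-trained feature extractor and train a small head on top.
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False                                  # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(5, activation="softmax"),     # e.g. 5 animal classes (hypothetical)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # small labelled dataset

# Optional second stage: unfreeze the backbone and fine-tune with a much lower
# learning rate so the pre-trained weights are not destroyed.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Freezing first and fine-tuning second is the usual way to balance retaining the learned features with adapting to the new task, which is the trade-off discussed below.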

Working with limited data presents unique challenges. The primary concern is overfitting, where a model learns the training data too well, including its noise and outliers, leading to poor performance on unseen data.

Transfer learning mitigates this risk by using models pre-trained on diverse datasets, thereby enhancing generalization.

However, the effectiveness of transfer learning depends on the relevance of the pre-trained model to the new task. If the tasks involved are too dissimilar, then the benefits of transfer learning may not fully materialize.

Moreover, fine-tuning a pre-trained model with a small dataset requires careful adjustment of parameters to avoid losing the valuable knowledge the model has already acquired.

In addition to these hurdles, another scenario where data can be jeopardized is during the process of compression. This even applies to quite simple actions, like when you want to compress PDF files, but thankfully these kinds of occurrences can be prevented with accurate alterations.

In the context of machine learning, ensuring the completeness and quality of data even when undergoing compression for storage or transmission is vital to developing a reliable model.

Transfer learning, with its reliance on pre-trained models, further highlights the need for careful management of data resources to prevent loss of information, ensuring that every piece of data is used to its fullest potential in the training and application phases.

Balancing the retention of learned features with the adaptation to new tasks is a delicate process that necessitates a deep understanding of both the model and the data at hand.

The horizon of transfer learning is constantly expanding, with research pushing the boundaries of what's possible.

One exciting avenue here is the development of more universal models that can be applied across a broader range of tasks with minimal adjustments needed.

Another area of exploration is the improvement of algorithms for transferring knowledge between vastly different domains, enhancing the flexibility of transfer learning.

There's also a growing interest in automating the process of selecting and fine-tuning pre-trained models for specific tasks, which could further lower the barrier to entry for utilizing advanced machine learning techniques.

These advancements promise to make transfer learning even more accessible and effective, opening up new possibilities for its application in fields where data is scarce or hard to collect.

The beauty of transfer learning lies in its adaptability across all kinds of different domains.

From healthcare, where it can help diagnose diseases with limited patient data, to robotics, where it accelerates the learning of new tasks without extensive training, the potential applications are vast.

In the field of natural language processing, transfer learning has enabled significant advancements in language models with comparatively small datasets.

This adaptability doesn't just showcase the efficiency of transfer learning; it highlights its potential to democratize access to advanced machine learning techniques, allowing smaller organizations and researchers to undertake projects that were previously beyond their reach due to data limitations.

Even if it's a Django platform, you can leverage transfer learning to enhance your application's capabilities without starting from scratch all over again.

Transfer learning transcends the boundaries of specific programming languages or frameworks, making it possible to apply advanced machine learning models to projects developed in diverse environments.

Transfer learning is not just about overcoming data scarcity; it's also a testament to efficiency and resource optimization in machine learning.

By building on the knowledge from pre-trained models, researchers and developers can achieve significant results with less computational power and time.

This efficiency is particularly important in scenarios where resources are limited, whether it's in terms of data, computational capabilities, or both.

Since 43% of all websites use WordPress as their CMS, this is a great testing ground for ML models specializing in, let's say, web scraping or comparing different types of content for contextual and linguistic differences.

This underscores the practical benefits of transfer learning in real-world scenarios, where access to large-scale, domain-specific data might be limited. Transfer learning also encourages the reuse of existing models, aligning with sustainable practices by reducing the need for energy-intensive training from scratch.

The approach exemplifies how strategic resource use can lead to substantial advancements in machine learning, making sophisticated models more accessible and environmentally friendly.

As we conclude our exploration of transfer learning, it's evident that this technique is significantly changing machine learning as we know it, particularly for projects grappling with limited data resources.

Transfer learning allows for the effective use of pre-trained models, enabling both small and large-scale projects to achieve remarkable outcomes without the need for extensive datasets or computational resources.

Looking ahead, the potential for transfer learning is vast and varied, and the prospect of making machine learning projects more feasible and less resource-intensive is not just promising; it's already becoming a reality.

This shift towards more accessible and efficient machine learning practices holds the potential to spur innovation across numerous fields, from healthcare to environmental protection.

Transfer learning is democratizing machine learning, making advanced techniques available to a far broader audience than ever before.

Nahla Davies is a software developer and tech writer. Before devoting her work full time to technical writing, she managedamong other intriguing thingsto serve as a lead programmer at an Inc. 5,000 experiential branding organization whose clients include Samsung, Time Warner, Netflix, and Sony.

See the article here:
Exploring the Potential of Transfer Learning in Small Data Scenarios - KDnuggets


Will The New $2B USDT Injection Influence More Market Momentum? – TradingView

Key points:

Prominent USDT stablecoin issuer Tether has continued to mint staggering billion-dollar sums recently, sparking more bullish enthusiasm in the crypto space. On Saturday, Whale Alert disclosed that 1,000,000,000 USDT had been minted at Tether Treasury. Within three minutes of the update, Whale Alert proclaimed another $1 billion USDT minting from Tether.

Cumulatively, $2 billion has been injected into the crypto market within 24 hours. The development has sparked significant interest from community members, as both tweets have been viewed more than one million times.

As the comment section demonstrates, the prevailing sentiment is bullish. Market participants anticipate more significant gains for Bitcoin and the broader crypto market amid the $2 billion injection.

Meanwhile, Tether CEO Paolo Ardoino took the opportunity to offer clarification, aiming to counteract any misinformation regarding the $2 billion USDT minting. He clarified that the transaction was authorized but not yet issued, and emphasized that this amount would serve as inventory for future issuance requests and chain swaps.

Furthermore, X user Exponential Research pointed out that it has only been one day since the last billion and five days since the one before. The commenter views it as extremely bullish for the crypto market.

In particular, the argument posited that Tether printing $2 billion implies an additional $2 billion in demand for spot buying of crypto or as collateral for purchasing crypto on leverage. They believe that the upward trajectory of crypto will persist until significant flows cease.

In another discussion, renowned analyst Michal van de Poppe pointed out that the altcoin market capitalization must increase by 70% to reach the previous cycle's peak.

The #Altcoin market capitalization still needs to gain 70% to get towards the high of the previous cycle.

The Altcoins are lagging behind, so the obvious play would be that we're seeing a short-term correction and, from there, altcoins to surge. pic.twitter.com/DsE9INeBD0

The analyst shared the sentiment amid Bitcoin breaking an all-time high (ATH) above $70k. As a result, he observes altcoins are trailing behind Bitcoin. Meanwhile, the analyst suggested that the apparent trend indicates a short-term correction in the market, after which altcoins are expected to surge to an ATH.

Read the rest here:
Will The New $2B USDT Injection Influence More Market Momentum? - TradingView


Altseason is coming Or at least data suggests that it’s close – Cointelegraph

Bitcoin (BTC) price witnessed a sharp correction shortly after hitting a new all-time high at $69,324 on March 5.

Meanwhile, altcoins, led by memecoins and AI-themed cryptocurrencies, have outperformed BTC over the last week, initiating a debate on whether the altcoin season (altseason) is here.

Bitcoin's brief escapade above $69,000 on March 5 saw the global crypto market value cross the $2.5 trillion mark, reflecting the current bullish momentum. At the time of publishing, this figure stands at $2.52 trillion, according to data from CoinMarketCap.

Altcoins displayed similar strength, with their total market capitalization (see chart below) rising above $1.1 trillion on the same day. This metric measures the total market value of all crypto assets except Bitcoin.

Over the last three months, this value has rallied approximately 64% from $697 billion to $1.14 trillion on March 7. This is a slightly better performance than the 56% posted by Bitcoin over the same period.

This ascent attests to growing investor interest in altcoins and the amazing performance recently displayed by this class of crypto assets.

The tremendous rally displayed by memecoins and AI over the last week could be a sign that the market is at the start of the altseason.

Data from CoinMarketCap shows memecoins posting double and triple-digit gains within the last seven days.

In particular, Dogecoin (DOGE) and Shiba Inu (SHIB) posted 20% and 130% gains this week, respectively. Newer tokens such as Pepe (PEPE), Bonk (BONK) and Dogwifhat (WIF) saw double to triple-digit gains during the same period.

Other notable performances came from tokens within the AI ecosystem led by Fetch.ai (FET), Synesis One (SNS), SingularityNET (AGIX) and Theta Network (THETA), which also produced double to triple-digit gains in 7 days.

Bitcoin has only risen 8.5% over the same period.

The performance of the altcoins seemingly coincides with an improvement in the altcoin season index by Blockchain Center, indicating that the altcoin season is close.

This indicator essentially demonstrates that only 69% of the leading 50 altcoins have outperformed Bitcoin in the past 90 days. Although this index has increased over the last few days, it is still insufficient to declare an altcoin season.

However, according to a recent market report by K33 Research, this could change soon. The report noted that Bitcoins rally toward its new all-time high saw its market cap double relative to the value of all cryptocurrencies except BTC and Ether (ETH) since the bottom of November 2022.


The K33 Research analysts said the setup mirrors the 2020 altcoin bull run, just before altcoins caught up with Bitcoin's uptrend.

Independent analyst CrediBULL dives into the connection between Bitcoin's performance and the start of the altseason. In a March 4 post on X, CrediBULL outlines two possible scenarios:

A mega breakout, when Bitcoin breaks its prior ATH and quickly doubles in a matter of weeks, and alts don't get much love until BTC pauses after this breakout leg; and a consolidation scenario, where Bitcoin revisits its prior ATH but faces rejection or consolidates at the highs for a few weeks. In the latter case, altcoins start rallying almost immediately after BTC tags its prior ATH.

CrediBULL said:

However, founder and CEO of MN Trading Consultancy Michal van de Poppe believes that the altcoin season is yet to kick in, pointing out that altcoins still need to gain 40-60% in market capitalization before reaching the 2021 highs above $1.1 trillion.

On his part, Cardano founder Charles Hoskinson believes that the altcoin season will start once DOGE overtakes ADA in market capitalization. In a March 5 post on the X social media platform, Hoskinson said:

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.

Read more from the original source:
Altseason is coming Or at least data suggests that it's close - Cointelegraph


Bitcoin halving approaching; AI altcoin raises over $10m in presale – crypto.news

Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

After the Bitcoin halving, miner rewards will drop by 50%. The event occurs every four years and aims to rein in internal inflation. As Bitcoin becomes scarcer, and assuming sustained demand, prices tend to go up.

So far, Bitcoin is soaring as the demand for spot ETFs explodes. Prices recently shot above $68,000. Amid this, new altcoins like InQubeta (QUBE), an AI-focused project, are also benefiting.

InQubeta links upcoming AI startups across the globe with investors.

These startups have leveraged InQubeta for business growth.

The InQubeta platform has so far raised over $10.9 million.

QUBE, the native token of InQubeta, can be staked for rewards.

Staking invites token holders to willingly let their tokens be locked in liquidity pools. These tokens are used for securing the network and aiding the blockchain's development.

InQubeta has built a vibrant community of stakeholders with a decentralized governance structure. The community of token holders gives its journey an optimal trajectory. The token holders discuss and vote for proposals seeking changes in the protocol.

They use their voting privileges to express their preferences about a suggested change. Depending on the outcome of such processes, they decide if a proposal should be implemented.

QUBE is also deflationary, helping stave off unfavorable economic forces by lowering the supply.

In the future, this may support prices.

Bitcoin is a peer-to-peer cryptocurrency that can support high-volume transactions securely and without compromising speed.

The transactions made with its native token BTC are confirmed with the proof-of-work protocol.

Bitcoin is currently trading at $65,000, a price level last seen in November 2021.

The token was buoyed by the excitement around BTC ETFs and its halving event in April. Spot ETFs were rolled out in the U.S. in January, and their debut has been a watershed moment for the crypto industry.

These financial products have further bridged the gap between mainstream finance and digital assets. Plus, they have ensured regulated exposure to cryptocurrencies without requiring people to own the assets.

Their popularity could be gauged from the fact that leading financial institutions like Wells Fargo and Merrill Lynch are now offering BTC ETFs to their wealth management clients.

Starting 2024 on a positive note, Bitcoin and InQubeta have demonstrated their potential with their sustained growth. Analysts expect the two coins to build upon these gains in the coming months.

Experts are optimistic about the popularity of spot BTC ETFs and the upcoming halving event to consolidate Bitcoins position. For InQubeta, they believe that its presale gains will speed up its journey and even help it gain an edge over new altcoins.

Disclosure: This content is provided by a third party. crypto.news does not endorse any product mentioned on this page. Users must do their own research before taking any actions related to the company.

Continue reading here:
Bitcoin halving approaching; AI altcoin raises over $10m in presale - crypto.news


Bitgert: The Altcoin That Could Make You a Fortune This Week – Here’s How to Get In! – Cryptonews

Bitgert coin's massive pump is on the way. Are you ready to jump in?

BRISE has been trending everywhere and the community has been bullish on this project since their inception in 2021.

Bitgert (BRISE) is an affordable and highly scalable crypto platform for various Web3 projects like the metaverse, NFTs, decentralised finance (DeFi), etc. It was one of the projects that kept building throughout the bear market.

The fact that it has its own centralised exchange is a strong indicator of its commitment to providing a robust and comprehensive ecosystem for its users, potentially fostering greater adoption and liquidity.

Bitgert coin has seen an impressive increase of 18.82% in the last 7 days. And a 71.64% increase in the last month. It has managed to give its early backers impressive returns.

There have been a lot of consistent developments worth noticing which are responsible for the pump in price.

The Coin98 wallet has incorporated the Bitgert blockchain. Coin98 is a full-fledged financial platform.

Bitgert has also gone multichain with XP.NETWORK NFT Bridge!

This means that users can now easily move their NFTs across different chains, such as Ethereum, BSC, Solana, Polygon, and 25+ more, to Bitgert without losing their metadata, logic, or collection name. They can also access a wider market and audience for their NFTs, as well as enjoy lower fees and faster transactions.

It is all set to be listed on a major European exchange.

Simply sign up on Bitget, complete the identity verification process, and make payments using bank transfers, debit cards, or credit cards, all while ensuring security through crypto wallets. This is a widely adopted method to buy Bitgert.

Bitget Convert offers a secure, swift, and zero transaction fee method to convert crypto to Bitgert. Just choose your available crypto assets, specify the amount for the Bitgert swap, confirm, and check Bitgert instantly credited to your spot account. Ensure sufficient account balance before proceeding with the conversion.

Unlike buying or selling crypto, swapping does not access traditional financial rails since fiat currencies are not involved. Bitget Swap supports over 250,000 cryptocurrencies on 30 major blockchains including Ethereum, Polygon, and Solana. You can exchange any other crypto to Bitgert anytime, anywhere, including cross-chain transactions. Transaction gas fees are automatically converted.

Investing in Bitgert has never been easier.

About the project: Bitgert is a DeFi-focused blockchain launched in 2021. The BRISE Chain relies on a proof-of-authority (PoA) system to support short block times and low fees. The blockchain also boasts zero gas fees and claims to be the fastest-growing ecosystem in the industry.

To know more about Bitgert, visit https://bitgert.com. Buy Bitgert coin from the exchanges below now!

Buy on Kucoin BRISE/USDT

Buy on Gate.io BRISE/USDT

Buy on MEXC BRISE/USDT

Buy on Pancakeswap

Buy on Uniswap

Disclaimer: The text above is an advertorial article that is not part of Cryptonews.com editorial content.

The rest is here:
Bitgert: The Altcoin That Could Make You a Fortune This Week - Here's How to Get In! - Cryptonews


Crypto degens say ‘meh’ to Bitcoin’s climb as altcoins record triple-digit gains – KITCO

While the financial world is focused on the price of Bitcoin (BTC) and flows into the recently launched spot BTC exchange-traded funds (ETFs), which continue to set new records on a near-daily basis, crypto degens have been raking in unprecedented gains as the arrival of institutional attention has supercharged the altcoin market.

Every bull market cycle sees certain sectors of the crypto market outperform others, and this time around, meme coins and artificial intelligence (AI) projects have been leading the charge.

Since the start of February, Bitcoin has increased by more than 54%, an impressive monthly increase for any asset, but it pales in comparison to what meme coins like Dogecoin (+99%), BONK (198%), Shiba Inu (280%), PEPE (710%), and WIF (797%) have done.

It's a similar story, albeit to a lesser degree, when Bitcoin is compared to some of the top AI projects. Render (RNDR) has climbed 101% since Feb. 1, while Ocean Protocol (OCEAN) has gained 168%, SingularityNET (AGIX) is up 283%, and Fetch.AI (FET) has climbed 315%.

While it is normal for the crypto market to see sectors have major run-ups, they often occur during periods of sideways trading for Bitcoin following a significant increase, a development that is often referred to as 'altseason'.

But during this cycle, the rallies in various sectors are occurring at the same time as Bitcoin's price is surging, which is a notable deviation from the historical trend and signals that this bull cycle will be unique and could blow away all expectations.

The fact that Bitcoin hit a new ATH more than 45 days before its next halving only adds to this outlook, as the feat has never been achieved before, meaning this cycle is already a notable outlier.

"In the world of cryptocurrencies, Bitcoin's recent peak at $69,000 has not only set a new benchmark for digital wealth but also ignited a debate about the future of digital currencies," said Mikkel Morch, founder of the digital asset investment fund ARK36, in a note shared with Kitco Crypto. "This milestone reflects a growing confidence in Bitcoin as a resilient asset, fueling speculation about its potential for a sudden downturn, a common narrative in the volatile crypto market."

"Yet, the real story extends beyond Bitcoin and Ethereum's stellar performance," he said. "The positive momentum of these giants is casting a light on altcoins, suggesting a potential ripple effect throughout the cryptocurrency ecosystem."

"This dynamic hints at a broader trend: as Bitcoin and Ethereum continue to capture investor interest, altcoins stand on the brink of capitalizing on this surge, potentially experiencing their own moments of glory, as seen most recently with both the dog coins, other meme coins and with the AI/DeFi coins that have seen massive gains over the last month and even longer," Morch said.

"However, this optimistic outlook for altcoins comes with a cautionary note," he warned. "The crypto market is renowned for its unpredictability, making it imperative for investors to navigate these waters with a blend of enthusiasm and prudence. Each altcoin carries its own set of risks and opportunities, underscoring the importance of thorough research and strategic investment."

"In essence, Bitcoin's landmark achievement is more than a testament to its own success; it's a beacon for the entire cryptocurrency market, hinting at a future where a number of altcoins will likely share the spotlight," he said. "As the narrative unfolds, the key to capitalizing on these developments lies in informed, strategic decision-making, acknowledging the interconnected nature of these digital assets while preparing for the uncertainties that lie ahead."

"This moment isn't just about celebrating Bitcoin's success; it's about recognizing the potential for a broader market evolution, with altcoins poised to define their own paths in the shadow of giants," Morch concluded.

Crypto trader TheFlowHorse also sees altcoins blazing their own trail in the current market and thinks they could generate their own hype and price-building loop as traders rotate in and out of gainers.

And market analyst Elja noted that the Bitcoin dominance chart is at a major resistance level that has historically been followed by an altseason breakout.

Disclaimer: The views expressed in this article are those of the author and may not reflect those of Kitco Metals Inc. The author has made every effort to ensure accuracy of information provided; however, neither Kitco Metals Inc. nor the author can guarantee such accuracy. This article is strictly for informational purposes only. It is not a solicitation to make any exchange in commodities, securities or other financial instruments. Kitco Metals Inc. and the author of this article do not accept culpability for losses and/or damages arising from the use of this publication.

Original post:
Crypto degens say 'meh' to Bitcoin's climb as altcoins record triple-digit gains - KITCO


JPMorgan credits Ethereum for Crypto boost; Whales favoring this new AI Altcoin over Solana – Cyprus Mail


Read the rest here:
JPMorgan credits Ethereum for Crypto boost; Whales favoring this new AI Altcoin over Solana - Cyprus Mail
