
US prediction market Kalshi to take bets on Bitcoin and Ether – Cointelegraph

The United States-based prediction market Kalshi is preparing to launch bets on the price action of major cryptocurrencies Bitcoin (BTC) and Ether (ETH).

Kalshi is launching prediction contracts for its clients, enabling bets on Bitcoin and Ether price movements, according to its website.

At the time of writing, Kalshi's Crypto website section enables clients to bet on at least three Bitcoin contracts, including a contract on when Bitcoin will hit $100,000.

The contract currently lists 10 available monthly markets, with the closest one betting that Bitcoin will reach the value on March 28. The latest available bet is by Dec. 31, 2024.

Other contracts include a prediction on how high Bitcoin will reach in 2024 and one on the daily BTC price. The annual price prediction offers five contracts from $75,000 to $150,000. The daily prediction spans 15 markets, starting from $66,750 and ending at $70,250.

Kalshi's Ether predictions allow clients to bet only on the annual and daily price events.

While the bets are related to cryptocurrency prices, Kalshi's platform will only accept U.S. dollars for placing the bets. As with most of Kalshi's bets, the crypto bets will mostly be yes or no across different possible scenarios.

According to industry media, Kalshi clients will be able to place the first bets on Bitcoin and Ether as early as March 18.

Related: Bitcoin to enter pre-halving danger zone, but crypto CEOs remain bullish

Cointelegraph reached out to Kalshi for comments regarding the launch but had yet to receive a response at the time of publication.

Founded in 2018, Kalshi is a financial exchange offering event contracts, allowing users to bet on certain events in the future, including those related to the economy, politics, health, tech and science, and others. The platform is regulated as a designated contract market under the Commodity Futures Trading Commission.

Bitcoin price predictions have been growing increasingly popular since the cryptocurrency posted a new all-time high above $73,000 on March 13.

Binance CEO Richard Teng predicted on March 17 that Bitcoin would continue its record-breaking rally and rise above $80,000 by the end of the year. Teng expressed confidence in the success of Bitcoin amid the optimism around the massive adoption of spot Bitcoin exchange-traded funds in the United States.

Crypto.com CEO Kris Marszalek predicted that the total crypto market capitalization will surge 177% to reach $7.5 trillion by 2025.

Magazine: Is measuring blockchain transactions per second (TPS) stupid in 2024? Big Questions

See more here:
US prediction market Kalshi to take bets on Bitcoin and Ether - Cointelegraph


What’s Behind the Bitcoin Price Surge? Vibes, Mostly – WIRED

The latest surge in the price of bitcoin is increasing the clamor around it, says Dal Bianco, drawing in yet more speculators and creating a self-reinforcing cycle. Likewise, when collective confidence in the prospect of further price growth falters, she says, the resultant downturn can be equally sudden. Under these conditions, demand can vanish as rapidly as it forms.

On March 3, Michael Green, chief strategist at asset management firm Simplify, entered into a wager with Peter McCormack, host of the podcast What Bitcoin Did. They were betting on the price of bitcoin. Green wagered $20,000 that bitcoin would not reach a price of $100,000 per coin by the end of the year. McCormack wagered $100,000 that it would.

The bet, Green says, was in part motivated by a desire to highlight areas of weakness in the economic theory presented as dogma by bitcoin evangelists. He takes issue with the way bitcoin is being sold to the investing public as "a store of value designed ultimately to be the currency of the future," he says. "I think that is a bunch of economic nonsense." Because the supply of bitcoin will shrink steadily over time as people lose access to irrecoverable wallets, Green argues, it cannot support a system of credit, because the cost of borrowing will eventually rise to a point that almost no one can afford.

In January, US regulators approved the first batch of bitcoin exchange-traded funds, which give people a way to invest in the cryptocurrency through a brokerage, as they would a regular stock. The arrival of bitcoin ETFs is said to have catalyzed the latest surge in price by unlocking a wave of pent-up demand among investors, both institutions and regular people, previously unable or unwilling to deal with a crypto exchange or risk storing crypto manually themselves. In approving the new bitcoin funds, says Green, regulators have incentivized financial institutions, for whom the ETFs represent a new source of revenue, to spend tons of money on marketing to drive demand, and in turn disincentivized any emphasis on deficiencies in the logic of bitcoinomics.

"The belief in the future potential of bitcoin has become religious," says Green. That missionary zeal is more likely to influence the price, he says, than any economic mechanism built into the system. Even if McCormack were to lose the wager, he says, it could be chalked up as a fruitful marketing expense. McCormack told WIRED the wager with Green was not a marketing stunt. "I did the bet to prove him wrong," he says.

The influence of evangelism on the price of bitcoin limits the opportunity for good-faith debate about the prospects of the Bitcoin system, says Angel. "Once you drink the Kool-Aid, you have a powerful financial incentive to preach to the world that bitcoin is the most wonderful thing," he says. "If there were a Nobel prize in marketing, it should be given to Satoshi Nakamoto."

Bitcoin's biggest boosters embrace that dynamic as well. "Bitcoin price appreciation is an advertisement," says Mow. Investors buy in on the prospect of riches and then fall down the rabbit hole themselves, creating a new generation of believers to spread the Bitcoin gospel.

See original here:
What's Behind the Bitcoin Price Surge? Vibes, Mostly - WIRED


Shiba Inu (SHIB) Price Prediction After Bitcoin Halving – Watcher Guru

Bitcoin skyrocketed in price this month, touching a new high of $73,737 in mid-March. The phenomenal spike also led Shiba Inu to hit a new yearly high of $0.00004282 over the same period. The sudden spurt in Shiba Inu's price comes in the run-up to the Bitcoin halving event.

Also Read: Shiba Inu: Investment of $4,400 Grows To $50 Million Today

For the uninitiated, the Bitcoin halving event is scheduled to take place next month, on April 20, 2024. The event will cut the rate at which new BTC is issued in half, making the cryptocurrency scarcer. The development is expected to push Bitcoin further up in price, as demand stays high against a limited supply. The move could lift not only Bitcoin but also Shiba Inu and other leading cryptocurrencies sustainably up the indices.

Leading on-chain metrics and price prediction firm CoinCodex has painted a rosy picture for Shiba Inu. According to the price prediction, SHIB could rise by another 225% on the heels of the Bitcoin halving. The forecast highlights that Shiba Inu could breach its all-time high of $0.00008616 and reach a new ATH at the $0.00009 level.

Also Read: Cryptocurrency: 3 Coins Under $1 To Buy This Week For Profits

That's an uptick and return on investment (ROI) of approximately 226% from its current price of $0.00002783. Therefore, an investment of $10,000 in SHIB could turn into $32,600 next month around the Bitcoin halving if the prediction proves accurate. If the token holds on to the momentum, it could also delete its fourth zero and hit the $0.0001 mark.

Also Read: Shiba Inu: How To Make $1 Million If SHIB's Price Hits $0.001

However, the cryptocurrency market is highly volatile, and there is no guarantee that SHIB will spike 225% in 30 days. It is advisable to do thorough research before taking an entry position in the cryptocurrency market. Trade at your own risk, as the Bitcoin halving event could make the markets turn volatile.

Read more:
Shiba Inu (SHIB) Price Prediction After Bitcoin Halving - Watcher Guru


Introducing Seaborn Objects: One Ring to Rule Them All! – Towards Data Science

Quick Success Data Science

One plotting ring to rule them all!

Image: One ring to Plot them all (by DALL-E 2)

Have you started using the new Seaborn Objects System for plotting with Python? You definitely should; it's a wonderful thing.

Introduced in late 2022, the new system is based on the Grammar of Graphics paradigm that powers Tableau and R's ggplot2. This makes it more flexible, modular, and intuitive. Plotting with Python has never been better.

In this Quick Success Data Science project, you'll get a quick-start tutorial on the basics of the new system. You'll also get several useful cheat sheets compiled from the Seaborn Objects official docs.

We'll use the following open-source libraries for this project: pandas, Matplotlib, and seaborn. You can find installation instructions in each of the previous hyperlinks. I recommend installing these in a virtual environment or, if you're an Anaconda user, in a conda environment dedicated to this project.

The goal of Seaborn has always been to make Matplotlib, Python's primary plotting library, both easier to use and nicer to look at. As part of this, Seaborn has relied on declarative plotting, where much of the plotting code is abstracted away.

The new system is designed to be even more intuitive and to rely less on difficult Matplotlib syntax. Plots are built incrementally, using interchangeable marker types. This reduces the number of things you need to remember while allowing for a logical, repeatable workflow.

The use of a modular approach means you don't need to remember a dozen or more method names like barplot() or scatterplot() to build plots. Every plot is now initiated with a single Plot() class.

The Plot() class sets up the blank canvas for your graphic. Enter the following code to see an example (shown using JupyterLab):
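The code itself did not survive in this excerpt. As a minimal sketch of what such an example might look like, assuming seaborn 0.12 or later and its bundled penguins dataset (the column choices here are illustrative, not the article's):

```python
import seaborn as sns
import seaborn.objects as so

# Plot() on its own just sets up the blank canvas
so.Plot()

# Layers are then added incrementally with .add() and interchangeable marks
penguins = sns.load_dataset("penguins")
(
    so.Plot(penguins, x="bill_length_mm", y="body_mass_g", color="species")
    .add(so.Dot())   # a scatter layer; swap in other marks as needed
    .show()          # explicit render; displays automatically in JupyterLab
)
```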

Original post:

Introducing Seaborn Objects: One Ring to Rule Them All! - Towards Data Science


The Road to Biology 2.0 Will Pass Through Black-Box Data – Towards Data Science

AI-first Biotech

This year marks perhaps the zenith of expectations for AI-based breakthroughs in biology, transforming it into an engineering discipline that is programmable, predictable, and replicable. Drawing insights from AI breakthroughs in perception, natural language, and protein structure prediction, we endeavour to pinpoint the characteristics of biological problems that are most conducive to being solved by AI techniques. Subsequently, we delineate three conceptual generations of bio AI approaches in the biotech industry and contend that the most significant future breakthrough will arise from the transition away from traditional white-box data, understandable by humans, to novel high-throughput, low-cost AI-specific black-box data modalities developed in tandem with appropriate computational methods.

46 min read

This post was co-authored with Luca Naef.

The release of ChatGPT by OpenAI in November 2022 has thrust Artificial Intelligence into the global public spotlight [1]. It likely marked the first instance where even people far from the field realised that AI is imminently and rapidly altering the very foundations of how humans will work in the near future [2]. A year down the road, once the limitations of ChatGPT and similar systems have become better understood [3], the initial doom predictions, ranging from the habitual panic about future massive job replacement by AI to declaring OpenAI the bane of Google, have given way to impatience: "why is it so slow?", in the words of Sam Altman, the CEO of OpenAI [4]. Familiarity breeds contempt, as the saying goes.

We are now seeing the same frenetic optimism around AI in the biological sciences, with hopes that are probably best summarised by DeepMind

See the original post here:

The Road to Biology 2.0 Will Pass Through Black-Box Data - Towards Data Science


Largest-ever map of universe's active supermassive black holes – EurekAlert

Image: An infographic explaining the creation of a new map of around 1.3 million quasars from across the visible universe. (Credit: ESA/Gaia/DPAC; Lucy Reading-Ikkanda/Simons Foundation; K. Storey-Fisher et al. 2024)

Astronomers have charted the largest-ever volume of the universe with a new map of active supermassive black holes living at the centers of galaxies. Called quasars, the gas-gobbling black holes are, ironically, some of the universe's brightest objects.

The new map logs the location of about 1.3 million quasars in space and time, the furthest of which shone bright when the universe was only 1.5 billion years old. (For comparison, the universe is now 13.7 billion years old.)

"This quasar catalog is different from all previous catalogs in that it gives us a three-dimensional map of the largest-ever volume of the universe," says map co-creator David Hogg, a senior research scientist at the Flatiron Institute's Center for Computational Astrophysics in New York City and a professor of physics and data science at New York University. "It isn't the catalog with the most quasars, and it isn't the catalog with the best-quality measurements of quasars, but it is the catalog with the largest total volume of the universe mapped."

Hogg and his colleagues present the map in a paper published March 18 in The Astrophysical Journal. The paper's lead author, Kate Storey-Fisher, is a postdoctoral researcher at the Donostia International Physics Center in Spain.

The scientists built the new map using data from the European Space Agency's Gaia space telescope. While Gaia's main objective is to map the stars in our galaxy, it also inadvertently spots objects outside the Milky Way, such as quasars and other galaxies, as it scans the sky.

"We were able to make measurements of how matter clusters together in the early universe that are as precise as some of those from major international survey projects, which is quite remarkable given that we got our data as a bonus from the Milky Way-focused Gaia project," Storey-Fisher says.

Quasars are powered by supermassive black holes at the centers of galaxies and can be hundreds of times as bright as an entire galaxy. As the black hole's gravitational pull spins up nearby gas, the process generates an extremely bright disk and sometimes jets of light that telescopes can observe.

The galaxies that quasars inhabit are surrounded by massive halos of invisible material called dark matter. By studying quasars, astronomers can learn more about dark matter, such as how much it clumps together.

Astronomers can also use the locations of distant quasars and their host galaxies to better understand how the cosmos expanded over time. For example, scientists have already compared the new quasar map with the oldest light in our cosmos, the cosmic microwave background. As this light travels to us, it is bent by the intervening web of dark matter the same web mapped out by the quasars. By comparing the two, scientists can measure how strongly matter clumps together.

"It has been very exciting to see this catalog spurring so much new science," Storey-Fisher says. "Researchers around the world are using the quasar map to measure everything from the initial density fluctuations that seeded the cosmic web to the distribution of cosmic voids to the motion of our solar system through the universe."

The team used data from Gaia's third data release, which contained 6.6 million quasar candidates, and data from NASA's Wide-Field Infrared Survey Explorer and the Sloan Digital Sky Survey. By combining the datasets, the team removed contaminants such as stars and galaxies from Gaia's original dataset and more precisely pinpointed the distances to the quasars. The team also created a map showing where dust, stars and other nuisances are expected to block our view of certain quasars, which is critical for interpreting the quasar map.

"This quasar catalog is a great example of how productive astronomical projects are," says Hogg. "Gaia was designed to measure stars in our own galaxy, but it also found millions of quasars at the same time, which give us a map of the entire universe."

ABOUT THE FLATIRON INSTITUTE

The Flatiron Institute is the research division of the Simons Foundation. The institute's mission is to advance scientific research through computational methods, including data analysis, theory, modeling and simulation. The institute's Center for Computational Astrophysics creates new computational frameworks that allow scientists to analyze big astronomical datasets and to understand complex, multi-scale physics in a cosmological context.

The Astrophysical Journal

Observational study

Not applicable

18-Mar-2024


Continue reading here:

Largest-ever map of universe's active supermassive black holes - EurekAlert


Understanding Impact of Advanced Retrievers on RAG Behavior through Visualization – Towards Data Science

13 min read

LLMs have become adept at text generation and question-answering, including some smaller models such as Gemma 2B and TinyLlama 1.1B. Even with such performant pre-trained models, they may not perform well when queried about documents not seen during training. In such a scenario, supplementing your question with relevant context from the documents is an effective approach. This approach, termed Retrieval-Augmented Generation (RAG), has gained significant popularity due to its simplicity and effectiveness.

The retriever is a key component of a RAG system; it obtains relevant document chunks from a back-end vector store. In a recent survey paper on the evolution of RAG systems, the authors classified such systems into three categories: Naive, Advanced and Modular [1]. Within the advanced category, post-retrieval optimization techniques, such as summarizing and re-ranking retrieved documents, have been identified as key improvements over the naive approach.

In this article, we will look at how a naive retriever and two advanced retrievers influence RAG behavior. To better represent and characterize their influence, we will visualize the document vector space, along with the related documents, in 2-D using the visualization library renumics-spotlight. This library boasts powerful features to visualize the intricacies of document embeddings, yet it is easy to use. For our LLM of choice, we will use TinyLlama 1.1B Chat, a compact model that does not suffer a proportional drop in accuracy [2], which makes it ideal for rapid experimentation.
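The article's own code (see the table of contents below) is not reproduced in this excerpt. Purely as an illustration of the general idea, the sketch below embeds placeholder chunks, projects the vectors to 2-D, and opens them in Spotlight; the model name, chunk contents and column names are assumptions, not the author's implementation:

```python
import pandas as pd
import umap
from renumics import spotlight
from sentence_transformers import SentenceTransformer

# Placeholder chunks; in a real RAG setup these come from your document splitter
chunks = [f"Example document chunk number {i}." for i in range(50)]

# Embed the chunks and project the vectors to 2-D for visual inspection
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(chunks)
coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

df = pd.DataFrame({"text": chunks, "x": coords[:, 0], "y": coords[:, 1]})
spotlight.show(df)  # opens the interactive Spotlight UI in the browser
```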

Disclaimer: I don't have any affiliation with Renumics or its creators. This article provides an unbiased view of the library usage based on my personal experience with the intention to make its knowledge available to the masses.

Table of Contents
1.0 Environment and Key Components
2.0 Design and Implementation
2.1 Module LoadVectorize
2.2 The main Module
3.0 Knobs on Spotlight UI
4.0 Comparison of Retrievers
5.0 Closing Remarks

Continue reading here:

Understanding Impact of Advanced Retrievers on RAG Behavior through Visualization - Towards Data Science


Probably the Best Data Visualisation for Showing Many-to-Many Proportion In Python – Towards Data Science

How to draw a fancy chord chart with links using PyCirclize

In my previous article, I introduced the Python library PyCirclize. It can help us generate very nice Circos Charts (or Chord Charts, if you like) with very little effort. If you want to know how it can make Data Visualisation well-Rounded, please don't miss out.

However, don't worry if you are only interested in the Chord Charts with Links. This article will make sure you understand how to draw this type of chart.

In this article, I'll introduce another type of Chord Chart that PyCirclize can draw: a Chord Chart with links. It visualizes proportional relationships between many-to-many entities very well and, so far, is the best option among all the typical diagram types for this purpose.

Before we start, just make sure to install the library with pip (pip install pycirclize). Then we are all good to go. Let's explore this fancy chart together!

As usual, let's start with something abstract but easy to follow. The purpose is to show you what the chart looks like and what the basic way of plotting it is. Let me put the full code and the diagram at the beginning.
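The full code did not make it into this excerpt. The following is a small sketch of a chord chart with links built from a matrix, using PyCirclize's documented Circos.initialize_from_matrix API; the matrix values, labels and styling are made up for illustration and are not the author's original example:

```python
import pandas as pd
from pycirclize import Circos

# Made-up many-to-many flows between three sources and four targets
row_names = ["S1", "S2", "S3"]
col_names = ["T1", "T2", "T3", "T4"]
matrix_data = [
    [10, 16, 7, 7],
    [4, 9, 10, 12],
    [17, 13, 7, 4],
]
matrix_df = pd.DataFrame(matrix_data, index=row_names, columns=col_names)

# Build the chord chart: sectors around the circle, links showing proportions
circos = Circos.initialize_from_matrix(
    matrix_df,
    space=5,                            # gap between sectors (degrees)
    cmap="tab10",                       # sector colour map
    label_kws=dict(size=12),            # sector label styling
    link_kws=dict(ec="black", lw=0.5),  # link edge styling
)
fig = circos.plotfig()
fig.savefig("chord_chart.png")
```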

Read the original:

Probably the Best Data Visualisation for Showing Many-to-Many Proportion In Python - Towards Data Science


Optimizing Pandas Code: The Impact of Operation Sequence – Towards Data Science

PYTHON PROGRAMMING

Learn how to rearrange your code to achieve significant speed improvements.

9 min read

Pandas offers a fantastic framework for operating on dataframes. In data science, we work with small, big and sometimes very big dataframes. While analyzing small dataframes can be blazingly fast, even a single operation on a big dataframe can take noticeable time.

In this article, I will show that you can often make this time shorter with something that costs practically nothing: the order of operations on a dataframe.

Imagine the following dataframe:
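The dataframe construction is not shown in this excerpt; a stand-in with the stated shape, assuming a million rows and 25 columns named a through y filled with random integers, might look like this:

```python
import string

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
cols = list(string.ascii_lowercase[:25])  # columns 'a' through 'y'
df = pd.DataFrame(
    rng.integers(0, 100_000, size=(1_000_000, 25)),
    columns=cols,
)
```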

With a million rows and 25 columns, it's big. Many operations on such a dataframe will take noticeable time on current personal computers.

Imagine we want to filter the rows to keep those that satisfy the condition a < 50_000 and b > 3000, and to select five columns: take_cols=['a', 'b', 'g', 'n', 'x']. We can do this in the following way:
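The original snippet is missing from this excerpt; a reconstruction of this first, columns-first ordering could be:

```python
take_cols = ['a', 'b', 'g', 'n', 'x']

# Select the columns first, then filter the rows
subdf = df[take_cols]
subdf = subdf[(subdf['a'] < 50_000) & (subdf['b'] > 3000)]
```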

In this code, we take the required columns first, and then we perform the filtering of rows. We can achieve the same result with the operations in a different order, first performing the filtering and then selecting the columns:
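Again reconstructing the missing snippet, the rows-first ordering could be:

```python
# Filter the rows first, then select the columns
subdf = df[(df['a'] < 50_000) & (df['b'] > 3000)]
subdf = subdf[take_cols]
```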

We can achieve the very same result by chaining Pandas operations. The corresponding pipes of commands are as follows:
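The chained versions are also absent from this excerpt; equivalent pipelines using .filter() and .query() could look like this:

```python
# Chained version: columns first, then rows
subdf_cols_first = (
    df
    .filter(items=take_cols)
    .query('a < 50000 and b > 3000')
)

# Chained version: rows first, then columns
subdf_rows_first = (
    df
    .query('a < 50000 and b > 3000')
    .filter(items=take_cols)
)
```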

Since df is big, the four versions will probably differ in performance. Which will be the fastest and which will be the slowest?

Let's benchmark these operations. We will use the timeit module:
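The benchmarking code is not reproduced here; a minimal harness with timeit, timing the two non-chained orderings with an illustrative number of repetitions, could be:

```python
import timeit

def cols_then_rows():
    out = df[take_cols]
    return out[(out['a'] < 50_000) & (out['b'] > 3000)]

def rows_then_cols():
    out = df[(df['a'] < 50_000) & (df['b'] > 3000)]
    return out[take_cols]

for fn in (cols_then_rows, rows_then_cols):
    seconds = timeit.timeit(fn, number=10)
    print(f"{fn.__name__}: {seconds:.3f} s for 10 runs")
```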

Visit link:

Optimizing Pandas Code: The Impact of Operation Sequence - Towards Data Science


What Does it Take to Get into Data Engineering in 2024? – Towards Data Science

Career advice for aspiring data practitioners

14 min read

If you are reading this, you have probably been considering a career change lately. I am assuming that you want to learn something close to software engineering and database design. It doesn't matter what your background is: marketing, analytics or finance, you can do this! This story is meant to help you find the fastest way to enter the data space. Many years ago I did the same and have never regretted it since. The technology space, and especially data, is full of wonders and perks. Not to mention remote working and massive benefit packages from the leading IT companies, it makes you capable of doing magic with files and numbers. In this story, I'll try to summarise a set of skills and possible projects which could be accomplished within a two-to-three-month timeframe. Imagine: just a few months of active learning and you are ready for your first job interview.

Any sufficiently advanced technology is indistinguishable from magic.

Indeed, why not Data Analytics or Data Science? I think the answer resides in the nature of this role, as it combines the most difficult parts of those worlds. To become a data engineer, you would need to learn software engineering and database design, Machine Learning (ML) models, and understand data modelling and Business Intelligence (BI) development.

Data engineering is the fastest-growing job, according to DICE. They conducted research demonstrating that there is a gap, so be quick.

While Data Scientist has been considered the sexiest job in the market for a long time, it now seems there is a certain shortage of Data Engineers. I can see massive demand in this area. This includes not only experienced and highly qualified engineers but also entry-level roles. Data engineering has been one of the fastest-growing careers in the UK over the last five years, ranking 13 on LinkedIn's list of the most in-demand jobs in 2023 [1]. On

Continued here:

What Does it Take to Get into Data Engineering in 2024? - Towards Data Science
