
Q&A | An Exclusive Chat with Binance Africa Team Leads on Regulation, Licensing, and Growth on the Continent – bitcoinke.io

BitKE sat down for an exclusive chat with the Binance Leads for Africa, Hannes and Nadeem, to talk about recent developments for Binance in Africa, particularly regulation in South Africa.

Speaking to BitKE on the recent regulatory developments in South Africa, Hannes, the General Manager of Southern Africa for Binance, said:

South Africa's Financial Sector Conduct Authority (FSCA) is set to issue licenses to crypto asset service providers (CASPs) in the next few weeks.

We are excited about this development and commend the work of the Financial Sector Conduct Authority (FSCA) for its commitment to innovation-driven policies. This is a positive step for both the cryptocurrency industry and South Africans. This move will contribute to clarity, user protection, and much-needed confidence in the ecosystem.

Binance acknowledges the value of operating in a stable regulatory landscape and has dedicated considerable time to applying for licenses, registrations, and authorisations across the globe.

We remain committed to working with regulators and policymakers to shape policies that protect consumers, encourage innovation, and propel our industry forward.

Here is the exclusive BitKE Q&A discussion with the Binance Leads for Africa:

Q: Please introduce yourself and tell us what you do at Binance (Hannes & Nadeem)

Hannes: My name is Hannes Wessels, and I am the General Manager of Southern Africa for Binance, the world's foremost blockchain ecosystem and cryptocurrency infrastructure provider, currently spearheading business operations in the region.

Nadeem: My name is Nadeem Anjarwalla, the Lead Director of Operations in Africa (excluding South Africa) for Binance. In this role, I focus on scaling the business, adoption, and freedom of money.

Q: In light of the recent developments in the crypto landscape in South Africa, what positive developments do you see from the recent FSCA announcement on crypto licensing? (Hannes)

Hannes: As Binance Africa, we are excited about the development and commend the work of the Financial Sector Conduct Authority (FSCA) for its commitment to innovation-driven policies. This is a positive step for both the cryptocurrency industry and South Africans. This move will contribute to clarity, user protection, and much-needed confidence in the ecosystem.

Binance acknowledges the value of operating in a stable regulatory landscape and has dedicated considerable time to applying for licenses, registrations, and authorisations across the globe.

We remain committed to working with regulators and policymakers to shape policies that protect consumers, encourage innovation, and propel our industry forward.

Q: Did you apply for the licence? If so, how was the experience? South Africa has experienced numerous crypto scams. Do you see the licensing contributing to the much-needed confidence in the ecosystem? (Hannes)

Hannes: Yes, we applied for the license and it's been a positive experience. This is a beneficial move for South Africans and the cryptocurrency sector. Binance is committed to regulatory compliance worldwide, focusing on user safety and ecosystem confidence. We collaborate with legislators and regulators to safeguard consumers, promote innovation, and advance the industry. Binance assists users in learning crypto best practices, emphasizing vigilance against scams. While we provide warnings and education, users must stay informed to prevent personal losses. Knowledge and education are crucial defences against fraud and scams.

Q: Apart from South Africa, where else on the continent are you seeking licensing? (Hannes & Nadeem)

Hannes & Nadeem: Binance is a pro-regulation organisation, committed to engaging with the government and relevant stakeholders. Around the world, we are in the process of obtaining crypto-specific licences, and Binance has bolstered its global compliance in 18 jurisdictions, such as Spain, Italy, France, New Zealand, Dubai, Bahrain, Abu Dhabi, and many other jurisdictions more than any other exchange. Our strong compliance practices enable us to meet the requirements of regulated entities around the world, allowing us to enter into key partnerships that serve our users and encourage the adoption of cryptocurrency and blockchain technology.

Q: In light of the recent regulatory discussions in Kenya, how are you approaching regulation in East Africa, particularly Kenya? (Nadeem)

Nadeem: Binance endorses the Kenyan parliamentary committee's recommendation to create a comprehensive oversight framework and policies for virtual assets and virtual asset service providers in Kenya. Binance supports these developments, believing in its responsibility to collaborate with regulators to create a safer crypto environment. Binance is a pro-regulation organisation that will continue to work with the government and other stakeholders.

Q: What is the regulatory framework in Southern and Francophone Africa? Which countries in Francophone Africa are you particularly excited about, and why? In which Francophone countries do you see the most adoption of Binance? (Hannes: South Africa & Nadeem: Francophone Africa)

Hannes & Nadeem: Africas crypto industry has flourished, with several nations, such as the Southern and Francophone markets, embracing cryptocurrencies. Binance has played a significant role by simplifying and securing cryptocurrency trading for African users.

We are particularly excited about the opportunities in countries like Senegal and Ivory Coast. These nations have shown a growing interest in cryptocurrency, and we've been working to enhance our services to cater to the needs of users in these regions.

In terms of adoption, we've observed positive trends in Senegal, where our user base has been steadily increasing. Additionally, Ivory Coast has shown promising signs of Binance gaining popularity as a preferred platform for cryptocurrency enthusiasts.

It's essential to note that our success in each country is influenced by various factors, including regulatory developments, market dynamics, and user behaviours. We remain committed to fostering growth in Francophone Africa and providing users with a seamless and secure crypto trading experience.

We now support multiple African currencies, enabling seamless P2P trading with zero fees. This expansion strengthens Binance's presence on the continent, catering to the growing interest and adoption of cryptocurrencies in Africa.

Q: You undertook some law enforcement training in Nigeria, Kenya, and South Africa for the first time in 2023. How was the reception? Are you looking at doing more of such training in 2024 in other African countries? (Hannes & Nadeem)

Hannes & Nadeem: The response from law enforcement was positive, and we were able to focus on protecting users, which is our top priority. We firmly believe that close collaboration between industry players and law enforcement agencies is essential for preventing and addressing cybercrime, including combating fraud. We are delighted to interact with and support law enforcement agencies to jointly safeguard user assets.

In addition to participating in public consultation, Binance's commitment to protecting users is also instrumental in helping build a safer virtual asset ecosystem across Africa.


Read the original post:

Q&A | An Exclusive Chat with Binance Africa Team Leads on Regulation, Licensing, and Growth on the Continent - bitcoinke.io


Ethereum and Binance Coin investors flock to DeeStream – crypto.news

Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

Streaming platforms like YouTube and Twitch allow people to communicate in real time with content creators. Meanwhile, platforms like Ethereum (ETH) and Binance Coin (BNB) offer better security and more control over finances. DeeStream (DST) combines the best of these worlds by merging streaming and blockchain technology.

DeeStream is a new streaming platform that uses blockchain technology. A community of users runs it, and the DST token helps shape its policies.

The platform has attracted investments from ETH and BNB holders because of its unique approach.

They have invested heavily in the platform's presale.

ETH and BNB holders are confident of DST's growth potential, believing the token could outperform in the coming months.

Currently, DST is trading for $0.035 in stage 1 of the presale.

Streaming platforms like YouTube and Twitch enforce restrictions on freedom of speech and expression. At the same time, Ethereum and BNB can be expensive to invest in.

However, DeeStream is a new platform that aims to solve these issues by providing greater control and desirable rewards.

As blockchain and web3 technologies evolve, DeeStream may play a central role in these areas.

Disclosure: This content is provided by a third party. crypto.news does not endorse any product mentioned on this page. Users must do their own research before taking any actions related to the company.

See the article here:

Ethereum and Binance Coin investors flock to DeeStream - crypto.news


Binance Labs Announces Investment In Three Projects From Season 6 Incubation Program – BSC NEWS

Exploring the journey of embedding data in BTC blocks, from Namecoin and Colored Coins to innovative Ordinals and NFT initiatives

Special thanks to Prasad of the MH Ventures team for submitting this guest article...

Using Bitcoin's blockchain for more than just financial transactions has been actively pursued since Bitcoin's early days. One of the initial discussions on the BitcoinTalk.org forums focused on the possibility of developing a Domain Name System (DNS) using Bitcoin, eventually leading to the inception of Namecoin in 2011. As early as 2010, Hal Finney talked about an overlay protocol that adds data to the blocks and can process it, while the original client ignores it.

This era also saw the emergence of the term "Colored Coins," a protocol that would assign specific attributes or uses to portions of Bitcoin UTXOs. These marked UTXOs could then be employed in various off-chain applications.

An early example of exploiting this flexibility was the Counterparty platform, launched in 2014. Counterparty used a multisig transaction hack for embedding data but later shifted to using the OP_RETURN opcode. OP_RETURN outputs are a type of Bitcoin transaction output that is unspendable and allows a small amount of arbitrary data to be attached. Counterparty uses this to embed data while minimizing blockchain bloat. The presence of arbitrary data in Counterparty outputs, or similar outputs, posed an issue, as they were unspendable and added unnecessary load to nodes indifferent to the data or the protocol they served.

To mitigate this, the OP_RETURN function was standardized in the Bitcoin Core v0.9.0 release in March 2014. OP_RETURN enabled marking an output as unspendable, thereby informing nodes that such outputs could be discarded without tracking them in the UTXO set. This change also introduced a data size limit of 40 bytes in an OP_RETURN output, which was later increased to 80 bytes to accommodate larger data sets.

To this day, embedding data into the Bitcoin blockchain using OP_RETURN remains a straightforward process. But OP_RETURN provides only 80 bytes of space, compared to about 4 MB of space when using Ordinals (via Taproot and SegWit).
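As a rough sketch of how simple OP_RETURN embedding is, the following hand-rolls the output script for a small payload. It assumes a single direct push of at most 75 bytes (well within the 80-byte policy limit mentioned above) and is not tied to any particular Bitcoin library:

```python
# Minimal sketch: building a raw OP_RETURN scriptPubKey by hand.
# Assumes the payload fits in a single direct push (<= 75 bytes).

OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    if len(payload) > 75:
        raise ValueError("use OP_PUSHDATA1 for payloads longer than 75 bytes")
    # <OP_RETURN> <push-length> <payload>
    return bytes([OP_RETURN, len(payload)]) + payload

script = op_return_script(b"hello from the blockchain")
print(script.hex())  # 6a19 followed by the payload bytes in hex
```

A node that sees this output knows it is provably unspendable and never adds it to the UTXO set.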

OP_RETURN and other approaches rely on adding data to the outputs. Ordinals use something different. If you look at ordinary metal coins, each one is similar to the other, making them fungible. But physical notes, despite being similar to one another, often have a serial number printed on them and can be non-fungible in some sense. This idea was replicated by Casey Rodarmor in 2023, when he devised a way to number each of the sats that are generated and transferred, making them identifiable to a meta-protocol that indexes and tracks them.

Once these sats are traceable, they are inscribed with data in the transaction that spends them. These inscriptions can be text, images, or even audio files, directly written into the Bitcoin blocks. The Ordinal Theory Handbook states that "individual satoshis can be inscribed with arbitrary content, creating unique Bitcoin-native digital artifacts that can be held in Bitcoin wallets and transferred using Bitcoin transactions. Inscriptions are as durable, immutable, secure, and decentralized as Bitcoin itself."

NFTs on BTC add both metadata and actual content. This means that the block size had to be sufficiently large to accommodate this along with the transaction data. This became possible due to updates unrelated to this problem.

SegWit was a major protocol upgrade for Bitcoin in 2017. Its primary goal was to solve the scalability issues faced by the Bitcoin network, mainly by addressing the block size limit problem.

Regular Bitcoin blocks are composed of two types of data: witness data (signatures) and transaction data. Before SegWit, they collectively had to fit within 1 MB.

The key innovation of SegWit was the separation of signature data (witness information) from the transaction data within a block. This segregation increased the capacity of each block without needing to increase the physical block size limit. By moving the witness data outside of the main transaction block, SegWit freed up space within each block. SegWit also introduced a new concept called "virtual bytes" to measure transaction size more accurately. This replaced the raw block size with a weight-based calculation: witness data was given a weight of 1 and transaction data a weight of 4, which means the witness data is discounted by 75%.
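A minimal sketch of that weight arithmetic, with illustrative byte counts, looks like this:

```python
import math

def virtual_size(base_bytes: int, witness_bytes: int) -> int:
    """Compute a transaction's virtual size (vbytes) from the SegWit weight
    rule: non-witness bytes count 4 weight units each, witness bytes count 1
    (a 75% discount), and vsize = ceil(weight / 4)."""
    weight = 4 * base_bytes + witness_bytes
    return math.ceil(weight / 4)

# Example: 200 non-witness bytes plus 10,000 bytes of witness (inscription) data
print(virtual_size(200, 10_000))  # 2700 vbytes, instead of 10,200 raw bytes
```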

This separate data, 75% cheaper than the transaction data, looked lucrative to anyone trying to add arbitrary data to the blocks.

The Taproot upgrade was activated in November 2021 and was designed to improve Bitcoin's privacy, efficiency, and scalability. It introduced several new features to the Bitcoin protocol, including Schnorr signatures and Merkelized Abstract Syntax Trees (MAST).

Schnorr signatures allowed more complex Bitcoin transactions to be aggregated and treated as a single transaction. This reduced the amount of data needed for each transaction, further optimizing the space within each block.

Collectively, Taproot made the transaction data more efficient and compact, while the SegWit update separated out the witness data, making it 75% cheaper to add data within it.

The process of creating inscriptions used to be difficult and required developer experience. Developers had to download and sync the BTC chain, run an ORD client on top of it, and finally inscribe the data using the CLI. However, with the emergence of inscription service providers like OrdinalsBot, the process has become much simpler for average users. These providers offer a user-friendly frontend where users can upload data or select parameters for BRC20 tokens. They can also specify the BTC address where they want to receive the inscribed sat and make the payment using Lightning or other convenient methods.

On the backend, the Bitcoin node runs a software called ORD, developed by Casey Rodarmor, which enables it to add data and metadata to the BTC block in the witness section. Indexing also requires the BTC node to run the ORD client on top.

Inscribing is a 2-step process...

Numbering of the Ordinals is done by the ORD client that sits on top of the Bitcoin node. The ORD protocol assigns sequential numbers to the sats in each block using an algorithm, starting with the Genesis block. Every newly generated sat, created as part of the mining reward for each block, is numbered in order after accounting for all the sats that came before. For example, the Genesis block had a reward of 50 BTC, which constituted the first 5 billion sats in the output. In subsequent blocks, the new sats are numbered based on the rewards for that specific block. For instance, if the block following the Genesis block had a reward of 1 BTC, the sats in that reward would be numbered 5,000,000,000 to 5,099,999,999.
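A small sketch of this numbering scheme, using Bitcoin's standard subsidy schedule, is shown below; the function names are ours for illustration, not the ORD client's:

```python
HALVING_INTERVAL = 210_000            # blocks between subsidy halvings
INITIAL_SUBSIDY = 50 * 100_000_000    # 50 BTC = 5,000,000,000 sats

def block_subsidy(height: int) -> int:
    """Mining reward (in sats) for the block at the given height."""
    return INITIAL_SUBSIDY >> (height // HALVING_INTERVAL)

def first_ordinal(height: int) -> int:
    """Ordinal number of the first sat minted in the block at `height`,
    i.e. the count of all sats created in earlier blocks."""
    return sum(block_subsidy(h) for h in range(height))

print(first_ordinal(0))  # 0: the genesis block mints sats 0 .. 4,999,999,999
print(first_ordinal(1))  # 5,000,000,000: where block 1's reward starts
```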

There are several indexers, such as Ordinals.com, Ordinals Indexer, and Bitcoin Oracle, that access each BTC block, parse it, and maintain a database of ordinals and their ownership. These indexers track transactions in each block, monitor changes in ownership for each sat, and update the index accordingly. The index is essentially a large database that stores information about ownership, block ID, and other details for each sat.

Once the SAT has been numbered and identified, the next step is to add data to it.

An Inscription is typically stored in Bitcoin's "witness data", where a Bitcoin transaction's signatures & public keys reside ("Script Witness"). The inscription gets broken up into small parts so that it can go onto Bitcoin, but we "package" all those parts together in an "envelope". An envelope is the specific data structure that helps indexers like "ord" or OrdinalHub's "gord" identify & read Inscriptions.

An envelope helps an indexer look through the witness data on Bitcoin and determine 2 things:

The content of inscriptions is serialized using data pushes within unexecuted conditionals, also known as "envelopes". These envelopes consist of an OP_FALSE OP_IF ... OP_ENDIF structure, which wraps multiple data pushes. It is important to note that the BTC node never executes the content wrapped between the OP_FALSE OP_IF and OP_ENDIF conditionals. Because each data push is limited to 520 bytes, larger data requires the use of multiple pushes.
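The following sketch shows only the chunk-and-wrap idea described here; the real ord envelope also pushes protocol and content-type fields (such as the "ord" marker), which are omitted as out of scope:

```python
# Minimal sketch: content split into <=520-byte pushes wrapped in
# OP_FALSE OP_IF ... OP_ENDIF so the script never executes it.

OP_FALSE, OP_IF, OP_ENDIF = b"\x00", b"\x63", b"\x68"
OP_PUSHDATA1, OP_PUSHDATA2 = b"\x4c", b"\x4d"
MAX_PUSH = 520

def push(data: bytes) -> bytes:
    """Serialize a single script data push."""
    if len(data) <= 75:
        return bytes([len(data)]) + data
    if len(data) <= 255:
        return OP_PUSHDATA1 + bytes([len(data)]) + data
    return OP_PUSHDATA2 + len(data).to_bytes(2, "little") + data  # <= 520 here

def envelope(content: bytes) -> bytes:
    chunks = [content[i:i + MAX_PUSH] for i in range(0, len(content), MAX_PUSH)]
    return OP_FALSE + OP_IF + b"".join(push(c) for c in chunks) + OP_ENDIF

print(len(envelope(b"x" * 2000)))  # 2000 bytes of content plus a few bytes of framing
```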

The token ID of each satoshi is its sequential number, whereas the metadata of an Ordinals NFT is its inscription held within the witness data of a transaction.

NFTs created using Ordinals on Bitcoin are different from those on other blockchains. Ordinals inscribe data directly onto individual sats within the Bitcoin blockchain, making each NFT an inherent part of the blockchain itself. This eliminates the need for external links or storage. In contrast, most NFTs on platforms like Ethereum use smart contracts and often rely on external data hosting. This distinction makes Ordinals NFTs more integrated with the blockchain, enhancing their security and permanence. However, Ethereum's smart contract capabilities offer more flexibility, enabling complex interactions and features for NFTs. Bitcoin NFTs have better custody options for institutions, as there is no need for custody of a new standard. New solutions in the BTC NFT ecosystem are also emerging to improve portability and reduce the need for running full nodes, reducing friction as traction increases.

The number of transactions included in the BTC blocks has changed since the introduction of Ordinals and Inscriptions. The average number of transactions per block has increased by almost 50%, thanks to the utilization of Segwit space. Initially, the transactions were primarily focused on financial use cases, while those after January 2023 now include text/image-based inscription transactions.

The chart below shows how people have been using different types of ordinals. Ever since the BRC20 standard was introduced, it has been the most popular choice for inscriptions, surpassing all other formats of NFTs. This indicates a strong demand for BRC20 tokens compared to other types of NFTs.

In terms of the total number of inscriptions, even when there are more text-based inscriptions (BRC20 tokens), they take up relatively less block space than image-based inscriptions: BRC20 inscriptions account for roughly 20% of block data compared to 40-45% for image-related inscriptions. So we can say that BRC20 inscriptions don't take over the block space for financial transactions but rather occupy the unused space in the block.

The folks who use pictures or words to label things usually aren't keen on shelling out big bucks. Captions on images take up more space, while BRC20 labels that use text take up less space, but there are more of them. Miners have been making a solid 20% cut from these labels.

The starting price for an inscription can range from 1-2 sats/vbyte to more than 50 sats/vbyte. Looking at the data, we can see that the fees initially started at 1-2 sats/vbyte and gradually went up, settling in the 10-20 sats/vbyte range. This could be the standard fee for inscriptions that miners can expect in the long run. Compared to financial transactions, inscriptions are not time-sensitive, and users usually don't mind waiting for a few blocks. Inscription demand at 10-20 sats/vbyte represents buyers who are willing to take up any leftover space in each block. This means miners can enjoy a solid 20% increase in revenue.

The Bitcoin Ordinals ecosystem has experienced significant growth since its inception in January 2023, with the introduction of the Ordinals protocol marking a revolutionary step by enabling unique identification and inscription of data onto Bitcoin's sats. The emergence of collections like "Ordinal Punks" soon after showcased the demand for these digital artifacts, sparking interest and investment. Infrastructure growth followed, with marketplaces, wallets, and data analytics tools such as Ordinals Wallet and Ordinal Hub emerging to support trading, management, and insight into the burgeoning market. DeFi applications and services that simplify the inscription process have further expanded the ecosystem, making it more accessible and functional for a broader user base.

The first and most popular BRC20 token is ORDI, which started as a meme but now has a market cap of over $1 billion. Other BRC20 tokens include VMPX, MEME, BANKBRC, and PEPEBRC, which have been listed on Gate.io, the first exchange to support BRC20 trading. There are now over 75k BRC20 tokens.

The first token contract to be deployed was for the $ORDI token, with a limit of 1K tokens per mint and a 21M max supply (in homage to Bitcoin's max supply). The launch created some buzz in a sub-sector of the Bitcoin community, and in less than a day, all 21M $ORDI tokens had been minted.

Comparison to Ordinals in terms of number of transactions and fees

The total number of BRC20 transactions has reached a whopping 42 million, while non-BRC20 transactions are only at 6.5 million. The fees paid for BRC20 transactions also speak volumes - a staggering 3500 BTC, compared to a mere 700 BTC for non-BRC20 transactions.

BRC20 tokens started on March 9, 2023, when a pseudonymous Crypto Twitter user named @domo posted a thread theorizing a method called BRC20 that could create a fungible token standard on top of the Ordinals protocol.

In essence, the method was about inscribing text onto sats to create fungible tokens. The initial design only allowed for three different operations: deploying, minting, and transferring. The functions used to launch, mint, and transfer BRC20 tokens are essentially JSON objects; they are text that has to be inscribed onto the sats.
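For concreteness, here are the three operations as commonly documented for BRC20, serialized from Python dicts; the ticker and limits echo the $ORDI parameters mentioned elsewhere in this article, and the amounts are illustrative:

```python
import json

# The three BRC20 operations are plain JSON text inscribed onto sats.
deploy   = {"p": "brc-20", "op": "deploy",   "tick": "ordi", "max": "21000000", "lim": "1000"}
mint     = {"p": "brc-20", "op": "mint",     "tick": "ordi", "amt": "1000"}
transfer = {"p": "brc-20", "op": "transfer", "tick": "ordi", "amt": "100"}

for op in (deploy, mint, transfer):
    print(json.dumps(op))  # this text is what actually gets inscribed
```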

Step 1: The transfer function is inscribed onto a sat held by the user who wants to send tokens.

Step 2: The sat containing the transfer inscription is sent to the recipient.

Around 40M BRC20 tokens have been minted so far in three separate periods as we can see from the charts.

There are over 75k BRC20 tokens, as any user can create their own by inscribing sats. This process is not so different from the ICO boom of 2017.

Runes are designed to be simpler and more efficient than BRC20 tokens. They are also fungible tokens, issued directly on Bitcoin using the Runes protocol. The creator of Ordinals, Casey Rodarmor, proposed Runes with the intent to address some of the issues associated with BRC20, such as the excessive production of "junk" UTXOs that could potentially clog the network. Runes aims to streamline the token issuance process on Bitcoin by using a UTXO-based protocol that avoids the generation of these unwanted UTXOs. In contrast to other fungible token protocols for Bitcoin, Runes do not require off-chain data or a native token to operate, and they use the OP_RETURN function to store data on the blockchain, differentiating them from how Ordinals and BRC20 tokens operate.

Recursive inscriptions work by referencing data from existing inscriptions, allowing for the creation of new inscriptions without having to upload all the original data again. This means that larger and more complex data structures can be stored on-chain, as new inscriptions can "call" data from pre-existing ones, daisy-chaining information together to form comprehensive files. This allows for significant savings on block space and potentially reduces transaction fees because only incremental data needs to be added to the blockchain.

The benefits of recursive inscriptions include greater efficiency in storing large files, the potential to represent data beyond the 4MB limit, the creation of new types of on-chain software, and cost savings in transaction fees. They can also introduce more complex functionalities like smart contracts to Bitcoin.

Projects like OnChainMonkey have already utilized recursive inscriptions to create more complex digital artifacts, like 3D NFT art on Bitcoin.
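A heavily simplified sketch of the idea follows; the /content/<inscription_id> path reflects the ord recursion convention, and the inscription IDs are placeholders, not real ones:

```python
# Illustrative sketch: a recursive inscription is just content (here, HTML)
# that references data already inscribed on-chain instead of re-uploading it.

parts = [
    "abc123...i0",  # hypothetical inscription ID of a shared 3D model library
    "def456...i0",  # hypothetical inscription ID of a texture atlas
]

html = "<html><body>\n"
for inscription_id in parts:
    html += f'  <script src="/content/{inscription_id}"></script>\n'
html += "</body></html>"

print(html)  # only this small file needs to be inscribed; the heavy data is reused
```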

The PIPE protocol is a method for creating and managing assets on the Bitcoin network. It was developed by Benny, who was inspired by Casey's Runes protocol and Domo's BRC20 standard.

One of the main features of the PIPE protocol is its ability to support both fungible and non-fungible tokens on the Bitcoin network. This means that users can create and trade unique digital assets, such as art or collectibles, as well as more traditional tokens, like cryptocurrencies or utility tokens.

The PIPE protocol is based on the concept of responsible UTXO management. This ensures that the Bitcoin blockchain remains efficient while allowing users to create and manage their digital assets. This is achieved through a process called "token-controlled access," which gives users control over who can access their digital assets and how they can be used.

Taproot Assets offers the capability to create various types of assets on the Bitcoin network, ranging from collectibles to regular, fungible assets. Essentially, there are no constraints on what these assets can symbolize: they could be anything from stablecoins and company shares to event tickets, ownership rights, or even art.

Here are some potential use cases that Taproot Assets bring:

- Introduce stablecoins, a primary focus of Lightning Labs
- Enable asynchronous receipt & multi-recipient transactions
- Facilitate Bitcoin DeFi applications with the Lightning Network
- Manage ERC721 and ERC1155 assets without storing metadata on-chain

Inscription events have recently put a lot of strain on almost every chain, causing some networks to experience outages and significant spikes in transaction fees.

On December 15, the Arbitrum sequencer experienced an hour and a half of downtime. This was due to a large number of users spamming the L2 with 'mint' transactions to acquire the FAIR20 inscriptions token. The recent surge in inscriptions on Arbitrum resulted in a significant increase in network transactions, reaching 5.1M in a single day. The number of inscription transactions was more than 10 times higher compared to non-inscription transactions. During the past 24 hours, the Arbitrum L2 Sequencer Inbox contract burned the highest amount of ETH, totalling 795.7 ETH. The Arbitrum team confirmed that the sustained surge of inscriptions caused the sequencer to stop relaying transactions properly. Since then, the network has returned to normal.

On December 16, an inscription event on zkSync Era caused a large increase in transactions over 38 hours. For nearly 14 hours straight, the network handled 150 TPS, peaking at 187 TPS, with an average TX cost of $0.12. One of the system's primary databases was configured with fewer total connections than needed during prolonged 150 TPS load. This led to 15 minutes of downtime, which was fixed immediately once the team restarted the database. The block explorer couldn't keep up at 150 TPS, and many users took this as a sign that the network was delayed, even though that wasn't the case and transactions were continuing to go through; the explorer was just slow to index them.

In Avalanche, the new inscriptions are the ASC20 tokens like $BEEG, $dino, $QQ, $AVAV, and $AVAST. The total ASC20 market cap stood at around $70M. Avalanche C-Chain gas spending exceeded $20 million in the past seven days. In the past 7 days, inscription activity accounted for 72.3% of the gas consumption and 86.5% of the transactions on the Avalanche C-Chain. Gas fees briefly spiked past 5,000 nAVAX ($4.5) when the Trader Joe co-founder released BEEG inscription minting. But overall, the AVAX chain turned deflationary for a couple of days, burning more fees than the emissions.

Metaplex, the NFT protocol on Solana, has launched Metaplex Inscriptions and Engravings, a new standard for fully on-chain and immutable digital assets on Solana. Metaplex Inscriptions allow you to store an asset's metadata and media fully on Solana, removing any external trust assumptions and unlocking greater composability for on-chain attributes and smart contracts. $sols has recently become the top NFT collection on Solana, surpassing other competitors in market cap. Sols-SPL20 was a fully public and freely available minting event. It took approximately 4 hours to sell out. Now, it holds the top position among Solana's NFTs.

Visit link:

Binance Labs Announces Investment In Three Projects From Season 6 Incubation Program - BSC NEWS


Physicists detect elusive ‘Bragg glass’ phase with machine learning tool | Cornell Chronicle – Cornell Chronicle

Cornell quantum researchers have detected an elusive phase of matter, called the Bragg glass phase, using large volumes of X-ray data and a new machine learning data analysis tool. The discovery settles a long-standing question of whether this almost, but not quite, ordered state of Bragg glass can exist in real materials.

Crystal structure of pure ErTe3

The paper, "Bragg glass signatures in PdxErTe3 with X-ray diffraction Temperature Clustering (X-TEC)," was published in Nature Physics on Feb. 9. The lead author is Krishnanand Madhukar Mallayya, a postdoctoral researcher in the Department of Physics in the College of Arts and Sciences (A&S). Eun-Ah Kim, professor of physics (A&S), is the corresponding author. The research was conducted in collaboration with scientists at Argonne National Laboratory and at Stanford University.

The researchers present the first evidence of a Bragg glass phase as detected from X-ray scattering, which is a probe that accesses the entire bulk of a material, as opposed to just the surface of a material, in a systematically disordered charge density wave (CDW) material, PdxErTe3. They used comprehensive X-ray data and a novel machine learning data analysis tool, X-ray Temperature Clustering (X-TEC).

"Despite its theoretical prediction three decades ago, concrete experimental evidence for CDW Bragg glass in the bulk of the crystal remained missing," Mallayya said.

Read the full story on the College of Arts and Sciences website.

Read more:
Physicists detect elusive 'Bragg glass' phase with machine learning tool | Cornell Chronicle - Cornell Chronicle


Cracking the Code: How Uber Masters ETA Calculation on a Massive Scale – Medium

Predicting ETAs

Uber's main goal in predicting ETA was to be reliable. This means that the estimated time of arrival should be very close to the actual time, and this accuracy should be consistent across different places and times.

The simplest approach that comes to mind for predicting ETA is to use map data, such as the haversine distance (the great-circle distance between two points), and apply a scaling factor for speed. However, this method is not sufficient, as there can be a significant gap between the predicted and actual ETA, since people don't travel in a straight line between two points.

To address this issue, Uber has incorporated additional layers such as routing, traffic information, map matching, and machine learning algorithms to enhance the reliability of the predicted ETA.

Let's dive into these additional layers.

Problem statement: Build a large-scale system that computes the route from origin to destination with the least cost and low latency.

To achieve this, they represent the physical map as a graph.

Every road intersection represents a node, and each road segment is represented as a directed edge.

To determine the ETA, they need to find the shortest path in this directed weighted graph. Dijkstra's algorithm is commonly used for this purpose, but its time complexity is O(n log n), where n is the number of road intersections, or nodes, in the graph.

Considering the vast scale of Uber's operations, such as the half a million road intersections in the San Francisco Bay Area alone, Dijkstra's algorithm becomes impractical.
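For reference, a minimal baseline version of that textbook search (not Uber's production code) might look like the sketch below, where the road graph and costs are purely illustrative:

```python
import heapq

def dijkstra(graph, origin, destination):
    """Baseline shortest-path search over a weighted road graph, where
    graph[node] is a list of (neighbor, edge_cost) pairs. This is the
    textbook approach that becomes too slow at city scale."""
    best = {origin: 0.0}
    queue = [(0.0, origin)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == destination:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return float("inf")

roads = {"A": [("B", 4.0), ("C", 2.0)], "C": [("B", 1.0)], "B": []}
print(dijkstra(roads, "A", "B"))  # 3.0, via C
```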

To address this issue, Uber partitions the graph and precomputes the best path within each partition.

Interacting with the boundaries of graph partitions alone is sufficient to discover the optimal path.

Picture a dense graph represented on a circular map.

To find the best path between two points in a circle, traditionally, every single node in the circle needs to be traversed, resulting in a time complexity proportional to the area of the circle (π * r²).

However, by partitioning and precomputing, efficiency is improved. It becomes possible to find the best path by interacting only with the nodes on the circle's boundary, reducing the time complexity to the perimeter of the circle (2 * π * r).

In simpler terms, this means that the time complexity for finding the best path in the San Francisco Bay Area has been reduced from 500,000 to 700.

Once we have the route, we need to determine the travel time. To do that, we require traffic information.

Consider traffic conditions when determining the fastest route between two points.

Traffic depends on factors like time of day, weather, and the number of vehicles on the road.

They used traffic information to determine the edge weights of the graph, resulting in a more accurate ETA.

They integrated historical speed data with real-time speed information to enhance the accuracy of traffic updates, as the inclusion of additional traversal data contributes to more precise traffic information.

Before moving forward, there were two critical questions that needed addressing:

1. Validity of Real-time Speed: Too short a duration might imply a lack of understanding of the current road conditions. Conversely, if it's too long, the data becomes outdated.

2. Integrating Historical and Real-time Speeds: Striking a balance here involves a tradeoff between bias and variance. Prioritizing real-time data yields less bias but more variance. Emphasizing historical data introduces more bias but reduces variance. The challenge lies in finding the optimal balance between the two.
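As a rough illustration of that tradeoff (not Uber's actual formula), a simple weighted blend might look like the sketch below; the weight `alpha` and the speeds are illustrative assumptions:

```python
def blended_speed(realtime_kph, historical_kph, alpha=0.7):
    """Weighted blend of a recent real-time speed estimate with the
    historical average for the same road segment and time of day.
    A higher alpha trusts fresh data more (less bias, more variance);
    a lower alpha leans on history (more bias, less variance)."""
    if realtime_kph is None:      # no recent probe data for this segment
        return historical_kph
    return alpha * realtime_kph + (1 - alpha) * historical_kph

print(blended_speed(22.0, 35.0))   # congestion right now pulls the estimate down
print(blended_speed(None, 35.0))   # fall back to history when real-time is missing
```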

GPS signals can be less reliable and less frequent, especially when a vehicle enters a tunnel or an area with many tall buildings that can reflect the GPS signals.

Also, mobile GPS signals are usually close to the street segments but not perfectly on it, which makes it difficult to get the exact street coordinates.

Map matching is like connecting the dots! Imagine you have red dots representing raw GPS signals.

Now, the goal is to figure out which road segments these dots belong to. That's where map matching comes in: it links those red dots to specific road segments.

The resulting blue dots show exactly where those GPS signals align with the road segments. It's like fitting the puzzle pieces together to see the actual path on the map.

They use the Kalman filter for map matching. It takes GPS signals and matches them to road segments.

Besides this, they use the Viterbi algorithm, a dynamic programming approach, to find the most probable sequence of road segments.
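A toy sketch of the Viterbi idea follows, with made-up emission and transition scores rather than Uber's real models; the two "streets" are one-dimensional positions purely for illustration:

```python
def viterbi_match(gps_points, segments, emission, transition):
    """Toy Viterbi decoder for map matching. `emission(point, seg)` scores how
    well a road segment explains a GPS fix, `transition(prev, seg)` scores
    route continuity; both return log-probabilities. The result is the most
    probable sequence of segments, one per GPS point."""
    scores = {s: emission(gps_points[0], s) for s in segments}
    backpointers = []
    for point in gps_points[1:]:
        new_scores, pointers = {}, {}
        for s in segments:
            prev = max(segments, key=lambda p: scores[p] + transition(p, s))
            new_scores[s] = scores[prev] + transition(prev, s) + emission(point, s)
            pointers[s] = prev
        backpointers.append(pointers)
        scores = new_scores
    path = [max(segments, key=scores.get)]
    for pointers in reversed(backpointers):
        path.append(pointers[path[-1]])
    return list(reversed(path))

# Tiny illustration: two parallel streets, noisy fixes closer to "main_st".
def emission(point, seg):
    return -abs(point - {"main_st": 0.0, "side_st": 5.0}[seg])  # distance penalty

def transition(prev, seg):
    return 0.0 if prev == seg else -2.0  # discourage jumping between streets

print(viterbi_match([0.4, 0.1, 0.6], ["main_st", "side_st"], emission, transition))
# ['main_st', 'main_st', 'main_st']
```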

Uber's initial aim was to provide reliable ETA information universally. Reliability has been discussed above; now, let's shift the focus to how Uber ensures availability everywhere.

Uber has observed that ETA predictions in India are less accurate compared to North America due to systematic biases or inefficiencies. This is where machine learning (ML) can play a crucial role by capturing variations in:

1. Regions
2. Time
3. Trip types
4. Driver behavior, etc.

By leveraging ML, Uber aims to narrow the gap between predicted ETAs and actual arrival times, thereby enhancing the overall reliability and user experience.

Let's define a few terms, and then we will better understand their decisions.

1. Linear Model: Definition: A linear model assumes a linear relationship between the input variables (features) and the output variable. It follows the equation (y = mx + b), where (y) is the output, (x) is the input, (m) is the slope, and (b) is the intercept. Example: Linear regression is a common linear model used for predicting a continuous outcome.

2. Non-linear Model: Definition: A non-linear model does not assume a linear relationship between the input and output variables. It may involve higher-order terms or complex mathematical functions to capture the patterns in the data. Example: Decision trees, neural networks, and support vector machines with non-linear kernels are examples of non-linear models.

3. Parametric Model: Definition: A parametric model makes assumptions about the functional form of the relationship between variables and has a fixed number of parameters. Once the model is trained, these parameters are fixed. Example: Linear regression is parametric since it assumes a linear relationship with fixed coefficients.

4. Non-parametric Model: Definition: A non-parametric model makes fewer assumptions about the functional form and the number of parameters in the model. It can adapt to the complexity of the data during training. Example: k-Nearest Neighbors (KNN) is a non-parametric algorithm, as it doesn't assume a specific functional form and adapts to the data during prediction based on the local neighbourhood of points.

Since ETA is influenced by factors like location and time of day, and there is no predefined relationship between the variables, they opted for non-linear, non-parametric machine learning models.
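As a hedged illustration only (Uber's actual models and features are not described in this article), a non-linear, non-parametric regressor correcting a routing-engine ETA could be sketched with scikit-learn; every feature and value below is invented for the example:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Features: routing-engine ETA (min), hour of day, region id, trip type id.
# Target: the actual observed travel time, so the model learns systematic
# corrections (e.g. region- or time-specific bias) on top of the route ETA.
X = np.array([
    [12.0,  8, 0, 0],
    [12.0, 18, 0, 0],
    [25.0,  8, 1, 1],
    [25.0, 18, 1, 1],
])
y = np.array([13.0, 17.5, 24.0, 33.0])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[12.0, 18, 0, 0]]))  # corrected ETA for an evening trip
```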

In their terms, "With great (modelling) power comes great (reliability) responsibility!" So, they have fallback ETAs to avoid system downtime situations.

They also monitor ETA to prevent issues for both internal and external consumers.

Link:
Cracking the Code: How Uber Masters ETA Calculation on a Massive Scale - Medium


AI What is it good for? ‘Machine Learning’ at Central Square Theatre takes a look – WBUR News

The longer one lives, the more opportunities there are to act as a caregiver for a loved one in need. Though it's not a glamorous job (it's downright difficult), luckily, there are technological tools that can help. Reminders to take medicine or to call a doctor can be set with Siri or Alexa, family members can use cameras to converse and to ensure a loved one's safety, and there are multiple ways that artificial intelligence (AI) can be used to perform tasks, make predictions and even get speedier diagnoses of various diseases, particularly cancer.

But even with all its promise, how much should technology take on? Will privacy and other ethical lines continue to blur? Does technology's presence in health care factor in that some people might do better than their prognosis? What of hope and faith? Questions like these shape Francisco Mendoza's probing play "Machine Learning" (through Feb. 25 at Central Square Theater), where a son aims to help his father, who is battling cancer and a penchant for alcohol, with an app he named Arnold (a perfectly machine-sounding Matthew Zahnzinger). The Central Square production was produced in partnership with Teatro Chelsea, which Rivera helms, and the Catalyst Collaborative@MIT.

What's interesting about the bilingual show is that it doesn't attempt to present definitive answers to the imminent questions about technology's use in health care. However, through the lens of a father and son (Gabriel and Jorge) struggling to connect, it does present a balanced case, not so heavily laden with tech speak as to be unapproachable, that shows how leaning too much on tech alone could help or hurt.

Machine Learning (ML) is a type of AI that isn't necessarily programmed to perform a specific task but can learn to make decisions or predictions over time as it's exposed to more data. In the play, Jorge (Armando Rivera) lands a paid fellowship and uses his app, Arnold (ML), to help manage his dad, Gabriel (Jorge Alberto Rubio), from pills to predictions and recommendations.

It's a solid production under Gabriel Vega Weissman's direction. Multiple suspended screens are aglow with a green churning image when Arnold speaks. The actor voicing Arnold is offstage. The clever (and on-genre) use of video and projections by SeifAllah Salotto-Cristobal and white screen-shaped squares that hide furniture and other props (courtesy of scenic designer Janie E. Howland and props person Julia Wonkka) bring the audience through multiple settings.

There are even a few telling visits into the past, including a terrifying car accident, Gabriel and a young Jorge watching "The Terminator," and Jorge's visit to his dad's house after Gabriel and his mom divorced, that highlight how the chasm between them has widened and seems uncrossable. In these scenes, the acting chops of a young Jorge, wonderfully rendered by Xavier Rosario, get to shine.

Despite their challenges, Jorge and Gabriel still love each other. What Jorge lacks when it comes to expressing sentimental emotion, he funnels into monitoring and secretly hoping to save his dad. But what Jorge forgets, like many of us sometimes do, is that we all have a responsibility in relationships. Everyone has the choice to talk about what ails them, to unburden themselves, and often, to forgive. Not doing so can lead to torment.

But most of all, Jorge momentarily forgets that the use of tech doesn't mean that the action or inaction of AI will always be accurate or helpful or that it will always do what one hopes. After all, the data AI is driven by is derived from humans, with all our innovation and intelligence as well as our biases and shortcomings.

Machine Learning at Central Square Theater shows through Feb. 25. The play was produced in partnership with Teatro Chelsea and the Catalyst Collaborative@MIT.

Here is the original post:
AI What is it good for? 'Machine Learning' at Central Square Theatre takes a look - WBUR News


Data, Artificial Intelligence (AI), and Machine-Learning Are the Cornerstones of Prosperous Real Estate Portfolios – ATTOM Data Solutions

The only way for investors to achieve sustained outperformance relative to the market and their peers is if they have a unique ability to uncover material facts that are almost completely unknown to everybody else.

Mark J. Higgins, CFA, CFP, CFA Institute

The best investors have an uncanny ability to identify undervalued stocks, the hidden gems. They see a stock that will outperform the market where most investors see nothing at all. The housing market is not the stock market, but some investors manage to jump on the best deals that others miss, and they are tapping data solutions to do so.

In this article, we explore how data, machine learning, and artificial intelligence-powered solutions are now integral to real estate investing at every stage. From property searches and deal negotiations to project and portfolio management, real estate and property AI solutions can help investors to make data-driven decisions and be more profitable.

To outperform the market, you need to identify undervalued assets. That means assessing an asset's future potential and understanding all the variables that might affect your investment over time.

In the case of real estate, the variables include how much cashflow an asset can produce from future rentals, whether units need upgrades or refurbishments, the market demand for properties, economic variables, such as employment, crime rates, and interest rates, any risks to the property due to climate or hazards, and more.

Finding such data used to be time-intensive, if it could be found at all, and much of it might be overlooked in the rush to seal a deal. Today, however, investors have all of this information accessible from data platforms and APIs. Investors can tailor analytics to focus on the criteria they care about and still make fast investment decisions.

It used to be that real estate investors relied on networking in their locales to find out about potential projects. The geographic areas for sourcing properties were limited. Real Estate API data platforms have removed boundary limitations by providing real estate and property data on a national level and down to the granular street level. The world has opened up for investors, and the only boundaries investors worry about now are neighborhood boundary lines for school districts, demographics, and local house prices.

The incredible growth in the Proptech sector, or property technology, has created rapid saturation. Proptech refers to digital solutions and startups providing tools to real estate professionals, asset managers, and property owners. They facilitate the researching, buying, selling, and managing of real estate. According to Globe Newswire, the worldwide PropTech market was valued at billions of dollars and is growing rapidly: the market size was around USD 19.5 billion in 2022 and is predicted to grow to around USD 32.2 billion by 2030.

Examples of these cutting-edge technologies are ATTOM, a property and real estate data provider; Zillow, another dataset provider; Opendoor, a digital platform for buying and selling homes; and Homelight, which matches buyers and sellers. Other players include Axonize, a Smart Building Software as a Service (SaaS) that uses IoT to help property owners optimize energy consumption, reduce costs, and improve space utilization. Home365 is a property management solution that offers vacancy insurance, rental listings, and tenant management and maintenance.

Before the rise of Proptech and APIs, conventional analytical methods required investors and analysts to wade through millions of records or data points to discern patterns. By the time an investor arrived at a decision, and probably a risky one, the best opportunities were gone.

Let's say a developer is looking for parcel zones suitable for development. Using advanced analytics based on artificial intelligence (AI) and machine learning, the developer can collect hyperlocal community data, expected land use, government planning data, and local economic data to assess the potential ROI of a parcel.

An investor might be looking for a commercial property investment. Combining Yelp data with property price data might show that having two upscale restaurants within a quarter of a mile correlates with higher property prices, while more than four correlates with lower prices. This type of information is an example of how an investor might use data to identify investment targets quicker than their competitors.

AI and machine-learning solutions parse an unlimited amount of information that is the right mix of community, pricing, and location-based data to provide results.

Real Estate Data providers like ATTOM offer expansive data about properties, market trends, and historical sales. They offer neighborhood data, climate data, and other valuable data that can be used for predictive modeling to manage risk.

The investment decision is just one area where data has changed real estate investing. Property owners also use technology for project management.

Just as identifying potential real estate investments is now a data and solution-driven process, property management is also now digitalized. Solutions like Appfolio and Doorloop track property performance metrics like occupancy rates, maintenance costs, and rental income for investors.

Many of these solutions, including AppFolio and Buildium, automate rent collection, maintenance tracking, and will take care of communications between management and tenants using chatbots and automated emails.

Poring over Excel spreadsheets and risk ratios and following due diligence used to be the way to a robust, risk-mitigated portfolio. But digital solutions like BiggerPockets and DealCheck will analyze deals, assess ROI, and evaluate risk for you. They will even educate you on investing and team you up with agents and brokers that serve your niche.

DealCheck's software analyzes deals such as rental property acquisitions, flips, and multi-family buildings. It will estimate profits and configure deal parameters for you.

Granted, these solutions are limited in that they cannot structure an investing strategy. For that, investors must decide their niche or direction and find projects that follow their business model. Then, data analytics can support that strategic direction with long-term roles and goals for projects and investments.

Let's say an investor wants to build a portfolio of multifamily buildings: machine learning algorithms can identify neighborhoods with potential based on macro data and hyperlocal forecasts, such as the demand for multifamily housing and government subsidies. This allows the asset manager to identify the undervalued properties, the hidden gems.

It's true that institutional investors have the resources to hire teams of experts to build models and create architecture. They can hire translators to apply findings to actions. But just like online investing platforms democratized stock investing, data APIs are leveling the playing field for real estate.

Pre-digital transformation, only investors teamed with connected and informed real estate brokers could lead real estate investing. Today, data and solutions providers have opened up a world where nationwide property data is at their fingertips and informed analytical reports are mitigating portfolio risk.

Data, AI, and machine-learning solutions have opened the gates for savvy real estate investors. They are helping to narrow down a competitive field that has reached global proportions.

Learn more about how ATTOM's data can power your portfolio and reveal the hidden gems.

Read the original:
Data, Artificial Intelligence (AI), and Machine-Learning Are the Cornerstones of Prosperous Real Estate Portfolios - ATTOM Data Solutions


How symmetry can come to the aid of machine learning – MIT News

Behrooz Tahmasebi, an MIT PhD student in the Department of Electrical Engineering and Computer Science (EECS) and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), was taking a mathematics course on differential equations in late 2021 when a glimmer of inspiration struck. In that class, he learned for the first time about Weyl's law, which had been formulated 110 years earlier by the German mathematician Hermann Weyl. Tahmasebi realized it might have some relevance to the computer science problem he was then wrestling with, even though the connection appeared on the surface to be thin, at best. Weyl's law, he says, provides a formula that measures the complexity of the spectral information, or data, contained within the fundamental frequencies of a drum head or guitar string.

Tahmasebi was, at the same time, thinking about measuring the complexity of the input data to a neural network, wondering whether that complexity could be reduced by taking into account some of the symmetries inherent to the dataset. Such a reduction, in turn, could facilitate as well as speed up machine learning processes.

Weyl's law, conceived about a century before the boom in machine learning, had traditionally been applied to very different physical situations, such as those concerning the vibrations of a string or the spectrum of electromagnetic (black-body) radiation given off by a heated object. Nevertheless, Tahmasebi believed that a customized version of that law might help with the machine learning problem he was pursuing. And if the approach panned out, the payoff could be considerable.

He spoke with his advisor, Stefanie Jegelka, an associate professor in EECS and an affiliate of CSAIL and the MIT Institute for Data, Systems, and Society, who believed the idea was definitely worth looking into. As Tahmasebi saw it, Weyl's law had to do with gauging the complexity of data, and so did this project. But Weyl's law, in its original form, said nothing about symmetry.

He and Jegelka have now succeeded in modifying Weyl's law so that symmetry can be factored into the assessment of a dataset's complexity. "To the best of my knowledge," Tahmasebi says, "this is the first time Weyl's law has been used to determine how machine learning can be enhanced by symmetry."

The paper he and Jegelka wrote earned a Spotlight designation when it was presented at the December 2023 Conference on Neural Information Processing Systems, widely regarded as the world's top conference on machine learning.

"This work," comments Soledad Villar, an applied mathematician at Johns Hopkins University, "shows that models that satisfy the symmetries of the problem are not only correct but also can produce predictions with smaller errors, using a small amount of training points. [This] is especially important in scientific domains, like computational chemistry, where training data can be scarce."

In their paper, Tahmasebi and Jegelka explored the ways in which symmetries, or so-called invariances, could benefit machine learning. Suppose, for example, the goal of a particular computer run is to pick out every image that contains the numeral 3. That task can be a lot easier, and go a lot quicker, if the algorithm can identify the 3 regardless of where it is placed in the box, whether it's exactly in the center or off to the side, and whether it is pointed right-side up, upside down, or oriented at a random angle. An algorithm equipped with the latter capability can take advantage of the symmetries of translation and rotation, meaning that a 3, or any other object, is not changed in itself by altering its position or by rotating it around an arbitrary axis. It is said to be invariant to those shifts. The same logic can be applied to algorithms charged with identifying dogs or cats. A dog is a dog is a dog, one might say, irrespective of how it is embedded within an image.

The point of the entire exercise, the authors explain, is to exploit a dataset's intrinsic symmetries in order to reduce the complexity of machine learning tasks. That, in turn, can lead to a reduction in the amount of data needed for learning. Concretely, the new work answers the question: How many fewer data are needed to train a machine learning model if the data contain symmetries?

There are two ways of achieving a gain, or benefit, by capitalizing on the symmetries present. The first has to do with the size of the sample to be looked at. Let's imagine that you are charged, for instance, with analyzing an image that has mirror symmetry, the right side being an exact replica, or mirror image, of the left. In that case, you don't have to look at every pixel; you can get all the information you need from half of the image, a factor of two improvement. If, on the other hand, the image can be partitioned into 10 identical parts, you can get a factor of 10 improvement. This kind of boosting effect is linear.

To take another example, imagine you are sifting through a dataset, trying to find sequences of blocks that have seven different colors: black, blue, green, purple, red, white, and yellow. Your job becomes much easier if you don't care about the order in which the blocks are arranged. If the order mattered, there would be 5,040 different combinations to look for. But if all you care about are sequences of blocks in which all seven colors appear, then you have reduced the number of things, or sequences, you are searching for from 5,040 to just one.
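A few lines of Python make the 5,040-to-1 reduction concrete; the colors are the ones listed above and the counting is exact:

```python
from itertools import permutations

colors = ("black", "blue", "green", "purple", "red", "white", "yellow")

# Order-sensitive search target: every distinct arrangement counts separately.
print(len(set(permutations(colors))))   # 5040 sequences to look for

# Order-invariant search target: canonicalize by ignoring order entirely.
print(len({frozenset(p) for p in permutations(colors)}))  # 1 equivalence class
```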

Tahmasebi and Jegelka discovered that it is possible to achieve a different kind of gain, one that is exponential, that can be reaped for symmetries that operate over many dimensions. This advantage is related to the notion that the complexity of a learning task grows exponentially with the dimensionality of the data space. Making use of a multidimensional symmetry can therefore yield a disproportionately large return. "This is a new contribution that is basically telling us that symmetries of higher dimension are more important because they can give us an exponential gain," Tahmasebi says.

The NeurIPS 2023 paper that he wrote with Jegelka contains two theorems that were proved mathematically. "The first theorem shows that an improvement in sample complexity is achievable with the general algorithm we provide," Tahmasebi says. The second theorem complements the first, he added, showing that "this is the best possible gain you can get; nothing else is achievable."

He and Jegelka have provided a formula that predicts the gain one can obtain from a particular symmetry in a given application. A virtue of this formula is its generality, Tahmasebi notes: "It works for any symmetry and any input space." It works not only for symmetries that are known today, but it could also be applied in the future to symmetries that are yet to be discovered. The latter prospect is not too farfetched to consider, given that the search for new symmetries has long been a major thrust in physics. That suggests that, as more symmetries are found, the methodology introduced by Tahmasebi and Jegelka should only get better over time.

According to Haggai Maron, a computer scientist at Technion (the Israel Institute of Technology) and NVIDIA who was not involved in the work, the approach presented in the paper diverges substantially from related previous works, adopting a geometric perspective and employing tools from differential geometry. This theoretical contribution lends mathematical support to the emerging subfield of Geometric Deep Learning, which has applications in graph learning, 3D data, and more. The paper helps establish a theoretical basis to guide further developments in this rapidly expanding research area.

See the original post here:
How symmetry can come to the aid of machine learning - MIT News


Advancing Fairness in Lending Through Machine Learning – Federal Reserve Bank of Philadelphia

Our economy's financial sector is using machine learning (ML) more often to support lending decisions that affect our daily lives. While technologies such as these pose new risks, they also have the potential to make lending fairer. Current regulation limits lenders' use of ML and aims to reduce discrimination by preventing the use of variables correlated with protected class membership, such as race, age, or neighborhood, in any aspect of the lending decision. This research explores an alternative approach that would use an applicant's neighborhood to consciously reduce fairness concerns between low- and moderate-income (LMI) and non-LMI applicants. Since this approach is costly to lenders and borrowers, we propose concurrent use with more advanced ML models that soften some of these costs by improving model predictions of default. The combination of embracing ML and setting explicit fairness goals may help address current disparities in credit access and ensure that the gains from innovations in ML are more widely shared. To successfully achieve these goals, a broad conversation should continue with stakeholders such as lenders, regulators, researchers, policymakers, technologists, and consumers.

Read the rest here:
Advancing Fairness in Lending Through Machine Learning - Federal Reserve Bank of Philadelphia


MIT Researchers Make Breakthrough in AI and Machine Learning with Symmetry Exploitation – Medriva

In the rapidly evolving field of artificial intelligence (AI) and machine learning, a team of researchers from the Massachusetts Institute of Technology (MIT) have made a significant breakthrough. They have discovered that exploiting the symmetry within datasets can drastically reduce the amount of data required for training neural networks. This novel approach has profound implications for machine learning, AI, and data science, promising increased efficiency and potential applications in various industry sectors.

MIT researchers have made strides in streamlining data complexity in neural networks by adapting Weyl's law, a mathematical principle that deals with the distribution of eigenvalues. This innovative approach was presented at the prestigious December 2023 Neural Information Processing Systems conference, earning a Spotlight designation. The work is a distinct divergence from previous studies, as it bridges the gap between theoretical math and practical computing.

This innovation leverages symmetries to drastically lower the hurdles of machine learning, contributing significantly to AI's advancement toward refinement. It shows immense versatility and application potential in the rapidly expanding research area of Geometric Deep Learning.

The MIT researchers have developed an algorithm to detect transformations in data, demonstrating that the principle of symmetry is highly underutilized in neural networks. By using symmetry within datasets, they were able to reduce the quantity of data required for training neural networks, potentially enhancing the overall performance of predictive models.

This approach could revolutionize practical applications, improving efficiency and lowering computational costs in various industry sectors. It also highlights the potential for efficiency gains in training algorithms and the application of symmetry in improving machine learning models.

Besides this groundbreaking work, MIT researchers have also been involved in other significant projects. They have discovered a way to control the dancing patterns of magnetic bits using terahertz light in a nonlinear manner. This could revolutionize computing and provides new insights into how light can interact with spins. This work was primarily supported by the U.S. Department of Energy Office of Basic Energy Sciences, the Robert A. Welch Foundation, and the United States Army Research Office.

Additionally, the researchers have received a large grant to work on developing ingestible capsules to treat metabolic disorders. They are also making progress in quantum computing, developing a system to identify and control atomic-scale defects to build a larger system of qubits.

In conclusion, the work by MIT researchers in leveraging symmetry within datasets to enhance machine learning efficiency is a significant step forward in the field of AI and machine learning. It not only promises improved efficiency and reduced computational costs but also opens new avenues for practical applications in various sectors. The continuous and innovative research at MIT continues to push the boundaries of what is possible, paving the way for future advancements.

See more here:
MIT Researchers Make Breakthrough in AI and Machine Learning with Symmetry Exploitation - Medriva
