
Bitcoin price consolidation could be over, says trader as Ethereum, Polkadot hit all-time highs – Cointelegraph

Bitcoin (BTC) is in line for a continuation of its bull run, fresh analysis says, as BTC/USD retains its 2.2% daily gains.

As data from Cointelegraph Markets Pro and TradingView tracks Bitcoin's best day for a week, confidence in higher levels is building.

Earlier Tuesday, Nov. 2, the largest cryptocurrency abruptly exited a sideways trading channel to add almost $2,000 in just over an hour.

Amid calls of a potential breakout, popular trader Pentoshi believes that $62,500 may be buyers' only chance to buy the dip.

"$BTC any pullback towards 62.5k is a great bid zone to add," he advised Twitter followers alongside an accompanying chart.

With the Wall Street open around the corner, confidence is firmly evident among market participants, with analyst TechDev calling for a march beyond all-time highs of $67,100.

Related: Price analysis 11/1: BTC, ETH, BNB, ADA, SOL, XRP, DOT, SHIB, DOGE, LUNA

Ether (ETH), the largest altcoin by market capitalization, saw a fresh all-time high of $4,482 Tuesday, days after its previous record.

The top 10 cryptocurrencies by market cap were led by Polkadot (DOT), up 15% on the day at $52, itself an all-time high after its own October rally.

Bitcoin de facto hit its worst-case scenario monthly close for October, thus remaining on course to see almost $100,000 by the end of this month.

Bullish sentiment is also coming from a resurgent altcoin sphere.

Read more:
Bitcoin price consolidation could be over, says trader as Ethereum, Polkadot hit all-time highs - Cointelegraph

Read More..

Top 3 Price Prediction Bitcoin, Ethereum, Ripple: Cryptos stagnate but one readies to explode – FXStreet

Bitcoin price finds strong support against key Ichimoku levels, but buyers have seemingly dried up. Ethereum price may be positioning for a retest of the Tenkan-Sen or Kijun-Sen. Ripple is likely to outperform Bitcoin and Ethereum if it can find buyers.

Bitcoin price action shows the current daily candlestick could be developing a second consecutive inside bar. If that occurs, then a rare but powerful three-bar candlestick pattern known as a Bullish Squeeze Alert will be confirmed. According to Michael C. Thomsett, author of the Bloomberg Visual Guide to Candlestick Charting, the Bullish Squeeze Alert is one of the strongest reversal patterns in Japanese candlestick charting. One of the critical rules to this pattern is that the first bar is black/red, but the following two candlesticks can be of any color. If confirmed, the Bullish Squeeze Alert may presage Bitcoin's rise to the $77,000 value area.

BTC/USD Daily Ichimoku Chart

Any continued bullish momentum will be invalidated if Bitcoin price closes below the October 26th swing low of $58,317.

Ethereum price did not experience much buying follow-through after hitting new all-time highs on Wednesday. But it hasn't sold off either. The closest support level is the 61.8% Fibonacci expansion at $4,493. However, Ethereum may consider closing the gap between the bodies of the last three candlesticks and the Tenkan-Sen. If that were to occur, then Ethereum would dip down to $4,280. Absent any buying at the Tenkan-Sen, sellers may push Ethereum to the Kijun-Sen at $4,000. That is the final Ichimoku support level before Senkou Span B at $3,350.

ETH/USD Daily Ichimoku Chart

While a sell-off appears unlikely given the current bullish sentiment and inflows into the altcoin market, the dangers to the downside remain. But any new all-time high could likely invalidate any deeper retracement. This is especially true if Ethereum holds the $4,500 level and develops a new floor.

XRP price may surprise the cryptocurrency market this weekend. Out of the three cryptocurrencies discussed in this article, Ripple is best positioned to experience a massive rally. However, to do that, it must remain above the Cloud, and the Chikou Span must remain above the candlesticks and in open space. Ripple must close at or above $1.15 to keep the Chikou Span above the candlesticks.

XRP recently broke out above the Cloud on Wednesday with a substantial 7% gain. Since then, XRP price has traded lower. The current price action is likely just a retest of the breakout, a necessary move to begin any new uptrend. If Ripple finds buyers to maintain support at $1.15, market participants may see new two-month highs and a probable push towards $1.50.

XRP/USD Daily Ichimoku Chart

If Ripple were to face a sell-off and a daily close below $0.99, that might be the death knell for any hope of bullish momentum in 2021.

More here:
Top 3 Price Prediction Bitcoin, Ethereum, Ripple: Cryptos stagnate but one readies to explode - FXStreet

Read More..

Spending a String of 20,000 BTC: 2 Bitcoin Whale Transactions Move Over $1.2 Billion – Bitcoin News

On November 1, at block height 707,639, a blockchain parser caught two bitcoin whale transfers that moved approximately 19,876 bitcoin worth $1.2 billion in the mix of 2,819 transactions. Interestingly, the owner used a splitting mechanism similar to the one used by the old-school mining whale that blockchain parsers caught spending strings of 20 block rewards throughout 2020 and 2021.

Bitcoin whales are mysterious animals because in a blockchain world of pseudonymity we only see them when they move. Last year and this year as well, Bitcoin.com News has hunted a specific whale entity that spent thousands of bitcoin mined in 2010.

Every single time the whale spent the decade-old bitcoin that sat idle the whole time, the entity spent exactly 20 block rewards or 1,000 BTC. After the transfer, the wallets holding 1,000 BTC dispersed the funds into smaller-sized wallets.

According to the creator of btcparser.com, the close to 20K BTC transferred at block height 707,639 on November 1 shared similar splitting mechanics with the 20-block-reward awakenings. The blockchain parser's owner would guess that the entity spending the two transactions could be the same person or organization.

The special transactions stemming from block height 707,639 derived from the bitcoin addresses 15kEr and 1PfaY. The 15kEr address transferred 9,900.87 BTC, while 1PfaY spent 9,975.31 BTC.

The two transactions were filtered among 2,819 BTC transfers with 6,406 inputs recorded in block 707,639. The output total in that block was 9,587, with 78,704.53 BTC dispersed. The two transactions stemming from 15kEr and 1PfaY represented more than 25% of the BTC processed in block 707,639.

After the funds were sent, the nearly 20K BTC was split into 200 wallets with 100 BTC each. Then the bitcoin whale's funds were split again into much smaller wallets until they finally consolidated into different amounts.

Data from blockchair.com's Privacy-o-meter for Bitcoin Transactions tool shows the wallet that sent the 9,975.31 BTC got a score of 60, or "moderate." This is because matched addresses were identified, and blockchair.com's tool notes that matching "significantly reduces the anonymity of addresses." The 9,900.87 BTC spend suffers from the same tracking vulnerabilities, as matched addresses were also identified.

Alongside the close to 20K BTC transfer in two separate transactions, 59 blocks later, 50 sleeping bitcoins that had sat idle since April 28, 2011, were transferred at block height 707,698. The 50 BTC sat idle for over ten years since the day they were mined, and when they were transferred, the exchange rate for the block reward of 50 BTC was just over $3 million.

Blockchair.com's privacy tool indicates the transaction got a score of 0, or "critical." A critical score means that the tool identified issues "[that] significantly endanger the privacy of the parties involved."

What do you think about the 20,000 bitcoin sent in two transactions at block height 707,639? Let us know what you think about this subject in the comments section below.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

Go here to read the rest:
Spending a String of 20000 BTC 2 Bitcoin Whale Transactions Move Over $1.2 Billion Bitcoin News - Bitcoin News

Read More..

Square says bitcoin demand slowed in Q3 but picked back up in October; earnings weigh on stock – MarketWatch

Square Inc. reported lower-than-expected revenue for the third quarter as less volatile pricing for bitcoin impacted demand, though the company's chief financial officer noted strength in volume during October.

The company posted a break-even third quarter, after earning $37 million, or 7 cents a share, in the year-earlier quarter. On an adjusted basis, Square SQ, -4.07% earned 37 cents a share, up from 34 cents a share a year earlier, while analysts tracked by FactSet were expecting 38 cents a share. The fintech company grew revenue to $3.84 billion from $3.03 billion, while analysts had been modeling $4.39 billion.

Shares fell roughly 3% in after-hours trading following the release of the report.

Square's revenue total for the latest quarter consisted of $1.30 billion in transaction-based revenue, $695 million in subscription revenue, $37.3 million in hardware revenue, and $1.82 billion in bitcoin revenue. Analysts tracked by FactSet were expecting $2.6 billion in bitcoin revenue.

Bitcoin BTCUSD, +0.88% is a relatively low-margin business for Square, and the company incurred $1.77 billion in bitcoin costs during the quarter.

Bitcoin revenue and gross profit "benefited from year-over-year increases in the price of bitcoin and number of bitcoin actives," the company noted in its shareholder letter, though bitcoin revenue and gross profit both declined on a sequential basis, which Square largely attributed to relative stability in the price of bitcoin.

Chief Financial Officer Amrita Ahuja noted on a call with reporters that as bitcoin prices increased in October, the company saw strength in demand.

The companys total gross profit for the third quarter came in at $1.13 billion, up from $794 million a year earlier. Analysts had been expecting $1.15 billion. Speaking on the call with reporters, Ahuja argued for the importance of the gross-profit metric as an indicator of Squares performance.

Square's Cash App mobile wallet generated gross profit of $512 million, whereas analysts tracked by FactSet were looking for $536 million. Wolfe Research analyst Darrin Peller suggested the miss wasn't particularly surprising.

"Heading into the print, we believed Cash App would face waning impacts of government stimulus benefits and lower bitcoin revenues," he wrote in a note to clients. "We believe these dynamics were known by most investors, who were expecting ~$510 million in Cash App gross profit."

The results were unusual for Square, especially given the company's track record of revenue/earnings beats, Wedbush analyst Moshe Katri told MarketWatch. The results likely reflected the impact of fading stimulus payments, which was a similar theme at other payments names as well, he continued.

During the third quarter, Square saw a lower portion of transactions take place through debit cards, while average transaction size also fell on a year-over-year basis. Despite the decreases, Square noted in its shareholder letter that these trends remained elevated relative to historical periods partly as a result of changes to consumer behaviors due to COVID-19 and government disbursements, which may not continue in future quarters.

Square saw gross payment volume of $45.43 billion, up from $31.73 billion a year earlier. The FactSet consensus was for $45.61 billion.

Square expects seller GPV to be up 42% on a year-over-year basis during October.

Square has the largest short interest in the data-processing and outsourced services sector with $9.7 billion, according to data from S3 Partners. That short interest has increased by $996 million over the past 30 days. Squares short interest as a percentage of its float stands at 9.81%.

Shares of Square have declined about 7% over the past three months as the S&P 500 SPX, +0.37% has risen roughly 6%.

Continue reading here:
Square says bitcoin demand slowed in Q3 but picked back up in October; earnings weigh on stock - MarketWatch

Read More..

Why The Navajo Are Mining Bitcoin – Bitcoin Magazine

At nearly 400,000 people, the Navajo Nation is one of the largest Native American tribes in the United States. It's also one of the most impoverished, with poverty statistics closer to the world's least developed countries than its neighboring cities of Phoenix, Arizona, or Santa Fe, New Mexico.

Nearly 50% of Navajo are unemployed, 40% don't have running water, 32% live without electricity, and over 30% live below the poverty line, according to an April 2021 testimony before Congress.

Generational poverty for Native American populations has been the focus of an abundance of government research and spending. Most solutions for the issues center on injecting federal dollars into local economies through subsidies, special business licenses and community work.

What these solutions don't propose, however, is giving tools for lasting individual empowerment to these indigenous populations. Indeed, the Navajo Nation is one of the most visible representations of living in a split monetary system: one with access to American capital, but a lack of formal control over capital deployment.

But a silent financial revolution is occurring on Navajo land, and it's fueled by the growth of a new industry: Bitcoin mining.

A Navajo person can't own the land

The broken Navajo economy is the product of numerous treaties signed between the United States government and tribes during America's westward expansion. Most treaties abdicated direct control of tribal people to the tribe itself, including government functions, taxation rights and law enforcement. But two major responsibilities remained in U.S. hands: trusteeship of land and control of the currency.

These stipulations have had predictable financial consequences.

As the trustee, the federal government leases Indian land out for uses such as farming, logging or mining. The U.S. government also manages the money accrued from such activities on behalf of the nations. Decades of mismanagement culminated in 2012 with a $492 million settlement between 17 tribes and the Obama administration.

Yet, the leasing system itself continues to hamper progress against poverty.

"The federal government took the land rights away from the Navajo people," Navajo Tribal Authority President Walter Hasse told Compass Mining in an interview. "So a Navajo person can't own the land that their home is on. If you don't own the land, then how do you borrow the money to build a house on the land?"

Tribal sovereignty does not extend to currency either. As U.S. citizens, Native Americans are taxed in dollars. And while it's difficult to say the dollar has been a net negative for tribes, restrictions around how money can be used within the incumbent financial system could be considered one.

Behind what has been called the "buckskin curtain," Indian tribes have not only been slow to adopt financial tools but have also been impeded from accessing them due to national sovereignty. Only 32 Native American financial institutions are in existence today, constituting the smallest percentage of minority-owned depository institutions. Among other concerns, tribes worry accepting a bank charter from the Office of the Comptroller of the Currency (OCC) would interfere with their national status.

For example, where would a banking dispute be heard in court? In reservation courtrooms or in Washington? And what evidence do Native American tribes have that due process would be followed?

These questions have pushed tribes outside of the U.S. financial system, leaving them either unable or unwilling to operate within the commercial banking sector.

Employment and currency only show half of the picture of economic damage though.

During the 20th century, energy firms outside of Navajo land signed contracts with the Navajo Nation to source and extract its abundant energy resources, especially coal and uranium.

That coal was used to power cities from Santa Fe, New Mexico, to Los Angeles, California, illuminating, watering and powering a once sparsely populated portion of the United States. Years later, the power plants are coming down, leaving the Navajo little to show for leasing their land to outsiders, minus poisoned groundwater and abandoned coal pits.

Over 4 million tons of uranium were also mined on Navajo land from 1950 onward. While it fed Uncle Sam's Cold War appetite, Navajo uranium would have devastating long-term effects on the indigenous people and their land. Some 27% of Navajo have heightened levels of uranium in their bodies, according to a 2016 study, while over 500 open-air uranium mines remain in various stages of cleanup.

Before Bitcoin, then, mining carried very negative connotations for most of the Navajo Nation.

In 2017, a small Canadian firm named West Block approached the Navajo about tapping Navajo energy for a Bitcoin mine on Navajo land.

Currently using 8 megawatts (MW), the new mine is already in the process of doubling its size. That's equivalent to about 3,000 machines of various types powering and protecting the Bitcoin network using Navajo energy.

But it's not just about the machines. It's about the output of those machines in the context of a people group who've gone without many of the benefits the average American enjoys.

For example, the facility currently employs two full-time employees. With the expansion, that number will grow to eleven. The money created from the mine will then circulate into the local economy. It may seem insignificant now, but mining bitcoin on Navajo land is a very real source of future Navajo wealth, employment, and economic recovery.

The Navajo mines also represent the Navajo Nation creating wealth for themselves with their energy. Bitcoin mining brings demand for energy to wherever the energy source is. Navajo energy now has a non-stop and quickly-growing demand brought to their land with the profits paid to the Navajo Nation.

Lastly, the Navajo Bitcoin mines represent financial inclusion. Bitcoin mining is the first small step for broad bitcoin adoption by the Navajo Nation. Opting into a free and open-internet money protocol with a physical presence among the Navajo has unlimited potential for economic growth and wealth creation.

This is a guest post by William Foxley. Opinions expressed are entirely their own and do not necessarily reflect those of BTC, Inc. or Bitcoin Magazine.

Read the original:
Why The Navajo Are Mining Bitcoin - Bitcoin Magazine

Read More..

Data Science Master’s Degree | 100% Online | University of …

Organizations in nearly every industry are racing to hire qualified professionals with the skills to transform big data into big insights and better decisions, and these data scientists are in short supply.

The Master of Science in Data Science and Graduate Certificate in Data Science is a partnership between UW Extended Campus and several University of Wisconsin campuses. This collaboration gives students access to the combined resources and talent of the UW System. Online learning with UW Extended Campus is a smart choice for busy adult learners who want to advance their careers while balancing work, family and other commitments. As a student you will:

Admission to the master's and graduate certificate program requires a bachelor's degree and a 3.0 GPA. Aptitude tests such as the GMAT and GRE are not required.

Grow your skills with an engaging, multidisciplinary curriculum

Discover how to transform big data into actionable insights and advance your career in the growing field of data science.

Learn More

Learn how to effectively work with and communicate about data, positioning yourself for success in todays data-driven world.

Learn More

Whether you're a UW Data Science Master's student or enrolled in the Graduate Certificate program, you'll learn from distinguished faculty from six University of Wisconsin partner campuses. View our faculty biographies.

Get Program Guide

We understand it's not easy to juggle the responsibilities of work and family while earning your degree or certificate. That's why UW Data Science programs are online to give you freedom and flexibility.

Whether you live in Wisconsin or not, tuition is a flat fee of $850 per credit (36 credits total).

Financial aid is available for students who qualify.

Get Program Guide

Link:

Data Science Master's Degree | 100% Online | University of ...

Read More..

MSE in Data Science

Data Science in 100 seconds: Program Director, Susan Davidson

Admission Requirements

Program of Study

Ten Courses:

Information re: Data Science (DATS) Minor can be accessed here

Information re: A comparison between Scientific Computing and Data Science can be accessed here

Meet Our Community

The emerging discipline of data science has become essential to making decisions, understanding observations, and solving problems in today's world. Read more

Penn's Master of Science in Engineering (MSE) in Data Science prepares students for a wide range of data-centric careers, whether in technology and engineering, consulting, science, policy-making, or understanding patterns in literature, art or communications.

The Data Science Program can typically be completed in one-and-a-half to two years. It blends leading-edge courses in core topics such as machine learning, big data analytics, and statistics, with a variety of electives and an opportunity to apply these techniques in a domain specialization, a depth area of choice. Read more

Penn provides the perfect environment for data science enthusiasts, with its strong cross-disciplinary traditions. Biomedical informatics, communications and public policy, robotics, machine learning and artificial intelligence, and data privacy are of broad interest across campus. Read more

The rest is here:

MSE in Data Science

Read More..

UVA Announces New Research Partnership at Intersection of Business and Data Science – UVA Today

Two schools at the University of Virginia this week announced a unique partnership to explore ways to combine the power of data science with the teaching and practical applications of business.

The collaboratory between the School of Data Science and the Darden Graduate School of Business capitalizes on the explosion of data; the ability to analyze and pull insights from it; and the shared interest in applying that new knowledge to improving how business is taught, researched and applied in actual settings.

The new Collaboratory for Applied Data Science in Business, with the support of the UVA Office of the Executive Vice President and Provost, will build on the existing relationship between the two schools and provide new avenues for students, staff and faculty to work and learn together at the intersection of data science and business. In today's complex and data-driven world, the collaboration aims to advance faculty research and create innovative pathways for new programming and community engagement.

"The application of data science to transformative challenges and opportunities in business is a clear, natural and timely manifestation of our historic mission," UVA President Jim Ryan said. "The creation of a collaboratory with this mandate will provide the vehicle to advance and sustain these efforts within and beyond the University."

Both Darden and Data Science have championed practical problem-solving through their curriculum and programs, real-world case studies and cutting-edge research. By formalizing their relationship through a collaboratory, they are using the sophisticated techniques and tools available through data science to address the challenges surrounding business ethics and analytical leadership.

"Much of the promise of data science lies in its real-world applications: solving problems and improving lives," School of Data Science Dean Phil Bourne said. "This partnership furthers our goal to use data science to make a positive impact on business and society."

Darden Dean Scott Beardsley added, "The collaboratory will also leverage Darden's and the University's strong presence in Northern Virginia and the D.C. Metro area, home to some of the highest concentrations of data traffic, data centers and data-related innovation on the planet."

Bourne said data science is vital to understanding and improving our world and it is a discipline that belongs to all of us.

"As a school, we believe in that shared sense of responsibility and actively seek out opportunities to partner and collaborate with colleagues across the University," he said. "This natural next step in our partnership will enable us to expand educational offerings aimed at building a world-renowned business analytics presence."

The core of the collaboratory aligns with concepts of mutual interest to the schools and University: ethics and leadership. It will explore ways in which business leadership is changing in the face of the data explosion and ever-present technologies that enable leaders to use and misuse it. "Our goal is to explore how business leaders must lead differently and ethically to succeed in a data-intensive future," Marc Ruggiano, administrative director of the collaboratory and a lecturer in both schools, said.

The Collaboratory for Applied Data Science in Business builds on the foundation of recent partnerships. The two schools launched a successful MBA/Master of Data Science dual-degree program in 2017, and more recently collaborated on a series of Darden Executive Education & Lifelong Learning programs in partnership with the UVA McIntire School of Commerce. Darden and the School of Data Science also recently completed a successful pilot run of a research seminar series for faculty and staff.

The word collaboratory itself is rooted in UVA history and commonly attributed to William Wulf, a computer scientist in the School of Engineering and Applied Science, who coined the term in 1989. Wulf's definition of a collaboratory was a "center without walls," a means for researchers to collaborate and share data and computational resources. This is the University's second collaboratory; the first was launched in 2019 between the School of Data Science and School of Education.

The Darden and Data Science collaboratory will be led by Casey Lichtendahl, associate professor of business administration, and Eric Tassone, associate professor of data science, who will serve as academic directors, along with Ruggiano. Together, they will lead the collaboratory and engage with faculty from multiple areas of study, including entrepreneurship, operations, finance and marketing, among others.

The collaboratory will kick off with an initial five-year term and will be housed at the School of Data Science's new building, scheduled to be completed in fall 2023, part of the Discovery Nexus along the Emmet-Ivy Corridor.

See the article here:

UVA Announces New Research Partnership at Intersection of Business and Data Science - UVA Today

Read More..

Variable Names: Why They’re a Mess and How to Clean Them Up – Built In

Quick, what does the following code do?
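
(The snippet below is an illustrative sketch standing in for this kind of code; the names and constants are made up.)

```python
import numpy as np

def f(a):
    r = []
    for i in range(len(a)):
        r.append(a[i] / 255.0 + 0.1)  # what do 255.0 and 0.1 mean?
    return np.array(r)

x = np.array([0, 12, 250, 3, 98])
y = f(x)
```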

It's impossible to tell, right? If you were trying to modify or debug this code, you'd be at a loss unless you could read the author's mind. Even if you were the author, a few days after writing this code you might not remember what it does because of the unhelpful variable names and use of magic numbers.

Working with data science code, I often see examples like above (or worse): code with variable names such as X, y, xs, x1, x2, tp, tn, clf, reg, xi, yi, ii and numerous unnamed constant values. To put it frankly, data scientists (myself included) are terrible at naming variables.

As I've grown from writing research-oriented data science code for one-off analyses to production-level code (at Cortex Building Intelligence), I've had to improve my programming by unlearning practices from data science books, courses and the lab. There are significant differences between deployable machine learning code and how data scientists learn to program, but we'll start here by focusing on two common and easily fixable problems:

Unhelpful, confusing or vague variable names

Unnamed magic constant numbers

Both these problems contribute to the disconnect between data science research (or Kaggle projects) and production machine learning systems. Yes, you can get away with them in a Jupyter Notebook that runs once, but when you have mission-critical machine learning pipelines running hundreds of times per day with no errors, you have to write readable and understandable code. Fortunately, there are best practices from software engineering we data scientists can adopt, including the ones we'll cover in this article.

Note: I'm focusing on Python since it's by far the most widely used language in industry data science. Some Python-specific naming rules (see here for more details) include:

More From Will Koehrsen: The Poisson Process and Poisson Distribution, Explained

There are three basic ideas to keep in mind when naming variables:

The variable name must describe the information represented by the variable. A variable name should tell you concisely in words what the variable stands for.

Your code will be read more times than it is written. Prioritize how easy your code is to read over how quickly it is to write.

Adopt standard conventions for naming so you can make one global decision in a codebase instead of multiple local decisions.

What does this look like in practice? Let's go through some improvements to variable names.

If you've seen names like X and y several hundred times, you know they commonly refer to features and targets in a data science context, but that may not be obvious to other developers reading your code. Instead, use names that describe what these variables represent, such as house_features and house_prices.
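
For instance, a minimal sketch (the DataFrame and its price column are hypothetical):

```python
import pandas as pd

housing_data = pd.DataFrame(
    {"sqft": [1500, 2100], "bedrooms": [3, 4], "price": [310_000, 450_000]}
)

# Unclear: X = ..., y = ...
# Clearer: say what the variables actually hold
house_features = housing_data.drop(columns="price")
house_prices = housing_data["price"]
```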

What does a variable named value represent? It could stand for velocity_mph, customers_served, efficiency or revenue_total. A name such as value tells you nothing about the purpose of the variable and just creates confusion.

Even if you are only using a variable as a temporary value store, still give it a meaningful name. Perhaps it is a value where you need to convert the units, so in that case, make it explicit:
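
For example, a sketch that assumes a hypothetical pricing helper and the USD-to-AUD conversion used later in this article:

```python
def get_house_price_in_usd(house_sqft: float, bedroom_count: int) -> float:
    # Hypothetical pricing model, only here to illustrate naming.
    return 150.0 * house_sqft + 25_000.0 * bedroom_count

usd_to_aud_conversion_rate = 1.4  # illustrative rate

# Vague: what is "temp", and in which currency?
temp = get_house_price_in_usd(2000, 3)

# Explicit: the name states both the quantity and its units
house_price_in_usd = get_house_price_in_usd(2000, 3)
house_price_in_aud = house_price_in_usd * usd_to_aud_conversion_rate
```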

If you're using abbreviations like these, make sure you establish them ahead of time. Agree with the rest of your team on common abbreviations and write them down. Then, in code review, make sure to enforce these written standards.

Avoid machine learning-specific abbreviations such as tp, tn, fp and fn. These values represent true_positives, true_negatives, false_positives and false_negatives, so make it explicit. Besides being hard to understand, the shorter variable names can be mistyped. It's too easy to use tp when you meant tn, so write out the whole description.
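
A short sketch, with made-up counts, showing how the full names pay off:

```python
true_positives = 48
true_negatives = 35
false_positives = 7
false_negatives = 10

# With full names, the formulas read almost like their textbook definitions
precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
```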

The above are examples of prioritizing ease of reading code instead of how quickly you can write it. Reading, understanding, testing, modifying and debugging poorly written code takes far longer than well-written code. Overall, trying to write code faster by using shorter variable names will actually increase your program's development and debugging time! If you don't believe me, go back to some code you wrote six months ago and try to modify it. If you find yourself having to decipher your own past code, that's an indication you should be concentrating on better naming conventions.

Names like xs and ys are often used for plotting, in which case the values represent x_coordinates and y_coordinates. However, I've seen these names used for many other tasks, so avoid the confusion by using specific names that describe the purpose of the variables, such as times and distances or temperatures and energy_in_kwh.

When Accuracy Isn't Enough... Use Precision and Recall to Evaluate Your Classification Model

Most problems with naming variables stem from:

A misguided desire to keep variable names short.

Plugging a formula or model directly into code without thinking about what the symbols represent.

On the first point, while languages like Fortran did limit the length of variable names (to six characters), modern programming languages have no restrictions, so don't feel forced to use contrived abbreviations. Don't use overly long variable names either, but if you have to favor one side, aim for readability.

With regards to the second point, when you write an equation or use a model (and this is a point schools forget to emphasize), remember the letters or inputs represent real-world values!

We write code to solve real-world problems, and we need to understand the problem our model represents.

Let's see an example that makes both mistakes. Say we have a polynomial equation for finding the price of a house from a model. You may be tempted to write the mathematical formula directly in code:
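
Something like this sketch, with symbols and coefficients lifted straight from the math (all values are illustrative):

```python
x1 = 2000   # ?
x2 = 3      # ?
value = 80_000 + 150 * x1 + 0.02 * x1 ** 2 + 25_000 * x2
```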

This is code that looks like it was written by a machine for a machine. While a computer will ultimately run your code, it'll be read by humans, so write code intended for humans!

To do this, we need to think not about the formula itself (the how) and instead consider the real-world objects being modeled (the what). Let's write out the complete equation (this is a good test to see if you understand the model):
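
The same polynomial, rewritten in terms of the real-world quantities it models (coefficients remain illustrative):

```python
base_price = 80_000
price_per_sqft = 150
sqft_squared_coefficient = 0.02
price_per_bedroom = 25_000

house_sqft = 2000
bedroom_count = 3

house_price = (
    base_price
    + price_per_sqft * house_sqft
    + sqft_squared_coefficient * house_sqft ** 2
    + price_per_bedroom * bedroom_count
)
```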

If you are having trouble naming your variables, it means you don't know the model or your code well enough. We write code to solve real-world problems, and we need to understand the problem our model represents.

While a computer will ultimately run your code, it'll be read by humans, so write code intended for humans!

Descriptive variable names let you work at a higher level of abstraction than a formula, helping you focus on the problem domain.

One of the important points to remember when naming variables is: consistency counts. Staying consistent with variable names means you spend less time worrying about naming and more time solving the problem. This point is relevant when you add aggregations to variable names.

So you've got the basic idea of using descriptive names, changing xs to distances, e to efficiency and v to velocity. Now, what happens when you take the average of velocity? Should this be average_velocity, velocity_mean, or velocity_average? Following these two rules will resolve this situation:

Decide on common abbreviations: avg for average, max for maximum, std for standard deviation and so on. Make sure all team members agree and write these down. (An alternative is to avoid abbreviating aggregations.)

Put the abbreviation at the end of the name. This puts the most relevant information, the entity described by the variable, at the beginning.

Following these rules, your set of aggregated variables might be velocity_avg, distance_avg, velocity_min, and distance_max. Rule two is a matter of personal choice, and if you disagree, that's fine. Just make sure you consistently apply the rule you choose.

A tricky point comes up when you have a variable representing the number of an item. You might be tempted to use building_num, but does that refer to the total number of buildings, or the specific index of a particular building?

Staying consistent with variable names means you spend less time worrying about naming and more time solving the problem.

To avoid ambiguity, use building_count to refer to the total number of buildings and building_index to refer to a specific building. You can adapt this to other problems such as item_count and item_index. If you don't like count, then item_total is also a better choice than num. This approach resolves ambiguity and maintains the consistency of placing aggregations at the end of names.
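
A small sketch of this convention (names and values are illustrative):

```python
buildings = ["plant", "office", "warehouse"]

building_count = len(buildings)   # total number of buildings
building_index = 1                # position of one particular building
selected_building = buildings[building_index]
```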

For some unfortunate reason, typical loop variables have become i, j, and k. This may be the cause of more errors and frustration than any other practice in data science. Combine uninformative variable names with nested loops (I've seen nested loops that use ii, jj, and even iii) and you have the perfect recipe for unreadable, error-prone code. This may be controversial, but I never use i or any other single letter for loop variables, opting instead for describing what I'm iterating over, such as:
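
(A sketch with illustrative names:)

```python
building_names = ["plant", "office", "warehouse"]

for building_name in building_names:
    print(building_name.upper())
```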

or
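
(again with illustrative names, this time for a nested loop:)

```python
energy_readings_kwh = [[1.2, 0.8], [2.4, 2.0]]  # one inner list per building

for building_index, building_readings in enumerate(energy_readings_kwh):
    for reading_index, reading_kwh in enumerate(building_readings):
        print(building_index, reading_index, reading_kwh)
```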

This is especially useful when you have nested loops so you don't have to remember if i stands for row or column or if that was j or k. You want to spend your mental resources figuring out how to create the best model, not trying to figure out the specific order of array indexes.

(In Python, if you aren't using a loop variable, then use _ as a placeholder. This way, you won't get confused about whether or not the variable is used for indexing.)
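
A quick example of the placeholder (the retry message is just an illustration):

```python
# The loop variable is never used, so _ makes that explicit
for _ in range(3):
    print("retrying...")
```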

All of these rules stick to the principle of prioritizing read-time understandability instead of write-time convenience. Coding is primarily a method for communicating with other programmers, so give your team members some help in making sense of your computer programs.

A magic number is a constant value without a variable name. I see these used for tasks like converting units, changing time intervals or adding an offset:
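
Sketches of the kind of code in question, with deliberately vague names and unexplained numbers (all values are illustrative):

```python
d = 1500.0    # distance in... meters? miles?
hours = 24
raw = 10.0

v = d * 0.000621371    # magic number: a meters-to-miles factor only the author knows
n = hours * 4          # magic number: 4 fifteen-minute intervals per hour
adjusted = raw + 0.5   # magic number: an unexplained offset
```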

(These variable names are all bad, by the way!)

Magic numbers are a large source of errors and confusion because:

Only one person, the author, knows what they represent.

Changing the value requires looking up all the locations where it's used and manually typing in the new value.

Instead of using magic numbers in this situation, we can define a function for conversions that accepts the unconverted value and the conversion rate as parameters:
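
A sketch of such a function, reusing the illustrative USD-to-AUD figures from earlier:

```python
def convert_units(value_to_convert: float, conversion_rate: float) -> float:
    """Multiply a value by an explicit conversion rate instead of a magic number."""
    return value_to_convert * conversion_rate

house_price_in_aud = convert_units(350_000.0, 1.4)  # price and rate are illustrative
```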

If we use the conversion rate throughout a program in many functions, we could define a named constant in a single location:
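
For example (the rate itself is illustrative):

```python
# Defined once, near the top of the module
USD_TO_AUD_CONVERSION_RATE = 1.4

def convert_usd_to_aud(price_in_usd: float) -> float:
    return price_in_usd * USD_TO_AUD_CONVERSION_RATE
```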

(Remember, before we start the project, we should establish with our team that usd = US dollars and aud = Australian dollars. Standards matter!)

Here's another example:
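
A sketch using a time-interval constant, in the spirit of the 15-minute data described below:

```python
READINGS_PER_DAY = 96  # 15-minute readings; change it here if the interval changes

def daily_energy_total(readings_kwh):
    """Sum one day's worth of energy readings."""
    return sum(readings_kwh[:READINGS_PER_DAY])
```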

Using a NAMED_CONSTANT defined in a single place makes changing the value easier and more consistent. If the conversion rate changes, you don't need to hunt through your entire codebase to change all the occurrences, because you've defined it in only one location. It also tells anyone reading your code exactly what the constant represents. A function parameter is also an acceptable solution if the name describes what the parameter represents.

As a real-world example of the perils of magic numbers, in college, I worked on a research project with building energy data that initially came in 15-minute intervals. No one gave much thought to the possibility this could change, and we wrote hundreds of functions with the magic number 15 (or 96 for the number of daily observations). This worked fine until we started getting data in five- and one-minute intervals. We spent weeks changing all our functions to accept a parameter for the interval, but even so, we were still fighting errors caused by the use of magic numbers for months.

More From Our Data Science Experts: A Beginner's Guide to Evaluating Classification Models in Python

Real-world data has a habit of changing on you. Conversion rates between currencies fluctuate every minute, and hard-coding in specific values means you'll have to spend significant time re-writing your code and fixing errors. There is no place for magic in programming, even in data science.

The benefits of adopting standards are that they let you make a single global decision instead of many local ones. Instead of choosing where to put the aggregation every time you name a variable, make one decision at the start of the project and apply it consistently throughout. The objective is to spend less time on concerns only peripherally related to data science (naming, formatting, style) and more time solving important problems (like using machine learning to address climate change).

If you are used to working by yourself, it might be hard to see the benefits of adopting standards. However, even when working alone, you can practice defining your own conventions and using them consistently. You'll still get the benefits of fewer small decisions, and it's good practice for when you inevitably have to develop on a team. Anytime you have more than one programmer on a project, standards become a must!

Keep Clarifying Your Code: 5 Ways to Write More Pythonic Code

You might disagree with some of the choices I've made in this article, and that's fine! It's more important to adopt a consistent set of standards than the exact choice of how many spaces to use or the maximum length of a variable name. The key point is to stop spending so much time on accidental difficulties and instead concentrate on the essential difficulties. (Fred Brooks, author of the software engineering classic The Mythical Man-Month, has an excellent essay on how we've gone from addressing accidental problems in software engineering to concentrating on essential problems.)

Now let's go back to the initial code we started with and fix it up.

We'll use descriptive variable names and named constants.
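
A sketch of what the cleaned-up version might look like, matching the description that follows (names and constants are illustrative):

```python
import numpy as np

PIXEL_MAX_VALUE = 255.0   # named constants defined in one place
PIXEL_OFFSET = 0.1

def normalize_pixels(pixel_values):
    """Scale pixel values to [0, 1] and add a constant offset."""
    normalized_pixels = []
    for pixel_value in pixel_values:
        normalized_pixels.append(pixel_value / PIXEL_MAX_VALUE + PIXEL_OFFSET)
    return np.array(normalized_pixels)

image_pixels = np.array([0, 12, 250, 3, 98])
normalized_image_pixels = normalize_pixels(image_pixels)
```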

Now we can see that this code is normalizing the pixel values in an array and adding a constant offset to create a new array (ignore the inefficiency of the implementation!). When we give this code to our colleagues, they will be able to understand and modify it. Moreover, when we come back to the code to test it and fix our errors, we'll know precisely what we were doing.

Clarifying your variable names may seem like a dry activity, but if you spend time reading about software engineering, you realize what differentiates the best programmers is the repeated practice of mundane techniques such as using good variable names, keeping routines short, testing every line of code, refactoring, etc. These are the techniques you need to take your code from research or exploration to production-ready and, once there, you'll see how exciting it is for your data science models to influence real-life decisions.

This article was originally published on Towards Data Science.

Original post:

Variable Names: Why They're a Mess and How to Clean Them Up - Built In

Read More..

Succeeding in Data Science Projects: Inputs that Could Help You – Analytics Insight

For organizations, interest in machine learning (ML), artificial intelligence (AI) and data science is growing. There is rising potential for data science to create new insights and services for internal and external customers. However, this investment can be wasted if data science projects don't satisfy their customers. How can we ensure that these projects succeed?

To improve your projects' chances of success, it is worth spending time looking at how data science works in practice, and at how your organization operates. While it includes the word science in its title, data science in fact requires a mix of both art and science to deliver the best results. With that understanding, it is then possible to look at scaling up the results. This will help you turn data science results into production operations for the business.

At the most basic level, data science involves coming up with ideas and then using data to test those hypotheses. Using a blend of different algorithms, designs and approaches, data scientists can seek out new insights from the data that organizations create. Through trial, error and refinement, the teams involved can produce a range of new insights and discoveries, which can then be used to inform decisions or create new products. This can in turn be used to develop machine learning (ML) algorithms and AI solutions.

The greatest risk around these projects is the gap between business expectations and reality. Artificial intelligence has received a huge amount of hype and attention over recent years, which means that many projects start with unrealistic expectations. To prevent this problem, set out how your projects will support overall business goals. You can then start small, with projects that are easy and that can demonstrate improvements. Once you have set some ground rules around what AI can deliver, and punctured the hype around AI so that it becomes business as usual, you can keep the focus on the results that you deliver.

Another big problem is that teams don't have the necessary skills to translate their vision into effective processes. While the ideas might be sound, a lack of understanding of the nuances of applying machine learning and statistics in practice can lead to poor outcomes. To prevent these kinds of problems, it's important to establish a smoothly operating engineering culture that weaves data science work into the overall production pipeline. Rather than data science being a distinct team, work on how to integrate your data scientists into the production deployment process. This will help minimize the gap from data research and development to production.

While supporting creativity around data science, any work should keep the business objectives as a top priority. This should put the emphasis on the result you are hoping to achieve or discover by using data to prove (or disprove) a hypothesis, judged by how well that business objective was met. Alongside this, evaluate new technologies for any improvements in how they may help to meet those goals. Staying at the cutting edge is important for data scientists, but it is crucial to focus on how any new technology can help meet that specific and measurable business outcome.

With these ideas in mind, you can help your data science team take their creativity and apply it to finding interesting results. When this research begins to uncover insights, you can then look at how to drive this into production. This involves building bridges from the data science research and development team to those responsible for running production systems, so that new models can be handed across.

See the article here:

Succeeding in Data Science Projects Inputs that Could Help You - Analytics Insight

Read More..