
Pantera Capital Shares Bitcoin Halving Insights — BTC to $35K Then … – CCN.com

Will Bitcoin witness bullish demand before the 2024 halving event?


Since its genesis, Bitcoin's future has been designed according to a strict mathematical plan. Satoshi Nakamoto, the pseudonymous creator of Bitcoin and its blockchain, designed the network to run itself according to rules that govern the ceiling on the number of bitcoins issued, as well as scheduled halving events that see miners earning fewer tokens over time, creating scarcity of the token.

Pantera Capital, a crypto hedge fund, has been keeping tabs on Bitcoin's halving events. The company now predicts Bitcoin will reach $35,000 by its next planned halving event in 2024, and $148,000 after the event.

The hedge fund, which manages assets worth over $4.2 billion, noticed a key pattern in Bitcoin's progression, closely tying its price changes to upcoming halving events.

Bitcoin has historically bottomed 477 days prior to the halving, climbed leading into it, and then exploded to the upside afterwards. The post-halving rallies have averaged 480 days from the halving to the peak of that next bull cycle.

Pantera's prediction model expected Bitcoin to bottom in December 2022. In actuality, Bitcoin troughed in November 2022, after FTX's collapse.

Accordingly, Pantera expects Bitcoin to rally twice in 2024, once at the start of the year, and another time after the halving event set in the same year.

Moreover, although Bitcoin has recently experienced a sharp drop in price, Pantera reports that Bitcoin is actually outperforming its forecast by 7%, and it still expects the token to reach $35,000 by next year's halving event.

What's even more interesting is that the company expects Bitcoin to reach $148,000 in value after the 2024 halving event.

The 2020 halving reduced the supply of new bitcoins by 43% relative to the previous halving, and it had 23% as big an impact on price.

The next halving is expected to occur on April 20, 2024. Since most bitcoins are now in circulation, each halving will be almost exactly half as big a reduction in new supply. If history were to repeat itself, the next halving would see bitcoin rising to $35k before the halving and $148k after.

"Announcing the first release of Bitcoin, a new electronic cash system that uses a peer-to-peer network to prevent double-spending. It's completely decentralized with no server or central authority," reads the introduction of Nakamoto's release announcement, titled "Bitcoin v0.1 released."

"The software is still alpha and experimental. There's no guarantee the system's state won't have to be restarted at some point if it becomes necessary, although I've done everything I can to build in extensibility and versioning."

Satoshi's announcement goes on to explain how Bitcoin transactions and mining work, noting: "I made the proof-of-work difficulty ridiculously easy to start with, so for a little while in the beginning a typical PC will be able to generate coins in just a few hours. It'll get a lot harder when competition makes the automatic adjustment drive up the difficulty."

Among the key points Nakamoto included in the announcement are the token's total circulation limit and details of future halving events on the chain.

"Total circulation will be 21,000,000 coins. It'll be distributed to network nodes when they make blocks, with the amount cut in half every 4 years."

first 4 years: 10,500,000 coins

next 4 years: 5,250,000 coins

next 4 years: 2,625,000 coins

next 4 years: 1,312,500 coins

etc

"When that runs out, the system can support transaction fees if needed. It's based on open market competition, and there will probably always be nodes willing to process transactions for free."
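
Nakamoto's schedule can be checked with a few lines of arithmetic. The sketch below (Python, illustrative only) derives each four-year era's issuance from two well-documented protocol constants: the 50 BTC starting subsidy and the 210,000-block halving interval.

```python
# Reproduce Nakamoto's issuance schedule from two protocol constants:
# a 50 BTC initial block subsidy, halved every 210,000 blocks
# (roughly four years at one block per ten minutes).
BLOCKS_PER_ERA = 210_000
subsidy = 50.0  # BTC per block in the first era

total = 0.0
for era in range(1, 5):
    era_issuance = subsidy * BLOCKS_PER_ERA
    total += era_issuance
    print(f"era {era}: {era_issuance:>12,.0f} coins ({subsidy:g} BTC/block)")
    subsidy /= 2

print(f"first four eras: {total:,.0f} coins")
# Summed over all eras, the geometric series converges to the 21,000,000 cap:
# 210,000 * 50 * (1 + 1/2 + 1/4 + ...) = 210,000 * 50 * 2 = 21,000,000
```

The printed figures match the four eras listed above, and the geometric series explains where the 21,000,000 cap comes from.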

Halving events are a common occurrence among cryptocurrencies (such as Litecoin), as the process allows tokens to scale in value by creating intentional scarcity. Think of other valuable commodities, such as gold: the scarcer gold gets, the lower the supply relative to demand, and the higher its value.

Link:

Pantera Capital Shares Bitcoin Halving Insights -- BTC to $35K Then ... - CCN.com

Read More..

Bitcoin Spark Price Prediction: Why This BTC Whale Forecasts a … – Blockzeit

With the crypto market seemingly rebounding, a prominent Bitcoin (BTC) whale recently forecasted a bullish winter for Bitcoin Spark (BTCS).

While the Bitcoin price is affected by many factors, it tends to be most bullish around the time of its halving events. Approximately every four years, the reward for Bitcoin miners is cut in half. This means miners receive fewer new Bitcoins for their efforts. Bitcoin halvings thus create scarcity by reducing the rate at which new Bitcoins are generated. This scarcity effect, combined with anticipation and growing demand, triggers a surge in Bitcoins price in the months leading up to and following a halving event. Historically, the halving events in 2012, 2016, and 2020 have marked the beginning of significant bull markets for Bitcoin and the entire crypto industry.

Bitcoin's limited supply of 21 million suggests potential value appreciation over time, which is why it is compared to gold. However, while the chances of Bitcoin losing significant ground are slim, other cryptos might surpass it in the future. Bitcoin's transaction throughput is very low, which leads to high transaction fees, especially in peak periods. Additionally, Bitcoin's Proof-of-Work (PoW) mining process has raised questions regarding centralization and environmental degradation due to the specialized equipment and significant energy required. Furthermore, the Bitcoin network has no other built-in use apart from being a P2P system for transferring BTC.

Bitcoin Spark is a new crypto project inspired by Satoshi Nakamoto. It seeks to solve Bitcoin's drawbacks, introduce new features that set it apart, and position itself for greater success. Nonetheless, the crypto does maintain some of its predecessor's attributes, including a maximum supply of 21 million.

The Bitcoin Spark network will have a significantly higher number of nodes, enhanced individual transaction capabilities per block, and reduced block time. These changes result in faster transactions and lower fees. Bitcoin Spark will also support smart contracts. It will have an integrated smart contract layer that reaches finality on the main network while allowing for multiple programming languages to be used, both high-level and low-level. This multi-layered design maintains scalability while promoting the diversity of the developers, smart contract styles, and decentralized applications (Dapps) within the network.

Bitcoin Spark also addresses Bitcoin's mining difficulties. Having a larger number of network nodes greatly reduces the investment required by miners and opens up BTCS mining to a wider range of users. Additionally, Bitcoin Spark uses a proprietary consensus mechanism known as Proof-of-Process (PoP). PoP rewards miners for validating blocks and for contributing their mining devices' processing power to the network. To ensure a fairer distribution of rewards, PoP is combined with an algorithm that exponentially decreases the reward earned per additional unit of power. The Bitcoin Spark development team will offer an application that enables users to mine BTCS by granting access to their device's processing unit.
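
The article does not publish Bitcoin Spark's actual reward curve, so the following toy sketch only illustrates the general idea of exponentially diminishing marginal rewards; the function shape and decay parameter are assumptions invented for this example, not BTCS specifications.

```python
import math

def pop_reward(power: float, base: float = 1.0, decay: float = 0.05) -> float:
    """Toy saturating reward curve: total reward grows with contributed
    power, but each extra unit of power earns exponentially less.
    `base` and `decay` are made-up parameters, not Bitcoin Spark values."""
    return base * (1.0 - math.exp(-decay * power))

# Marginal gains shrink as contributed power grows:
for power in (10, 50, 100, 500):
    print(f"power {power:>3}: total reward {pop_reward(power):.4f}")
```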

The Bitcoin Spark application will be compatible with Windows, Linux, macOS, iOS, and Android devices. To ensure security, it will operate in a virtual environment isolated from the device's operating system functions. The app will also adjust itself to the resources it is able to use on the device, accounting for overheating, battery life, and concurrent usage needs. This significantly reduces the work and power required for mining BTCS, making it possible and profitable for anyone with a smart device.

The processing power contributed by miners is rented out to the networks clients, who use it for high CPU/GPU tasks like video rendering and running scientific simulations. This ensures that the energy used in mining BTCS has a valid purpose in the real world. Those using the network for remote computing will be required to pay with BTCS, which is then transferred to the mining pool. The revenue will supplement and reduce the BTCS minting rewards, meaning if more revenue is generated within Bitcoin Spark, the BTCS minting endpoint moves further. Thus, BTCS miners will be able to remain constantly profitable in the long term.

Reviews of Bitcoin Spark suggest it's a revolutionary project that solves Bitcoin's limitations. Notably, the project's launch is set for November 30, 2023. Thus, many investors are flocking to get in on the ground floor, with the project's Initial Coin Offering (ICO) heading into its fourth phase.

Website: https://bitcoinspark.org/

Buy BTCS: https://network.bitcoinspark.org/register

Excerpt from:

Bitcoin Spark Price Prediction: Why This BTC Whale Forecasts a ... - Blockzeit

Read More..

Decentralization and Bitcoin Mining Pools – BTC Peers

Bitcoin is a decentralized digital currency that was created in 2009 by the pseudonymous Satoshi Nakamoto. Unlike traditional fiat currencies that are controlled by central banks, Bitcoin operates on a peer-to-peer network with no central authority. An important aspect of Bitcoin's decentralization is how transactions are verified and new coins are minted through a process called mining. In the early days of Bitcoin, anyone with a computer could mine Bitcoin by running the open-source software and helping to validate transactions. However, as Bitcoin grew in popularity and value, mining became increasingly competitive and is now dominated by specialized hardware and large mining pools. The increasing centralization of Bitcoin mining into pools has raised concerns about impacts on Bitcoin's decentralization.

Bitcoin mining is the process by which new Bitcoins are entered into circulation and transactions are confirmed. Miners use specialized hardware to solve complex computational math problems and verify blocks of transactions. The first miner to solve the math problem adds the verified block to the blockchain and receives a reward of newly minted Bitcoins. This mining process secures the Bitcoin network and provides an incentive for miners to validate transactions. However, Bitcoin mining has become very resource-intensive over the years.

"When I first started mining Bitcoin on my home computer it felt like a collaborative effort with cryptographers around the world. Now it seems large mining companies are trying to centralize control," says Claire Davies, a crypto enthusiast who mined Bitcoin in her basement in 2010.

As more miners competed for new coins, the difficulty of Bitcoin mining increased. To better compete, miners started pooling their computational resources and sharing mining rewards. These Bitcoin mining pools allow miners to work together and receive consistent payouts for their contributions. Popular early mining pools included Slush Pool, AntPool, F2Pool, and BTC.com. By combining computational power, miners in pools stand a better chance of solving a block and getting rewarded more regularly in smaller amounts. Pooled mining now accounts for a significant portion of overall Bitcoin mining activity.
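
The economics behind pooling can be shown with a quick Monte Carlo sketch. The hash-rate shares and block reward below are illustrative numbers only; the point is that expected income is roughly the same either way, while the pooled payout variance is far smaller.

```python
import random

BLOCKS = 100_000        # blocks simulated
REWARD = 6.25           # BTC per block (example figure)
MY_SHARE = 0.001        # our fraction of total network hash rate
POOL_SHARE = 0.20       # the pool's fraction of total network hash rate

random.seed(42)
solo, pooled = [], []
for _ in range(BLOCKS):
    # Solo: win the entire block reward with tiny probability.
    solo.append(REWARD if random.random() < MY_SHARE else 0.0)
    # Pooled: the pool wins far more often, and we take a pro-rata slice.
    won = random.random() < POOL_SHARE
    pooled.append(REWARD * (MY_SHARE / POOL_SHARE) if won else 0.0)

def mean(xs): return sum(xs) / len(xs)
def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Similar expected income per block, but far steadier payouts when pooled.
print(f"solo:   mean {mean(solo):.5f} BTC/block, variance {var(solo):.5f}")
print(f"pooled: mean {mean(pooled):.5f} BTC/block, variance {var(pooled):.5f}")
```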

While mining pools seem like a sensible adaptation, they have led to concerns about centralization. With fewer groups controlling mining power, they could potentially collude and exert authority over the network. No pool has sustained more than 50% of the network's hash rate, the share needed to mount a 51% attack and manipulate transactions, though GHash.io briefly crossed that threshold in 2014 before voluntarily reducing its share. Strategies like smart pooling algorithms help maintain decentralization by preventing a small number of pools from gaining dominance. New decentralized mining protocols are also in development to let individual miners contribute meaningfully again. Overall, mining remains decentralized enough to securely uphold Bitcoin's mission, even if competition favors big players.

Bitcoin's underlying protocol is built for decentralization, but human behavior tends towards efficiency which can lead to centralization over time. Making mining more decentralized will depend on developing new technologies and incentivizing participation worldwide. Two approaches that could help are:

New protocols like BetterHash and Stratum V2 aim to give individual miners more choice over transactions and the ability to mine solo profitably. Making it feasible for average users to mine from home PCs or small operations promotes decentralization.

China long dominated Bitcoin mining until its 2021 crackdown pushed much of the industry abroad; spreading infrastructure and mining farms worldwide makes collusion and manipulation more difficult. Encouraging miners in North America, Europe, and developing countries balances control.

In conclusion, Bitcoin was built as a decentralized system but faces challenges from centralized mining pools. With careful governance and new protocols, Bitcoin can retain its decentralization and resist consolidation among miners and other entities. The story of mining pools illustrates how human tendencies can sometimes conflict with decentralization principles. Maintaining Bitcoin's core values will require ongoing vigilance, creativity, and responsibility among developers and users alike.

More:

Decentralization and Bitcoin Mining Pools - BTC Peers

Read More..

What is Bitcoin Dominance Chart and Why Is It Important? – Bitcoinsensus

Overview:

Bitcoin dominance shines as a key indicator, revealing how much sway Bitcoin holds in the market. As this influence shifts, it ripples through other coins. Understanding this concept is vital for trading altcoins wisely and gauging market trends. In this article you will learn what Bitcoin dominance is and why it is important for a trader to keep track of it.


In a rapidly expanding sea of cryptocurrencies, traders are on a constant lookout for insightful tools that can help them understand the market trends. Bitcoin (BTC) dominance is a crucial indicator used to discern patterns in the altcoin market, pinpoint bull markets, and identify opportunities during Bitcoin rallies. In this article we will dive into Bitcoin dominance, how it is calculated, its significance, and its role in the dynamic landscape of cryptocurrency trading. Let's take a look:

Bitcoin dominance is essentially a percentage that gauges the dominance of BTC within the larger market landscape. With the constant emergence of new coins and tokens in the altcoin market, this metric has gained significant traction. Traders and investors have embraced it as an indispensable tool to craft their portfolios and refine their trading and investment strategies.

As Bitcoin dominance grows or shrinks, it casts light on multiple aspects of the market, from overall sentiment to how capital rotates between Bitcoin and altcoins.

Bitcoin dominance is calculated by dividing Bitcoin's market capitalization by the total market capitalization of all cryptocurrencies and then multiplying by 100. The formula is:

Bitcoin Dominance = (Bitcoin Market Cap / Total Crypto Market Cap) * 100

A higher percentage signifies that Bitcoin's market value makes up a larger share of the overall crypto market.
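
In code the metric is a one-liner; the market-cap figures in this sketch are placeholders rather than live data.

```python
def bitcoin_dominance(btc_market_cap: float, total_market_cap: float) -> float:
    """Bitcoin dominance as a percentage of the total crypto market cap."""
    return btc_market_cap / total_market_cap * 100.0

# Placeholder figures in USD, not live data:
btc_cap = 500e9      # Bitcoin market cap
total_cap = 1_100e9  # total crypto market cap
print(f"BTC dominance: {bitcoin_dominance(btc_cap, total_cap):.1f}%")  # ~45.5%
```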



Throughout its history, Bitcoin's role has changed in the world of cryptocurrencies, affecting how the whole market works. At first, as the very first and most famous cryptocurrency, Bitcoin was the big player, dominating everything. But as time went on and more cryptocurrencies showed up, its power started to lessen.

Around 2015, Bitcoin was in control with about 85.4% of the market. This was a strong position. However, things shifted when lots of new cryptocurrencies came into the picture, especially through something called initial coin offerings (ICOs).

Even with more competition, Bitcoin still has the biggest piece of the pie in the cryptocurrency world. Its share has changed, but it's still important.

Bitcoin's influence has a big effect on all cryptocurrencies. If Bitcoin's influence is bigger, it can show that the whole market might not be doing so well. If its influence gets smaller, it can mean that people are getting more interested in other cryptocurrencies.

Remember, many things can change Bitcoin's influence: investor preferences, laws, new tech discoveries, and how people feel about the market. This shows that Bitcoin's role is always changing.

Bitcoin dominance hinges upon Bitcoin's market capitalization and the cumulative market capitalization of all cryptocurrencies. These figures are vital in calculating the dominance ratio. The process of obtaining and graphing these values is straightforward.

If you look at the Bitcoin market cap, the total cryptocurrency market cap, and Bitcoin dominance together, a clear relationship between these variables becomes evident.

Generally, the trajectory and pattern of the overall cryptocurrency market capitalization tend to mirror those of Bitcoin. This happens because of Bitcoin's overarching influence across the crypto landscape as the pioneering, largest, and most widely acknowledged cryptocurrency.

Newcomers to the cryptocurrency world often start their journey with Bitcoin, given its status as the premier and most recognized digital currency. This user inclination contributes to Bitcoins resonance as a benchmark for the broader crypto markets movement.

With an understanding of Bitcoin dominance in place, it's worth examining the key factors that exert influence on this metric. The factors that impact Bitcoin dominance include:

The trajectory of BTC's value is intimately linked to its dominance. As Bitcoin's price climbs, its dominance within the market expands. This correlation is direct and fundamental. Historically, BTC held nearly 90% dominance when altcoins were in their infancy.

However, the rise of blockchain-powered sectors like gaming, finance, and art has shifted this dynamic. Each new advancement that introduces a fresh token contributes to the alteration of Bitcoins dominance.

The influx of new coins can indeed impact Bitcoin's dominance. Despite the sheer scale of Bitcoin's market cap, the introduction of newer and less established coins triggers a fundamental psychological aspect: risk appetite.

With a plethora of over 20,000 circulating crypto assets, market participants opt for diverse options influenced by social sentiments, fundamentals, and hype. This dynamic essentially pits Bitcoin against alternative assets, rendering its dominance vulnerable to shifts in capital allocation.

While Satoshi Nakamoto envisioned Bitcoin as a peer-to-peer transaction medium, stablecoins have taken up this mantle, facilitating the on-ramping of crypto investors onto exchanges. The surge in stablecoin popularity has the potential to erode BTC dominance significantly.

Stablecoins such as USDT, USDC, BUSD, and others enjoy a robust market presence, emerging as formidable contenders to Bitcoin's dominance.

Bitcoin's dominance can be influenced by prevailing market conditions. Notably, BTC dominance might exhibit growth during a bear market, even as both the total market cap and BTC's market cap experience declines.

This stems from Bitcoin's evolution into a relatively stable crypto asset, often mirroring traditional markets like the S&P 500. As a result, Bitcoin's relative stability shields it against market turbulence, causing volatile altcoins to bear the brunt and consequently leading to an expansion of BTC dominance.

On the other hand, during bullish market phases, the scenario can reverse. BTC dominance might recede despite an increasing market cap, as investors show more willingness to allocate funds to riskier altcoins.

While widely embraced as an indicator, Bitcoin dominance does not escape its share of critical scrutiny. Some drawbacks associated with this indicator include:

Evolving Crypto Landscape: Bitcoin dominance encounters a decline during the launch of new cryptocurrencies. The steady influx of fresh protocols and projects prompts skepticism about the indicator's long-term reliability.

Market Cap Metrics Limitations: There's contention surrounding the accuracy of Bitcoin's market capitalization calculation. This arises from factors such as potentially lost Bitcoin supply or dormant holdings in obsolete wallets.

Given the complex nature of the cryptocurrency arena, it is best not to rely solely on Bitcoin dominance to guide trading strategies. Combining Bitcoin dominance with other useful indicators allows a more accurate interpretation of market trends.

For those interested in keeping up with real-time Bitcoin dominance, user-friendly charts are available on major market-data platforms such as TradingView (under the BTC.D ticker), CoinMarketCap, and CoinGecko.


In the world of cryptocurrencies, Bitcoin dominance has been a steady presence. It's like a spotlight on how much control Bitcoin has over the market. When Bitcoin's influence changes, it often affects other coins too. Understanding this is crucial because trading altcoins without considering Bitcoin dominance is like navigating in the dark. It's also a valuable tool to gauge if the market is doing well or not so well. So, keeping an eye on Bitcoin dominance is important and you can do that by taking a look at the tools mentioned above.


Continue reading here:

What is Bitcoin Dominance Chart and Why Is It Important? - Bitcoinsensus

Read More..

Machine learning for chemistry: Basics and applications – Phys.org


In a review published in Engineering, scientists explore the burgeoning field of machine learning (ML) and its applications in chemistry. Titled "Machine Learning for Chemistry: Basics and Applications," this comprehensive review aims to bridge the gap between chemists and modern ML algorithms, providing insights into the potential of ML in revolutionizing chemical research.

Over the past decade, ML and artificial intelligence (AI) have made remarkable strides, bringing us closer to the realization of intelligent machines. The advent of deep learning methods and enhanced data storage capabilities has played a pivotal role in this progress. ML has already demonstrated success in domains such as image and speech recognition, and now it is gaining significant attention in the field of chemistry, which is characterized by complex data and diverse organic molecules.

However, chemists often face challenges in adopting ML applications due to a lack of familiarity with modern ML algorithms. Chemistry datasets typically exhibit a bias towards successful experiments, while a balanced perspective necessitates the inclusion of both successful and failed experiments. Furthermore, incomplete documentation of synthetic conditions in literature poses additional challenges.

Computational chemistry, where datasets can be reliably constructed from quantum mechanics calculations, has embraced ML applications more readily. Nonetheless, chemists need a basic understanding of ML to harness the potential of data recording and ML-guided experiments.

This review serves as an introductory guide to popular chemistry databases, two-dimensional (2D) and three-dimensional (3D) features used in ML models, and popular ML algorithms. It delves into three specific chemistry fields where ML has made significant progress: retrosynthesis in organic chemistry, ML-potential-based atomic simulation, and ML for heterogeneous catalysis.

These applications have either accelerated research or provided innovative solutions to complex problems. The review concludes with a discussion of future challenges in the field.
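
As a concrete taste of the 2D features the review surveys, here is a minimal sketch building a Morgan (circular) fingerprint, one of the most common molecular representations fed to ML models. It assumes the open-source RDKit package is available; the example is illustrative and not taken from the review itself.

```python
# A common 2D feature for chemistry ML: the Morgan (circular) fingerprint.
# Assumes the open-source RDKit package is installed (pip install rdkit).
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CCO")  # ethanol, described as a SMILES string
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)  # radius 2

# The resulting bit vector can feed a standard classifier or neural network.
print(fp.GetNumOnBits(), "bits set out of", fp.GetNumBits())
```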

The rapid advancement of computing facilities and the development of new ML algorithms indicate that even more exciting ML applications are on the horizon, promising to reshape the landscape of chemical research in the ML era. While the future is difficult to predict in such a fast-evolving field, it is undeniable that the development of ML models will lead to enhanced accessibility, generality, accuracy, intelligence, and ultimately, higher productivity.

The integration of ML models with the Internet offers a promising avenue for sharing ML predictions worldwide.

However, the transferability of ML models in chemistry poses a common challenge due to the diverse element types and complex materials involved. Predictions often remain limited to local datasets, resulting in decreased accuracy beyond the dataset.

To address this issue, new techniques such as the global neural network (G-NN) potential and improved ML models with more fitting parameters are being explored. While ML competitions in data science have produced exceptional algorithms, there is a need for more open ML contests in chemistry to nurture young talent.

Excitingly, end-to-end learning, which generates final output from raw input rather than from hand-designed descriptors, holds promise for more intelligent ML applications. AlphaFold2, for example, uses a protein's one-dimensional (1D) amino-acid sequence to predict its 3D structure. Similarly, in the field of heterogeneous catalysis, an end-to-end AI model has successfully resolved reaction pathways. These advanced ML models can also contribute to the development of intelligent experimental robots for high-throughput experiments.

As the field of ML continues to evolve rapidly, it is crucial for chemists and researchers to stay informed about its applications in chemistry. This review serves as a valuable resource, providing a comprehensive overview of the basics of ML and its potential in various chemistry domains. With the integration of ML models and the collective efforts of the scientific community, the future of chemical research holds immense promise.

More information: Yun-Fei Shi et al, Machine Learning for Chemistry: Basics and Applications, Engineering (2023). DOI: 10.1016/j.eng.2023.04.013

See more here:
Machine learning for chemistry: Basics and applications - Phys.org

Read More..

Artificial Intelligence: Transforming Healthcare, Cybersecurity, and Communications – Forbes


Globally, a new era of rapidly developing and interconnected technologies that combine engineering, computer algorithms, and culture is already beginning. The basic ways we live, work, and connect will change because of the digital transformation, or convergence, we will experience in the coming years.

More remarkably, the advent of artificial intelligence (AI) and machine learning-based computers in the next century may alter how we relate to ourselves.

The digital ecosystem's networked computer components, which are made possible by machine learning and artificial intelligence, will have a significant impact on practically every sector of the economy. These integrated AI and computing capabilities could pave the way for new frontiers in fields as diverse as genetic engineering, augmented reality, robotics, renewable energy, big data, and more.

Three important verticals in this digital transformation are already being impacted by AI: 1) Healthcare, 2) Cybersecurity, and 3) Communications.

Artificial intelligence: What is it?

AI is a "technology that appears to emulate human performance typically by learning, coming to conclusions, seeming to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance, or replacing people on execution of non-routine tasks," according to Gartner.

Artificial intelligence (AI) systems aim to reproduce human characteristics and processing power in a machine and outperform human speed and constraints. Machine learning and natural language processing, which are already commonplace in our daily lives, are components of the advent of AI. Today's AI can comprehend, identify, and resolve issues from both organized and unstructured data - and in some situations, without being explicitly trained.

AI has the potential to significantly alter cognitive work and generate economic gains. According to McKinsey & Company, the automation of knowledge labor by intelligent software systems that can carry out knowledge-job activities from unstructured commands could have a $5 trillion to $7 trillion potential economic impact by 2025. "These technologies hold many interesting possibilities," says Dave Coplin, chief envisioning officer at Microsoft UK, who has called artificial intelligence "the most important technology that anyone on the planet is working on right now." Research and development spending and investments are reliable indicators of upcoming technological advancements. Goldman Sachs, a financial services company, predicts that global investments in artificial intelligence will reach $200 billion by 2025.

AI-enabled computers are designed to automate tasks such as speech recognition, learning, planning, and problem-solving. By prioritizing and acting on data, these technologies can facilitate more effective decision-making, particularly over larger networks with numerous users and factors.

AI and Healthcare

AI is already transforming the healthcare industry in drug discovery, where it is utilized to evaluate combinations of substances and procedures that will improve human health and thwart pandemics. AI was crucial in helping medical personnel respond to the COVID-19 outbreak and in the development of COVID-19 vaccines.

Predictive analytics is one of the most fascinating applications of AI in healthcare. To forecast future outcomes based on a patient's current health or symptoms, predictive analytics leverages past data on their diseases and treatments. This enables doctors to choose the best course of action for treating individuals with persistent diseases or other health problems. Google's DeepMind team recently developed models that can predict many protein configurations, which is highly advantageous for science and medical research.

As it continues to develop, AI will get better at predicting health outcomes, offering individualized care plans, and even treating illness. This capability will let healthcare professionals treat patients more effectively at home, in charitable or religious settings, and in the office.

AI and Cybersecurity

AI in cybersecurity can offer a quicker way to recognize and detect online threats. Cybersecurity companies have developed AI-powered software and platforms that scan data and files in real time to detect the use of abnormal or malicious credentials, brute-force login attempts, unusual data movement, and data exfiltration. This enables companies to make statistical judgments and guard against anomalies before they are reported and patched.

To assist cybersecurity professionals, AI also improves network monitoring and threat detection technologies by minimizing noise, delivering priority warnings, utilizing contextual data backed by proof, and using automated analysis based on correlation indices from cyber threat intelligence reports.

Automation is undoubtedly important in the cybersecurity world. "There are too many things happening - too much data, too many attackers, too much of an attack surface to defend - that without those automated capabilities that you get with artificial intelligence and machine learning, you don't have a prayer of being able to defend yourself," said Art Coviello, a partner at Rally Ventures and the former chairman of RSA.

Although AI and ML can be useful tools for cyber defense, they can also have drawbacks. Threat actors can utilize them too, probing threat-detection models for weaknesses and improving their attack capabilities more quickly. Malicious governments and criminal hackers are already using AI and ML as tools to identify and exploit gaps in threat-detection models. They employ a variety of techniques to do this, frequently favoring automated, human-impersonating phishing attacks and malware that self-modifies to trick or even defeat cyber-defense systems and programs.

Cybercriminals are already attacking and probing their victims' networks using AI and ML capabilities. Most at risk are small firms, organizations, and in particular healthcare facilities that cannot afford substantial expenditures on developing defensive cybersecurity technology like AI. Ransomware-based extortion by hackers who demand payment in cryptocurrency poses a potentially persistent and growing threat.

Communications & Customer Service (CX)

AI is also changing the way our society communicates. Businesses are already using robotic processing automation (RPA), a type of artificial intelligence, to automate more routine tasks and save manual labor. By utilizing technology for routine, repeatable tasks, RPA improves service operations and frees up human talent for more difficult, complicated problems. It is scalable and adaptable to performance requirements. In the private sector, RPA is frequently utilized in contact centers, insurance enrollment and billing, claims processing, and medical coding, among other applications.

Chatbots, voice assistants, and other messaging apps that use conversational AI help a variety of sectors by fully automating customer service and providing round-the-clock support. Conversational AI and chatbots keep advancing, introducing new forms of human communication through facial expressions and contextual awareness. The use of these apps is already widespread in the healthcare, retail, and travel sectors.

A wide range of business sectors have used AI technologies to produce news stories, social media posts, legal filings, and banking reports in both the media and on social media. The potential of AI and its human-like capabilities, especially in textual analysis, has recently come to light thanks to the conversational interface ChatGPT. Another OpenAI program, DALL-E, has demonstrated the capacity to generate graphics from simple instructions. Both systems accomplish this by mimicking human speech and language and synthesizing from their training data.

AI and Our Future

We need to consider any potential ethical concerns with artificial intelligence in the future. We need to consider what might occur if we use this technology and who will oversee it.

Algorithmic bias is a serious problem, and it has repeatedly been demonstrated. A recent MIT project examined several computer programs for embedded viewpoints and discovered that many of them carried harmful biases. We need to account for bias while working with human variables in programming. Technology is made by humans, and humans have prejudices.

This is how technology can go wrong, and human monitoring of technology development and application is a plus. We must ensure that the people writing the code and the algorithms are as diverse as possible. With responsible oversight over the data input and response, technology will be shaped to be more balanced.

Understanding AI's lack of context is another issue. Programmed algorithms only display Xs and Os; they do not depict interactions or conduct between people. In the future, interactivity and behavior may be encoded into the software, but that time has not yet come.

The genuine hope is that we will be able to guide these incredible technologies we are creating in the proper direction for good. If we use them properly, each of them has applications that could help our civilization. It must be done by the entire world community. To keep things in check, we need collective research, ethics, transparent strategies, and proper industry incentives to keep AI on the right track.

Chuck Brooks, President of Brooks Consulting International, is a globally recognized thought leader and subject matter expert in Cybersecurity and Emerging Technologies. LinkedIn named Chuck one of "The Top 5 Tech People to Follow on LinkedIn." He was named Cybersecurity Person of the Year by Cyber Express, one of the world's "10 Best Cyber Security and Technology Experts" by Best Rated, a "Top 50 Global Influencer in Risk, Compliance" by Thomson Reuters, "Best of The World in Security" by CISO Platform, and the #2 Global Cybersecurity Influencer by IFSEC and Thinkers 360. He was featured in the 2020 and 2021 Onalytica "Who's Who in Cybersecurity" as one of the top influencers for cybersecurity and risk management issues, and was named one of the Top 5 Executives to Follow on Cybersecurity by Executive Mosaic. He is also a Cybersecurity Expert for The Network at the Washington Post, Visiting Editor at Homeland Security Today, Expert for Executive Mosaic/GovCon, and a Contributor to FORBES.

In government, Chuck has received two senior Presidential appointments. Under President George W. Bush, he was appointed to the Department of Homeland Security (DHS) as the first Legislative Director of the Science & Technology Directorate. He was also appointed Special Assistant to the Director of Voice of America under President Reagan, and served as a top advisor to the late Senator Arlen Specter, covering security and technology issues on Capitol Hill. Currently, Chuck serves on a DHS CISA working group exploring space and satellite cybersecurity.

In industry, Chuck has served in senior executive roles for General Dynamics as the Principal Market Growth Strategist for Cyber Systems, at Xerox as Vice President & Client Executive for Homeland Security, for Rapiscan as Vice President of R&D, for SRA as Vice President of Government Relations, and for Sutherland as Vice President of Marketing and Government Relations. He currently sits on several corporate and not-for-profit boards in advisory roles.

In academia, Chuck is Adjunct Faculty at Georgetown University's Graduate Applied Intelligence Program and the Graduate Cybersecurity Programs, where he teaches courses on risk management, homeland security, and cybersecurity. He designed and taught a popular course called Disruptive Technologies and Organizational Management. He was an Adjunct Faculty Member at Johns Hopkins University, where he taught a graduate course on homeland security for two years. He has an MA in International Relations from the University of Chicago, a BA in Political Science from DePauw University, and a Certificate in International Law from The Hague Academy of International Law.

In the media, Chuck has been a featured speaker at dozens of conferences, events, podcasts, and webinars and has published more than 250 articles and blogs on cybersecurity, homeland security, and technology issues. Recently, Chuck briefed the G-20 Energy Conference on operating systems cybersecurity. He has also presented on the need for global cooperation in cybersecurity to the Holy See and the US Embassy to the Holy See in Rome. His writings have appeared on AT&T, IBM, Intel, Microsoft, General Dynamics, Xerox, Juniper Networks, NetScout, Human, BeyondTrust, Cylance, Ivanti, Check Point, and many other blogs. He has more than 104,000 followers on LinkedIn and runs a dozen LinkedIn groups, including the two largest in homeland security. His newsletter, Security & Tech Trends, has 48,000 subscribers, and he also has a wide following on Twitter (19,000-plus followers) and Facebook (5,000 friends).

Some of Chuck's other activities include serving as a Subject Matter Expert to the Homeland Defense and Security Information Analysis Center (HDIAC), a Department of Defense (DoD)-sponsored organization through the Defense Technical Information Center (DTIC); presenting at USTRANSCOM on cybersecurity threats to transportation; and presenting to the FBI and the National Academy of Sciences on life sciences cybersecurity. He also served on a working group with the National Academy of Sciences on digital transformation for the United States Air Force, and is an Advisory Board Member for the Quantum Security Alliance.

Follow Chuck on social media:

LinkedIn:https://www.linkedin.com/in/chuckbrooks/

Twitter: @ChuckDBrooks

Read more here:
Artificial Intelligence: Transforming Healthcare, Cybersecurity, and Communications - Forbes

Read More..

How Apple is already using machine learning and AI in iOS – AppleInsider

Apple may not be as flashy as other companies in adopting artificial intelligence features, but the company already has a lot of smarts scattered throughout iOS.

Apple does not go out of its way to specifically name-drop "artificial intelligence" or AI meaningfully, but the company isn't avoiding the technology. Machine learning has become Apple's catch-all for its AI initiatives.

Apple uses artificial intelligence and machine learning in iOS in several noticeable ways. Here is a quick breakdown of where you'll find it.

It has been several years since Apple started using machine learning in iOS and other platforms. The first real use case was Apple's software keyboard on the iPhone.

Apple utilized predictive machine learning to understand which letter a user was hitting, which boosted accuracy. The algorithm also aimed to predict what word the user would type next.

Machine learning, or ML, is a system that can learn and adapt without explicit instructions. It is often used to identify patterns in data and provide specific results.

This technology has become a popular subfield of artificial intelligence. Apple has also been incorporating these features for several years.

In 2023, Apple is using machine learning in just about every nook and cranny of iOS. It is present in how users search for photos, interact with Siri, see suggestions for events, and much, much more.

On-device machine learning systems benefit the end user regarding data security and privacy. This allows Apple to keep important information on the device rather than relying on the cloud.

To help boost machine learning and all of the other key automated processes in iPhones, Apple made the Neural Engine. It launched with the iPhone's A11 Bionic processor to help with some camera functions, as well as Face ID.

Siri isn't technically artificial intelligence, but it does rely on AI systems to function. Siri taps into the on-device Deep Neural Network, or DNN, and machine learning to parse queries and offer responses.

Siri can handle various voice- and text-based queries, ranging from simple questions to controlling built-in apps. Users can ask Siri to play music, set a timer, check the weather, and much more.

Apple introduced the TrueDepth camera and Face ID with the launch of the iPhone X. The hardware system can project 30,000 infrared dots to create a depth map of the user's face. The dot projection is paired with a 2D infrared scan as well.

That information is stored on-device, and the iPhone uses machine learning and the DNN to parse every single scan of the user's face when they unlock their device.

This goes beyond iOS, as the stock Photos app is available on macOS and iPadOS as well. This app uses several machine learning algorithms to help with key built-in features, including photo and video curation.

Apple's Photos app using machine learning

Facial recognition in images is possible thanks to machine learning. The People album allows searching for identified people and curating images.

An on-device knowledge graph powered by machine learning can learn a person's frequently visited places, associated people, events, and more. It can use this gathered data to automatically create curated collections of photos and videos called "Memories."

Apple works to improve the camera experience for iPhone users regularly. Part of that goal is met with software and machine learning.

Apple's Deep Fusion optimizes for detail and low noise in photos

The Neural Engine boosts the camera's capabilities with features like Deep Fusion. It launched with the iPhone 11 and is present in newer iPhones.

Deep Fusion is a type of neural image processing. When taking a photo, the camera captures a total of nine shots. There are two sets of four shots taken just before the shutter button is pressed, followed by one longer exposure shot when the button is pressed.

The machine learning process, powered by the Neural Engine, will kick in and find the best possible shots. The result leans more towards sharpness and color accuracy.
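
Apple's Deep Fusion pipeline is proprietary, but the core idea of favoring the sharpest frames can be sketched in a few lines of numpy. Everything below, from the sharpness metric to the weighting scheme and the synthetic frames, is a toy illustration, not Apple's algorithm.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Toy sharpness score: variance of a Laplacian-style local difference."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def fuse(frames: list[np.ndarray]) -> np.ndarray:
    """Blend frames weighted by sharpness, so the crispest shots dominate."""
    weights = np.array([sharpness(f) for f in frames])
    weights /= weights.sum()
    return sum(w * f for w, f in zip(weights, frames))

# Nine synthetic noisy "exposures" of the same 64x64 scene, echoing the
# nine shots described above:
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(9)]
print(fuse(frames).shape)  # (64, 64)
```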

Portrait mode also utilizes machine learning. While high-end iPhone models rely on hardware elements to help separate the user from the background, the iPhone SE of 2020 relied solely on machine learning to get a proper portrait blur effect.

Machine learning algorithms help customers automate their general tasks as well. ML makes it possible to get smart suggestions regarding potential events the user might be interested in.

For instance, if someone sends an iMessage that includes a date, or even just the suggestion of doing something, then iOS can offer up an event to add to the Calendar app. All it takes is a few taps to add the event to the app to make it easy to remember.

There are more machine learning-based features coming to iOS 17:

One of Apple's first use cases with machine learning was the keyboard and autocorrect, and it's getting improved with iOS 17. Apple announced in 2023 that the stock keyboard will now utilize a "transformer language model," significantly boosting word prediction.

The transformer language model is a machine learning system that improves predictive accuracy as the user types. The software keyboard also learns frequently typed words, including swear words.
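
Apple has not published the keyboard model itself, but the underlying idea of learning a user's word patterns can be shown with a deliberately tiny frequency-based predictor. A transformer replaces these bigram counts with learned attention over much longer context; the class below is a stand-in for the concept only.

```python
from collections import Counter, defaultdict

class ToyPredictor:
    """Learns bigram counts from typed text and suggests likely next words.
    A conceptual stand-in only; Apple's keyboard uses a transformer model."""
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def learn(self, text: str) -> None:
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev: str, k: int = 3) -> list[str]:
        return [w for w, _ in self.bigrams[prev.lower()].most_common(k)]

p = ToyPredictor()
for sentence in ("see you soon", "see you tomorrow", "see you soon then"):
    p.learn(sentence)
print(p.suggest("you"))  # ['soon', 'tomorrow']
```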

Apple introduced a brand-new Journal app when it announced iOS 17 at WWDC 2023. This new app will allow users to reflect on past events and journal as much as they want in a proprietary app.

Apple's stock Journal app

Apple is using machine learning to help inspire users as they add entries. These suggestions can be pulled from various resources, including the Photos app, recent activity, recent workouts, people, places, and more.

This feature is expected to arrive with the launch of iOS 17.1.

Apple will improve dictation and language translation with machine learning as well.

Machine learning is also present in watchOS with features that help track sleep, hand washing, heart health, and more.

As mentioned above, Apple has been using machine learning for years, which means the company has technically been using artificial intelligence for years.

People who think Apple is lagging behind Google and Microsoft are only considering ChatGPT and other similar systems. The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

Apple is going to continue to rely on machine learning for the foreseeable future. It will find new ways to implement the system and boost user features in the future.

It is also rumored that Apple is developing its own ChatGPT-like experience, which could boost Siri in a big way at some point in the future. In February 2023, Apple held a summit focusing entirely on artificial intelligence, a clear sign it's not moving away from the technology.

Apple can rely on systems it's introducing with iOS 17, like the transformer language model for autocorrect, expanding functionality beyond the keyboard. Siri is just one avenue where Apple's continued work with machine learning can have user-facing value.

Apple's work in artificial intelligence is likely leading to the Apple Car. Whether or not the company actually releases a vehicle, the autonomous system designed for automobiles will need a brain.

See the original post here:
How Apple is already using machine learning and AI in iOS - AppleInsider

Read More..

Harnessing deep learning for population genetic inference – Nature.com


See the original post here:
Harnessing deep learning for population genetic inference - Nature.com

Read More..

Here’s Why GPUs Are Deep Learning’s Best Friend – Hackaday

If you have a curiosity about how fancy graphics cards actually work, and why they are so well-suited to AI-type applications, then take a few minutes to read [Tim Dettmers] explain why this is so. It's not a terribly long read, but while it does get technical there are also car analogies, so there's something for everyone!

He starts off by saying that most people know that GPUs are scarily efficient at matrix multiplication and convolution, but what really makes them most useful is their ability to work with large amounts of memory very efficiently.

Essentially, a CPU is a latency-optimized device while GPUs are bandwidth-optimized devices. If a CPU is a race car, a GPU is a cargo truck. The main job in deep learning is to fetch and move cargo (memory, actually) around. Both devices can do this job, but in different ways. A race car moves quickly, but can't carry much. A truck is slower, but far better at moving a lot at once.

To extend the analogy, a GPU isn't actually just a truck; it is more like a fleet of trucks working in parallel. When applied correctly, this can effectively hide latency in much the same way as an assembly line. It takes a while for the first truck to arrive, but once it does, there's an unbroken line of loaded trucks waiting to be unloaded. No matter how quickly and efficiently one unloads each truck, the next one is right there, waiting. Of course, GPUs don't just shuttle memory around; they can do work on it as well.
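
The truck-fleet point, amortizing per-trip overhead by moving a lot at once, can be felt even on a CPU with numpy: the same arithmetic is dramatically faster as one big batched operation than as many small trips. On a GPU the gap widens further; the numbers here are illustrative, not benchmarks.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((1024, 1024), dtype=np.float32)
b = rng.random((1024, 1024), dtype=np.float32)

# Many small trips: 1024 matrix-vector products, one column at a time.
t0 = time.perf_counter()
for i in range(1024):
    _ = a @ b[:, i]
small_trips = time.perf_counter() - t0

# One big shipment: the identical arithmetic as a single matrix product,
# letting the BLAS backend batch memory traffic efficiently.
t0 = time.perf_counter()
_ = a @ b
one_shipment = time.perf_counter() - t0

print(f"1024 small trips: {small_trips:.4f}s, one shipment: {one_shipment:.4f}s")
```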

The usual configuration for deep learning applications is a desktop computer with one or more high-end graphics cards wedged into it, but there are other (and smaller) ways to enjoy some of the same computational advantages without eating a ton of power and gaining a bunch of unused extra HDMI and DisplayPort jacks as a side effect. NVIDIA's line of Jetson development boards incorporates the right technology in an integrated way. While it might lack the raw horsepower (and power bill) of a desktop machine laden with GPUs, they're no slouch for their size.

Read more:
Here's Why GPUs Are Deep Learning's Best Friend - Hackaday

Read More..

Some Experiences Integrating Machine Learning with Vision and … – Quality Magazine


See the rest here:
Some Experiences Integrating Machine Learning with Vision and ... - Quality Magazine

Read More..