
Student research puzzles out cryptocurrency risk by comparing … – Bryant University

The recent boom in cryptocurrencies has created a universe of new investment possibilities, not just for individual investors but for institutional investors, governments, and publicly listed firms as well, with hype to match. Yet as they become more popular, cryptocurrencies have seen enormous fluctuations in price over their relatively short lifespans, adding uncertain risks and returns to the bottom line.

"Risk translation: How cryptocurrency impacts company risk, beta and returns," a paper published in the Journal of Capital Markets Studies by Jack Field '23 and Bryant Professor of Finance A. Can Inci, Ph.D., looks beyond both the hype and the doom predictions to gain a true analysis of the novel asset. Through careful investigation, the study, based on Field's honors thesis, compares the various virtues of different crypto-based strategies, including using cryptocurrency as part of a treasury portfolio versus as a medium of exchange or a commission-based asset.

"Whether from forces of supply and demand, or from complex algorithmic technologies (such as blockchains), or from a mixture of the two, the underlying worth of cryptocurrencies has been an enigma for investors and politicians alike," the article notes.

RELATED STORY: Inci on the risks, and rewards, of investing in cryptocurrencies

The piece, published in May, examines the effect cryptocurrency assets can have on the risk profiles of publicly traded firms. Through a cross-sectional analysis of the daily returns, volatility, betas and Sharpe ratios of the four largest public holders of cryptocurrencies (MicroStrategy Inc., Tesla, Inc., Square Inc., and Marathon Digital Holdings, Inc.) and five of the largest cryptocurrencies by market cap (Bitcoin, Ether, Tether, Ripple, and Dogecoin), the authors measured the risk and return characteristics of holding cryptocurrencies, as well as the motivations behind holding them as an asset class.
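For readers unfamiliar with the metrics in that cross-sectional analysis, here is a minimal sketch of how daily-return beta and the Sharpe ratio are computed. The return series below are invented for illustration, not data from the study.

```python
import numpy as np

def beta(asset_returns, market_returns):
    # CAPM beta: covariance of asset with market, scaled by market variance
    cov = np.cov(asset_returns, market_returns, ddof=1)
    return cov[0, 1] / cov[1, 1]

def sharpe_ratio(returns, risk_free_rate=0.0):
    # Mean excess return per unit of return volatility
    excess = np.asarray(returns) - risk_free_rate
    return excess.mean() / excess.std(ddof=1)

# Illustrative daily returns (invented for the example)
stock = np.array([0.012, -0.008, 0.021, -0.015, 0.007])
market = np.array([0.006, -0.004, 0.010, -0.009, 0.004])

b = beta(stock, market)   # beta above 1 means the stock amplifies market moves
s = sharpe_ratio(stock)
```

A beta above 1 is exactly the kind of "risk translation" the paper examines: volatile crypto holdings can push a firm's stock to move more than the market does.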

Their conclusions demonstrate the difference in returns across crypto-related strategies, finding that strategies tailored around the use of cryptocurrency as part of a treasury portfolio exhibit the most positive effects on common stock risk and returns, while strategies that use cryptocurrencies as a medium of exchange or a commission-based asset yielded relatively poorer outcomes.

They also note the importance of transparency and risk disclosures in firms dealing with cryptocurrencies. Being such a volatile asset class, cryptocurrencies can introduce uncertainty into a company's balance sheet, as the value of these assets can change drastically in short periods of time. "It is necessary not only for a firm's managers to understand the implications of cryptocurrencies on total asset values but also for shareholders to have the right to know the true risk in owning equity shares of a company," Field and Inci state.

Now a compliance analyst at Manulife Investment Management, Field chose to focus his thesis on an emerging area. Research on cryptocurrency use in corporate finance is "an especially untrodden research area," the authors note, and their study is one of the first on cryptocurrency investments in the treasury departments of publicly traded companies.

Field, who graduated magna cum laude in December with a major in Finance and a concentration in Economics, was one of more than 40 students who completed honors thesis projects this year, ranging from analyzing how match results affect the equity value of publicly traded soccer teams to studying the effects of single-use and fabric facemasks on the environment.

In addition to his role as co-author, Inci served as Field's thesis advisor for the project. Field's editorial advisor was Professor of Finance Hakkan Saraoglu, Ph.D.


Investment Opportunity with Latest Cryptocurrency Miners – GlobeNewswire

NEW YORK, June 16, 2023 (GLOBE NEWSWIRE) -- Crypto enthusiasts around the world have found a safe, convenient, and highly profitable investment opportunity since the launch of Bitmanu miners. Three powerful ASIC mining rigs from this blockchain development company have already claimed their stake as the market's most profitable crypto miners.

Equipped with the latest 3nm chips, Bitmanu miners are capable of delivering higher mining speed despite their moderate power consumption. Above all, these miners offer hash rates that are unheard of in this industry. Naturally, users of these mining rigs find it much easier to earn mining rewards without using a lot of power.

Bitmanu Hash Rates

Many Bitmanu customers have mentioned that they needed just a month to earn 100% ROI by mining Monero. The profitability of these rigs is the highest ever in the industry. This has skyrocketed the demand for Bitmanu miners amongst seasoned mining experts as well as newbies looking to build a steady income source.

Potential Profits/Month

Even though Bitmanu miners are extremely efficient and powerful, these machines can be used without any mining knowledge or experience. They are delivered pre-configured, and users can start mining just by connecting them to a power socket.

"For the first time in the history of this industry, we have designed mining rigs specifically for the common man. Our ultimate goal is to level the playing field and democratize the market," said David Letoski, CMO of Bitmanu.

To find out more about Bitmanu, please visit https://bitmanu.com/

About Bitmanu: Bitmanu stands as a prominent manufacturing company, driven by a team of investors and renowned experts in the cryptocurrency industry. The company's mission is to make the advantages of the latest technological innovations accessible to everyone. Bitmanu proudly presents an impressive lineup of cryptocurrency miners that deliver exceptional returns on investment with remarkable speed.


TechScape: The US is clamping down on cryptocurrency; is the UK next? – The Guardian

TechScape

Rishi Sunak's techno-moment has come. Unfortunately for him, it might be too late.

Last week, the US Securities and Exchange Commission (SEC) launched a pair of lawsuits against the country's biggest cryptocurrency exchanges, Binance and Coinbase.

The lawsuit against Binance, which had been previewed in an earlier action by the CFTC, the US commodities regulator, was juicy:

The SEC complaint alleges that [CEO Changpeng Zhao] directed Binance to conceal the access of high-spending US customers to Binance.com. In one piece of evidence included in the lawsuit, the Binance chief compliance officer messaged a colleague saying: "We are operating as a fking unlicensed securities exchange in the USA bro." Elsewhere in the lawsuit, Binance's CCO is quoted as saying: "We do not want [Binance].com to be regulated ever."

The company runs two supposedly separate exchanges: a regulated US one and an anything-goes international one. A substantial chunk of each lawsuit focuses on the allegation that the company was knowingly helping traders who should only have been allowed on the regulated exchange to skip over to the international one. A Binance spokesperson said: "While we take the allegations in the SEC's complaint seriously, they should not be the subject of an SEC enforcement action, let alone on an expedited basis. They are unjustified."

But it's the lawsuit against Coinbase that is sending shivers through America's cryptocurrency industry:

"Since at least 2019, through the Coinbase platform, Coinbase has operated as an unregistered broker, an unregistered exchange and an unregistered clearing agency," the SEC said in its complaint. "Coinbase has for years defied the regulatory structures and evaded the disclosure requirements that Congress and the SEC have constructed for the protection of the national securities markets and investors."

Paul Grewal, the chief legal officer and general counsel of Coinbase, said: "The SEC's reliance on an enforcement-only approach in the absence of clear rules for the digital asset industry is hurting America's economic competitiveness and companies like Coinbase that have a demonstrated commitment to compliance."

The case against Binance is a straightforward allegation of clear wrongdoing: if you run a crypto exchange that you accept can't service American customers, and then you secretly help American customers trade on it, you aren't going to be too stunned when regulatory action follows.

But the case against Coinbase is more fundamental. It is the SEC arguing that it is illegal to run a cryptocurrency exchange per se. Specifically, that some unknown number of crypto tokens are, in fact, regulated securities (the SEC names 13 in its suit against Coinbase, including Solana, Cardano and Polygon) and that, even if those projects are not illegal in and of themselves, helping people trade in them is.

It's a controversial assessment. During the ICO boom of 2017, the SEC took action against specific crypto projects that veered too close to the sun, and generally won on the merits: selling a token to investors that looks and acts like a unit of stock, while telling them "buy this and you'll get rich," is quite easy for a financial regulator to take action on.

But it is less clear that a cryptocurrency exchange where users trade tokens that aren't themselves illegal could nonetheless function as an illegal stock exchange. Either way, the industry is hedging its bets and looking for an escape hatch. Enter the UK:

California-based Andreessen Horowitz (A16Z) said Britain was on the right path to becoming a leader in crypto regulation. The venture capital firm's new office will open later this year and will be dedicated to investing in crypto and tech startups in the UK and Europe.

Chris Dixon, the head of crypto investing at Andreessen Horowitz, wrote in a blogpost: "While there is still work to be done, we believe that the UK is on the right path to becoming a leader in crypto regulation.

"The UK also has deep pools of talent, world-leading academic institutions, and a strong entrepreneurial culture."

Rishi Sunak said he was "thrilled" that the firm had chosen the UK, a move he said was a testament to "our world-class universities and talent and our strong competitive business environment."

Although the A16Z office is technically targeting the crypto and startup ecosystem in the UK, it will functionally be extremely focused on the crypto part of that mix. The company's latest UK investment is modish crypto-AI startup Gensyn, the office is led by crypto-focused investor Sriram Krishnan, and, well, there's this statement by Sunak:

"As we cement the UK's place as a science and tech superpower, we must embrace new innovations like Web3, powered by blockchain technology, which will enable start-ups to flourish here and grow the economy.

"That success is founded on having the right regulation and guardrails in place to protect consumers and foster innovation. While there's still work to do, I'm determined to unlock opportunities for this technology and turn the UK into the world's Web3 centre."

It's been a long time coming for the prime minister, who first tried to attach himself to crypto's rising star when he was chancellor. In 2021, he launched a taskforce to explore a Bank of England digital currency, and a year later, he tasked the Royal Mint with creating an NFT, just as the market imploded. (The plans were dropped just under a year later.)

The UK had already been benefiting from the regulatory uncertainty in the US before last weeks actions, with crypto founders viewing it as a comfortable middle-ground between the risk of remaining in the US and the upheaval of relocating to a fully low-touch regime like the UAE. But a one-two punch of a gleefully optimistic prime minister in Britain and the long-awaited arrival of a true crackdown in America could be the impetus needed to spark a substantial relocation.

Of course, there is one problem: Sunak's regime is not long for this world. You would need a higher risk appetite than even your typical angel investor to bet on him staying in power past 2024, and Labour is somewhat less enthusiastic about cryptocurrencies. The gamble, from those in the space with whom I've spoken, is less that Sunak will be able to pass friendly laws in the 18 months he has left in office, and more that when he's replaced as prime minister, a crypto clampdown will be extremely low on the list of priorities of whoever replaces him.

A programming note

I'm heading off on parental leave next week, to see my son through to his first birthday. I won't be fully absent: you'll hear from me about once a month for the rest of the year, but I'll be joined by a rotating cast of guest writers from around the Guardian and beyond, led by my partner in tech, our global technology editor Dan Milmo.

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.



With transparent machine learning tool, engineers accelerate … – University of Wisconsin-Madison

Machine learning can be a powerful tool for discovering and designing new polymers, according to new research from the UW-Madison College of Engineering. Photo illustration by Xin (Zoe) Zou/UW-Madison College of Engineering

Using the power of prediction, University of Wisconsin-Madison mechanical engineers have quickly discovered several promising high-performance polymers out of a field of 8 million candidates.

The aerospace, automobile and electronics industries use these polymers, known as polyimides, for a wide variety of applications because they have excellent mechanical and thermal properties including strength, stiffness and heat resistance.

Right now, there's a limited number of existing polyimides because the process of designing them is costly and time-consuming.

However, with their data-driven design framework, the UW-Madison engineers leverage machine learning predictions and molecular dynamics simulations to dramatically speed up the discovery of new polyimides with even better properties.

Ying Li

The team detailed its approach in a paper published this month in the Chemical Engineering Journal.

"Our findings have broad implications for the field of materials science and will inspire further research in the development of advanced data-driven techniques for materials discovery," says Ying Li, an associate professor of mechanical engineering at UW-Madison who led the research. "Our design strategy is much more efficient compared to the conventional trial-and-error process and can also be applied to the molecular design of other polymeric materials."

Polyimides are produced through a condensation reaction of dianhydride and diamine/diisocyanate molecules. For their study, the engineers first collected open-source data of the chemical structures of all the existing dianhydride and diamine/diisocyanate molecules, then used that data to build a comprehensive library of 8 million hypothetical polyimides.

"It's kind of like building something with LEGO blocks," Li says. "You have the basic building blocks: a whole bunch of different dianhydride and diamine/diisocyanate molecules. And you could try to build all of the possible structures by hand, but that would take forever because the various combinations are enormous."

So, Li and his colleagues used a computer to combine the building blocks, which allowed them to organize all possible combinations into a huge database.
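The enumeration Li describes is a Cartesian product of the two monomer lists. A toy sketch, with a handful of hypothetical monomer labels standing in for the real chemical data:

```python
from itertools import product

# Hypothetical monomer shortlists; the study's library combined every known
# dianhydride and diamine/diisocyanate, yielding roughly 8 million candidates
dianhydrides = ["PMDA", "BPDA", "ODPA"]
diamines = ["ODA", "PPD", "MDA"]

# Each dianhydride/diamine pairing defines one candidate polyimide
library = [f"{da}-{dm}" for da, dm in product(dianhydrides, diamines)]
```

With 3 monomers on each side this gives 9 candidates; the combinatorics explode quickly as the lists grow, which is why the full library had to be built by computer.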

Database in hand, the team created multiple machine learning models for the thermal and mechanical properties of polyimides based on experimentally reported values. Using a variety of machine learning techniques, the researchers identified chemical substructures that are most important for determining individual properties.

"We incorporated techniques that essentially explain how our machine learning model behaves, so our model isn't a black box," Li says. "We've built a transparent box that allows human experts to immediately understand why the machine learning model made a certain decision."

Applying their well-trained machine learning models, the researchers obtained predictions for the properties of the 8 million hypothetical polyimides. Then they screened that whole dataset and identified the three best hypothetical polyimides with combined properties superior to those of existing polyimides.
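That screening step amounts to scoring every library entry with the trained predictors and keeping the best. A sketch of the idea, with a small deterministic toy function standing in for the paper's machine learning models and a much smaller candidate list standing in for the 8-million-entry library:

```python
from zlib import crc32

def predict_properties(candidate_id):
    # Toy stand-in for the trained property predictors; the real models map
    # chemical structure to thermal and mechanical properties
    h = crc32(candidate_id.encode())
    return {"tg_c": 300 + h % 300, "strength_mpa": 50 + (h >> 8) % 100}

# Stand-in candidate list (the study screened ~8 million hypothetical polyimides)
candidates = [f"PI-{i:07d}" for i in range(10_000)]
scored = sorted(candidates,
                key=lambda c: sum(predict_properties(c).values()),
                reverse=True)
top_three = scored[:3]
```

Because model inference is cheap relative to synthesis or simulation, exhaustive prediction-and-rank over millions of candidates is tractable, and only the handful of survivors need expensive validation.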

They also checked their work: The researchers built all-atom models for their top-three candidates and conducted molecular dynamics simulations to calculate a key thermal property.

"The molecular dynamics simulations were in good agreement with the predictions from the machine learning models, so that gives us confidence that our predictions are quite reliable," Li says. "In addition, the simulations showed that these new polyimides would be easy to synthesize."

As a final validation method, the team made one of the new polyimides and performed experiments that demonstrated the material's excellent heat resistance. Their experimental results showed the new polyimide could withstand a temperature of about 1,022 degrees Fahrenheit before it started to degrade, a result that agreed with their machine learning predictions. In contrast, existing polyimides could endure temperatures only in the range of 392 to 572 degrees F. The researchers also created a web-based application that allows users to explore the new high-performing polyimides with interactive visualization.

Additional authors on the Chemical Engineering Journal paper include equal-contributing first authors Jinlong He of UW-Madison, Lei Tao of the University of Connecticut, and Nuwayo Eric Munyaneza of Virginia Polytechnic Institute and State University. Vikas Varshney of the Air Force Research Laboratory, Wei Chen of Northwestern University, and Guoliang Liu of Virginia Polytechnic Institute and State University are also authors on the paper.

The research was supported by funding from the Air Force Office of Scientific Research through the Air Forces Young Investigator Research Program, the Air Force Research Laboratory, and the National Science Foundation.


Machine-learning-based diagnosis of thyroid fine-needle aspiration … – Nature.com

In this study, a combination of RI image data and color Papanicolaou-stained image data improved the accuracy of MLA for diagnosing cancer using thyroid FNAB specimens. The classification results of the MLA using color Papanicolaou-stained images were highly dependent on the size of the nucleus, but those of the MLA using RI images were less dependent on nucleus size and were affected by information around the nuclear membrane. The final algorithm using data from both types of images together distinguished thyroid cell clusters from benign thyroid nodules and PTC with 100% accuracy.

MLA has shown superior diagnostic performance using images of thyroid FNAB specimens when a convolutional neural network (CNN) architecture was adopted, which is effective for image analysis7,8,12,13. Guan et al.13 studied a CNN-based MLA for classifying hematoxylin-eosin-stained FNAB specimens of benign thyroid nodules and PTC (TBSRTC II, V and VI). A total of 887 fragmented color images were used in that study, cropped from 279 images taken using a digital camera attached to a brightfield microscope. The trained algorithm exhibited 97.7% accuracy in distinguishing between 128 test images of benign and malignant nodules. Range et al.8 used MLA to classify Papanicolaou-stained FNAB specimens of a broader spectrum of thyroid nodules (TBSRTC II-VI). They used 916 color images obtained using a whole-slide scanner. The trained MLA distinguished malignant from benign nodules with high accuracy (90.8%), comparable to that of a pathologist. Similarly, a CNN-based MLA performed well in our study, exhibiting high-accuracy patch-level classification (97.3%) and cluster-level classification (99.0%) using only color Papanicolaou-stained images.

However, given that the purpose of FNAB is to determine whether to operate on thyroid nodules, the method must not only exhibit high overall accuracy but also minimize serious misclassification, such as classifying an obvious malignancy as benign or an overtly benign nodule as a malignancy. In Guan's study, MLA misclassified as malignant some cases that a pathologist classified as obviously benign. Similarly, in Range's study, MLA misclassified some clearly benign nodules as malignant or misclassified a malignant nodule that was indicated for surgery as benign8. These issues are problematic because they can lead to an erroneous treatment plan for patients who would receive proper treatment under the current standard of care. We studied nodules with relatively distinct benign or malignant characteristics (TBSRTC II, V, and VI). Our finding that RI data improved the accuracy of MLA in these nodules has important clinical significance, since it indicates a potential reduction in the aforementioned serious misclassifications.

Guan et al.13 suggested that the significant misclassifications of MLA for thyroid FNAB specimens could be related to nucleus size. In their study, the cells in false-positive cases showed large nuclei with high mean pixel color information, similar to malignant cells, but the pathologist determined that these cells had a typically benign morphology. The authors' interpretation was that the MLA classification was based on the size and staining of the nucleus, but not on its shape. Likewise, in our results, MLA based on color images showed limitations in accurately classifying benign thyroid cells with a large nucleus or malignant thyroid cells with a small nucleus, because the size of the nucleus was the main feature used for classification. However, MLA classification based on the RI image was less affected by nucleus size. This suggests that RI images can compensate for the limitations of MLA using color images for FNAB specimens whose nuclear size is not typical of benign or malignant cells.

Further analyses to explain the models suggest that RI-image-based MLA uses the structure and shape of the nucleus for classification. Whereas the algorithm was activated mainly by large nuclei in color images, in RI images it was activated not only by large nuclei but also by nuclei with a clear structure. The certainty of the MLA classification results was proportional to the detail of the information around the nuclear membrane when based on RI images, but not when based on color images. Detailed nuclear structures, such as nuclear membrane irregularity and micronucleoli, are important indicators in thyroid cancer diagnosis26. Thus, the accuracy of MLA classification can be improved when such information is incorporated.

Another potential strength of RI images is the integration of information across a wide vertical space. In a thyroid cytology specimen, cells are scattered over a wide vertical space (i.e., multiple z-planes) rather than over a single plane. A single-layer (single z-plane) 2D image cannot capture this vertical spread, and information from out-of-focus cells is likely to be lost or distorted. In contrast, in the RI image obtained through ODT, cells located in different z-planes are in focus simultaneously. In our study, MLA based on color images showed a false-positive result for some out-of-focus patches, whereas MLA based on the RI image showed a true-negative result for the same image patches (data not shown). However, the out-of-focus area is only a part of the color images, and the use of multiple z-plane images did not improve the accuracy of MLA compared to a single z-plane image in a previous study8. Therefore, it is unclear whether this factor significantly affects the accuracy of MLA.

This study has certain limitations. Despite the large number of sample measurements, this study was performed in a single center and could not cover all conditions of specimens that could exist in real clinical environments. ODT provides optimal RI imaging in un-manipulated living cells27, but we obtained RI images from chromatically stained cells. Staining acted as extrinsic noise or artifact in the RI images, which reduced the accuracy of MLA. Further study is required to determine the effect of staining on the outcomes. Finally, up to 30% of FNABs may have indeterminate cytopathology (TBSRTC III and IV). This study targeted specimens characteristic of benign or malignant thyroid nodules (TBSRTC II, V, and VI); therefore, the currently trained algorithm cannot be directly applied to TBSRTC III and IV specimens without relevant training.

To investigate the complementary nature of RI images and color images, a 2D MIP image was generated by projecting the 3D RI image along the z-axis, thereby excluding the influence of dimensionality. Previous studies in the field of cell classification have demonstrated improved performance when using 3D RI images compared to 2D images28,29. Although our research did not incorporate 3D images due to the specific research objectives, we plan to expand our investigations in future studies by incorporating 3D RI images and other 3D imaging modalities.
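The 2D MIP described above keeps, at each pixel, the maximum refractive index encountered along the z-axis of the 3D volume. A minimal sketch with a synthetic volume in place of a real ODT measurement:

```python
import numpy as np

# Synthetic 3-D refractive-index stack, indexed (z, height, width);
# values span a plausible RI range, but the data are random, not measured
volume = np.random.default_rng(0).uniform(1.33, 1.40, size=(16, 64, 64))

# Maximum-intensity projection along z collapses the stack to a 2-D image,
# so structures from every z-plane contribute to the projected pixel
mip = volume.max(axis=0)
```

This is how cells sitting at different depths can all appear "in focus" in a single 2D image derived from the tomogram.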

In this study, we demonstrated the efficacy of multiplexing of RI with standard brightfield imaging using a single ODT platform for MLA-based classification of benign and malignant thyroid FNABs. Multiplexed ODT showed promise for the development of a more accurate classification of thyroid FNABs while reducing the inherent uncertainty and error observed in the current diagnostic standards. Thus, an ODT-based MLA may potentially contribute to an improved cost-effective and rapid point-of-care management of thyroid malignancies.


What to Expect From IRS Cryptocurrency Enforcement – Wealth Management

Practitioners should be paying close attention to tax issues impacting the cryptocurrency industry, according to a recent presentation, "Navigating the Crypto Winter: Preparing for Cryptocurrency Regulation & Enforcement in Uncertain Times," at the 15th Annual NYU Tax Controversy Forum on June 8, 2023, in New York City.

Regulation and enforcement relating to digital assets have taken some time to develop, but with the influx of funding expected in the coming years for the Internal Revenue Service, it's expected that the agency will be cracking down on taxpayers who evade reporting and paying taxes on digital asset transactions.

As a primer, cryptocurrency, non-fungible tokens and other similarly recognized digital assets are classified as property for federal tax purposes and are subject to the same general tax principles. Transactions involving a digital asset are generally required to be reported on a tax return. The IRS has issued guidance on the tax treatment of transactions involving digital assets. While the guidance is rather straightforward on what types of transactions are taxable (as capital gains or income), the speakers spent some time focusing on open issues when it comes to crypto losses.

Open Issues

The speakers started by explaining the various economic loss events for digital assets, including the sale or exchange of the digital asset, abandonment, worthlessness and a distressed or bankrupt crypto exchange. The speakers emphasized that merely because an event has occurred that appears to crystallize an economic loss on a digital asset does not mean that the loss is realized, recognized, and otherwise allowable for U.S. tax purposes. The discussion also focused on some of the shortfalls of the IRS, such as whether it's really able to monitor crypto transactions (spoiler: the IRS probably won't have much luck with decentralized finance [DeFi] transactions) and how the IRS can definitively know a taxpayer transferred assets to someone else and not just to another account they own. Other unique situations discussed included what happens when a taxpayer loses a key to a digital wallet only to later find it, how likely the IRS is to go after someone who made a reasonable effort to report when there are so many non-reporters out there, and how to characterize staking (as ordinary income or capital gain?). Staking is when you lock crypto assets for a set period of time to help support the operation of a blockchain and earn staking rewards for doing so; the panelists compared it to earning interest but explained that it's mechanically different.
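Because digital assets are treated as property, the gain on a disposal is proceeds minus cost basis, and the basis depends on which purchase lots are deemed sold. A minimal sketch of first-in-first-out lot matching, one common convention, chosen here purely for illustration (the presentation did not prescribe a method, and the numbers are invented):

```python
from collections import deque

def fifo_realized_gain(buys, sells):
    # buys/sells: lists of (quantity, unit_price); sold quantity is matched
    # against the oldest purchase lots first (FIFO cost-basis convention)
    lots = deque([qty, price] for qty, price in buys)
    gain = 0.0
    for qty, price in sells:
        while qty > 1e-12:
            lot = lots[0]
            used = min(qty, lot[0])
            gain += used * (price - lot[1])  # proceeds minus basis, per unit
            lot[0] -= used
            qty -= used
            if lot[0] <= 1e-12:
                lots.popleft()
    return gain

# Buy 1 BTC at $20,000 and 1 BTC at $30,000, then sell 1.5 BTC at $40,000
gain = fifo_realized_gain([(1.0, 20_000), (1.0, 30_000)], [(1.5, 40_000)])
```

Here the whole first lot and half the second are deemed sold, so the realized gain is $20,000 plus $5,000. The record-keeping burden this implies is precisely why the panelists urged clients to track every acquisition.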

Enforcement

After laying out the open issues, the conversation shifted to why this is important for practitioners. It was reiterated that the IRS will continue bolstering its enforcement in this space, leading to more audits. The agency has already updated Form 1040 for the 2022 tax year, asking taxpayers to disclose any transactions of digital assets. Recent enforcement efforts include a successful conviction for conspiracy to launder cryptocurrencies and a court order requiring a bank to produce information concerning U.S. taxpayers who might have failed to report crypto transactions.

Follow-Up Steps

One important takeaway from the presentation is to advise clients to track digital assets and all related information by evaluating what they've bought and sold. Find out if your client is involved with any DeFi, and warn clients that the IRS is taking digital asset reporting very seriously and that it's critical they self-report gains and losses even if they don't receive a Form 1099 or a transaction report from an exchange. Lastly, advise clients that crypto isn't as anonymous as they might think. The IRS is already engaging third parties to help it follow digital asset transactions using forensic tracing of the blockchain and is working quickly to enhance its other compliance capabilities. While there still aren't robust know-your-customer and anti-money-laundering policies in place in the crypto space to help fight money laundering and tax evasion, it won't be long before the IRS beefs up its auditing, and clients will end up in the hot seat if they're not careful.


What Is Unsupervised Machine Learning? – The Motley Fool

Artificial intelligence (AI) is an area that focuses on enabling machines and software to process information and make decisions autonomously. Machine learning, a component of AI, involves computer systems enhancing their problem-solving and comprehension of complex issues through automated techniques.

The three central machine-learning methodologies that programmers can use are supervised learning, unsupervised learning, and reinforcement learning. For in-depth information on supervised machine learning and reinforcement machine learning, kindly refer to the articles dedicated to them. Here you can read up on the basics of unsupervised machine learning.

Image source: Getty Images.

With unsupervised machine learning, a system is like a curious toddler exploring a world they know nothing about. The system is exploring data without knowing what it's looking for but is excited -- in a digital kind of way -- about any new pattern it stumbles upon.

With this type of machine learning, algorithms sift through heaps of unstructured data without any specific directions or end goals in mind. They are looking for previously unknown patterns, much as you might look for a new stock pick in an overlooked corner of the market. This is rarely the last step since the owner of the raw data typically applies more sophisticated deep learning or supervised machine learning analyses to any potentially interesting patterns.

Why should you care about this artificial intelligence toddler on a quest without a firm goal? Well, unsupervised machine learning is actually on the cutting edge of technology and innovation. It's a key player in everything from autonomous vehicles learning to navigate roads to recommendation algorithms on your favorite streaming platforms. This pattern-finding method is a powerful first step in a deep analysis of any complex topic, from weather forecasting to genetic research.

Two major types of unsupervised learning are clustering and association.
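The clustering idea can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical two-means routine on made-up data, not anything a production system would use; real clustering workloads would rely on a library such as scikit-learn.

```python
# Minimal 1-D two-means clustering sketch (illustrative only).
def two_means(points, c0, c1, iters=10):
    """Group points around two centers, refining each center toward
    the mean of the points currently assigned to it."""
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        if a:
            c0 = sum(a) / len(a)
        if b:
            c1 = sum(b) / len(b)
    return sorted([c0, c1])

# Viewing hours per week for ten hypothetical users: the algorithm is
# told nothing about groups, yet finds the light and heavy watchers.
hours = [1, 2, 2, 3, 20, 21, 22, 23, 2, 21]
print(two_means(hours, 0.0, 30.0))
```

With no labels at all, the two centers settle near the two natural groups in the data, which is the essence of unsupervised pattern discovery.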

So now you know what unsupervised machine learning is and why it matters. How can you take this newfound knowledge and put it to good use?

First off, you can make informed investment decisions. Companies leveraging unsupervised learning are often poised for growth as this technology continues to evolve. Think about Amazon (AMZN) using unsupervised learning for its product recommendations or Netflix (NFLX) running unsupervised machine learning routines across years of collected viewership data to generate your streaming home page and make future content production decisions.

These applications aren't just fun toys -- they are business advantages and growth drivers.

Also, AI and machine learning continue to reshape many industries. Whether you're into FAANG stocks or emerging AI startups, knowledge about unsupervised learning can give you an edge in evaluating a company's tech prowess and potential for future success.

We all appreciate a bit of connection, right? Well, thanks to unsupervised machine learning, we're getting better at finding people we might know or like on social media platforms. Facebook is a prime example.

Have you ever wondered how Facebook seems to know who your actual friends from high school are -- the ones you may actually want to keep in touch with? It's not sorcery. It's unsupervised learning in action.

Meta Platforms' (META) massive social network continually analyzes a trove of user data, looking for patterns and shared features among users. Common friends are a helpful clue; similar locations and shared interests can point the platform in the right direction, and mutual workplaces can be the clincher. None of these qualities is enough in itself to find that long-lost flame or forgotten friend, but they add up through the power of unsupervised machine-learning algorithms.

So when Facebook suggests "People You May Know," it essentially gives you the output of an unsupervised learning model. The social network isn't just pulling these suggestions out of a digital hat. Each one is the result of a complex analysis of patterns and connections.

View original post here:
What Is Unsupervised Machine Learning? - The Motley Fool

Read More..

Labelling cryptocurrency as ‘gambling’ shows lack of understanding and misses the solution, expert says – ABC News

When a UK parliamentary committee proposed last month that cryptocurrency be regulated as gambling, it didn't take long for the Treasury to reject the idea.

But the fact that it was suggested at all is revealing, says Gavin Brown, associate professor in financial technology at the University of Liverpool.

"[The committee] didn't really understand the technology," he says.

And in this they aren't alone, even though cryptocurrencies (digital currencies designed to offer an alternative payment method to traditional money) are now more than a decade old.

"I see that all the time. I'll get a taxi in London and the taxi driver will know ten times more [about cryptocurrency] than the CEO of a multinational bank I'm about to visit," Brown says.

"We still see that disparity of knowledge, and not just from people on the street, but also from people who are actually making the policies who should know better."

That's because crypto is "powerful stuff", he says.

The largest ever Bitcoin transaction was for just over $US1 billion ($1.5 billion), which, to move without a bank, carried a transaction fee of $US3.56 ($5.35).

"And it cleared and settled in minutes," Brown says.

He argues that ignorance of cryptocurrency is risky.

"Western Anglo-Saxon economies are stuck between a rock and a hard place, because it's not going away and it's a constant threat."

There are thousands of different cryptocurrencies (Bitcoin is the biggest), and trying to regulate them is anything but simple.

Larger crypto companies are centralised, meaning they are traditional companies with shareholders or a board of directors.

But the same is not true of cryptocurrencies, which are decentralised.

"The problem we have with things like Bitcoin, is that it's not really controllable or ban-able in a traditional sense because [it] doesn't have a CEO, a head office, any employees, an email address, doesn't file any accounts, doesn't have any buildings, has no AGM, has no shareholders," Brown says.

"Literally, Bitcoin is an idea. It's a computer program that's being run globally all over the world at the same time."

In some senses, that elusiveness is exactly the point.

"[Cryptocurrency] has been deliberately constructed in a way that is anti-state, and almost naturally beyond the reach of regulators," Brown says.

It's one of "a ton of downsides [associated with it], like nefarious use by criminals", he says.

John Reed Stark, a lawyer in Washington DC specialising in the intersection of law and technology, told ABC's Four Corners last year that "horrific crimes from ransomware attacks, and terrorism, and evading sanctions during wartime, drug dealing [and] sex trafficking" are crimes that are "now a lot easier to do because of cryptocurrency".

Natasha Gillezeau, SXSW Sydney production lead and former Australian Financial Review tech journalist, says "people need to understand how serious [cryptocurrency] is".

"We have to understand how much of a marketing and advertising push that crypto [companies have] done in the last few years," she tells ABC RN's Download This Show.

"We're talking sports stadiums [sponsored by] crypto.com, we're talking outreach to influencers ... We're actually in a different point in the cycle of how much the marketing and advertising industry has legitimised it.

"I've been in conversations with people who have said, 'We target people deliberately on Facebook and Instagram, that we know have gambling problems, with crypto ads because they're more likely to flip than others'."

While Gillezeau doesn't see the UK's gambling regulation proposal as the best solution to the problem, she believes it does recognise "the human effects of cryptocurrency".

"Probably what these British MPs [who raised the proposal] are speaking to is that there are certain segments of society that have been affected and blasted the last few years with crypto-specific advertising, they've lost a lot of money and this is a response," she says.

If crypto trading was designated as gambling, platforms could face additional licensing rules, requirements to protect vulnerable users, stake limits and closer control of advertising.

Brown can also appreciate some of the motivation to align cryptocurrency use with gambling regulations such as these.

"[Cryptocurrency] has the power to defraud, it has the power for people to lose significant amounts of wealth, it kind of feels a bit like gambling as well. And therefore, by taking that kind of ultra prudent label of gambling and just pinning it on it, it's quick and it plays to that downside risk agenda."

It also allows regulators to dip into, and "just repurpose", ready-made law.

"But that misses a trick," Brown says.

"These new types of technology are not gambling, they're very different to gambling, actually. There is no house and punter. In fact, it's much more nuanced than that."

Here in Australia, in mid-2022 around one million people owned cryptocurrency. In the UK, 5.2 million people, or one in nine, have either used or owned cryptocurrency.

"It's come that far in 13 years," Brown says.

"Go forward another 10 years. What happens if that number [in the UK] is 30 million or 40 million?

"What happens if every British person or every Australian person wakes up and says, 'I'm a bit sick of inflation, I'm sick of interest rates, I'm sick of my government or whoever controlling money in a certain way. I want a different type of money'.

"Well, guess what? There is this alternative type of money and all you need is an internet connection to access it."

The more a population uses alternative currency, the more difficult it becomes to control its economy, Brown says.

"If people aren't using that [traditional] currency, you're completely emasculated. That right hand of your two-handed approach is gone."

After presenting on cryptocurrencies to the UK Treasury six years ago, Brown was asked, "If people start using this [cryptocurrency], who pays for schools? Who pays for roads? Who pays for defence?"

"This is dangerous", the person said.

And Brown agrees.

"For so long, cryptocurrencies and digital assets have been kept at arm's length, fingers in the ears: 'let's hope it'll go away, let's hope it'll disappear'.

"Nation states would like it to go away, but it's just not going away.

"The challenge we have, especially for countries like the UK and Australia, is because financial services are such an important part of the economy, we can't afford to get left behind."

Governments must have an effective digital strategy, he says. And while crypto itself might be extremely difficult to regulate, the same is not true of the people and companies who interact with it.

"If someone says, 'Hey, we're a cryptocurrency bank', well, guess what? I can regulate you as a bank of a digital asset.

"If someone says, 'I'm a prime broker', or 'I want to be a custodian of Bitcoin', or 'I want to be a financial adviser of digital assets', we can regulate those people because they are companies and individuals in a traditional sense.

"And that's a much more pragmatic thing to do."

This article contains content that is only available in the web version.

View original post here:
Labelling cryptocurrency as 'gambling' shows lack of understanding and misses the solution, expert says - ABC News

Read More..

Mentorship and machine learning: Graduating student Irene Fang is … – University of Toronto

Majoring in human biology and immunology, Irene Fang capitalized on opportunities inside and outside the classroom to research innovative methods in ultrasound detection driven by artificial intelligence and machine learning. She's also working on research into cells and proteins in humans that could lead to new treatments and therapies for immunocompromised patients.

As she earned her honours bachelor of science degree, Fang always wanted to help others succeed. As a senior academic peer advisor with Trinity College, she's admired throughout the community for her brilliance, kindness and dedication to U of T.

"I want to keep giving back because I am so appreciative of the upper-year mentors I connected with, starting in first year," says Fang. "They continue to serve as an inspiration, motivating me to further develop professional and personal skills."

Why was U of T the right place for you to earn your undergraduate degree?

U of T provided a plethora of academic, research and experiential learning opportunities alongside a world-class faculty to help cultivate my curiosity and consolidate my knowledge. In conjunction with an unparalleled classroom experience, I gained a real-world perspective with international considerations through the Research Opportunities Program.

I would be remiss if I didn't also mention how extracurricular activities enhanced and enriched my university experience. The many clubs at U of T helped me focus on my passions and make meaningful connections with like-minded peers who became my support network, enabling me to reach my full potential.

How do you explain your studies to people outside your field?

Im interested in machine learning, which is an offshoot of artificial intelligence that teaches and trains machines to perform specific tasks and identify patterns through programming.

There are two types of machine learning. Supervised learning involves training your machine learning algorithm with labelled images. In unsupervised learning, your algorithm learns with unlabelled images; this is advantageous as it eliminates the need to find expert annotators or sonographers to label the images, saving time and costs. My research project compared how well unsupervised learning could identify and classify the three distinct ultrasound scanning planes at the human knee against supervised learning, the current standard for machine learning on ultrasound images.
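The distinction she describes can be sketched with a toy example. The code below is purely illustrative (the feature values, labels and helper names are invented and bear no relation to the actual ultrasound data): the supervised routine needs labels, while the unsupervised one groups the same data without them.

```python
# Toy contrast between supervised and unsupervised learning
# (hypothetical data; not the actual ultrasound pipeline).
def supervised_centroids(samples, labels):
    """With labels: average the samples of each known class."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def unsupervised_split(samples):
    """Without labels: let the data split itself at its midpoint."""
    cut = (min(samples) + max(samples)) / 2
    return [x for x in samples if x < cut], [x for x in samples if x >= cut]

# Brightness-like feature values for images from two scanning planes.
vals = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
print(supervised_centroids(vals, ["A", "A", "A", "B", "B", "B"]))
print(unsupervised_split(vals))
```

Both routines recover the same two groups, but only the first requires an expert to have annotated every image first, which is the cost the unsupervised approach avoids.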

My research project in immunology seeks to explore how a particular protein or receptor expressed on a specific subpopulation of human memory B cells mediates their immune responses. This is significant as memory B cells generate and maintain immunological memory, eliciting a more rapid and robust immune response upon the re-exposure to the same foreign invader, such as a pathogen or toxin, enabling a more effective clearance of the infection.

How is your area of study going to improve the life of the average person?

It is absolutely fascinating that AI has already revolutionized the medical field. Specifically, AI possesses the potential to aid in the classification of ultrasound images, enhancing early detection and diagnosis of internal bleeding because of injuries or hemophilia. Overall, AI may lead to more efficient care for patients, thereby improving health outcomes.

In terms of my immunology research, since the memory B cells expressing the specific receptor are dysregulated in people suffering from some autoimmune disorders and infectious diseases, a better understanding of how memory B cells are regulated could provide valuable insight into the underlying mechanisms of such diseases, enabling scientists to develop new therapies that alleviate patients' symptoms.

What career or job will you pursue after graduation?

I aspire to pursue a career in the medical field, conduct more research and nurture my profound enthusiasm for science while interacting with a diverse group of people. I hope to devote my career to improving human health outcomes while engaging in knowledge translation to make science more accessible to everyone.

You spent time at U of T as an academic peer advisor. Why was this work so important to you and what made it so fulfilling?

I remember feeling overwhelmed as a first-year student until I reached out to my academic peer advisors. Had I not chatted with them, I would not have known about, let alone applied for, my first research program. Looking back, it opened the door to many more new, incredible possibilities and opportunities. This experience made me realize the significance and power of mentorship, inspiring me to become an academic peer advisor. Seeing my mentees thrive and achieve their goals has made this role so rewarding so much so that I am determined to engage in mentorship throughout my career after graduation.

What advice do you have for current and incoming students to get the most out of their U of T experience?

Ask all questions because there are no silly questions. Get involved, whether it be volunteering, partaking in work-study programs, sports or joining a club. Meeting new people and talking to strangers can be daunting, but the undergraduate career is a journey of exploration, learning and growth.

Be open-minded and dont be afraid to try something new. Immersing yourself in distinct fields enables you to discover your interests and passions, which can lead you to an unexpected but meaningful path.

Also, be kind to yourself because failures are a normal part of the learning process; whats important is that you take it as an opportunity to learn, grow and bolster your resilience. And finally, although academia and work can keep you busy, remember to allocate time for self-care. Exercise, sleep and pursue hobbies because mental health is integral for success in life.

See more here:
Mentorship and machine learning: Graduating student Irene Fang is ... - University of Toronto

Read More..

A reinforcement learning approach to airfoil shape optimization … – Nature.com

In the following section, we present the learning capabilities of the DRL agent with respect to optimizing an airfoil shape, trained in our custom RL environment. Different objectives for the DRL agent were tested, gathered into three tasks. In Task 1, the environment is initialized with a symmetric NACA0012 airfoil and successive tests were performed in which the agent must (i) maximize the lift-to-drag ratio L/D, (ii) maximize the lift coefficient Cl, (iii) maximize endurance Cl^(3/2)/Cd, and (iv) minimize the drag coefficient Cd. In Task 2, the environment is initialized with a high-performing airfoil having a high lift-to-drag ratio, and the agent must maximize this ratio. The goal is to test whether the learning process is sensitive to the initial state of the environment and whether higher-performing airfoils can potentially be produced by the agent. In Task 3, the environment is initialized with this same higher-performing airfoil, but flipped along the y axis. Under this scenario, we investigate the impact of initializing the environment with a poor-performing airfoil and determine whether the agent is able to modify the airfoil shape to recoup a high lift-to-drag ratio. Overall, these tasks demonstrate the learning capabilities of the DRL agent to meet specified aerodynamic objectives.

Since we are interested in evaluating the drag of the agent-produced airfoils, the viscous mode of Xfoil is used. In viscous flow conditions, Xfoil only requires the user to specify a Reynolds number (Re) and an airfoil angle of attack (α). In all tasks, the flow conditions specified in Xfoil were kept constant. A zero-degree angle of attack and a Reynolds number equal to 10^6 were selected to define the design point for the flow conditions. The decision to keep the airfoil's angle of attack at a fixed value is motivated by the interpretability of the agent's policy. A less constrained problem, in which the agent can modify the angle of attack, would significantly increase the design space, leading to less interpretability of the agent's actions. Additionally, the angle of attack is fixed at zero in order to easily compare the performance of agent-generated shapes with those found in the literature. The Reynolds number was chosen to represent an airfoil shape optimization problem at speeds under the transonic flow regime [15]. Hence, given the relatively low Re chosen, the flow is incompressible over the airfoil, although Xfoil does include some compressibility corrections when approaching transonic regimes (Kármán-Tsien compressibility correction [43]). All airfoils are thus compared at zero angle of attack.

Two parameters relating to the PPO algorithm in Stable Baselines can be set, namely the discount factor γ and the learning rate. The discount factor controls how important future rewards are to the current state: γ = 0 favors short-term reward, whereas γ = 1 aims at maximizing the cumulative reward in the long run. The learning rate controls the amount of change brought to the model at each update; it is a hyperparameter of the PPO neural network. For the PPO agent, the learning rate must be within [5×10^-6, 0.003]. A study of the effects of the discount factor and learning rate on the learning process was conducted. This study shows that optimal results are found when using a discount factor γ = 0.99 and a learning rate equal to 0.00025.
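To see why γ matters, consider the discounted return it defines. The snippet below is a generic illustration of the discount factor on a made-up reward stream, not Stable Baselines internals.

```python
# The discounted return of a reward stream: sum of r_t * gamma**t.
def discounted_return(rewards, gamma):
    return sum(r * gamma ** t for t, r in enumerate(rewards))

rewards = [1.0] * 20  # one unit of reward per step of a 20-step episode

# Myopic agent (gamma = 0): only the immediate reward counts.
print(discounted_return(rewards, 0.0))

# Far-sighted agent (gamma = 0.99): nearly all future steps still count,
# which is why this setting favors long-run cumulative reward.
print(discounted_return(rewards, 0.99))
```

With γ = 0 the return collapses to the first step's reward, while with γ = 0.99 it approaches the full 20-step sum, matching the short-term versus long-run behavior described above.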

In building our custom environment, we have set some parameters to limit the generation of unrealistic shapes by the agent. These parameters help take into account structural considerations as well as limit the size of the action space. For instance, we define limits on the thickness of the produced shape. If the generated shape (resulting from the splines represented by the control points) exhibits a thickness over or under a specified limit value, the agent receives a poor reward. Regarding the action space, we set bounds on the change in thickness and camber. This allows the agent to search in a restricted action space, eliminating many unconverged shapes that would result from overly extreme changes to the airfoil. These parameters are given in Table 2. Moreover, the iterations parameter is the number of times Xfoil is allowed to rerun a calculation for a given airfoil in the event the solver does not converge. A high iterations number increases the convergence rate of Xfoil but also increases run times.
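The guardrail logic described above can be sketched as a simple reward rule. The thickness limits and penalty value below are illustrative assumptions, not the values from Table 2.

```python
# Sketch of a geometric guardrail on the reward (illustrative values).
def shape_reward(thickness, t_min=0.01, t_max=0.25, converged=True, ld=0.0):
    """Return a poor reward for unconverged or out-of-bounds shapes,
    otherwise pass through the aerodynamic objective (here, L/D)."""
    if not converged or not (t_min <= thickness <= t_max):
        return -1.0  # penalty steers the agent away from unrealistic shapes
    return ld

print(shape_reward(0.12, ld=95.0))  # realistic shape: aerodynamic reward
print(shape_reward(0.40, ld=95.0))  # too thick: penalized
```

Penalizing invalid geometry this way lets the agent learn the structural constraints implicitly, without hard-coding them into the action space itself.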

The environment is initialized with a symmetric airfoil having L/D = 0, Cl = 0 and Cd = 0.0054 at α = 0 and Re = 10^6. In a first experiment, the agent is tasked with producing the highest lift-to-drag airfoil, starting from the symmetric airfoil. During each experiment, the agent is trained over a total number of iterations (defined as the total timestep parameter), which are broken down into episodes having a given length (defined as the episode length parameter). The DRL agent is updated (i.e., changes are brought to the neural network parameters) every N steps. At the end of an experiment, several results are produced. Figure 7a displays the L/D of the airfoil successively modified by the agent at the end of each episode.

Learning curves for max L/D objective starting with a symmetric airfoil.

In Fig. 7a, each dot represents the L/D value of the shape at the end of an episode and the blue line represents the L/D running average over 40 successive episodes. The maximum L/D obtained over all episodes is also displayed. Settings for the total number of iterations, episode length and N steps for the experiment are given above the graph. It can be observed from Fig. 7a that, starting with a low L/D during early episodes, the L/D at the end of an episode increases with the number of episodes. Though significant variance in the end-of-episode L/D can be seen, with values ranging between L/D = -30 and L/D = 138, the average value increases and stabilizes around L/D = 100. This increase in L/D suggests that the agent is able to learn the appropriate modifications to bring to the symmetric airfoil, resulting in an airfoil having a high lift-to-drag ratio. We are also interested in tracking a score over a whole episode. Here, we define this score as the sum of the L/D of each shape produced during an episode. For instance, if an episode comprises 20 iterations, the agent has the opportunity to modify the shape 20 times, resulting in 20 L/D values; summing these values gives the score over one episode. If the agent produces a shape that does not converge in the aerodynamic solver, a value of 0 is added to the score, thus penalizing the score over the episode if the agent produces highly unrealistic shapes. The evolution of the score with the number of episodes played is displayed in Fig. 7b.
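The score defined above reduces to a simple sum, sketched here with made-up L/D values (None standing in for an unconverged Xfoil run):

```python
# Episode score: sum the L/D of every shape produced during the episode,
# counting unconverged solver runs (None) as zero.
def episode_score(ld_values):
    return sum(ld if ld is not None else 0.0 for ld in ld_values)

# A short hypothetical episode where two shapes fail to converge.
episode = [10.0, 25.0, None, 40.0, None, 60.0]
print(episode_score(episode))
```

Because failed shapes contribute nothing, an agent that keeps proposing unrealistic geometry accumulates a low score even if a few of its shapes perform well.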

Figure 7b shows the significant increase in the average score at the end of each episode, signaling that the agent is learning the optimal shape modifications. We can then visualize the best produced shape over the training phase in Fig. 8.

Agent-produced airfoil shape having highest L/D over training.

In Fig. 8, the red dots are the control points accessible to the agent. The blue curve describing the shape is the spline resulting from these control points. It is interesting to observe that the optimal shape produced shares the characteristics of high lift-to-drag ratio airfoils, such as those found on gliders, having high camber and a drooped trailing edge. Finally, we run the trained agent on the environment over one episode and observe the generated shapes in Fig. 9. Starting from the symmetric airfoil, we can notice the clear set of actions taken by the agent to modify the shape to increase L/D. The experiment detailed above was repeated by varying total timesteps, episode lengths and N steps.

Trained agent modifies shape to produce high L/D.

We then proceed to train the agent under different objectives: maximize Cl, maximize endurance and minimize Cd. Associated learning curves and modified shapes can be found in Figs. 10, 11, 12 and 13.

Learning curves for max Cl objective starting with a symmetric airfoil.

Learning curves for max Cl^(3/2)/Cd objective starting with a symmetric airfoil.

For the minimization of Cd objective, the environment is initialized with a symmetric airfoil having Cd = 0.0341. This change in initial airfoil, compared with the previously used NACA0012, was made because it makes the learning curves easier to visualize.

Learning curves for min Cd objective starting with a symmetric airfoil.

Trained agent modifies shape to produce low Cd starting with a low-performance airfoil.

Similarly, the results show a clear learning curve during which both the metric of interest and the score at end of episode increase with the number of episodes. The learning process appears to happen within the first 100 episodes as signaled by the rapid increase in the score and then plateaus, oscillating around an average score value.

A second set of experiments was performed to assess the impact of the initial shape. The environment is initialized with a high-performing airfoil (i.e., having a relatively high lift-to-drag ratio) and the agent is tasked with bringing further improvement to this airfoil. We chose this airfoil by investigating the UIUC database [41] and selected the airfoil having the highest L/D. This corresponds to the Eppler 58 airfoil (e58-il), having L/D = 160 at α = 0 and Re = 10^6, displayed in Fig. 14. Results for this experiment are displayed in Fig. 15.

Eppler 58 high lift-to-drag ratio airfoil.

Learning curves for max L/D objective starting with a high L/D airfoil.

It is interesting to compare the learning curves and average scores achieved when starting with the symmetric airfoil and the high performance airfoil.

In Fig. 16, we can observe that for both initial situations there is an increase in the average score during early episodes followed by stagnation, demonstrating the learning capabilities of the agent. However, the plateaued average score is significantly higher when the environment is initialized with the high-performance airfoil, given that the environment starts in an already high-reward region. Additionally, it was observed that a slightly higher maximum L/D value could be achieved when starting with the high lift-to-drag ratio airfoil. Overall, Task 1 and Task 2 emphasize the robustness of the RL agent, which successfully converges on high L/D airfoils regardless of the initial shapes (in both experiments, the agent converges on airfoils having L/D > 160). The agent-generated airfoil for Task 2 is represented in Fig. 21a.

Initial airfoil impact on the learning curve.

For Task 3, the starting airfoil is a version of the Eppler 58 airfoil that has been flipped around the y axis. As such, the starting airfoil has a lift-to-drag ratio opposite that of the Eppler 58 (i.e., L/D = -160) and thus exhibits low aerodynamic performance. The goal for this task is for the agent to modify the shape into a high-performing airfoil, having high L/D.

In Fig. 17, we display the learning curves associated with the score and the L/D value at the end of each episode when the environment is initialized with the flipped e58 airfoil at the beginning of each episode. A noticeable increase in both the score and L/D values between episode 30 and episode 75 can be observed, followed by a plateau region. This demonstrates that the agent is able to learn the optimal policy to transform the poor-performing airfoil into a high-performing airfoil by bringing adequate changes to the airfoil shape. The agent then applies this learned optimal policy after episode 100. Moreover, the agent is capable of producing airfoils having lift-to-drag ratios equivalent to or higher than the Eppler e58 high-performance airfoil, signaling that the initial airfoil observed by the agent does not impact the optimal policy learned by the agent, but rather only delays its discovery (see Figs. 15 and 17).

Score and L/D learning curves when starting with a low performance airfoil.

An example of a high L/D shape produced by the DRL agent when starting with the flipped e58 airfoil is displayed in Fig. 18. It is interesting to notice that in this situation, the produced airfoil shares previously observed geometric characteristics, such as high camber and a drooped trailing edge, leading to a high L/D value. The trained agent is then run over one episode length in Fig. 19. By successively modifying the airfoil shape, the agent is able to recover positive L/D values having started with a low-performance airfoil. This demonstrates the correctness of the behavior learned by the agent.

Agent-produced airfoil shape when starting with low performance airfoil.

Trained agent modifies shape to produce high L/D starting with a low-performance airfoil.

Finally, the best produced shapes (i.e., those maximizing the metric of interest) for the different objectives and tasks can now be compared, as illustrated in Figs. 20 and 21.

Best performing agent-produced shapes under different objectives and a symmetric initial airfoil.

Best performing agent-produced shapes under different objectives and an asymmetric initial airfoil.

The results presented above demonstrate that the number of function evaluations (i.e., the number of times Xfoil is run and converges on a new shape proposed by the agent) depends on the task at hand. For instance, around 2,000 function evaluations were needed in Task 2, while 4,000 were needed in Task 1 and around 20,000 were required in Task 3. These differences can be explained by the distance between the starting shape and the optimal shape. In other words, when starting with the low-performing airfoil, the agent has to perform a greater number of successful steps to converge on an optimal shape, whereas when starting with an already high-performance airfoil, the agent is close to an optimal shape and requires fewer Xfoil evaluations. The number of episodes needed to reach an optimal policy, however, appears to be between 100 and 200 across all tasks. Overall, when averaging across all tasks performed in this research, approximately 10,000 function evaluations were needed for the agent to converge on the optimal policy.

Having trained the RL agent on a given aerodynamic task, the designer can then draw physical insight by observing the actions the agent follows to optimize the airfoil shape. From the results presented in this research, it can be observed that high camber around the leading edge and low thickness around the trailing edge are preferred to maximize L/D, given the flow conditions used here. Observing the various policies corresponding to different aerodynamic tasks, the designer can then make tradeoffs between the different aerodynamic metrics to optimize. Multi-point optimization can be achieved by including multiple aerodynamic objectives in the reward. For example, if the designer seeks to optimize both L/D and Cl, a new definition of the reward could be r = (L/D_current + Cl_current) - (L/D_previous + Cl_previous), after having normalized L/D and Cl. However, multi-point optimization will decrease the interpretability of the agent's actions. By introducing multiple objectives in the agent's reward, it becomes more difficult for the designer to draw insight from shape changes and link those changes to a specific aerodynamic objective.
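A minimal sketch of such a multi-point reward follows. The normalization constants are illustrative assumptions (the text only says the two metrics should be normalized, without giving scales).

```python
# Multi-objective reward: improvement in normalized L/D plus normalized Cl.
# Scale constants are assumed for illustration, not taken from the paper.
def multi_point_reward(ld_cur, cl_cur, ld_prev, cl_prev,
                       ld_scale=160.0, cl_scale=2.0):
    cur = ld_cur / ld_scale + cl_cur / cl_scale
    prev = ld_prev / ld_scale + cl_prev / cl_scale
    return cur - prev  # positive when the combined objective improved

# A step that raises L/D from 80 to 96 and Cl from 1.0 to 1.1:
print(multi_point_reward(96.0, 1.1, 80.0, 1.0))
```

Without normalization, L/D values in the hundreds would swamp Cl values near one, so the scaling step the authors mention is what keeps both objectives visible to the agent.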

The proposed methodology reduces computational cost by leveraging a data-driven approach. Having learned an optimal policy for a given aerodynamic objective, the agent can optimize new shapes without restarting the whole optimization process. More specifically, this approach can alleviate the computational burden of problems requiring high-fidelity solvers (when RANS computations or compressibility effects are required). For such problems, the DRL agent can quickly find a first optimal solution using a low-fidelity solver; this solution can then be refined using a higher-fidelity solver and a traditional optimizer. In other words, DRL is used in this context to extract prior experience and speed up the high-fidelity optimization. As such, our approach can accelerate the airfoil optimization process by very rapidly offering an initial optimal solution. Similarly to [8], our approach can also be used directly with high-fidelity models: to accelerate convergence, the DRL agent is first trained using a low-fidelity solver in order to rapidly learn an optimal policy, and is then deployed with a high-fidelity solver. In doing so, this approach (i) reduces computational cost by performing the learning with a low-fidelity solver before shifting to the high-fidelity one, (ii) is data-efficient, as the policy learned by the agent can then be followed for any comparable problem, and (iii) bears some generative capability, as it does not require any user-provided data.
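The low-to-high-fidelity handoff described above can be sketched with a runnable toy. Everything here is an illustrative assumption: the two "solvers" are simple analytic functions standing in for Xfoil and a RANS solver, and the naive hill climber stands in for the DRL agent and the downstream optimizer.

```python
# Toy sketch of the two-stage workflow: find a draft optimum cheaply,
# then refine it with a more accurate (and in practice far more expensive)
# model whose optimum is slightly shifted.

def low_fidelity_ld(x):
    # Cheap, approximate objective (stands in for an Xfoil evaluation)
    return -(x - 3.0) ** 2 + 100.0

def high_fidelity_ld(x):
    # Accurate objective (stands in for a RANS evaluation); its optimum
    # is slightly shifted relative to the low-fidelity model
    return -(x - 3.2) ** 2 + 105.0

def hill_climb(objective, x0, step=0.1, iters=200):
    # Naive local search standing in for the agent/optimizer
    x = x0
    for _ in range(iters):
        x = max((x - step, x, x + step), key=objective)
    return x

# Stage 1: draft optimum from the cheap model (many evaluations, all cheap)
draft = hill_climb(low_fidelity_ld, x0=0.0)
# Stage 2: refinement with the accurate model, starting from the draft
# (few expensive evaluations, since the start point is already good)
refined = hill_climb(high_fidelity_ld, x0=draft)
```

The point of the sketch is the division of labor: the expensive model is only queried near an already-good solution, which is where the claimed computational savings come from.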

As reinforcement learning does not rely on any provided database, no preconception of what a good airfoil shape should look like is available to the agent. This added design freedom occasionally leads the agent to generate airfoil shapes that can appear unusual to the aerodynamicist's eye. In Fig. 22, we compare agent-produced shapes to existing airfoils in the literature. The focus is not on the agent's ability to produce a specific shape for given flow conditions and aerodynamic targets, but rather on illustrating the geometric similarities between existing airfoils and artificially generated shapes. A strong resemblance between the agent-generated and existing airfoils can be observed. This highlights the rationality of the policy learned by the agent: with no preexisting knowledge of fluid mechanics or airfoils, an intelligent agent trained in the presented custom RL environment can generate realistic airfoil shapes.

We compare five existing airfoils to our agent-produced shapes in Fig. 22. In Fig. 22a and b, we compare the agent-produced shape to Whitcomb's supercritical airfoil; the shared flat upper surface, cambered rear section, and blunt trailing edge can be noticed [51]. We then compare agent-generated shapes to existing high-lift airfoils. Here also, the geometric resemblance is noticeable, notably the shared high camber.

Airfoil shape comparison between agent-produced shapes and existing airfoils.

Detrimental effects of large episode lengths.

A notable observation was a drastic decrease in the average end-of-episode score following an initial period of increase. We believe this can be explained as follows: when the episode length is large, once the agent has learned a policy allowing it to quickly (within relatively few iterations) attain high L/D values, the average score subsequently decreases because the agent reaches the optimal shape before the episode ends. In the iterations remaining before the end of the episode, the agent continues to modify the shape in search of higher performance, but reaches a point where the shape is too extreme for the aerodynamic solver to converge, resulting in a poor reward. This would explain the behavior observed in Fig. 23: a rapid increase in the score between episodes 0 and 25, during which the agent explores various shapes and estimates an optimal policy, followed by a strong decrease, during which the agent follows the learned policy and reaches optimal shapes before the episode ends.

The results presented above demonstrate the ability of a DRL agent to learn how to optimize airfoil shapes, provided a custom RL environment to interact with. We now compare this approach to a classical simplex method, under the same possible action conditions: starting from a symmetric airfoil, the optimizer must successively modify the shape by changing thickness and camber at selected x positions to achieve the highest performing airfoil in terms of L/D.

Here, the optimizer is based on the Nelder-Mead simplex algorithm, capable of finding the minimum of a multivariate function without calculating first or second derivatives [52]. In this case, the function maps a 3-set of actions, [select x position, change thickness, change camber], to a -L/D value. More specifically, taking the 3-set of actions as input, the function modifies the airfoil accordingly, evaluates the modified airfoil in Xfoil, and outputs the associated -L/D. As the optimizer tries to minimize the -L/D value, it searches for the 3-set that will maximize L/D. Once the optimizer finds the optimal 3-set of actions, the airfoil shape is modified accordingly and the optimizer is rerun on this new modified shape. This defines what we call one optimization cycle. Hence, the optimizer is tasked with the exact same optimization problem as the DRL agent: optimizing the airfoil shape to reach the highest L/D value possible by successively modifying the shape. During each optimization cycle, the optimizer evaluates the function a certain number of times. In Fig. 24, we monitor the increase in L/D with the number of function evaluations.
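One optimization cycle as described above can be sketched with SciPy's Nelder-Mead implementation. The objective below is a toy analytic surrogate standing in for "modify the airfoil, run Xfoil, return -L/D"; its coefficients and optimum are assumptions made purely for illustration, and the real pipeline would call Xfoil instead.

```python
# Sketch of one Nelder-Mead optimization cycle over the 3-set of actions
# [x position, thickness change, camber change], minimizing -L/D.
from scipy.optimize import minimize

def neg_ld(actions):
    # Toy surrogate for an Xfoil evaluation: a smooth function whose
    # maximum L/D sits at x_pos = 0.3, d_thickness = -0.02, d_camber = 0.05.
    x_pos, d_thickness, d_camber = actions
    ld = (50.0
          - 80.0 * (x_pos - 0.3) ** 2
          - 40.0 * (d_thickness + 0.02) ** 2
          + 30.0 * d_camber - 300.0 * d_camber ** 2)
    return -ld  # minimizing -L/D maximizes L/D

# One cycle: search for the best single 3-set of actions
result = minimize(neg_ld, x0=[0.5, 0.0, 0.0], method="Nelder-Mead")
best_actions = result.x   # would be applied to the airfoil shape
best_ld = -result.fun     # L/D achieved by this modification
```

In the paper's procedure, the shape would then be modified according to `best_actions` and the search rerun on the modified shape, repeating cycle after cycle.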

Simplex method approach: L/D increase with function evaluations for different starting points.

In the three situations displayed, it can be observed that the value of L/D increases with the number of function evaluations. However, the converged L/D value is significantly lower than the values obtained through the DRL approach. For instance, even after 500 optimization cycles (i.e., 500 shape modifications and over 30,000 function evaluations), the optimizer is unable to generate an airfoil with L/D over 70. We know that this value of L/D is not a global optimum, as an L/D of at least 160 can be reached with the Eppler 58 airfoil from the UIUC database [41]. Thus, it seems that the simplex algorithm has converged on a local minimum. Furthermore, as demonstrated in Fig. 24a and c, the converged L/D value found by the optimizer is highly dependent on the initial point. The airfoil shapes generated using the simplex method can be found in Fig. 25.

Gradient-free approach generated airfoil shapes.

In Table 3, we compare the converged L/D values, number of iterations, and run times of the simplex method and the DRL approach. In both approaches, the agent or optimizer can modify the airfoil 60 times. Although the number of iterations and the run time are lower for the simplex method, the converged L/D value is far lower than with the DRL approach.

This rapid simplex approach to the airfoil shape optimization problem highlights the benefits and capabilities of the presented DRL approach. First, the DRL approach appears less prone to convergence on local minima, as very high values of L/D can be achieved. Second, once the DRL agent has learned the optimal policy during a training period, it can be applied directly to any new situation, whereas the simplex approach requires a whole new optimization process for each scenario encountered.

Source: A reinforcement learning approach to airfoil shape optimization - Nature.com