
AI and Machine Learning in Finance: How Bots are Helping the Industry – ReadWrite

Artificial intelligence (AI) and machine learning (ML) are making considerable inroads in finance. They are a critical component of various financial applications, including evaluating risk, managing assets, calculating credit scores, and approving loans.

Businesses use AI and ML:

With these points in mind, it's no wonder that companies like Forbes and VentureBeat are using AI to predict cash flow and detect fraud.

In this article, we present the areas of the financial domain in which AI and ML have the most significant impact. We'll also discuss why financial companies should care about these technologies and implement them.

Machine learning is a branch of artificial intelligence that allows systems to learn and improve without being explicitly programmed. Simply put, data scientists train an ML model on existing data sets, and the model automatically adjusts its parameters to improve its output.

According to Statista, digital payments are expected to show an annual growth rate of 12.77% through 2026. This vast volume of online revenue requires an intelligent fraud-detection system.


Traditionally, fraud-detection systems check the authenticity of users by analyzing factors such as location, merchant ID, and the amount spent. While this method is adequate for a small number of transactions, it cannot cope with today's increased transaction volumes.

Given the surge in digital payments, businesses can't rely on traditional fraud-detection methods to process payments. This has given rise to AI-based systems with more advanced features.

An AI- and ML-powered payment gateway looks at various factors to evaluate a risk score. These technologies consider a large volume of data (the merchant's location, time zone, IP address, etc.) to detect unexpected anomalies and verify the authenticity of the customer.
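As a rough illustration of this kind of risk scoring, the sketch below trains an anomaly detector on synthetic transaction features. The features, thresholds, and model choice are assumptions made for illustration, not how any particular payment gateway actually works.

```python
# Minimal sketch (not any real gateway's model): flag anomalous transactions
# from a few hypothetical features such as amount, hour of day, and distance
# from the customer's usual location.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "historical" transactions: [amount_usd, hour_of_day, km_from_home]
history = np.column_stack([
    rng.gamma(2.0, 40.0, 5000),    # typical purchase amounts
    rng.integers(8, 23, 5000),     # mostly daytime/evening hours
    rng.exponential(5.0, 5000),    # usually close to home
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score new transactions: a routine purchase vs. a large, late-night,
# far-from-home purchase. Lower decision_function values = more anomalous.
candidates = np.array([
    [35.0, 14, 2.0],
    [4200.0, 3, 8500.0],
])
scores = model.decision_function(candidates)
for tx, score in zip(candidates, scores):
    flag = "review" if score < 0 else "ok"
    print(f"amount={tx[0]:>8.2f} hour={int(tx[1]):>2} km={tx[2]:>7.1f} "
          f"score={score:+.3f} -> {flag}")
```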

Additionally, AI allows the finance industry to process transactions in real time, so the payment industry can handle large transaction volumes with high accuracy and low error rates.

The financial sector, including banks, trading firms, and other fintech companies, is using AI to reduce operational costs, improve productivity, enhance the user experience, and strengthen security.

The benefits of AI and ML revolve around their ability to work with varied datasets. So let's have a quick look at some other ways AI and ML are making inroads into this industry:

Considering how heavily people invest in automation, AI significantly impacts the payment landscape. It improves efficiency and helps businesses rethink and reconstruct their processes. For example, businesses can use AI to decrease credit card processing time, increase automation, and seamlessly improve cash flow.

AI and machine learning can be applied to credit, lending, security, trading, banking, and process optimization.

Human error has always been a huge problem; machine learning models help reduce the errors that creep in when humans perform repetitive tasks.

Incorporating security and ease of use is a challenge that AI can help the payment industry overcome. Merchants and clients want a payment system that is easy to use and authentic.

Until now, customers have had to perform various actions to authenticate themselves and complete a transaction. With AI, payment providers can streamline transactions while keeping the risk to customers low.

AI can efficiently perform high-volume, labor-intensive tasks like quickly scraping and formatting data. AI-based systems are also focused and efficient; they keep operational costs to a minimum and can be used in areas like:

Creating more Value:

AI and machine learning models can generate more value for their customers. For instance:

Improved customer experience: Using bots, financial institutions like banks can eliminate the need to stand in long queues. Payment gateways can automatically reach new customers by gathering historical data and predicting user behavior. Besides, AI used in credit scoring helps detect fraudulent activity.

There are various ways in which machine learning and artificial intelligence are being employed in the finance industry. Some of them are:

Process Automation:

Process automation is one of the most common applications as the technology helps automate manual and repetitive work, thereby increasing productivity.

Moreover, AI and ML can easily access data, recognize patterns, and interpret customer behavior, which can be put to use in customer support systems.

Minimizing Debit and Credit Card Fraud:

Machine learning algorithms help detect transactional fraud by analyzing data points that mostly go unnoticed by humans. ML also reduces the number of false rejections and improves real-time approvals by gauging the client's behavior on the Internet.

Apart from spotting fraudulent transactions, AI-powered technology is used to identify suspicious account behavior in real time. Today, banks already have monitoring systems trained on historical payment data.

Reducing False Card Declines:

Payment transactions declined at checkout are frustrating for customers and carry huge repercussions for banks and their reputations. Card transactions are declined when a transaction is flagged as fraud or the payment amount crosses a limit. AI-based systems are used to identify these transaction issues and reduce false declines.

The influx of AI in the financial sector has raised new concerns about transparency and data security. Companies must be aware of these challenges and put safeguards in place:

One of the main challenges of AI in finance is the volume of confidential and sensitive data it gathers. The right data partner will offer a range of security options and standards and protect data in line with the relevant certifications and regulations.

Creating AI models in finance that provide accurate predictions is only successful if the models can be explained to and understood by clients. In addition, since customers' information is used to develop such models, customers want assurance that their personal information is collected, stored, and handled securely.

So, it is essential to maintain transparency and trust in the finance industry to make customers feel safe with their transactions.

Beyond simply implementing AI, leaders in the online finance industry must be able to adapt to new working models and new operations.

Financial institutions often work with substantial, unorganized data sets locked in vertical silos. Connecting dozens of data pipeline components and countless APIs, on top of security requirements, to leverage a silo is not easy. So financial institutions need to ensure that the data they gather is appropriately structured.

AI and ML are undoubtedly the future of the financial sector; the vast volume of processes, transactions, data, and interactions involved makes them ideal for a wide range of applications. By incorporating AI, the finance sector gains vast data-processing capabilities at a reasonable cost, while clients enjoy an enhanced customer experience and improved security.

Of course, the power of AI in transaction banking depends on how an organization puts it to use. Today, AI is still very much a work in progress, but its challenges can be addressed as the technology matures. AI will be the future of finance, and you must be ready to embrace its revolution.


Link:
AI and Machine Learning in Finance: How Bots are Helping the Industry - ReadWrite

Read More..

Feasibility and application of machine learning enabled fast screening of poly-beta-amino-esters for cartilage therapies | Scientific Reports -…


See original here:
Feasibility and application of machine learning enabled fast screening of poly-beta-amino-esters for cartilage therapies | Scientific Reports -...

Read More..

Identification of microstructures critically affecting material properties using machine learning framework based on metallurgists’ thinking process |…

Analysis of structure optimization problem of dual-phase materials

To demonstrate the potential of our framework for the structure optimization of multiphase materials in terms of a target property, a simple sample problem is considered: the structure optimization of artificial dual-phase steels composed of a soft phase (ferrite) and a hard phase (martensite). Examples of microstructures are shown in Fig. 3. The prepared dual-phase microstructures can be divided into four major categories: laminated microstructures, microstructures composed of rectangle-shaped martensite/ferrite grains, microstructures composed of ellipse-shaped martensite/ferrite grains, and random microstructures. The size of the microstructure images is 128 × 128 pixels and the total number of prepared microstructures is 3824. As an example of a target material property, the fracture strain was selected, since fracture behavior is strongly related to the geometry of the two phases. The fracture strain is the elongation of the material at break. As described in Methodology, the fracture strains for each category were calculated on the basis of the GTN fracture model18,19. Figure 4 illustrates the relationship between martensite volume fraction and fracture strain for each category. It shows that laminated microstructures have a relatively high fracture strain. Also, microstructures with a lower martensite volume fraction (higher ferrite volume fraction) possess a higher fracture strain.

Examples of artificial dual-phase microstructures used for training. Black and white pixels correspond to the hard phase (martensite) and soft phase (ferrite), respectively. The size of the microstructure images is 128 × 128 pixels. The dataset can be divided into four major categories. (a) Laminated microstructures. This category contains only completely laminated microstructures. (b) Microstructures composed of rectangular martensite grains. This category includes partially laminated structures, such as those shown in the lower left panel. (c) Microstructures composed of elliptical martensite grains. (d) Random microstructures.

Relationship between martensite volume fraction and fracture strain, and examples of microstructures. (a) Plot showing correspondence between martensite volume fraction and fracture strain. (b) Examples of microstructures. Their martensite volume fractions and fracture strains are shown in the plot.

Microstructures generated by the machine learning framework trained on several datasets. (a) Examples of microstructures generated for several fracture strains by the network trained using the All dataset. (b) Each column corresponds to the microstructures obtained by the models trained using all microstructures, only the random microstructures, only the microstructures composed of ellipse-shaped martensite grains, or only the microstructures composed of rectangle-shaped martensite grains. The Rectangle dataset is limited to microstructures whose martensite volume fraction is between 20% and 30%. The given fracture strains are 0.1, 0.3, 0.7, and 0.9 for the All, Random, and Ellipse datasets, and 0.05, 0.1, 0.2, and 0.3 for the Rectangle dataset.

To show the applicability of our framework, we prepared several datasets: all microstructures (All), only random microstructures (Random), only microstructures composed of ellipse-shaped martensite grains (Ellipse), and only microstructures composed of rectangle-shaped martensite grains (Rectangle). Then, we trained the VQ-VAE and PixelCNN using these datasets. The Rectangle dataset is limited to microstructures whose martensite volume fraction is between 20% and 30%, to consider the case in which martensite grains are located separately from each other.

Figure 5a shows examples of microstructures generated for several fracture strains using the network trained on the All dataset. Figure 5b summarizes the trend of the microstructures obtained by the networks trained using the above datasets as the fracture strain is gradually increased. For the All, Random, and Ellipse datasets, we can see the trend that martensite grains become smaller and thinner as the target fracture strain increases. Since a larger area fraction of the soft phase (ferrite) contributes to the realization of higher elongation, as can be seen in Fig. 4, this result is reasonable. In addition, it should be noted that the laminated structure corresponding to the highest fracture strain (FS = 0.9) was generated only for the All case, in which the laminated structures are included in the training dataset. Additionally, in the case of the controlled martensite volume fraction of the input microstructures (Rectangle), the martensite grains tend to be arranged more uniformly as the given fracture strain increases.

Generated microstructures and trend of martensite volume fraction. (a) Microstructures generated at fixed tensile strength and several fracture strains. The tensile strength is set as 700 MPa. The given FSs are 0.1, 0.3, 0.4, 0.5, 0.7, and 0.9. (b) Trend of martensite volume fraction relative to the change in fracture strain. For each fracture strain, the martensite volume fractions of 3000 microstructures generated for that fracture strain and a fixed tensile strength (700 MPa) were calculated. The black lines and green triangles in the boxes denote median and mean values, respectively.

From these results, we can conclude that there are at least two different strategies for realizing a higher fracture strain: one is to decrease the size of the martensite grains and arrange them uniformly, and the other is to make a completely laminated composite structure27. The fact that laminated structures never appear unless laminated structures are provided in the training dataset implies that there is an impenetrable wall preventing a simple optimization process, such as the gradient descent algorithm used to train neural networks, from discovering the robustness of laminated structures starting from the other structures.

Next, the tensile strength is given in addition to the fracture strain as another label for PixelCNN for considering the balance between strength and fracture strain (ductility). In this case, all microstructure data are used for training. The microstructures are generated at the fixed tensile strength of 700 MPa. The generated microstructures are shown in Fig. 6a. The laminated structures seem to be dominantly selected as the target fracture strain increases. The trend that martensite grains become smaller and thinner is not seen when the tensile strength is fixed.

In addition, the martensite volume fractions were calculated for 3000 microstructures generated corresponding to several fracture strains. The tensile strength was fixed at 700 MPa again. The box plot of the trend of the martensite volume fraction relative to the change in fracture strain is shown in Fig. 6b. The martensite volume fraction decreases as the given fracture strain increases. At the same time, the martensite volume fraction approaches a constant value. For the realization of a higher ductility without decreasing the tensile strength, the shape of martensite grains approaches that of the laminated structures as the martensite volume fraction decreases. This result implies that laminated structures can achieve a higher tensile strength with a smaller martensite volume fraction. As a result, the laminated structures can be considered as the optimized structures with respect to the shape of martensite grains for the realization of a higher ductility without decreasing their strength. The laminated structures were actually reported to exhibit improved combinations of strength and ductility27.
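As a side note, the martensite volume fraction reported in these box plots can be computed directly from the binary microstructure images. A minimal sketch, assuming black pixels encode martensite as in Fig. 3, is given below.

```python
# Minimal sketch: estimate the martensite volume fraction of a binary
# 128 x 128 microstructure image, assuming martensite pixels are dark (0)
# and ferrite pixels are bright (255), as in Fig. 3.
import numpy as np

def martensite_volume_fraction(image: np.ndarray, threshold: int = 128) -> float:
    """Fraction of pixels classified as martensite (darker than threshold)."""
    return float(np.mean(image < threshold))

# Example with a synthetic laminate: the top half of the image is martensite.
img = np.full((128, 128), 255, dtype=np.uint8)
img[:64, :] = 0
print(f"martensite volume fraction: {martensite_volume_fraction(img):.2f}")  # ~0.50
```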

Correspondence between the target fracture strains given as inputs and the actual fracture strains. For each target fracture strain, 30 microstructures were generated. Then, fracture strains are calculated using the physical model18,19. (a) Plot of relationship. (b) Box plot of relationship. The black lines and green triangles in the boxes denote median and mean values, respectively. (c) Microstructures whose fracture strains are smaller than 20% of the target fracture strains. The values above the panels denote the given target fracture strains (left) and actual fracture strains (right).

To validate the effectiveness of the present framework, fracture strains were calculated using the physical model18,19 for each microstructure obtained using the framework. In this case, the network trained by giving only the fracture strain as the target property is used. Figure 7a,b show the correspondence between the target fracture strains for the generated microstructures and the actual calculated fracture strains. The coefficient of determination was 0.672. It is clear that our framework captures well the general trend of microstructures relative to the fracture strain. However, it should also be noted that there exist several microstructures whose actual fracture strains are far lower than the target strains. Figure 7c shows typical microstructures whose fracture strains are smaller than 20% of the target fracture strains. The coefficient of determination for the data excluding the points corresponding to the microstructures shown in Fig. 7c was 0.76. All of them are partially incomplete laminated structures. This can be understood as follows. Although laminated structures have the potential to realize higher fracture strains, as shown in Fig. 4, this is true only when the microstructures are completely laminated. Even when one martensite layer has a tiny hole, the gap between martensite grains becomes a hot spot that induces much earlier rupture. Thus, the box plot shown in Fig. 7b is understood to show decreasing values as a result of an attempt to completely laminate the structures to realize the given target fracture strain. This indicates that the framework recognizes the structures shown in Fig. 7c as structurally close to completely laminated structures, even though they have far lower fracture strains than completely laminated structures.

As a consequence, these results illustrate that our framework provides a powerful tool for the optimization of material microstructures in terms of target properties, or at least for capturing the trend of microstructures in terms of the change in target property in various cases.

The above results on the generation of material structures corresponding to the target fracture strain indicate that our framework captures the implicit correlation between the material microstructures and the fracture strain. However, it is generally difficult to interpret implicit knowledge captured by machine learning methods. For that reason, we cannot hastily conclude that machine learning understands this problem and acquires meaningful knowledge for material design similarly to humans, or that it merely obtains physically meaningless, problem-specific knowledge. Usually, human researchers arrive at the background physics by noting a part or behavior that affects a target property during numerous trial-and-error experiments. Generally, this process is time-consuming. Accordingly, approaching the implicit knowledge obtained by machine learning methods could be beneficial for achieving a more efficient way to extract general knowledge for material design. Thus, we discuss how to approach the physical background behind the implicit knowledge captured by our framework. In particular, we investigate whether the machine learning framework can identify a part of the material microstructure that strongly affects a target property, in a similar way to how human experts make predictions on the basis of their experience.

To identify a critical part of microstructures, we consider calculating a derivative of material microstructures with respect to the fracture strain. This is based on the assumption that human experts unconsciously consider the sensitivity of material microstructures to a slight change in the target property. Accordingly, the following variable \(\Delta\) is defined as the derivative:

$$\begin{aligned} \Delta := \frac{\partial D(\mathbb{E}_{P(\theta | \epsilon_f, M_r)}[\theta])}{\partial \epsilon_f}, \end{aligned}$$

(3)

where \(\mathbb{E}_{P(\theta | \epsilon_f, M_r)}[\theta]\) is the expectation of a spatial arrangement of fundamental structures \(\theta\) according to \(P(\theta | \epsilon_f, M_r)\), which is the probability distribution captured by PixelCNN. Here, \(M_r\) and \(\epsilon_f\) are the reference microstructure under consideration and the calculated fracture strain for that microstructure, respectively. In other words, \(\mathbb{E}_{P(\theta | \epsilon_f, M_r)}[\theta]\) is a deterministic function of \(\epsilon_f\) and \(M_r\). In addition, \(D\) is the CNN-based deterministic decoder function; hence, \(\Delta\) has the same pixel size as the input microstructure images.

If the machine learning framework correctly captures the physical correlation between the geometry of the material microstructures and the fracture strain, \(\Delta\) is expected to correspond to the areas in \(M_r\) that strongly affect the determination of the fracture strain, even without the physical mechanism itself being given. For numerical calculation, \(\Delta\) is approximated as

$$\begin{aligned} \Delta \thickapprox \left\{ D(\mathbb{E}_{P(\theta | \epsilon_f + \Delta \epsilon_f, M_r)}[\theta]) - D(\mathbb{E}_{P(\theta | \epsilon_f, M_r)}[\theta]) \right\} / \Delta \epsilon_f, \end{aligned}$$

(4)

where \(\Delta \epsilon_f\) is the gap in the fracture strain, which is set to 0.01 in this paper. Because it is difficult to quantitatively compare the distribution of this variable with the critical microstructure distributions obtained from the physical model, in this paper we only discuss the location of crucial parts. Thus, the denominator \(\Delta \epsilon_f\) is ignored in the calculation of \(\Delta\) in the rest of this paper.
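A minimal sketch of this finite-difference estimate is shown below. The `decoder` and `expected_theta` callables are hypothetical stand-ins for the trained VQ-VAE decoder and the PixelCNN-based conditional expectation, neither of which is shown in this excerpt.

```python
# Minimal sketch of the finite-difference approximation of Delta (Eq. 4).
# `expected_theta(eps_f, m_ref)` stands in for the PixelCNN-based expectation
# of the latent arrangement, and `decoder` for the VQ-VAE decoder; both are
# hypothetical placeholders, not the authors' actual implementation.
import numpy as np

def delta_map(decoder, expected_theta, eps_f, m_ref, d_eps=0.01):
    """Pixel-wise sensitivity of the decoded microstructure to fracture strain."""
    theta_plus = expected_theta(eps_f + d_eps, m_ref)
    theta_base = expected_theta(eps_f, m_ref)
    # As in the paper, the 1/d_eps factor can be dropped when only the
    # location of high-sensitivity regions matters.
    return decoder(theta_plus) - decoder(theta_base)

# Toy usage with dummy stand-ins (real trained models would replace these):
decoder = lambda theta: theta                    # identity "decoder"
expected_theta = lambda e, m: m * (1.0 - e)      # toy conditional expectation
m_ref = np.random.rand(128, 128)
sensitivity = np.abs(delta_map(decoder, expected_theta, eps_f=0.3, m_ref=m_ref))
print(sensitivity.shape)  # (128, 128); visualized as a colormap in Fig. 8
```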

Comparison of the derivatives of microstructures with respect to the fracture strain obtained using the machine learning framework with the distributions of void volume fractions calculated on the basis of the physical model. (a)–(d) Comparisons for several microstructures. The left, middle, and right columns correspond to the reference microstructures, the void distributions obtained using the physical model, and the derivatives obtained by the machine learning framework, respectively.

Figure 8 shows the comparison of the parts of microstructures critically affecting the determination of the fracture strain obtained by the physical model and our machine learning framework. In the case of the results from machine learning, the absolute values of \(\Delta\) defined in Eq. (3) for each pixel are shown as colormaps. On the other hand, because the fracture behavior is formulated as damage and void-growth processes in the physical model, the void distribution in a critical state directly shows the critical points for the determination of fracture strain. Thus, in the case of the physical model, the calculated void distribution in a critical state is shown in Fig. 8. The details of the physical model and the experiment for the determination of some parameters are given in Methodology. For ease of comparison, the ranges of visualized values are changed for each image, while the relative relationships among the values of each colormap are kept. Thus, we compare the results qualitatively in terms of the distribution of areas having relatively high values in the next paragraph.

Figure 8a,b illustrate the crucial parts of the microstructures composed of relatively long and narrow rectangle-shaped martensite grains. We can see an acceptable agreement between the results of the physical and machine learning methods in terms of the overall distribution of crucial areas, which are shown in red in the colormaps of Fig. 8. In addition, Fig. 8c,d show the parts that critically influence the fracture behavior in microstructures composed of similarly shaped martensite grains. As an important difference between them, in Fig. 8c the rectangle-shaped martensite grains are irregularly arranged and some martensite grains are close to each other, which might critically affect the fracture behavior, whereas in Fig. 8d circular martensite grains are almost regularly arranged. Regarding Fig. 8c, the machine learning framework seems to capture the crucial parts predicted by the physical model. As mentioned above, the distributions seem to be dominantly affected by the martensite grains being close to each other. In other words, the short-range interactions among a small number of martensite grains are dominant for the determination of the fracture strain in this case. Also, in Fig. 8d, both the physical model and the machine learning framework predict that the crucial parts are uniformly distributed in square areas.

On the other hand, the physical model also predicts the influence of long-range interactions among martensite grains on the fracture behavior, which can be seen in Fig. 8c,d as a bandlike distribution. However, the bandlike distribution resulting from the long-range interactions does not seem to be captured by the machine learning framework, owing to a characteristic of PixelCNN. Because a global stochastic relationship among the fundamental elements is factorized as a product of stochastic local interactions in PixelCNN, as defined in Eq. (1), the extent of interaction decreases exponentially as distance increases. Therefore, long-range interactions are difficult to capture with PixelCNN. A discussion of this limitation of PixelCNN in capturing long-range interactions, and a remedy for it, can be found in28. Figure 9 illustrates a sample case showing that relatively long-range interactions are important for the determination of the fracture strain. In this case, determining the part that critically affects the fracture behavior seems to be difficult using the framework based on PixelCNN.

Sample case showing that relatively long-range interactions among martensite grains are important for the determination of the fracture strain.

For incompletely laminated structures such as that shown in Fig. 8a, the martensite layers are expanded to achieve a higher fracture strain, even though increasing the martensite volume fraction basically contributes to a decrease in the fracture strain, as shown in Fig. 4. Similarly, we can see in Fig. 8c that the martensite grains tend to expand to fill the hot spots between them. Additionally, as mentioned above, even though completely laminated structures are structurally similar to incompletely laminated structures, the fracture strains of completely laminated structures are much higher than those of incompletely laminated structures. Thus, eliminating the tiny holes that could cause hot spots and reaching completely laminated structures markedly improves the fracture strain. Altogether, these results imply that the framework recognizes the potential of laminated structures to achieve a higher fracture strain, in a similar way to how human researchers arrive at an intuition about completely laminated structures by considering how to reduce the occurrence of hot spots.

From the above results, we can conclude that our framework can identify the areas that critically affect a target property, without prior human knowledge, when the local topology of microstructures is dominant for the target property. This implies that machine learning designed to be consistent with metallurgists' process of thinking can approach the background or the meaning of the implicitly extracted knowledge in a similar way to how humans acquire empirical knowledge.

View original post here:
Identification of microstructures critically affecting material properties using machine learning framework based on metallurgists' thinking process |...

Read More..

Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

Since data is at the heart of AI, it should come as no surprise that AI and ML systems need enough good-quality data to learn from. In general, a large volume of good-quality data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The exact amount of data needed may vary depending on which pattern of AI you're implementing, the algorithm you're using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or Bayesian classifiers don't need as much data to still produce high-quality results.

So you might think more is better, right? Well, think again. Organizations with lots of data, even exabytes, are realizing that having more data is not the solution to their problems as they might expect. Indeed, more data, more problems. The more data you have, the more data you need to clean and prepare. The more data you need to label and manage. The more data you need to secure, protect, and check for bias. Small projects can rapidly turn into very large projects when you start multiplying the amount of data. In fact, many times, lots of data kills projects.

Clearly the missing step between identifying a business problem and getting the data squared away to solve that problem is determining which data you need and how much of it you really need. You need enough, but not too much. Goldilocks data is what people often say: not too much, not too little, but just right. Unfortunately, far too often, organizations are jumping into AI projects without first addressing an understanding of their data. Questions organizations need to answer include figuring out where the data is, how much of it they already have, what condition it is in, what features of that data are most important, use of internal or external data, data access challenges, requirements to augment existing data, and other crucial factors and questions. Without these questions answered, AI projects can quickly die, even drowning in a sea of data.

Getting a better understanding of data

In order to understand just how much data you need, you first need to understand how and where data fits into the structure of AI projects. One visual way of understanding the increasing levels of value we get from data is the DIKUW pyramid (sometimes also referred to as the DIKW pyramid) which shows how a foundation of data helps build greater value with Information, Knowledge, Understanding and Wisdom.

DIKW pyramid

With a solid foundation of data, you can gain additional insights at the next layer, information, which helps you answer basic questions about that data. Once you have made basic connections between data points to gain informational insight, you can find patterns in that information, the knowledge layer, to see how various pieces of information are connected together. Building on the knowledge layer, organizations can get even more value at the understanding layer by grasping why those patterns are happening. Finally, the wisdom layer is where you can gain the most value, with insight into the cause and effect that underpins decision making.

This latest wave of AI focuses most on the knowledge layer, since machine learning provides the insight on top of the information layer needed to identify patterns. Unfortunately, machine learning reaches its limits at the understanding layer, since finding patterns isn't sufficient for reasoning. We have machine learning, but not the machine reasoning required to understand why the patterns are happening. You can see this limitation in effect any time you interact with a chatbot. While machine learning-enabled NLP is really good at understanding your speech and deriving intent, it runs into limitations when trying to understand and reason. For example, if you ask a voice assistant if you should wear a raincoat tomorrow, it doesn't understand that you're asking about the weather. A human has to provide that insight to the machine because the voice assistant doesn't know what rain actually is.

Avoiding Failure by Staying Data Aware

Big data has taught us how to deal with large quantities of data: not just how it's stored, but how to process, manipulate, and analyze all that data. Machine learning has added more value by being able to deal with the wide range of unstructured, semi-structured, and structured data collected by organizations. Indeed, this latest wave of AI is really the big data-powered analytics wave.

But it's exactly for this reason that some organizations are failing so hard at AI. Rather than run AI projects with a data-centric perspective, they are focusing on the functional aspects. To get a handle on their AI projects and avoid deadly mistakes, organizations need a better understanding not only of AI and machine learning but also of the Vs of big data. It's not just about how much data you have, but also the nature of that data. Some of those Vs of big data include:

With decades of experience managing big data projects, organizations that are successful with AI are primarily successful with big data. The ones that are seeing their AI projects die are the ones who are coming at their AI problems with application development mindsets.

Too Much of the Wrong Data, and Not Enough of the Right Data is Killing AI Projects

While AI projects may start off on the right foot, the lack of necessary data and the failure to understand and then solve real problems are killing AI projects. Organizations are powering forward without a real understanding of the data they need or the quality of that data. This poses real challenges.

One of the reasons organizations are making this data mistake is that they are running their AI projects without any real approach to doing so, other than using Agile or app-dev methods. However, successful organizations have realized that data-centric approaches treat data understanding as one of the first phases of a project. The CRISP-DM methodology, which has been around for over two decades, specifies data understanding as the very next step once you determine your business needs. Building on CRISP-DM and adding Agile methods, the Cognitive Project Management for AI (CPMAI) methodology requires data understanding in its Phase II. Other successful approaches likewise require data understanding early in the project, because, after all, AI projects are data projects. And how can you build a successful project on a foundation of data without running your projects with an understanding of that data? That's surely a deadly mistake you want to avoid.

See the rest here:
Are You Making These Deadly Mistakes With Your AI Projects? - Forbes

Read More..

Tackling the reproducibility and driving machine learning with digitisation – Scientific Computing World

Dr Birthe Nielsen discusses the role of the Methods Database in supporting life sciences research by digitising methods data across different life science functions.

Reproducibility of experiment findings and data interoperability are two of the major barriers facing life sciences R&D today. Independently verifying findings by re-creating experiments and generating the same results is fundamental to progressing research to the next stage in its lifecycle, be it advancing a drug to clinical development, or a product to market. Yet, in the field of biology alone, one study found that 70 per cent of researchers are unable to reproduce the findings of other scientists, and 60 per cent are unable to reproduce their own findings.

This causes delays to the R&D process throughout the life sciences ecosystem. For example, biopharmaceutical companies often use an external Contract Research Organisation (CROs) to conduct clinical studies. Without a centralised repository to provide consistent access, analytical methods are often shared with CROs via email or even by physical documents, and not in a standard format but using an inconsistent terminology. This leads to unnecessary variability and several versions of the same analytical protocol. This makes it very challenging for a CRO to re-establish and revalidate methods without a labour-intensive process that is open to human interpretation and thus error.

To tackle issues like this, the Pistoia Alliance launched the Methods Hub project. The project aims to overcome the issue of reproducibility by digitising methods data across different life science functions, and ensuring data is FAIR (Findable, Accessible, Interoperable, Reusable) from the point of creation. This will enable seamless and secure sharing within the R&D ecosystem, reduce experiment duplication, standardise formatting to make data machine-readable, and increase reproducibility and efficiency. Robust data management is also the building block for machine learning and is the stepping-stone to realising the benefits of AI.

Digitisation of paper-based processes increases the efficiency and quality of methods data management. But it goes beyond manually keying in method parameters on a computer or using an Electronic Lab Notebook: a digital and automated workflow increases efficiency, instrument usage, and productivity. Applying shared data standards ensures consistency and interoperability, in addition to fast and secure transfer of information between stakeholders.

One area that organisations need to address to comply with FAIR principles, and a key area in which the Methods Hub project helps, is how analytical methods are shared. This includes replacing free-text data capture with a common data model and standardised ontologies. For example, in a High-Performance Liquid Chromatography (HPLC) experiment, rather than manually typing out the analytical parameters (pump flow, injection volume, column temperature, etc.), the scientist simply downloads a method that automatically populates the execution parameters in any given Chromatographic Data System (CDS). This not only saves time during data entry, but the common format also eliminates room for human interpretation or error.
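To make the idea concrete, such a shared method can be thought of as a structured, machine-readable record rather than free text. The sketch below uses purely illustrative field names; it is not the Methods Hub's actual data model.

```python
# Illustrative sketch only: a structured, machine-readable HPLC method record.
# Field names are hypothetical and do not reflect the Methods Hub data model;
# the point is that a common schema replaces free-text capture.
import json
from dataclasses import dataclass, asdict

@dataclass
class HplcMethod:
    method_id: str
    column: str
    pump_flow_ml_min: float      # pump flow rate in mL/min
    injection_volume_ul: float   # injection volume in microlitres
    column_temperature_c: float  # column temperature in degrees Celsius
    detector_wavelength_nm: int  # UV detection wavelength in nm

method = HplcMethod(
    method_id="EXAMPLE-001",
    column="C18, 150 x 4.6 mm, 5 um",
    pump_flow_ml_min=1.0,
    injection_volume_ul=10.0,
    column_temperature_c=30.0,
    detector_wavelength_nm=254,
)

# Serialised once, the same record could populate execution parameters in any
# CDS that understands the shared format, instead of being re-keyed by hand.
print(json.dumps(asdict(method), indent=2))
```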

Additionally, creating a centralised repository like the Methods Hub in a vendor-neutral format is a step towards greater cyber-resiliency in the industry. When information is stored locally on a PC or an ELN and is not backed up, a single cyberattack can wipe it out instantly. Creating shared spaces for these notes via the cloud protects data and ensures it can be easily restored.

A proof of concept (PoC) via the Methods Hub project was recently completed successfully to demonstrate the value of methods digitisation. The PoC involved the digital transfer, via the cloud, of analytical HPLC methods, proving it is possible to move analytical methods securely and easily between two different companies and CDS vendors. It has been successfully tested in labs at Merck and GSK, where there has been an effective transfer of HPLC-UV information between different systems. The PoC delivered a series of critical improvements to methods transfer that eliminated the manual keying of data and reduced risk, steps, and errors, while increasing overall flexibility and interoperability.

The Alliance project team is now working to extend the platform's functionality to connect analytical methods with results data, which would be an industry first. The team will also be adding support for columns, additional hardware, and other analytical techniques, such as mass spectrometry and nuclear magnetic resonance (NMR) spectroscopy. It also plans to identify new use cases and further develop the cloud platform that enables secure methods transfer.

If industry-wide data standards and approaches to data management are to be agreed on and implemented successfully, organisations must collaborate. The Alliance recognises methods data management is a big challenge for the industry, and the aim is to make Methods Hub an integral part of the system infrastructure in every analytical lab.

Tackling issues such as the digitisation of methods data doesn't just benefit individual companies; it will have a knock-on effect for the whole life sciences industry. Introducing shared standards accelerates R&D, improves quality, and reduces the cost and time burden on scientists and organisations. Ultimately this ensures that new therapies and breakthroughs reach patients sooner. We are keen to welcome new contributors to the project, so we can continue discussing common barriers to successful data management and work together to develop new solutions.

Dr Birthe Nielsen is the Pistoia Alliance Methods Database project manager

Here is the original post:
Tackling the reproducibility and driving machine learning with digitisation - Scientific Computing World

Read More..

Deep learning of ECG waveforms for diagnosis of heart failure with a reduced left ventricular ejection fraction | Scientific Reports – Nature.com

In this study, we validated the DeepECG-HFrEF algorithm for identifying LVSD in patients with symptomatic HF regardless of EF and evaluated the predictive power of the algorithm for 5-year all-cause mortality. The DeepECG-HFrEF algorithm showed outstanding performance in discriminating LVSD among patients with HF. DeepECG-HFrEF (+) was associated with worse 5-year survival, even when compared with using the actual EF value. To our knowledge, this is the first study to validate the performance of a deep learning-based AI algorithm for LVSD detection and to show its risk predictability in symptomatic patients with HF.

LVSD is identified in 40–50% of patients with HF16. Although survival rates of patients with HF have recently improved in developed countries, patients with HF still show an eight-fold higher mortality than an age-matched population17,18. Not only does HF increase the risk of mortality, but the associated economic burden cannot be overlooked. The economic burden of HF was estimated to be $108 billion per annum globally in 2012, with 60% direct costs to the healthcare system and 40% indirect costs to society through morbidity and other factors19. This burden is even higher in Asian countries than in the United States, with a large proportion of HF-related healthcare costs directly associated with hospitalization20. The impact of this burden is accentuated among elderly patients, with almost three-quarters of the total resources assigned to HF devoted solely to the older population21. The increase in the proportion of elderly individuals in the general population, the social ageing phenomenon, is consistent throughout the world, with the elderly population projected to double to almost 1.6 billion globally from 2025 to 2050 (ref. 22). Considering the economic burden of HF in the elderly population, there is a need to improve the early diagnosis and treatment of LVSD to slow or even prevent its progression to HF.

A summary of currently developed AI algorithms for the detection of LVSD and the validation of these algorithms is provided in Supplementary Table S5. The definition of LVSD and the primary endpoint differed among studies, with EF cut-offs of 35% to 40% having been used. The study populations used for validation also differed between the studies, ranging from patients at a community general hospital to patients in a cardiac intensive care unit and patients with COVID-19 (refs. 9,12,13). As a result of these differences in the clinical populations used, the proportion of patients within the validation population varied between 2% and 20%7,11. Our study is the first to validate an algorithm for detecting LVSD solely in patients with HF. Our results show the strength of the DeepECG-HFrEF algorithm in discriminating LVSD even when the prevalence of HF is high.
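For readers less familiar with how such discrimination performance is typically quantified, the sketch below computes AUC, sensitivity, and specificity on synthetic labels and scores; the data are made up, not the study's, and the 0.5 threshold is chosen purely for illustration.

```python
# Minimal sketch with synthetic data (not the study's data): quantifying how
# well a model's scores discriminate LVSD from non-LVSD.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)

# Synthetic ground truth (1 = LVSD) and synthetic model probabilities that
# are only loosely separated, so the metrics are not trivially perfect.
y_true = rng.integers(0, 2, 500)
y_score = np.clip(0.5 * rng.random(500) + 0.3 * y_true, 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)

# Apply an operating threshold to obtain sensitivity and specificity.
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```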

Despite recent advances in HF pharmacotherapy, the mortality and rehospitalization rates of patients with HF are still high. Therefore, the identification of high-risk patients who would benefit the most from comprehensive HF treatment is urgently required23. A few studies have suggested a promising role for AI support in the early diagnosis of low EF15. Regarding AI for the detection of LVSD, only one study, by Attia et al., reported on the power of an AI algorithm to predict future LVSD development7. Our study is the first to show an association between long-term survival and AI-detected LVSD in patients with HF. Our results show that the AI algorithm can identify abnormalities in the ECG before overt LVSD is observed on echocardiography.

AI algorithms are known for being a black box whose exact mechanisms cannot be explained. However, there are some ECG characteristics in the DeepECG-HFrEF (+) group that might have contributed to the prognostic performance of the algorithm. The DeepECG-HFrEF (+) group had significantly longer corrected QT intervals and higher proportions of LBBB and IVCD. A study by Lee et al. showed that LBBB and IVCD were associated with an increased risk of all-cause mortality and rehospitalization due to HF aggravation24. Regarding the QTc interval, a study by Park et al. showed a J-curve association between the corrected QT interval and mortality among patients with acute HF, with a nadir of 440–450 ms in men and 470–480 ms in women25. Such an association might be one of the factors used by the DeepECG-HFrEF algorithm to differentiate between the two groups. Nevertheless, as our study did not differentiate the corrected QT interval according to sex, the results of Park et al. should be applied with caution25. Thus, we can cautiously interpret the features seen in the DeepECG-HFrEF (+) group, such as LBBB and IVCD, as factors that the algorithm may be looking for when classifying the groups.

There is no clear explanation for the increased false-positive and false-negative rates among patients with an EF near 40%. One plausible explanation is that patients clustered near an EF of 40% may form a heterogeneous group. A previous study by Rastogi et al. showed that heterogeneity in the underlying demographics of HFmrEF is associated with changes in EF over time [26]. Among HFmrEF groups, improvement in EF tends to be associated with coronary artery disease, while worsening of EF is more likely to coexist with hypertension and diastolic dysfunction [26]. Patients with acute coronary syndrome are more likely to show dynamic changes in their ECG and EF over a short period of time [27,28]. As ischemia was the leading cause of acute HF among patients in the KorAHF Registry, such dynamic changes might have contributed to heterogeneity, resulting in discrepancies between the actual EF and the DeepECG-HFrEF algorithm results [29].

The limitations of our study need to be acknowledged when interpreting the results. First, owing to the retrospective design, causal relationships between the identified factors and LVSD among patients with HF could not be inferred. Further validation of the algorithm in a prospective study is needed. Second, the generalizability of our results is limited and they should be interpreted cautiously, as the study population was drawn from a single hospital in Korea. Further studies across a wider range of races and ethnicities are necessary, as was done in the Mayo Clinic studies of an artificial intelligence-augmented electrocardiogram (AI-ECG) in the United States and Uganda [9,14]. Third, although most of the ECGs were matched to echocardiography within 24 h, some were performed within 30 days. Although these time gaps might influence the performance of our model, the mean (standard deviation) time gaps for the true-positive, false-positive, false-negative and true-negative groups were 22.0 (65.6), 30.6 (86.4), 31.3 (107.3) and 33.6 (90.2), respectively, and the differences were not statistically significant (p = 0.192). Moreover, a 30-day maximum interval between ECG and echocardiography has generally been accepted in previous studies [10,12]. It is important to note that ECGs matched to echocardiography within 24 h comprised 82.1% of the data used in this study. Fourth, compliance with HF medication was not considered. As angiotensin-converting enzyme inhibitors and beta-blockers are known to confer a favorable prognosis in the treatment of LVSD, data on medication adherence would have affected survival. Fifth, our study focused on the association between ECG and echocardiography and included multiple ECG and echocardiographic recordings from the same individuals, which may have had a slight influence on the survival analysis. A subsequent study using a single ECG and echocardiogram per patient would be useful to confirm our results. Lastly, our study used visually estimated EF values documented by the examiners, because EF measurement by Simpson's biplane or other quantitative methods was inadequate in 61 out of 1,291 cases, owing either to a poor echocardiographic window or to severely unbalanced myocardial contraction.
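
On the third limitation above: the comparison of ECG-to-echocardiography time gaps across the true-positive, false-positive, false-negative and true-negative groups is reported only as non-significant (p = 0.192), and the test used is not named. The sketch below shows one way such a comparison could be run, using a Kruskal-Wallis test on placeholder data; it is an assumption, not the study's actual analysis.

# Hypothetical illustration: comparing time gaps across TP/FP/FN/TN groups.
# The lists are placeholders, not study data; the paper does not name its test.
from scipy.stats import kruskal

gaps_tp = [0.5, 1.0, 2.0, 20.0, 48.0]
gaps_fp = [0.5, 3.0, 24.0, 72.0, 120.0]
gaps_fn = [1.0, 6.0, 30.0, 96.0, 150.0]
gaps_tn = [0.5, 2.0, 26.0, 80.0, 130.0]

stat, p_value = kruskal(gaps_tp, gaps_fp, gaps_fn, gaps_tn)
print(f"H = {stat:.2f}, p = {p_value:.3f}")  # a p above 0.05 would mirror the reported result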

Read more here:
Deep learning of ECG waveforms for diagnosis of heart failure with a reduced left ventricular ejection fraction | Scientific Reports - Nature.com


Deep learning algorithm predicts Cardano to trade above $2 by the end of August – Finbold – Finance in Bold

The price of Cardano (ADA) has mainly traded in the green in recent weeks as the network dubbed the 'Ethereum killer' continues to record increased blockchain development.

Specifically, the Cardano community is projecting a possible rise in the token's value, especially with the upcoming Vasil hard fork.

In this vein, NeuralProphet's PyTorch-based price prediction algorithm, an open-source machine learning framework, has predicted that ADA would trade at $2.26 by August 31, 2022.

Although the prediction model covers the period from July 31 to December 31, 2022, and is not a reliable indicator of future prices, its predictions had historically proven relatively accurate until the abrupt market collapse of the algorithmic stablecoin project TerraUSD (UST).
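
NeuralProphet is an open-source, PyTorch-based forecasting library, and a forecast of this kind can be sketched in a few lines. The snippet below is a generic illustration rather than Finbold's or NeuralProphet's published configuration; the CSV file name and its contents are assumptions, although the ds (date) and y (value) column names are what the library expects.

# Minimal NeuralProphet sketch for a daily ADA/USD price forecast.
# "ada_usd_daily.csv" is a hypothetical file; tune hyperparameters before real use.
import pandas as pd
from neuralprophet import NeuralProphet

df = pd.read_csv("ada_usd_daily.csv")      # columns: ds (date), y (closing price)
model = NeuralProphet()
model.fit(df, freq="D")                    # fit on daily data

future = model.make_future_dataframe(df, periods=30)  # forecast 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat1"]].tail())    # yhat1 holds the model's forecast

As the article itself notes, a model like this extrapolates historical patterns and cannot anticipate shocks such as the TerraUSD collapse.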

However, the prediction aligns with the generally bullish sentiment around ADA that stems from network activity aimed at improving the asset's utility. As reported by Finbold, Cardano founder Charles Hoskinson revealed that the highly anticipated Vasil hard fork is ready to be rolled out after delays.

It is worth noting that despite minor gains, ADA is yet to show any significant reaction to the upgrade, but the token's proponents are glued to the price movement as it shows signs of recovery. Similarly, the token has benefitted from the recent two-month-long rally across the general cryptocurrency market.

Elsewhere, the CoinMarketCap community is projecting that ADA will trade at $0.58 by the end of August. The prediction is supported by about 17,877 community members, representing a price growth of about 8.71% from the token's current value.

For September, the community has placed the prediction at $0.5891, a growth of about 9% from the current price. Interestingly, the algorithm predicts that ADA will trade at $1.77 by the end of September. Overall, both prediction platforms indicate an increase from the digital assets current price.

By press time, the token was trading at $0.53 with gains of less than 1% in the last 24 hours.

In general, multiple investors are aiming to capitalize on the Vasil hard fork, especially with Cardano clarifying that the upgrade is proceeding according to plan.

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

Read more here:
Deep learning algorithm predicts Cardano to trade above $2 by the end of August - Finbold - Finance in Bold


How to use the intelligence features in iOS 16 to boost productivity and learning – TechRepublic

Apple has packed intelligence features into iOS 16 to allow for translations from videos, copying the subject of a photo and removing the background, and copying text from a video.

A few years ago, Apple began betting on local machine learning in iOS to boost the user experience. It started simply with Photos, but machine learning is now a mainstay in iOS and can help to boost productivity at every turn. iOS 16 adds to these features by letting you copy text from a video, perform quick text actions from photos and videos, and easily copy the subject of a photo while removing the background, creating an instant alpha layer.

We'll walk through these three new intelligence features in iOS 16, explain how to use them, and show you the ways you can use them to boost your productivity and more.

SEE: iCloud vs. OneDrive: Which is best for Mac, iPad and iPhone users? (free PDF) (TechRepublic)

All of the features below work only on iPhones with an A12 Bionic processor or later, and the translation and text features are only available for the following languages: English, Chinese, French, Italian, German, Japanese, Korean, Portuguese, Spanish and Ukrainian.

One of the cooler features in iOS 16 is the ability to lift the subject of a photo right off the photo, creating an instant alpha of the subject. This removes the background from the photo and leaves you with a perfectly cut-out photo subject that you can easily paste into a document, an iMessage or anywhere else you can imagine (Figure A).

Figure A

This feature works on iPhones with the A12 Bionic and later, and can be used by performing these steps inside the Photos app:

This doesn't only work in Photos: it also works in the Screenshot Utility, QuickLook and Safari, with other apps coming soon. This feature saves a lot of time compared with opening the photo in a photo editor and manually removing the background.

iOS 15 introduced Live Text, which lets you copy text from a photo or search through your Photos library using text that might be contained in a photo (Figure B). Apple is ramping up this feature in iOS 16 by allowing you to pause a video and copy text from it as well.

Figure B

It works like this:

This feature is great for online learning environments where students might need to copy an example and paste it into a document or other file.

Live Text has been around for two iterations of iOS, so Apple has started building additional features around the Live Text feature, namely the ability to perform actions on text from a photo or paused video frame (Figure C).

Figure C

When you select text in a photo or paused video, you now have the option of performing the following actions on the text:

You can do this by selecting the text from the photo or video, then selecting one of the quick actions presented. This works in the Camera app, Photos app, QuickLook and in the iOS video player.

Original post:
How to use the intelligence features in iOS 16 to boost productivity and learning - TechRepublic


Microsoft is teaching computers to understand cause and effect – TechRepublic

Image: ZinetroN/Adobe Stock

AI that analyzes data to help you make decisions is set to be an increasingly big part of business tools, and the systems that do that are getting smarter with a new approach to decision optimization that Microsoft is starting to make available.

Machine learning is great at extracting patterns out of large amounts of data but not necessarily good at understanding those patterns, especially in terms of what causes them. A machine learning system might learn that people buy more ice cream in hot weather, but without a common-sense understanding of the world, it's just as likely to suggest that if you want the weather to get warmer then you should buy more ice cream.

Understanding why things happen helps humans make better decisions, like a doctor picking the best treatment or a business team looking at the results of A/B testing to decide which price and packaging will sell more products. There are machine learning systems that deal with causality, but so far this has mostly been restricted to research that focuses on small-scale problems rather than practical, real-world systems, because it's been hard to do.

SEE: How to become a machine learning engineer: A cheat sheet (TechRepublic)

Deep learning, which is widely used for machine learning, needs a lot of training data, but humans can gather information and draw conclusions much more efficiently by asking questions, like a doctor asking about your symptoms, a teacher giving students a quiz, a financial advisor understanding whether a low risk or high risk investment is best for you, or a salesperson getting you to talk about what you need from a new car.

A generic medical AI system would probably take you through an exhaustive list of questions to make sure it didn't miss anything, but if you go to the emergency room with a broken bone, it's more useful for the doctor to ask how you broke the bone and whether you can move your fingers rather than asking about your blood type.

If we can teach an AI system how to decide what's the best question to ask next, it can use that to gather just enough information to suggest the best decision to make.

"For AI tools to help us make better decisions, they need to handle both those kinds of decisions," Cheng Zhang, a principal researcher at Microsoft, explained.

"Say you want to judge something, or you want to get the information on how to diagnose something or classify something properly: [the way to do that] is what I call Best Next Question," said Zhang. "But if you want to do something, you want to make things better: you want to give students new teaching material, so they can learn better, you want to give a patient a treatment so they can get better. I call that Best Next Action. And for all of these, scalability and personalization is important."

Put all that together, and you get efficient decision making, like the dynamic quizzes that online math tutoring service Eedi uses to find out what students understand well and what they are struggling with, so it can give them the right mix of lessons to cover the topics they need help with, rather than boring them with areas they can already handle.

The multiple choice questions have only one right answer, but the wrong answers are carefully designed to show exactly what the misunderstanding is: Is someone confusing the mean of a group of numbers for the mode or the median, or do they just not know all the steps for working out the mean?

Eedi already had the questions but it built the dynamic quizzes and personalized lesson recommendations using a decision optimization API (application programming interface) created by Zhang and her team that combines different types of machine learning to handle both kinds of decisions in what she calls end-to-end causal inferencing.

"I think we're the first team in the world to bridge causal discovery, causal inference and deep learning together," said Zhang. "We enable a user who has data to find out the relationship between all these different variables, like what causes what. And then we also understand their relationship: for example, how much the dose [of medicine] you gave will increase someone's health, or by how much the topic you teach will increase the students' general understanding."

"We use deep learning to answer causal questions, suggest what's the next best action in a really scalable way and make it real-world usable."

Businesses routinely use A/B testing to guide important decisions, but that has limitations, Zhang points out.

"You can only do it at a high level, not an individual level," said Zhang. "You can get to know that for this population, in general, treatment A is better than treatment B, but you cannot say for each individual which is best."

"Sometimes it's extremely costly and time consuming, and for some scenarios, you cannot do it at all. What we're trying to do is replace A/B testing."
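
Replacing an A/B test means estimating treatment effects directly from observational data. As a rough illustration of that general idea, rather than of Microsoft's Best Next Question service itself, the sketch below uses DoWhy, Microsoft Research's open-source causal inference library, with hypothetical data and column names.

# Sketch of observational treatment-effect estimation with DoWhy (not the Azure service).
# The file and column names ("treated", "outcome", "age", "prior_spend") are hypothetical;
# "treated" is assumed to be a binary flag.
import pandas as pd
from dowhy import CausalModel

df = pd.read_csv("customers.csv")

model = CausalModel(
    data=df,
    treatment="treated",                   # e.g. received offer A instead of B
    outcome="outcome",                     # e.g. revenue
    common_causes=["age", "prior_spend"],  # confounders affecting both
)
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.propensity_score_matching")
print("Estimated average treatment effect:", estimate.value)

What Zhang's team describes adds personalization on top of this kind of population-level estimate: recommending the next question or action for each individual rather than reporting a single average effect.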

The API to do that, currently called Best Next Question, is available in the Azure Marketplace, but it's in private preview, so organizations wanting to use the service in their own tools the way Eedi has will need to contact Microsoft.

For data scientists and machine learning experts, the service will eventually be available either through Azure Marketplace or as an option in Azure Machine Learning or possibly as one of the packaged Cognitive Services in the same way Microsoft offers services like image recognition and translation. The name might also change to something more descriptive, like decision optimization.

Microsoft is already looking at using it for its own sales and marketing, starting with the many different partner programs it offers.

"We have so many engagement programs to help Microsoft partners to grow," said Zhang. "But we really want to find out which type of engagement program is the treatment that helps a partner grow most. So that's a causal question, and we also need to do it in a personalized way."

The researchers are also talking to the Viva Learning team.

"Training is definitely a scenario we want to make personalized: we want people to get taught with the material that will help them best for their job," said Zhang.

And if you want to use this to help you make better decisions with your own data: "We want people to have an intuitive way to use it. We don't want people to have to be data scientists."

The open-source ShowWhy tool that Microsoft built to make causal reasoning easier to use doesn't yet use these new models, but it has a no-code interface, and the researchers are working with that team to build prototypes, Zhang said.

"Before the end of this year, we're going to release a demo for the deep end-to-end causal inference," said Zhang.

She suggests that in the longer term, business users might get the benefit of these models inside systems they already use, like Microsoft Dynamics and the Power Platform.

"For general decision-making people, they need something very visual: a no-code interface where I load data, I click a button and [I see] what are the insights," said Zhang.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

Humans are good at thinking causally, but building the graph that shows how things are connected and what's a cause and what's an effect is hard. These decision optimization models build that graph for you, which fits the way people think and lets you ask what-if questions and experiment with what happens if you change different values. "That's something very natural," Zhang said.

"I feel humans fundamentally want something to help them understand 'if I do this, what happens; if I do that, what happens', because that's what aids decision making," said Zhang.
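
That what-if workflow can be pictured with a toy structural causal model: once the graph and its mechanisms are written down, intervening on one variable and simulating the downstream effect is straightforward. The model below is invented purely to illustrate the idea and has nothing to do with Microsoft's tooling or data.

# Toy structural causal model: season -> price -> demand -> revenue, with season also
# affecting demand. Entirely hypothetical numbers, used only to show a do()-style what-if.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=10_000, price_intervention=None):
    season = rng.normal(0, 1, n)                    # exogenous seasonality
    price = 10 + 2 * season + rng.normal(0, 1, n)   # price normally tracks the season
    if price_intervention is not None:
        price = np.full(n, price_intervention)      # do(price = value): break the link to season
    demand = 100 - 4 * price + 6 * season + rng.normal(0, 5, n)
    revenue = price * demand
    return revenue.mean()

print("observed average revenue:", round(simulate(), 1))
print("do(price = 8): ", round(simulate(price_intervention=8.0), 1))
print("do(price = 12):", round(simulate(price_intervention=12.0), 1))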

Some years ago, she built a machine learning system for doctors to predict how patients would recover in different scenarios.

"When the doctors started to use the system, they would play with it to see 'if I do this or if I do that, what happens,'" said Zhang. "But to do that, you need a causal AI system."

Once you have causal AI, you can build a system with two-way correction, where humans teach the AI what they know about cause and effect, and the AI can check whether that's really true.

In the U.K., schoolchildren learn about Venn diagrams in year 11. But when Zhang worked with Eedi and the Oxford University Press to find the causal relationships between different topics in mathematics, the teachers suddenly realized they'd been using Venn diagrams to make quizzes for students in years 8 and 9, long before they'd told them what a Venn diagram is.

"If we use data, we discover the causal relationship, and we show it to humans, it's an opportunity for them to reflect, and suddenly these kinds of really interesting insights show up," said Zhang.

Making causal reasoning end to end and scalable is just a first step: there's still a lot of work to do to make it as reliable and accurate as possible, but Zhang is excited about the potential.

"40% of jobs in our society are about decision making, and we need to make high-quality decisions," she pointed out. "Our goal is to use AI to help decision making."

Originally posted here:
Microsoft is teaching computers to understand cause and effect - TechRepublic


Borrow A Boat smashes crowd funding targets – Global Banking And Finance Review

Disclaimer: This article is a Sponsored Feature presented by Discover Media. The opinion expressed here is not investment advice; it is provided for informational purposes only. It does not necessarily reflect the views or opinion of Global Banking & Finance Review and is in no way an endorsement or recommendation. All investments and trading involve risk; users of the GBAF Website must consult a suitably qualified professional adviser for advice and perform their own research. Accordingly, we will not be responsible for any loss you may suffer as a result of any omission or inaccuracy on the GBAF Website and within GBAF Content.

Whether it's the influence of shows like Below Deck or the trend towards luxury holidays, or even a combination of the two, one thing's for sure: renting boats and booking charters are more popular than ever. The market is set to be worth an incredible $25 billion by 2027, and many different companies and brands are looking to get a piece of the space.

As a result, the boat rental marketplace and charter industry are becoming hyper-competitive, with only those who can offer interesting solutions or something extra special coming out on top. One company definitely making a name for itself in this saturated industry is Borrow A Boat.

Colloquially known as the Airbnb of boats, Borrow A Boat is one of the leading yacht charter and boat rental marketplaces, with an impressive 35,000 boat listings hosted across more than 65 countries around the world. The top destinations include the UK, the Mediterranean, the Caribbean, North America and South East Asia. Reviewed highly in places like The Guardian, Boat International, Yachting World and Superyacht News, Borrow A Boat is becoming one of the key players in the industry and continues to make big moves towards the top.

What's their story?

British company Borrow A Boat was launched back in 2017 by founder Matt Ovenden. A long-time lover of boats and sailing, Matt is a successful serial entrepreneur as well as a Fellow of the Institution of Mechanics and a Chartered Mechanical Engineer.

Holding an MSc in Innovation and Design for Sustainability, Matt created Bright Green Shoots back in 2009, an energy and sustainability company that helps businesses and organisations embrace green and sustainable innovation.

Through Bright Green Shoots, Matt created two other companies: in 2011 he launched Free Wind Ltd, a UK wind development company, and in 2014 he launched Free Island Energy, which works on remote renewables, microgrid and eco-island projects.

When Matt launched Borrow A Boat, he was looking to solve a new challenge: how to win over those who had been put off yachting in the past, whether by the complexity of booking or by the constraints of seven-day charters. In its first iteration as a peer-to-peer business, it became known as the Airbnb for yachting. Having quickly realised there were other opportunities, Matt and his team pivoted the business to also include traditional yachting suppliers, which contributed to its explosive growth. Around the same time, Matt decided to move into the superyacht sector to appeal to the ultra-elite and those who want the best in luxury yachting from their boating experience.

(Board discussing targets)

How have they grown so far?

Borrow A Boat has started its summer well. In late June, the UK's leading boat rental and yacht charter marketplace closed its most successful crowdfunding round ever, having raised an incredible £3,017,030 from over 650 investors.

This new raise brings the business's lifetime crowdfunding total to over £7.8 million, impressive for a company that launched in 2017 and faced challenges from the disruption to global travel during 2020. The largest crowdfund ever for Borrow A Boat, it exceeded its funding target by 402% and has reinforced the company's position as one of the best in the yacht and boat rental business.

Borrow A Boat has already had an incredibly successful 2022 in terms of revenue growth, expanding into several other countries along the way. It has also acquired three competitors over the past twelve months. The first of these, Helm, allows users to build their perfect yachting holiday. Barqo, the second, seeks to make sailing a truly accessible experience by allowing its users to rent (and rent out) boats. And Beds on Board is truly the Airbnb of boating, enabling users to rent beds on board boats across the UK. These acquisitions show just how expansive and dominant across the industry Borrow A Boat plans to become.

What does this mean for Borrow A Boat and the industry?

The successful crowdfunding round shows just how well Borrow A Boat has been doing over the past few years, and the support it has received from stakeholders reflects a widespread belief that the company will continue to thrive and succeed.

Its success is important for consumers too: more people than ever will be able to access its services and enjoy the experience of renting a boat or chartering a yacht. It's very likely that Borrow A Boat will continue to dominate and expand its offering as the industry continues to skyrocket and hit new heights all the way into 2027. New competitors may well rise up, but from the looks of things, Borrow A Boat is here to stay.

More here:
Borrow A Boat smashes crowd funding targets - Global Banking And Finance Review
