Machine Learning Tools: Transformative Insights into Animal … – Fagen wasanni

Animal communication signals have always been a complex field to decipher. Researchers rely on careful observation and experimentation to understand their meaning. However, this process is time-consuming, and even experienced biologists struggle with differentiating similar signal types.

AI may offer a solution to expedite this process. Machine learning algorithms, known for their pattern detection abilities, can potentially decode the communication systems of various animals like whales, crows, and bats. These algorithms have proven their effectiveness in processing human language and can also identify and classify animal signals from audio and video recordings.
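
For a concrete, purely illustrative sense of what classifying animal signals from audio can look like in practice, the sketch below pairs standard audio features (librosa MFCCs) with an off-the-shelf scikit-learn classifier. The file paths and label set are hypothetical, and none of the projects discussed here necessarily work this way.

```python
# Hypothetical sketch: classify animal call recordings into signal types.
# Assumes a list of (wav_path, label) pairs; the files and labels are invented.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(wav_path, n_mfcc=13):
    """Summarize a recording as the mean of its MFCC coefficients."""
    audio, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_call_classifier(labeled_clips):
    """labeled_clips: iterable of (path_to_wav, signal_type_label) pairs."""
    X = np.array([mfcc_features(path) for path, _ in labeled_clips])
    y = np.array([label for _, label in labeled_clips])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```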

One of the main challenges with machine learning methods is the need for vast amounts of data. For instance, the GPT-3 language model was trained on billions of tokens, or words. This means creative solutions are necessary to collect data from wild animals.

Despite these challenges, there are ongoing research projects exploring the use of AI in animal communication. Project CETI (Cetacean Translation Initiative) focuses on the communicative behavior of sperm whales. Utilizing bioinspired whale-mounted tags, underwater robots, and other methods, researchers aim to map the full richness of these animals' communication.

Understanding who talks to whom, and under what environmental and social conditions, is essential for decoding animal conversations. By combining machine learning approaches with well-designed experiments, researchers hope to discover which signals animals use and potentially their meanings. This knowledge can then be applied to improve animal welfare in captivity and develop more effective conservation strategies.

In the future, machine learning could even make it possible to listen in on entire communities of animals. Detailed comparisons of communication could be made, including against historical baseline recordings of the last surviving individuals held in conservation breeding centers. This research has the potential to reintroduce lost calls and restore cultural practices among animal populations.

Moreover, the use of passive acoustic monitoring systems could help identify communication signals associated with distress or avoidance. This could provide insights into the well-being of animals at a landscape level and aid in conservation efforts.

Here is the original post:
Machine Learning Tools: Transformative Insights into Animal ... - Fagen wasanni

Animations, and 3-D Models, and 3,000 Drawings: Inside Google's Massive Machine-Learning Masterclass on Leonardo da Vinci – artnet News

Science & Tech

Thanks to machine learning, Leonardo's expansive codices have been broken down into different themes.

What can A.I. teach us about Italian Renaissance polymath Leonardo da Vinci? A lot, as we discovered from a new online retrospective from Google Arts and Culture that's powered by machine learning.

"It's a fascinating mini consumer PhD in Leonardo," Amit Sood, founder and director of Google Arts and Culture, told Artnet News. He added that he personally enjoyed learning that the great artist was a left-handed vegetarian: "There's a quote in one of the codices about being vegetarian and drinking wine in moderation, very practical health and well-being advice from Leonardo da Vinci!"

The expansive project, titled Inside a Genius Mind, is a collaboration with 28 institutions around the world, curated by noted Leonardo expert and art historian Martin Kemp (who recently offered an online masterclass on the artist). It features 3,000 drawings, including 1,300 pages of the Old Master's famed codices, such as the 12-volume Codex Atlanticus.

Over 500 years after Leonardo's death, these fragile manuscripts, rarely on view to the general public, offer the closest thing we have to a glimpse inside the mind of the artist, inventor, and engineer.

Inside a Genius Mind, a new online Leonardo da Vinci retrospective from Google Arts and Culture.

"Written back to front in semi-legible old Italian and covering subjects from science to anatomy to flight, the contents of [Leonardo's] codices can feel overwhelmingly vast, varied, and inaccessible," Kemp said in a statement. "Inside a Genius Mind transforms the diverse contents of the codices into an interactive visual journey, engaging audiences with a powerful tool to learn more about the complexities and connections that run throughout Leonardo's genius."

The team from Google used machine learning to sort through Leonardo's prolific writings and drawings, presenting his oeuvre in thematic sections that represent the full breadth of his varied artistic and scientific output and his seemingly boundless, interdisciplinary ingenuity.

Leonardo da Vinci, Codex Atlanticus, folio 755 r. Collection of the Veneranda Biblioteca Ambrosiana, Milan, courtesy of Google Arts and Culture.

"We've always tried to use technology to build online projects that are very difficult to do in a physical realm," Sood said. "We use machine learning to uncover visual ideas and similarities that will take the human eye much longer to see, or that it can't see at all."

"People know Leonardo's art, but they don't necessarily know his codices, because they are spread across different institutions. Bringing them into one platform was something that was important to us," he added.

Google Arts and Culture digitizing Leonardo da Vinci's wall paintings at the Sala delle Asse at Castello Sforzesco in Milan. Photo courtesy of Google Arts and Culture.

The project was a massive undertaking that involved working closely with museums in Poland, Italy, and France, among others. At some institutions, Google scanned all the drawings. It also digitized Leonardo's room of wall paintings at the Sala delle Asse, at the Castello Sforzesco in Milan, which has been closed for renovations since 2012.

The online exhibition also includes impressive 3-D models and animations of some of Leonardo's drawings and inventions, such as his flying machines. Google has been working with these visualizations for the last seven years, and is also offering them to museums to include in traditional exhibitions, where they can augment the irreplaceable experience of seeing Leonardo's drawings in person.

Google Arts and Culture created this 3-D animation of Leonardo da Vinci's Leocopter. Courtesy of Google Arts and Culture.

But Inside a Genius Mind aims to tell Leonardo's incredible story in a way that appeals to both art history neophytes and experts, from the comfort of their own homes.

"The diversity of what Leonardo was able to accomplish in his lifetime is something that people are going to be inspired and surprised by," Sood said. "In his sketches, he was not putting different disciplines in silos. Everything seemed to merge and converge in different ways."

Leonardo da Vinci, Ginevra de' Benci. Collection of the National Gallery of Art, Washington, D.C., courtesy of Google Arts and Culture.

The online exhibition also uses A.I. to generate playful mashups of Leonardo's sketches, dubbed Da Vinci's Stickies. It also transports you to the artist's birthplace and final resting place courtesy of Google Street View, and offers a deep dive into the only Leonardo painting in North America, Ginevra de' Benci at the National Gallery of Art in Washington, D.C.

Read the original here:
Animations, and 3-D Models, and 3,000 Drawings: Inside Googles Massive Machine-Learning Masterclass on Leonardo da Vinci - artnet News

The Scamdemic: Can Machine Learning Turn the Tide? – CDOTrends

The worldwide digital space was gripped by an unprecedented surge in online scams and phishing attacks in 2022. Cybersecurity company Group-IB unveiled an alarming analysis detailing this rising threat.

Their recently launched study showed that the number of scam resources created per brand soared by 162% globally, and even more drastically in the Asia-Pacific region, with a whopping increase of 211% from 2021. The report also disclosed a more than three-fold increase in detected phishing websites over the last year.

These findings underscore the persistent cyber threat landscape, shedding light on a cyber menace that caused more than USD 55 billion in damages last year, according to the Global Anti-Scam Alliance and ScamAdviser's 2022 Global State of Scams Report. With these alarming trends, the 'scamdemic' shows no signs of slowing down.

"Scam campaigns are not just affecting more brands each year; the impact that each individual brand faces is growing larger. Scammers are using a vast number of domains and social media accounts to not only reach a greater number of potential victims but also evade counteraction," explained Afiq Sasman, head of the digital risk protection analytics team in the Asia Pacific at Group-IB.

The rise in scams was attributed to increased social media use and the growing automation of scam processes. Social media platforms often serve as the first point of contact between scammers and potential victims, with 58% of scam resources created on such platforms in the Asia-Pacific region last year. Group-IB's Digital Risk Protection analysts found that more than 80% of operations are now automated in scams like Classiscam.

Cybercriminals' use of automation and AI-driven text generators to craft convincing scam and phishing campaigns poses an escalating threat. Such advancements allow cybercriminals to scale their operations and provide increased security within their illicit ecosystems.

The study also highlighted the uptick in scam resources hosted on the .tk domain, accounting for 38.8% of all scam resources examined by Group-IB in the second half of 2022. This development reveals the increasing impact of automation in the scam industry, as affiliate programs automatically generate links on this domain zone.

The research underscores the urgent need for robust and innovative cybersecurity measures. By leveraging advanced technologies such as neural networks and machine learning, organizations can monitor millions of online resources to guard against external digital risks, protecting their intellectual property and brand identity. Only through such proactive measures can we hope to turn the tide of this digital 'scamdemic'.
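
As a rough illustration of the kind of automated monitoring described above (and not Group-IB's actual system), the sketch below trains a toy classifier on a few hand-picked lexical features of domain names. The feature set, the list of suspicious top-level domains, and the training data are all assumptions made for this example.

```python
# Hypothetical sketch: flag likely scam domains with a simple ML model.
import numpy as np
from sklearn.linear_model import LogisticRegression

SUSPICIOUS_TLDS = (".tk", ".ml", ".ga", ".cf", ".gq")  # illustrative list only

def url_features(url):
    """Very coarse lexical features of a URL or domain string."""
    domain = url.split("//")[-1].split("/")[0].lower()
    return [
        len(domain),                                 # unusually long domains
        domain.count("-"),                           # hyphen-stuffed lookalikes
        sum(ch.isdigit() for ch in domain),          # digits mixed into brand names
        int(domain.endswith(SUSPICIOUS_TLDS)),       # cheap, frequently abused TLDs
    ]

def train_scam_detector(urls, labels):
    """urls: list of strings; labels: 1 for scam, 0 for legitimate."""
    X = np.array([url_features(u) for u in urls])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, np.array(labels))
    return model
```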

Image credit: iStockphoto/Dragon Claws

Read the rest here:
The Scamdemic: Can Machine Learning Turn the Tide? - CDOTrends

Energy Consumption in Machine Learning: An Unseen Cost of … – EnergyPortal.eu

Energy Consumption in Machine Learning: An Unseen Cost of Innovation

In recent years, machine learning has emerged as a driving force behind many technological advancements, from self-driving cars to facial recognition systems. As these innovations continue to transform our world, there is a growing concern about the environmental impact of the energy consumption required to power these advancements. The energy consumption in machine learning is an unseen cost of innovation that needs to be addressed in order to ensure a sustainable future.

Machine learning, a subset of artificial intelligence, involves the development of algorithms that enable computers to learn from and make predictions or decisions based on data. These algorithms require vast amounts of computational power to process and analyze the data, which in turn requires significant energy resources. As the demand for machine learning applications grows, so does the need for more powerful hardware and energy to fuel these computations.

One of the most energy-intensive aspects of machine learning is the training process, during which an algorithm is exposed to a large dataset and learns to recognize patterns and make predictions. This process can take days, weeks, or even months to complete, depending on the complexity of the task and the size of the dataset. During this time, the hardware used to run the algorithms consumes a considerable amount of electricity, contributing to greenhouse gas emissions and exacerbating climate change.
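
To make the scale of that electricity use concrete, a back-of-the-envelope estimate can be derived from average power draw, training time, and the carbon intensity of the local grid. The numbers in the sketch below are assumptions chosen for illustration, not measurements of any real training run.

```python
# Illustrative estimate: energy (kWh) = power (kW) x hours; emissions = energy x grid intensity.
def training_footprint(avg_power_watts, hours, grid_kg_co2e_per_kwh=0.4):
    """Return (energy in kWh, emissions in kg CO2e) for a training run."""
    energy_kwh = (avg_power_watts / 1000.0) * hours
    emissions_kg = energy_kwh * grid_kg_co2e_per_kwh
    return energy_kwh, emissions_kg

# Example: 8 GPUs drawing ~300 W each, training for two weeks (assumed values).
energy, co2 = training_footprint(avg_power_watts=8 * 300, hours=14 * 24)
print(f"~{energy:.0f} kWh, ~{co2:.0f} kg CO2e")  # roughly 806 kWh and 323 kg CO2e
```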

The energy consumption of machine learning is not only an environmental concern but also a financial one. As the cost of electricity continues to rise, companies and researchers may find it increasingly difficult to afford the energy required to develop and deploy machine learning applications. This could potentially slow down the pace of innovation and hinder the adoption of new technologies that could improve our lives.

Recognizing the need to address this issue, researchers and technology companies are exploring ways to reduce the energy consumption of machine learning. One approach is to develop more energy-efficient hardware, such as specialized processors designed specifically for machine learning tasks. These processors can perform computations more efficiently than traditional CPUs or GPUs, reducing the amount of energy required to run machine learning algorithms.

Another approach is to optimize the algorithms themselves, making them more efficient and requiring less computational power to achieve the same results. This can be achieved through techniques such as pruning, which involves removing unnecessary connections in a neural network, and quantization, which reduces the precision of the numerical values used in the computations. Both of these techniques can lead to significant reductions in energy consumption without sacrificing the accuracy of the machine learning model.
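
The sketch below shows, in rough outline, what those two techniques look like using PyTorch's built-in pruning and dynamic-quantization utilities. The toy model, the 50% sparsity level, and the 8-bit precision are arbitrary choices for illustration, not recommendations.

```python
# Minimal sketch of pruning and post-training quantization on a toy model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 50% of weights with the smallest magnitude in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Quantization: store Linear-layer weights as 8-bit integers instead of 32-bit floats.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```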

In addition to these technological solutions, there is also a growing awareness of the need for more sustainable practices in the field of machine learning. Researchers and companies are increasingly considering the environmental impact of their work and taking steps to minimize their energy consumption. This can include using renewable energy sources to power their data centers, implementing energy-efficient cooling systems, and recycling or repurposing old hardware.

As machine learning continues to advance and become more prevalent in our daily lives, it is crucial that we address the issue of energy consumption in order to ensure a sustainable future. By developing more energy-efficient hardware and algorithms, adopting sustainable practices, and raising awareness of the environmental impact of machine learning, we can continue to enjoy the benefits of these innovations while minimizing their impact on our planet. The unseen cost of innovation must be acknowledged and addressed to ensure that the progress we make does not come at the expense of our environment.

Here is the original post:
Energy Consumption in Machine Learning: An Unseen Cost of ... - EnergyPortal.eu

Know Labs Demonstrates Improved Accuracy of Machine Learning Model for Non-Invasive Glucose Monitor – Marketscreener.com

SEATTLE - Know Labs, Inc. (NYSE American: KNW) today announced results from a new study titled, 'Novel data preprocessing techniques in an expanded dataset improve machine learning model accuracy for a non-invasive blood glucose monitor.'

The study demonstrates that continued algorithm refinement and more high-quality data improved the accuracy of Know Labs' proprietary Bio-RFID sensor technology, resulting in an overall Mean Absolute Relative Difference (MARD) of 11.3%.

As with all of Know Labs' previous research, this study was designed to assess the ability of the Bio-RFID sensor to non-invasively and continuously quantify blood glucose, using the Dexcom G6 continuous glucose monitor (CGM) as a reference device. In this new study, for which data collection was completed in May 2023, Know Labs applied novel data preprocessing techniques and trained a Light Gradient-Boosting Machine (LightGBM) model to predict blood glucose values using 3,311 observations, or reference device values, from over 330 hours of data collected from 13 healthy participants. With this method, Know Labs was able to predict blood glucose in the test set (the dataset that provides a blind evaluation of model performance) with a MARD of 11.3%.
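
MARD is conventionally computed as the mean of the absolute differences between predicted and reference glucose values, each expressed relative to the reference value; a lower MARD indicates closer agreement with the reference device. The sketch below shows that calculation with fabricated readings that have no connection to Know Labs' data.

```python
# Sketch of the MARD metric as commonly defined; the example readings are made up.
import numpy as np

def mard(predicted, reference):
    """Mean Absolute Relative Difference, as a percentage."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.mean(np.abs(predicted - reference) / reference) * 100.0

# Fabricated glucose readings in mg/dL:
print(f"MARD = {mard([102, 95, 150], [110, 100, 140]):.1f}%")  # about 6.5%
```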

By comparison, Know Labs released study results in May 2023 that analyzed data from five participants of a similar demographic, using 1,555 observations from 130 hours of data collection and the first application of the LightGBM ML model, which resulted in an overall MARD of 12.9%.

In June 2023, Know Labs announced the completed build of its Gen 1 prototype, which incorporates into a portable device the Bio-RFID sensor that Know Labs has been using to conduct clinical research in a lab environment for the last two years and whose stability results it has published. Testing with the Gen 1 device is underway to optimize the sensor configuration for data collection, including new environmental and human factors.

The Company's focus is on collecting more high-quality, high-resolution data across a diverse participant population representing different glycemic ranges and testing scenarios, to refine its algorithms based on this new data, and to optimize its sensor in preparation for scale. To support this work, the Company is continuing to test with its Gen 1 device every day in parallel with ongoing clinical research with its stationary lab system. Gen 1 is expected to generate tens of billions of data observations to process, which will be critical to validating algorithm performance across the real-world scenarios in which Know Labs' glucose monitoring device may be used. This is a key component of realizing the Company's vision for bringing an FDA-cleared product to the market.

About Know Labs, Inc.

Know Labs, Inc. is a public company whose shares trade on the NYSE American Exchange under the stock symbol 'KNW.' The Company's technology uses spectroscopy to direct electromagnetic energy through a substance or material to capture a unique molecular signature. The Company refers to its technology as Bio-RFID. The Bio-RFID technology can be integrated into a variety of wearable, mobile or bench-top form factors. This patented and patent-pending technology makes it possible to effectively identify and monitor analytes, a task that previously could only be performed by invasive and/or expensive and time-consuming lab-based tests. The first application of our Bio-RFID technology will be in a product marketed as a non-invasive glucose monitor. The device will provide the user with accessible and affordable real-time information on blood glucose levels. This product will require U.S. Food and Drug Administration clearance prior to its introduction to the market.

Safe Harbor Statement

This release contains statements that constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 and Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements appear in a number of places in this release and include all statements that are not statements of historical fact regarding the intent, belief or current expectations of Know Labs, Inc., its directors or its officers with respect to, among other things: (i) financing plans; (ii) trends affecting its financial condition or results of operations; (iii) growth strategy and operating strategy and (iv) performance of products. You can identify these statements by the use of the words 'may,' 'will,' 'could,' 'should,' 'would,' 'plans,' 'expects,' 'anticipates,' 'continue,' 'estimate,' 'project,' 'intend,' 'likely,' 'forecast,' 'probable,' 'potential,' and similar expressions and variations thereof are intended to identify forward-looking statements. Investors are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, many of which are beyond Know Labs, Inc.'s ability to control, and actual results may differ materially from those projected in the forward-looking statements as a result of various factors. These risks and uncertainties also include such additional risk factors as are discussed in the Company's filings with the U.S. Securities and Exchange Commission, including its Annual Report on Form 10-K for the fiscal year ended September 30, 2022, Forms 10-Q and 8-K, and in other filings we make with the Securities and Exchange Commission from time to time. These documents are available on the SEC Filings section of the Investor Relations section of our website at http://www.knowlabs.co. The Company cautions readers not to place undue reliance upon any such forward-looking statements, which speak only as of the date made. The Company undertakes no obligation to update any forward-looking statement to reflect events or circumstances after the date on which such statement is made.

Contact:

Laura Bastardi

Email: Knowlabs@matternow.com

Tel: (603) 494-6667

(C) 2023 Electronic News Publishing, source ENP Newswire

Read the original post:
Know Labs Demonstrates Improved Accuracy of Machine Learning Model for Non-Invasive Glucose Monitor - Marketscreener.com

Research hotspots of deep learning in critical care medicine | JMDH – Dove Medical Press

Introduction

Deep learning (DL) is a subset of machine learning (ML) built from complex algorithms that are inspired by the organization of the human brain, with many discrete nodes or neurons, and that can identify important patterns or features in a dataset.1 DL and ML refer to two different technologies, with DL considered an advanced form of ML. Convolutional neural networks, long short-term memory networks, recurrent neural networks, transformer models, and attention mechanisms are all common DL technologies.2 ML techniques are a collection of mathematical and statistical methods such as the support vector machine, random forest, and K-nearest neighbors,3 whereas DL algorithms are specialized techniques that form a subset of ML.1 The most important difference between the two approaches is that ML requires a feature engineering process that eliminates unnecessary variables and pre-selects only those that will be used for learning.4 This process has the disadvantage that experienced professionals must pre-select the critical variables. Conversely, DL algorithms overcome this shortfall: they have built-in mechanisms for assessing and addressing the root of any inaccuracies and do not require such guidance.4 To interpret an image, for example, an ML technique such as the support vector machine requires the image to be deconstructed into specific features, such as brightness, curvature, and sharpness, which are then fed into the algorithm as digital inputs.5 This feature extraction process is completely different in DL algorithms. DL applies a series of convolutional filters to the image and, by varying the weights of those filters, the DL algorithm can be trained to recognize a specific type of image and ultimately extract the relevant features from the image itself.6
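
The contrast described above can be made concrete with a toy example: a hand-chosen summary statistic stands in for ML-style feature engineering, while a convolutional layer with learnable weights stands in for DL-style feature extraction. The snippet is a minimal illustration only, with arbitrary shapes and values.

```python
# Toy contrast between a hand-crafted feature and a learnable convolutional filter.
import torch
import torch.nn as nn

image = torch.rand(1, 1, 28, 28)   # one grayscale image (batch of 1)

# "Feature engineering": a fixed, human-chosen summary statistic.
brightness = image.mean()

# Deep learning: a convolutional layer whose 3x3 filter weights are learned during
# training, so the network itself decides which local patterns to extract.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
feature_maps = conv(image)         # shape: (1, 8, 28, 28)

print(brightness.item(), feature_maps.shape)
```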

Patients in the critical care medicine (CCM) field usually have aggressive and complex conditions, complicated medical record data, and clear trends toward personalized treatment, resulting in a huge need for automated and reliable health information processing and analysis.7 DL algorithms and models enable machines to mimic human activities such as seeing, hearing, and thinking, helping to solve many complex pattern recognition challenges, and such features seem likely to bring breakthroughs to CCM. In addition to its well-known use in image processing and analysis, DL is also widely used in the medical field for health-record analysis, clinical diagnosis, health monitoring, personalized medicine, and drug development.7 Managing many diseases in the field of CCM, such as sepsis, acute respiratory distress syndrome, and severe stroke, increasingly draws on DL technologies.8–10 Currently, various algorithms and models based on DL have been widely used in the management of common diseases in CCM (such as sepsis and acute respiratory distress syndrome), including early detection of diseases, severity score estimation, facilitating ICU liberation through early successful extubation, early mobility, and survival prediction.11–15 Accumulating evidence suggests that the application of DL technology promotes intelligence in CCM, not only effectively improving the quality of medical care, but also helping to increase the efficiency of clinicians.16

Publications on the application of DL in CCM have continued to grow in recent years. The continuous increase in publications is positive for the updating of knowledge, but it also poses a challenge for researchers, as the process of acquiring knowledge makes it difficult to avoid the heavy work of combing through publications.17 As a quantitative research method used to analyze the scholarly characteristics of the literature in certain scientific fields, bibliometrics helps researchers to grasp research hotspots and trends in their fields of interest and to predict their prospects.18 Therefore, the present bibliometric and visualized study was performed to provide a comprehensive overview of current hotspots and future trends in the use of DL in CCM, to showcase the contributions of leading countries, authorities, and prominent scholars, and to provide clues to potential future collaborations and research directions.

Data were obtained from the Web of Science on 15 March 2023 using the following strategy: TS=(critical care OR critically ill OR intensive care OR ICU OR high dependency) AND TS=(deep learning OR convolutional neural network). It should be noted that TS=Topic. The inclusion criteria were as follows: (a) literature published between 2012 and 2022; (b) articles as the type of literature; and (c) literature published in English. Duplicate publications were excluded. A manual check of the included literature was independently performed by two authors (clinicians).

Full records and cited references of the obtained publications were downloaded in BibTex or txt formats for further analysis. Information on the publications, including title, abstract, key words, country, author, institution, source, count of citations, cited references, and the 2021 IF of the top 10 core journals as well as the H-index of the top 10 most productive authors, was recorded. Data extraction was conducted by two independent authors.

Bibliometric analysis was performed with the Bibliometrix package in R software (4.2.2), Microsoft Excel 2019, VOSviewer (1.6.18), and CiteSpace (5.8.R3). In the present study, publication trends in the literature were analyzed using Microsoft Excel 2019. A polynomial-fitting curve in Microsoft Excel 2019 was applied to predict the number of future publications. National collaborative networks, author collaborative networks, institutional collaborative networks, and journal publication trends were constructed using the Bibliometrix package. Furthermore, co-occurrence of keywords and the co-citation relationships of references were analyzed using VOSviewer. Finally, CiteSpace was used to analyze keyword bursts.

A total of 1708 articles on DL in CCM were published in the past 11 years. Overall, there was an upward trend in the number of publications (Figure 1A), rising from 3 in 2012 to 651 in 2022. Notably, research activity picked up sharply from 2017 onward, with 95.67% (1634/1708) of the articles published during the past six years.

Figure 1 (A) The number of publications and annual citations over time. (B) Curve fitting of the total annual growth trend of publications (R2 = 0.9773).

Furthermore, the polynomial-fitting curve suggested that research in this area will continue to grow, with an R2 value of 0.9773 (Figure 1B).
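
For readers who want to reproduce this kind of trend analysis, the sketch below re-creates the curve fitting and the R2 calculation with numpy rather than Excel. Only the 2012 and 2022 counts are taken from the text; the intermediate yearly counts are placeholders, so the printed value will differ from the study's 0.9773 unless the actual data are substituted.

```python
# Quadratic trend fit and R^2, mirroring the Excel polynomial-fitting step.
import numpy as np

years = np.arange(2012, 2023)
counts = np.array([3, 6, 10, 18, 30, 55, 120, 210, 340, 520, 651])  # placeholders except 2012 and 2022

coeffs = np.polyfit(years - 2012, counts, deg=2)   # fit a second-degree polynomial
fitted = np.polyval(coeffs, years - 2012)

ss_res = np.sum((counts - fitted) ** 2)
ss_tot = np.sum((counts - counts.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
```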

Publications on this topic were contributed by 62 different countries/regions. Table 1 shows the top ten most productive countries/regions. China ranked first with 804 publications, followed by the USA with 420 publications. The USA had the highest total citations and the highest average citations per publication. Furthermore, China maintained close ties with the USA, Korea, and France, whereas the USA had strong cooperative bonds with China, England, and Australia (Figure 2).

Table 1 The Top 10 Publishing Countries/Regions

Figure 2 The international collaboration between countries/regions.

Notes: The different colors of arcs represent different countries/regions, and the larger the arc area, the wider the international cooperation of the country/region. Line thickness between countries/regions reflects the intensity of the closeness.

For the analysis of institutions, 2379 institutions contributed to this field. The top ten most productive institutions were the Chinese Academy of Sciences, Harvard University, the University of Chinese Academy of Sciences, the University of California System, Wuhan University, Harvard Medical School, Tsinghua University, the Centre National de la Recherche Scientifique (CNRS), Shanghai Jiao Tong University, and the Harbin Institute of Technology (Table 2). Notably, the top 10 institutions were from China (n = 6), the USA (n = 3), and France (n = 1). Furthermore, the University of Chinese Academy of Sciences, Harvard University, and Tsinghua University have more connections to other affiliations (Figure 3).

Table 2 The Top 10 Publishing Affiliations

Figure 3 Collaboration between affiliations.

Notes: Each circle represents an affiliation, and the larger the circle, the wider the cooperative relationship. Affiliations with frequent cooperative relationships are clustered into plates of the same color. Line thickness between affiliations reflects the intensity of the closeness.

Over the last ten years, a total of 6211 authors have made significant contributions to the field. Based on publication counts, Wang Y was the most productive author (n = 37), followed by Liu Y, Li Y, Wang J, Wang L, Zhang J, Zhang Y, Li L, Liu J, and Yang Y (Table 3). Furthermore, Zhang Y had the highest total and average citations, whereas Li Y had the highest H-index. Interestingly, the top ten most productive authors are all from China. These findings agree with the national output totals mentioned above, showing that China is leading the way in this area. In addition, Wang Y, Wang X, Wang J, and Zhang Y have more connections to other authors (Figure 4).

Table 3 The Top 10 Publishing Authors

Figure 4 Collaboration between authors.

Notes: Each circle represents an author, and the larger the circle, the wider the cooperative relationship. Authors with frequent cooperative relationships are clustered into plates of the same colour. Line thickness between authors reflects the intensity of the closeness.

A total of 260 journals have made contributions to this field. As shown in Table 4, IEEE Access, Scientific Reports, and Remote Sensing were the top three. In terms of journal impact, IEEE Transactions on Geoscience and Remote Sensing ranked first, with an IF of 8.125, followed by the Journal of Biomedical Informatics (IF = 8.000) and Computers in Biology and Medicine (IF = 6.698). These journals are therefore valuable resources for research in this field. Additionally, over the last five years, the top five most active journals have displayed a sharp rise in the number of annual publications (Figure 5).

Table 4 The Top 10 Most Active Journals

Figure 5 Publications of the top 5 most active journals over time.

A total of 59,659 co-cited references were identified. After setting the minimum number of citations to 30, 52 of them were selected to form the cited reference network, which contained four clusters (Figure 6). Cluster 1 (in red) primarily centered on model development and validation, cluster 2 (in green) primarily centered on the application of the models, and cluster 3 (in blue) and cluster 4 (in yellow) primarily centered on the application of models in the medical field. These results suggest that DL-based model development and applications are the foundation of current research in this area. All literature included in Figure 6 is provided in Supplementary Material 1.

Figure 6 Network visualization map of co-citation references.

Notes: Cluster 1 (in red), cluster 2 (in green), cluster 3 (in blue), cluster 4 (in yellow). The lines between the circles represent the co-citation relationship. The thickness and number of connections between the nodes indicate the strength of links between references.

A total of 4864 keywords were identified. Deep learning, machine learning, and feature extraction were the keywords with the highest frequency (Figure 7A). After setting the minimum number of occurrences to ten, 84 of them were selected to form the keyword network, which contained six clusters (Figure 7A). Cluster 1 (in red) primarily centered on model development and validation, cluster 2 (in green) and cluster 4 (in yellow) primarily centered on the extraction of clinical characteristics of critically ill patients, cluster 3 (in blue) primarily centered on the prognosis of critically ill patients, cluster 5 (in purple) primarily centered on monitoring changes in the condition of critically ill patients, and cluster 6 (in light blue) primarily centered on big data analysis in CCM. It is worth noting that adaptation models, computed tomography, and electronic medical records were recent emerging hot topics (Figure 7B). These topics offer potential research directions for DL in CCM in the future.
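
The co-occurrence counting that underlies this kind of keyword network can be approximated in a few lines: tally how often pairs of author keywords appear together across papers, then keep only the pairs above a minimum threshold. The keyword lists in the sketch below are invented examples; only the threshold of ten occurrences mirrors the analysis described here, so with this toy input no pair survives the cutoff.

```python
# Rough sketch of keyword co-occurrence counting (the step VOSviewer automates).
from collections import Counter
from itertools import combinations

papers_keywords = [  # invented examples of per-paper author keyword lists
    ["deep learning", "sepsis", "mortality prediction"],
    ["deep learning", "machine learning", "feature extraction"],
    ["machine learning", "sepsis", "electronic medical records"],
]

pair_counts = Counter()
for keywords in papers_keywords:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

MIN_OCCURRENCES = 10  # the threshold used in the analysis above
network_edges = {pair: n for pair, n in pair_counts.items() if n >= MIN_OCCURRENCES}
print(pair_counts.most_common(5), len(network_edges))
```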

Figure 7 (A) Network map of keywords on DL in CCM. (B) Visualization map of top 15 keywords with the strongest citation bursts.

Notes: Cluster 1 (in red), cluster 2 (in green), cluster 3 (in blue), cluster 4 (in yellow), cluster 5 (in purple), cluster 6 (in light blue). The node size reflects the co-occurrence frequencies and the link indicates the co-occurrence relationship. The thickness of the link is proportional to the number of times two keywords co-occur. The blue bars indicate that the keywords have been published and the red bars indicate citation burstness.

The present study used a bibliometric approach to analyze publications of DL in CCM by exploring the expansion of research interest, publication output, top nations, international cooperation, top institutions, authoritative scholars, preferred journals, keywords, and citation analysis.

Publications on DL in CCM have grown steadily in recent years since the concept of DL was introduced in 2006.19 In 2018, there was a significant increase in interest in DL in CCM, which marked the turning point. Interest in DL in general medicine has been gradually increasing since 2012, but in the field of CCM, it was significantly delayed by six years.20 Indeed, DL models are not commonly used in daily CCM practice. The reason is that few models have external validation, clinical interpretability, and high predictability.21,22 Furthermore, most models are developed in a single institution and do not perform well when applied to other institutions.23 There are also limited venues in which to incorporate the models. Ideally, they would be embedded in electronic health record systems, but this is challenging to implement due to the limitations of these systems and the corporate disincentives to do so.24,25 In addition, privacy issues are one of the challenges faced by the adoption of artificial intelligence in the medical field.26 Based on the above evidence, we therefore believe that the safety and accountability of DL models applied to critically ill patients have not yet gained enough trust from the public. Since 2016, with the continuous development of DL technologies and the emergence of DL models that pay attention to multi-center data sources and external validation, the accuracy and clinical adaptability of the models have been strengthened, which may help to establish patient confidence in DL models.27,28 The emergence of new DL technologies, such as convolutional neural networks, long short-term memory networks, recurrent neural networks, transformer models, and attention mechanisms, offers previously unheard-of possibilities for disease management, diagnosis, and prediction. In addition, MIMIC and eICU, two large public intensive care databases launched in 2016 and 2018, respectively, became available to researchers.29,30 In particular, the release of the MIMIC-III database was a large contributing factor to the development of DL models in CCM. Types of DL models developed based on MIMIC-III typically include diagnostic models, disease severity score models, real-time monitoring models, hospital length of stay prediction models, readmission prediction models, survival prediction models, and automated adverse drug reaction reporting models.12,15,31–35 Common conditions in the field of CCM that these DL models are applied to manage include sepsis, acute respiratory distress syndrome, acute kidney injury, and cardiovascular disease.31–35 The common variables used in these models can be divided into five categories: history information, admission information, vital signs, laboratory results, and arterial blood gas analysis.15 However, MIMIC-III is a single-center database, spanning 2001 to 2012, of electronic medical records of patients admitted to the ICU at Beth Israel Deaconess Medical Center, an academic teaching hospital of Harvard Medical School in Boston, USA.36 Therefore, the establishment of a continuously updated, multicenter CCM database of admitted ICU patient data, or even of electronic medical records of ICU patients admitted globally, would be more conducive to the advancement of the field of CCM as well as the application of DL models.

After analyzing the distribution of publications across nations, it was discovered that high-income nations predominate in DL research on CCM. Notably, the ten most productive countries all fall within the top 30 countries in the world by GDP, suggesting that the number of publications is closely linked to the economic power of each country. This finding is consistent with the bibliometric findings of many other medical disciplines.37–39 Furthermore, over 70% of the publications came from the USA and China, indicating that these two nations are the main contributors to DL in CCM research. Additionally, the highest citation counts are also found in these two nations, though China has a lower average citation rate per article than the USA. Analysis of collaborative networks showed that the USA and China are the countries with the most collaborative relationships. A stable and adaptable policy is a prerequisite for ensuring that international collaborations are successfully achieved. Adequate research funding, a wide range of research collaborators, and a significant proportion of visiting scholars all contribute to improved international partnerships. Furthermore, seven of the top ten most productive institutions are among the top 100 universities in the world, indicating that the use of DL in CCM has gained the attention of leading universities. Researchers may be encouraged to consider conducting joint research with these top institutions in the USA or China, or applying to them for educational programs or visiting scholarships.

Notably, current articles on this topic are not in the top-tier clinical journals in this space, such as Critical Care Medicine, JAMA, or the New England Journal of Medicine. This is likely because there is still a gap between the DL models and clinical applicability. It needs to be acknowledged that DL has the advantage of responding to the challenges faced by CCM. DL algorithms and models enable machines to mimic human activities such as vision, hearing, and thinking and can automate and reliably process and analyze health information.7 Therefore, various algorithms and models based on DL have been widely used in the management of common diseases in CCM, including early detection of diseases, severity score estimation, facilitating ICU liberation through early successful extubation, early mobility, and survival prediction.11–15 Accumulating evidence suggests that the application of DL technology promotes intelligence in CCM, not only effectively improving the quality of medical care, but also helping to increase the efficiency of clinicians.16 DL implementation will support clinicians in the decision-making process. Benefits include earlier diagnoses, detection of subclinical deterioration, and the generation of new medical knowledge.7 To improve the clinical utility of DL models in CCM, the following challenges need to be addressed. First, DL models that lack external validation automatically move away from clinical applicability. Researchers should be encouraged to develop external validation from a multidimensional perspective to continuously improve the scientific validity of the model and enhance its predictive performance, which in turn will promote the clinical interpretability and applicability of DL models in the CCM field.21 Models developed in a single institution are not always applicable to other institutions. Therefore, constructing models based on multicenter shared data should be advocated to increase the breadth of their applicability.23 If achievable, the creation of a global shared database of electronic medical records in the field of CCM is expected to bring a major breakthrough in this area.36 Furthermore, privacy protection through policies remains the cornerstone of health data, with the addition of special safeguards for personal health data addressed by the new innovative principles of the General Data Protection Regulation.40

Based on the authors' keywords in the identified categories, CCM-related DL research mainly focused on model development and validation, the extraction of clinical characteristics of critically ill patients, the prognosis of critically ill patients, monitoring changes in the condition of critically ill patients, and big data analysis in CCM. Furthermore, the primary disease domains addressed in CCM-related DL research were COVID-19, ARDS, sepsis, cardiac arrest, and acute kidney injury. These common diseases in CCM are the usual targets of DL algorithms. It is worth noting that adaptation models, computed tomography, and electronic medical records were recent emerging hot topics. These topics offer potential research directions for DL in CCM in the future. DL has demonstrated potential applications in various areas of CCM. However, the development and implementation of DL in CCM remains challenging. Firstly, the absence of external validation and of prospective assessment to confirm the repeatability of DL protocols both limit the utility of DL in clinical practice.41 Secondly, implementing artificial intelligence models in clinical practice may entail high initial costs, which is a significant barrier to implementing artificial intelligence (AI) in low- and middle-income countries.42 Furthermore, the World Health Organization released guidelines on the ethics and management of AI for health in 2021, emphasizing the key role of privacy, transparency, informed consent, and regulation of data protection frameworks.43 Thus, the legal protection of patient privacy also limits the current widespread use of AI in medicine.42

The application of AI is beneficial for the advancement of medicine.44,45 This study provides a systematic review of hot spots and trends in CCM-related DL research, highlights leading countries and institutions, reveals potential partnership networks, and provides insights into the direction of future research. However, limitations should be acknowledged. The COVID-19 pandemic has impacted various industries around the world, including the DL field and the CCM field. From the outset of the COVID-19 pandemic, it was clear that the greatest challenge was the unavailability of fully equipped and staffed ICU beds.46 With the expansion of the Internet, the amount of content on COVID-19 has exploded in the last three years. In addition to fact-based content, a large amount of COVID-19 content is being manipulated, leading people to spend more time online and become more invested in this false content.47 Using DL to identify such misleading information early and potentially prevent its spread has therefore also attracted attention.47 The COVID-19 pandemic may thus introduce a publication bias into the publication trends of CCM-related DL research, which may make the research hotspots and trends identified in this study less stable. Furthermore, given that our search strategy was constructed based on Topic terms, it may have missed some relevant literature.

Hot spots in research on the application of DL in CCM have focused on classifying disease phenotypes, predicting early signs of clinical deterioration, and forecasting disease progression, prognosis, and death. Extensive collaborative research to improve the maturity and robustness of the model remains necessary to make DL-based model applications sufficiently compelling for conventional CCM practice.

This work was supported by Science and Technology Development Fund of Hospital of Chengdu University of Traditional Chinese Medicine (No.21ZL08).

The authors report no conflicts of interest in this work.

1. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60:84–90. doi:10.1145/3065386

2. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Columbus, OH; 2014.

3. Goecks J, Jalili V, Heiser LM, Gray JW. How machine learning will transform biomedicine. Cell. 2020;181(1):92–101. doi:10.1016/j.cell.2020.03.022

4. Shinde P, Shah S. A review of machine learning and deep learning applications; 2019.

5. Irisson JO, Ayata SD, Lindsay DJ, Karp-Boss L, Stemmann L. Machine learning for the study of plankton and marine snow from images. Ann Rev Mar Sci. 2022;14:277–301. doi:10.1146/annurev-marine-041921-013023

6. Wang S, Yang DM, Rong R, Zhan X, Xiao G. Pathology image analysis using segmentation deep learning algorithms. Am J Pathol. 2019;189(9):1686–1698. doi:10.1016/j.ajpath.2019.05.007

7. Egger J, Gsaxner C, Pepe A, et al. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed. 2022;221:106874. doi:10.1016/j.cmpb.2022.106874

8. Zhang Z, Pan Q, Ge H, Xing L, Hong Y, Chen P. Deep learning-based clustering robustly identified two classes of sepsis with both prognostic and predictive values. EBioMedicine. 2020;62:103081. doi:10.1016/j.ebiom.2020.103081

9. Reamaroon N, Sjoding MW, Gryak J, Athey BD, Najarian K, Derksen H. Automated detection of acute respiratory distress syndrome from chest X-rays using directionality measure and deep learning features. Comput Biol Med. 2021;134:104463. doi:10.1016/j.compbiomed.2021.104463

10. Sharma N, Simmons LH, Jones PS, et al. Motor imagery after subcortical stroke: a functional magnetic resonance imaging study. Stroke. 2009;40(4):1315–1324. doi:10.1161/STROKEAHA.108.525766

11. Lauritsen SM, Kalør ME, Kongsgaard EL, et al. Early detection of sepsis utilizing deep learning on electronic health record event sequences. Artif Intell Med. 2020;104:101820. doi:10.1016/j.artmed.2020.101820

12. Aşuroğlu T, Oğul H. A deep learning approach for sepsis monitoring via severity score estimation. Comput Methods Programs Biomed. 2021;198:105816. doi:10.1016/j.cmpb.2020.105816

13. Jia Y, Kaul C, Lawton T, Murray-Smith R, Habli I. Prediction of weaning from mechanical ventilation using convolutional neural networks. Artif Intell Med. 2021;117:102087. doi:10.1016/j.artmed.2021.102087

14. Yeung S, Rinaldo F, Jopling J, et al. A computer vision system for deep learning-based detection of patient mobilization activities in the ICU. NPJ Digit Med. 2019;2:11. doi:10.1038/s41746-019-0087-z

15. Tang H, Jin Z, Deng J, et al. Development and validation of a deep learning model to predict the survival of patients in ICU. J Am Med Inform Assoc. 2022;29(9):1567–1576. doi:10.1093/jamia/ocac098

16. Datta R, Singh S. Artificial intelligence in critical care: it's about time! Med J Armed Forces India. 2021;77(3):266–275. doi:10.1016/j.mjafi.2020.10.005

17. Dan F, Kudu E. The evolution of cardiopulmonary resuscitation: global productivity and publication trends. Am J Emerg Med. 2022;54:151–164. doi:10.1016/j.ajem.2022.01.071

18. Niu B, Hong S, Yuan J, Peng S, Wang Z, Zhang X. Global trends in sediment-related research in earth science during 1992–2011: a bibliometric analysis. Scientometrics. 2013;98(1):511–529. doi:10.1007/s11192-013-1065-x

19. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444. doi:10.1038/nature14539

20. Guo Y, Hao Z, Zhao S, Gong J, Yang F. Artificial intelligence in health care: bibliometric analysis. J Med Internet Res. 2020;22(7):e18228. doi:10.2196/18228

21. Ozrazgat-Baslanti T, Loftus TJ, Ren Y, Ruppert MM, Bihorac A. Advances in artificial intelligence and deep learning systems in ICU-related acute kidney injury. Curr Opin Crit Care. 2021;27(6):560–572. doi:10.1097/MCC.0000000000000887

22. Ching T, Himmelstein DS, Beaulieu-Jones BK, et al. Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface. 2018;15(141):20170387.

23. Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018;19(6):1236–1246. doi:10.1093/bib/bbx044

24. Wang L, Tong L, Davis D, Arnold T, Esposito T. The application of unsupervised deep learning in predictive models using electronic health records. BMC Med Res Methodol. 2020;20(1):37. doi:10.1186/s12874-020-00923-1

25. Golas SB, Shibahara T, Agboola S, et al. A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data. BMC Med Inform Decis Mak. 2018;18(1):44. doi:10.1186/s12911-018-0620-z

26. Harvey HB, Gowda V. Regulatory issues and challenges to artificial intelligence adoption. Radiol Clin North Am. 2021;59(6):1075–1083. doi:10.1016/j.rcl.2021.07.007

27. Zhong L, Dong D, Fang X, et al. A deep learning-based radiomic nomogram for prognosis and treatment decision in advanced nasopharyngeal carcinoma: a multicentre study. EBioMedicine. 2021;70:103522. doi:10.1016/j.ebiom.2021.103522

28. Hiremath A, Shiradkar R, Fu P, et al. An integrated nomogram combining deep learning, Prostate Imaging-Reporting and Data System (PI-RADS) scoring, and clinical variables for identification of clinically significant prostate cancer on biparametric MRI: a retrospective multicentre study. Lancet Digit Health. 2021;3(7):e445–e454. doi:10.1016/S2589-7500(21)00082-0

29. Pollard TJ, Johnson AE, Raffa JD, Celi LA, Mark RG, Badawi O. The eICU collaborative research database, a freely available multi-center database for critical care research. Sci Data. 2018;5:180178. doi:10.1038/sdata.2018.178

30. Johnson AE, Pollard TJ, Shen L, et al. MIMIC-III, a freely accessible critical care database. Sci Data. 2016;3:160035. doi:10.1038/sdata.2016.35

31. Guo F, Zhu X, Wu Z, Zhu L, Wu J, Zhang F. Clinical applications of machine learning in the survival prediction and classification of sepsis: coagulation and heparin usage matter. J Transl Med. 2022;20(1):265. doi:10.1186/s12967-022-03469-6

32. Alfieri F, Ancona A, Tripepi G, et al. External validation of a deep-learning model to predict severe acute kidney injury based on urine output changes in critically ill patients. J Nephrol. 2022;35(8):2047–2056. doi:10.1007/s40620-022-01335-8

33. Wu J, Lin Y, Li P, Hu Y, Zhang L, Kong G. Predicting Prolonged Length of ICU stay through machine learning. Diagnostics. 2021;11(12):2242. doi:10.3390/diagnostics11122242

34. Pishgar M, Theis J, Del Rios M, Ardati A, Anahideh H, Darabi H. Prediction of unplanned 30-day readmission for ICU patients with heart failure. BMC Med Inform Decis Mak. 2022;22(1):117. doi:10.1186/s12911-022-01857-y

35. McMaster C, Chan J, Liew DFL, et al. Developing a deep learning natural language processing algorithm for automated reporting of adverse drug reactions. J Biomed Inform. 2023;137:104265. doi:10.1016/j.jbi.2022.104265

36. Röösli E, Bozkurt S, Hernandez-Boussard T. Peeking into a black box, the fairness and generalizability of a MIMIC-III benchmarking model. Sci Data. 2022;9(1):24. doi:10.1038/s41597-021-01110-7

37. Qiang W, Xiao C, Li Z, et al. Impactful publications of critical care medicine research in China: a bibliometric analysis. Front Med. 2022;9:974025. doi:10.3389/fmed.2022.974025

38. Liu YX, Zhu C, Wu ZX, Lu LJ, Yu YT. A bibliometric analysis of the application of artificial intelligence to advance individualized diagnosis and treatment of critical illness. Ann Transl Med. 2022;10(16):854. doi:10.21037/atm-22-913

39. Cui X, Chang Y, Yang C, Cong Z, Wang B, Leng Y. Development and trends in artificial intelligence in critical care medicine: a bibliometric analysis of related research over the period of 2010–2021. J Pers Med. 2022;13(1):50. doi:10.3390/jpm13010050

40. Castelluccia C, Le Métayer D; European Parliamentary Research Service, Scientific Foresight Unit. Understanding algorithmic decision-making: opportunities and challenges; 2019. Available from: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU. Accessed January 27, 2023.

41. Yoon JH, Pinsky MR, Clermont G. Artificial intelligence in critical care medicine. Crit Care. 2022;26(1):75. doi:10.1186/s13054-022-03915-3

42. Caruso PF, Greco M, Ebm C, Angelotti G, Cecconi M. Implementing artificial intelligence: assessing the cost and benefits of algorithmic decision-making in critical care. Crit Care Clin. 2023;6072(1):1253.

43. World Health Organization. Ethics and governance of artificial intelligence for health; 2021. Available from: http://apps.who.int/bookorders. Accessed January 27, 2023.

44. Wu Q, Liu S, Zhang R, et al. ACU&MOX-DATA: a platform for fusion analysis and visual display acupuncture multi-omics heterogeneous data. Acupunct Herbal Med. 2023;3(1):59–62. doi:10.1097/HM9.0000000000000051

45. Jiang C, Qu H. In-line spectroscopy combined with multivariate analysis methods for endpoint determination in column chromatographic adsorption processes for herbal medicine. Acupunct Herbal Med. 2022;2(4):253–260. doi:10.1097/HM9.0000000000000035

46. Arabi YM, Myatra SN, Lobo SM. Surging ICU during COVID-19 pandemic: an overview. Curr Opin Crit Care. 2022;28(6):638–644. doi:10.1097/MCC.0000000000001001

47. Machová K, Mach M, Porezaný M. Deep learning in the detection of disinformation about COVID-19 in online space. Sensors. 2022;22(23):9319. doi:10.3390/s22239319

Read the rest here:
Research hotspots of deep learning in critical care medicine | JMDH - Dove Medical Press

Analyzing the Environmental Impact of Cloud Computing – Analytics Insight

Examine some of the initiatives that organizations and cloud providers may take to reduce the environmental impact of cloud computing.

In recent years, cloud computing has been an increasingly popular choice for organizations trying to simplify operations and save expenses. Organizations may minimize their dependency on on-premise hardware and software by accessing remote servers and computing resources, which can result in considerable savings in energy usage and carbon emissions. Yet, the shift to cloud computing has environmental consequences, and as more firms use this technology, it is critical to examine the possible environmental impact of this change.

Lower Energy Consumption: Cloud computing can result in significant energy savings. The average data center needs massive amounts of energy to power and cool its servers. Cloud providers, on the other hand, run their data centers with exceptional energy efficiency, employing innovative cooling systems and power management techniques.

Cloud companies also employ virtualization technology, enabling several users to share a single server and reducing the number of physical servers necessary. As a result, the overall carbon footprint of data centers decreases.

Carbon Footprint: According to the International Energy Agency (IEA), data centers' energy usage could treble by 2030, and the industry's carbon footprint could account for up to 3.2% of global greenhouse gas emissions. Several cloud companies are looking into renewable energy to reduce the carbon footprint of their data centers. Some businesses even construct their own renewable energy plants to power their data centers. Data centers are also being designed to be more energy-efficient, using features such as hot and cold aisle containment, air-side economization, and virtualization.

E-Waste: Cloud computing adds to the rising problem of electronic waste in several ways. First, as businesses migrate to the cloud, they routinely upgrade their IT equipment to keep up with the newest technical breakthroughs, which increases the volume of hardware being discarded. Second, operating cloud infrastructure consumes hardware at scale, generating e-waste from retired servers, data-center equipment, and networking gear.

If e-waste disposal is not handled adequately, the environmental repercussions can be severe. E-waste contains toxic substances such as lead, mercury, and cadmium that can contaminate the air, water, and soil. When these compounds are disposed of improperly, they can leach into the ground and water systems, polluting the local ecosystem and endangering human and animal health.

Green Cloud Computing: While cloud computing has been criticized for its environmental impact, some providers are taking deliberate steps to lessen their footprint, an approach known as green cloud computing or sustainable cloud computing.

By adopting renewable energy sources, cloud companies can reduce their environmental impact. Amazon, Microsoft Azure, and Google Cloud have all committed to using 100% renewable energy in their data centers. Amazon Web Services (AWS), for instance, declared in 2019 that it would reach 80% renewable energy by 2024 and 100% renewable energy by 2030.

Green cloud computing can also improve public relations. By publicizing their environmental efforts, companies can strengthen their reputation as good corporate citizens, set themselves apart from rivals, and gain an effective marketing angle.

Challenges in Implementing Green Cloud Computing: While green cloud computing has several potential benefits, it also faces obstacles that must be overcome. Renewable energy is becoming more cost-effective, but in some regions it is still more expensive than traditional energy sources, which makes the transition difficult for cloud providers, particularly smaller ones with limited funds. Adopting green practices may also require a significant reworking of existing processes, incurring costs and taking considerable time.

More here:
Analyzing the Environmental Impact of Cloud Computing - Analytics Insight

Read More..

Juniper Stock Slides on Cut to Outlook as Cloud Business Slows – Barron’s

Juniper Networks shares are losing ground after the infrastructure hardware maker issued disappointing financial guidance, citing weaker-than-expected demand from cloud computing customers.

While Juniper thinks it is a long-term beneficiary of the artificial intelligence software trend, it will take some time for that opportunity to develop. The shift appears to be hurting the company in the short run.

For the second quarter, Juniper (ticker: JNPR) posted revenue of $1.43 billion, up 13% from a year ago and slightly above the Wall Street consensus of $1.42 billion. Adjusted profits of 58 cents a share likewise were three cents above the Street consensus view of 55 cents.

We delivered better than expected results during the June quarter as our teams continued to execute well and we benefited from improved supply, CEO Rami Rahim said in a statement. We were particularly encouraged by the momentum we experienced in our enterprise business, which not only had a record quarter, but also represented both our largest and fastest growing vertical for a third consecutive quarter.

But management's financial forecasts proved disappointing. For the third quarter, Juniper sees revenue of $1.385 billion, which would be down about 2% from the year-earlier period, falling short of the Street consensus of $1.48 billion. The company projects non-GAAP profits for the quarter of 54 cents a share, while the consensus call was for 62 cents.
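As a quick sanity check of that guidance arithmetic (only the $1.385 billion guide, the roughly 2% decline, and the $1.48 billion consensus come from the article; the year-earlier figure below is derived, not reported), a few lines of Python suffice:

# Back-of-the-envelope check of the guidance arithmetic reported above.
q3_guide = 1.385e9          # guided Q3 revenue, USD
yoy_change = -0.02          # "down about 2%" vs. the year-earlier period
implied_prior_year_q3 = q3_guide / (1 + yoy_change)
shortfall_vs_consensus = 1.48e9 - q3_guide
print(f"Implied year-earlier Q3 revenue: ~${implied_prior_year_q3/1e9:.2f}B")
print(f"Guide vs. Street consensus: ${shortfall_vs_consensus/1e6:.0f}M below")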

Analysts at Needham, Citi, Raymond James, and others all trimmed their financial forecasts and stock-price targets following the Thursday report.

In prepared remarks, CFO Kenneth Miller said orders were weaker than expected in the second quarter. He said he expects continued weakness in bookings in the third quarter, mostly from cloud customers, but to a lesser extent from telecom service providers.

We believe the softness in bookings is largely attributable to customer digestion of previously placed orders and certain projects being pushed to future periods, Miller said. We expect the macroeconomic environment to remain challenged, which may continue to impact customer spending. These factors are negatively impacting our revenue expectations.

The company reduced its forecast of full-year revenue growth to between 5% and 6%, from 9%.

On a conference call with analysts, Rahim said that the focus by cloud providers right now is on building their AI offerings, which he says might be a bit of a negative for Juniper for now, although he adds that it will help in the longer term.

“To the extent that AI is a new killer app that's going to be offered by cloud providers, it's going to result in an increase of traffic in areas where we have a significant footprint,” he says.

Junipers disappointing results are weighing on shares of other network equipment providers with significant cloud exposure. Particularly hard hit is Arista Networks (ANET), which gets almost half of its business from Microsoft (MSFT) and Meta Platforms (META).

Arista shares, which had already been under pressure this week from concerns that Microsoft and Meta spent less than expected on capital equipment in the June quarter, are down 6% on Friday. That increased the stock's four-day loss to 13%.

Write to Eric J. Savitz at eric.savitz@barrons.com

Read the original post:
Juniper Stock Slides on Cut to Outlook as Cloud Business Slows - Barron's

Read More..

The Rising Costs of Cloud Computing: Big Tech Responds with In … – Fagen wasanni

The shift to the cloud and the subsequent boom in the sector promised companies the ability to digitally transform themselves while keeping their data secure. However, the cost of this transformation is on the rise, particularly with the addition of generative AI tools.

Big Tech companies, burdened with hefty cloud bills, find themselves in a catch-22 situation. They cannot opt out for fear of being left behind, so they are seeking ways to trim spending. One approach being explored is the development of in-house AI chips to reduce costs.

IBM, at a semiconductor conference in San Francisco, said it is considering using its in-house AI chips, specifically the Artificial Intelligence Unit, to lower cloud computing costs. Other tech giants like Google, Microsoft, and Amazon are already designing their own AI chips in an effort to save money on their AI endeavors. Previously, the focus had been on specialized chips like graphics chips, but the demand is expanding.

Microsoft has accelerated its project to design its own AI chips, aiming to make them available internally and to OpenAI by next year. Google's AI chip engineering team has also moved into its Google Cloud unit to expedite progress.

It is not only cloud providers that face high costs; clients themselves are grappling with soaring prices. Some are weighing a shift to on-premises solutions, although building on-premises AI/ML resources is itself expensive. At the same time, enterprises are wary of falling behind competitors in AI/ML capabilities, so cloud solutions remain an attractive option for businesses that need to strengthen their infrastructure for AI/ML integration.

To maximize return on investment, clients must carefully consider which tools they actually need and compare the cost of building models against the cost of using existing ones. It's also important to avoid trying to do everything independently; instead, open-source and paid models can serve as a base and be fine-tuned on specific enterprise data. A rough way to frame that comparison is sketched below.
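One way to act on that advice is to put rough numbers on both routes before committing. The sketch below is a hypothetical comparison in Python; the GPU-hour rate, serving cost, request volume, and per-request price are all made-up assumptions, not quotes from any vendor, and a real analysis would also have to account for engineering time and data preparation.

# Hypothetical cost comparison: fine-tuning an open-source base model
# vs. paying per request for a hosted model. Every price and volume
# below is an assumption for illustration only.

def fine_tune_route(gpu_hours, gpu_hour_cost, monthly_serving_cost, months):
    return gpu_hours * gpu_hour_cost + monthly_serving_cost * months

def hosted_api_route(requests_per_month, cost_per_1k_requests, months):
    return requests_per_month / 1000 * cost_per_1k_requests * months

months = 12
own_model = fine_tune_route(gpu_hours=500, gpu_hour_cost=3.0,
                            monthly_serving_cost=800, months=months)
hosted = hosted_api_route(requests_per_month=2_000_000,
                          cost_per_1k_requests=2.0, months=months)
print(f"Fine-tuned open-source route: ~${own_model:,.0f} over {months} months")
print(f"Hosted API route: ~${hosted:,.0f} over {months} months")

With these assumed numbers the fine-tuned route comes out cheaper at high request volumes, but the ranking flips easily as the assumptions change, which is exactly why the comparison is worth doing.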

Cloud providers are also attempting to lower prices to attract more customers. Amazon Web Services (AWS), for example, aims to lower the cost of training and operating AI models.

As the demand for cloud services continues to increase, fueled by AI workloads, a Gartner report predicts that AI will be one of the top factors driving IT infrastructure decisions through 2023.

In this landscape, businesses may opt to outsource cloud management and maintenance to third-party firms or tools. A hybrid approach, in which on-premises AI hardware handles sensitive data processing and latency-sensitive applications while cloud services are used for data storage, distributed training, and model deployment, also allows for cost optimization; a simple version of that placement rule is sketched below.
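As a rough illustration of that split, the placement rule can be written down as a small Python function. The workload names, the 50 ms latency threshold, and the routing logic below are assumptions made for the example, not a prescription from the article or from any provider.

# Minimal sketch of the hybrid placement rule described above. The workload
# attributes and thresholds are assumptions chosen for illustration only.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    handles_sensitive_data: bool
    max_latency_ms: float        # latency budget the workload must meet

LATENCY_THRESHOLD_MS = 50        # assumed cutoff for "latency-sensitive"

def place(workload: Workload) -> str:
    """Route sensitive or latency-critical work on-premises; everything else to cloud."""
    if workload.handles_sensitive_data or workload.max_latency_ms < LATENCY_THRESHOLD_MS:
        return "on-premises AI hardware"
    return "cloud (storage, distributed training, model deployment)"

for w in [Workload("patient-record inference", True, 200.0),
          Workload("real-time fraud scoring", False, 20.0),
          Workload("weekly model retraining", False, 10_000.0)]:
    print(f"{w.name}: {place(w)}")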

Given the bright trajectory of AI, the cloud industry is expected to continue experiencing significant growth and benefit from it.

Excerpt from:
The Rising Costs of Cloud Computing: Big Tech Responds with In ... - Fagen wasanni

Read More..

Today's Cache | Twitter's new name has legal baggage; Generative AI boom complicates cloud computing; Adobe's Figma deal may be investigated – The Hindu

(This article is part of Today's Cache, The Hindu's newsletter on emerging themes at the intersection of technology, innovation and policy. To get it in your inbox, subscribe here.)

The social media platform known as Twitter will be renamed X, announced owner Elon Musk this week. However, the letter is so widely used and trademarked that a lawsuit against Twitter is inevitable, according to legal experts. Hundreds of companies have active U.S. trademark registrations for the letter, including Twitter rival Meta and software giant Microsoft. A lawsuit could be initiated if any of the brands feel that Twitters rebranding to X could lead to confusion for its own company or services. Twitter (X), Microsoft, and Meta are yet to make public their legal actions, if any.

Musk has floated the idea of an everything app, like China's WeChat, where entertainment, socialisation, and global payments are all covered by one application. However, he admitted earlier in the month that Twitter has lost around half of its advertising revenue.

The boom in generative AI technologies and tools this year has made cloud computing services more expensive, but companies feel pressured to continue along this path. To contain costs, Big Tech firms such as Google, Microsoft and Amazon are designing their own AI chips, and IBM may soon join them. Among these, Amazon Web Services (AWS) is trying to draw in potential customers for its cloud services by emphasizing lower prices than its competitors.

Chipmakers and the cloud industry have benefited from the boom in business AI, but rising hardware costs and a spike in demand continue to challenge companies trying to keep pace with that growth. Some companies are choosing another route: letting a third party take over the management and maintenance of their cloud.

Figma, the cloud-based platform for designers, is used by major companies such as Zoom, Airbnb, and Coinbase. However, the EU may investigate Adobe's $20 billion deal to acquire it. Sources reported that an antitrust investigation into the deal is expected after a preliminary review.

Adobe and the European Commission did not issue official statements about the legal proceedings. While Adobe stressed that it was working with regulators worldwide, the European Commission had earlier spoken against the deal, citing harm to healthy competition in the interactive product design sector.

Here is the original post:
Today's Cache | Twitter's new name has legal baggage; Generative AI boom complicates cloud computing; Adobe's Figma deal may be investigated - The Hindu

Read More..