
UCF Researcher Clearing the Way for Smart Wireless Networks – UCF

Communicating unimpeded at distances near and far is a dream Murat Yuksel is hoping to realize.

His ongoing research, titled INWADE: INtelligent Waveform Adaptation with DEep Learning, and funded by the U.S. Air Force Research Laboratory, aims to get us closer to that dream by using machine learning to fine-tune high-frequency wireless networks and improve their efficacy.

The need to efficiently improve wireless signal quality will grow with the continuing proliferation of wireless networks for use in communications, says Yuksel, who is a UCF Department of Electrical and Computer Engineering professor within the College of Engineering and Computer Science.

"The emerging 5G-and-beyond wireless networks regularly use high-frequency signals that are very sensitive to the environment," he says. "They get blocked easily or attenuate quickly as they travel. Even the nature of the particles in the air affects them significantly. Deep learning enables us to learn the features of the environment. Hence, using these learned features enables us to better tune the wireless signals to attain higher data transfer rates."

INWADE is an automated means to design multiple communication blocks at the transmitter and the receiver jointly by training them as a combination of deep neural networks, benefitting wireless network users.

The development and study of the INWADE network was catalyzed by the need to keep pace with the spread and usage of wireless networks.

"Demand for wireless data transfers (such as cellular and Wi-Fi) is ever-increasing, and this causes more tussle over the sharing of the underlying natural resource, which is the radio spectrum that supports these wireless transfers," Yuksel says.

The deep learning aspect of the research is an emerging approach to delivering better wireless signals with minimal delay. The deep learning network will select the optimal waveform modifications and beam direction based on its perceived radio-frequency environment to manage the drones and nodes that provide wireless signals and modifications.

"Our work shows the feasibility of using deep reinforcement learning in real time to fine-tune millimeter-wave signals, which operate in part of the super-6 GHz bands," Yuksel says. "Further, the project aims to show that deep learning at the link level as well as the network level can work together to make the signals 'deep smart.'"

Harnessing existing wireless networking resources and navigating fixed obstacles or crowded airwaves quickly is an omnipresent concern, and it leads network managers to search for spectra at higher frequencies than the commonly used sub-6 GHz frequency bands, Yuksel says.

These super-6 GHz bands are difficult to access and maintain, so deep learning is something Yuksel is hoping to use to address that challenge.

"They operate with highly directional antennas, which makes them brittle to mobility/movement, and they cannot reach far as they are sensitive to particles in the air," Yuksel says. "Hence, we have to handle them very carefully to support high-throughput data transfers. This requires advanced algorithmic methods that can learn the environment and tune the super-6 GHz wireless signals to the operational environment."

Some initial findings regarding the viability of algorithms that may be implemented in INWADE were published at the International Federation for Information Processing Internet of Things Conference in late 2023.

The project started earlier in 2024 after receiving the first portion of the awarded $250,000 from the Air Force Research Laboratory in late 2023, but there already are promising findings, Yuksel notes.

"We have shown in a lab testbed that our deep learning methods can successfully solve some of the fundamental link-level problems, such as angle-of-arrival detection, or finding the direction of an incoming signal," he says. "This capability is very useful for several critical applications, for example determining location and movement direction. Next steps include demonstrating similar capability on drones and showing the feasibility of co-existence of deep learning at the link and network levels."

After developing and testing the INWADE framework, Yuksel foresees additional challenges and considerations that may require further study when implementing machine learning.

"A key theoretical endeavor is to understand if multiple machine learning agents can co-exist at different layers of the wireless network and still attain improved performance without jeopardizing each other's goals," he says.

Although Yuksel is the principal investigator for the research, he credits his students and collaborators for much of his success.

"My students help in performing the experiments and gathering results," he says. "I am indebted to them. We are also collaborating with Clemson, as they are working on designing new machine learning methods for the problems we are tackling."

Yuksel's work continues, and he is optimistic that his research will further benefit the greater scientific endeavor of making wireless networks accessible for all.

"The potential for this effort is huge," he says. "I consider the radio spectrum to be a critical natural resource, like water or clean air. As machine learning methods are advancing, being able to use them for better sharing the spectrum and solving critical wireless challenges is very much needed."

Distribution A. Approved for public release: Distribution Unlimited: AFRL-2024-2894 on 17 Jun 2024

Researcher's Credentials

Yuksel is a professor in UCF's Department of Electrical and Computer Engineering and served as its interim chair from 2021 to 2022. He received his doctoral degree in computer science from Rensselaer Polytechnic Institute in 2002. Yuksel's research interests include wireless systems, optical wireless, and network management, and he has multiple ongoing research projects funded by the National Science Foundation.


Improve productivity when processing scanned PDFs using Amazon Q Business | Amazon Web Services – AWS Blog

Amazon Q Business is a generative AI-powered assistant that can answer questions, provide summaries, generate content, and extract insights directly from the content in digital as well as scanned PDF documents in your enterprise data sources, without needing to extract the text first.

Customers across industries such as finance, insurance, healthcare and life sciences, and more need to derive insights from various document types, such as receipts, healthcare plans, or tax statements, which are frequently in scanned PDF format. These document types often have a semi-structured or unstructured format, which requires processing to extract the text before indexing with Amazon Q Business.

The launch of scanned PDF document support with Amazon Q Business can help you seamlessly process a variety of multi-modal document types through the AWS Management Console and APIs, across all supported Amazon Q Business AWS Regions. You can ingest documents, including scanned PDFs, from your data sources using supported connectors, index them, and then use the documents to answer questions, provide summaries, and generate content securely and accurately from your enterprise systems. This feature eliminates the development effort required to extract text from scanned PDF documents outside of Amazon Q Business, and improves the document processing pipeline for building your generative artificial intelligence (AI) assistant with Amazon Q Business.

In this post, we show how to asynchronously index and run real-time queries with scanned PDF documents using Amazon Q Business.

You can use Amazon Q Business for scanned PDF documents from the console, AWS SDKs, or AWS Command Line Interface (AWS CLI).

Amazon Q Business provides a versatile suite of data connectors that can integrate with a wide range of enterprise data sources, empowering you to develop generative AI solutions with minimal setup and configuration. To learn more, visit Amazon Q Business, now generally available, helps boost workforce productivity with generative AI.

After your Amazon Q Business application is ready to use, you can directly upload the scanned PDFs into an Amazon Q Business index using either the console or the APIs. Amazon Q Business offers multiple data source connectors that can integrate and synchronize data from multiple data repositories into a single index. For this post, we demonstrate two scenarios to use documents: one with the direct document upload option, and another using the Amazon Simple Storage Service (Amazon S3) connector. If you need to ingest documents from other data sources, refer to Supported connectors for details on connecting additional data sources.

In this post, we use three scanned PDF documents as examples: an invoice, a health plan summary, and an employment verification form, along with some text documents.

The first step is to index these documents. Complete the following steps to index documents using the direct upload feature of Amazon Q Business. For this example, we upload the scanned PDFs.

You can monitor the uploaded files on the Data sources tab. The Upload status changes from Received to Processing to Indexed or Updated, at which point the file has been successfully indexed into the Amazon Q Business data store. The following screenshot shows the successfully indexed PDFs.

The following steps demonstrate how to integrate and synchronize documents using an Amazon S3 connector with Amazon Q Business. For this example, we index the text documents.

When the sync job is complete, your data source is ready to use. The following screenshot shows all five documents (scanned and digital PDFs, and text files) are successfully indexed.

The following screenshot shows a comprehensive view of the two data sources: the directly uploaded documents and the documents ingested through the Amazon S3 connector.

Now let's run some queries with Amazon Q Business on our data sources.

Your documents might be dense, unstructured, scanned PDF document types. Amazon Q Business can identify and extract the most salient information-dense text from it. In this example, we use the multi-page health plan summary PDF we indexed earlier. The following screenshot shows an example page.

This is an example of a health plan summary document.

In the Amazon Q Business web UI, we ask, "What is the annual total out-of-pocket maximum mentioned in the health plan summary?"

Amazon Q Business searches the indexed document, retrieves the relevant information, and generates an answer while citing the source for its information. The following screenshot shows the sample output.

Documents might also contain structured data elements in tabular format. Amazon Q Business can automatically identify, extract, and linearize structured data from scanned PDFs to accurately resolve any user queries. In the following example, we use the invoice PDF we indexed earlier. The following screenshot shows an example.

This is an example of an invoice.

In the Amazon Q Business web UI, we ask, "How much were the headphones charged in the invoice?"

Amazon Q Business searches the indexed document and retrieves the answer with reference to the source document. The following screenshot shows that Amazon Q Business is able to extract bill information from the invoice.

Your documents might also contain semi-structured data elements in a form, such as key-value pairs. Amazon Q Business can accurately satisfy queries related to these data elements by extracting specific fields or attributes that are meaningful for the queries. In this example, we use the employment verification PDF. The following screenshot shows an example.

This is an example of an employment verification form.

In the Amazon Q Business web UI, we ask, "What is the applicant's date of employment in the employment verification form?" Amazon Q Business searches the indexed employment verification document and retrieves the answer with reference to the source document.

In this section, we show you how to use the AWS CLI to ingest structured and unstructured documents stored in an S3 bucket into an Amazon Q Business index. You can quickly retrieve detailed information about your documents, including their statuses and any errors that occurred during indexing. If you're an existing Amazon Q Business user who has indexed documents in various formats, such as scanned PDFs and other supported types, and you now want to reindex the scanned documents, complete the following steps:

"errorMessage": "Document cannot be indexed since it contains no text to index and search on. Document must contain some text."

If you're a new user and haven't indexed any documents, you can skip this step.

The following is an example of using the ListDocuments API to filter documents with a specific status and their error messages:
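The actual snippet is shown only as a screenshot in the post, so here is a minimal sketch of the client-side filtering step. The response shape, status value, and document IDs below are assumptions modeled on the error message quoted above, not the exact ListDocuments output; check the Amazon Q Business API reference for the real field names.

```python
import json

# Hypothetical ListDocuments response; field names are assumptions
# modeled on the error message shown in the post.
sample_response = {
    "documentDetailList": [
        {"documentId": "invoice.pdf", "status": "INDEXED"},
        {
            "documentId": "health-plan.pdf",
            "status": "DOCUMENT_FAILED_TO_INDEX",
            "error": {
                "errorMessage": "Document cannot be indexed since it contains "
                                "no text to index and search on. "
                                "Document must contain some text."
            },
        },
    ]
}

# Keep only the failed documents and surface their error messages.
failed = [
    (d["documentId"], d["error"]["errorMessage"])
    for d in sample_response["documentDetailList"]
    if d["status"] == "DOCUMENT_FAILED_TO_INDEX"
]

for doc_id, message in failed:
    print(f"{doc_id}: {message}")
```

The same filter can be applied to output saved from the AWS CLI with `aws qbusiness list-documents ... > out.json` and `json.load`.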

The following screenshot shows the AWS CLI output with a list of failed documents with error messages.

Now you batch-process the documents. Amazon Q Business supports adding one or more documents to an Amazon Q Business index.
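Batch APIs of this kind accept only a limited number of documents per call, so the reindexing loop reduces to chunking the failed files. The per-call limit and the commented boto3 call below are assumptions to verify against the Amazon Q Business API reference; the chunking itself is just plain Python:

```python
BATCH_LIMIT = 10  # assumed per-call document limit for BatchPutDocument


def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


# Hypothetical list of files to reindex (e.g., previously failed scanned PDFs).
files = [f"scan_{i:03d}.pdf" for i in range(23)]

batches = list(chunks(files, BATCH_LIMIT))
for batch in batches:
    # For each batch you would call the API, e.g. with boto3 (not executed here):
    # client = boto3.client("qbusiness")
    # client.batch_put_document(applicationId=..., indexId=..., documents=[...])
    print(f"submitting batch of {len(batch)} documents")
```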

The following screenshot shows the AWS CLI output. You should see failed documents as an empty list.

The following screenshot shows that the documents are indexed in the data source.

If you created a new Amazon Q Business application and don't plan to use it further, unsubscribe and remove assigned users from the application and delete it so that your AWS account doesn't accumulate costs. Moreover, if you don't need to use the indexed data sources further, refer to Managing Amazon Q Business data sources for instructions to delete your indexed data sources.

This post demonstrated the support for scanned PDF document types with Amazon Q Business. We highlighted the steps to sync, index, and query supported document types, now including scanned PDF documents, using generative AI with Amazon Q Business. We also showed examples of queries on structured, unstructured, or semi-structured multi-modal scanned documents using the Amazon Q Business web UI and AWS CLI.

To learn more about this feature, refer to Supported document formats in Amazon Q Business. Give it a try on the Amazon Q Business console today! For more information, visit Amazon Q Business and the Amazon Q Business User Guide. You can send feedback to AWS re:Post for Amazon Q or through your usual AWS support contacts.

Sonali Sahu is leading the Generative AI Specialist Solutions Architecture team in AWS. She is an author, thought leader, and passionate technologist. Her core area of focus is AI and ML, and she frequently speaks at AI and ML conferences and meetups around the world. She has both breadth and depth of experience in technology and the technology industry, with industry expertise in healthcare, the financial sector, and insurance.

Chinmayee Rane is a Generative AI Specialist Solutions Architect at AWS. She is passionate about applied mathematics and machine learning. She focuses on designing intelligent document processing and generative AI solutions for AWS customers. Outside of work, she enjoys salsa and bachata dancing.

Himesh Kumar is a seasoned Senior Software Engineer, currently working on Amazon Q Business in AWS. He is passionate about building distributed systems in the generative AI/ML space. His expertise extends to developing scalable and efficient systems, ensuring high availability, performance, and reliability. Beyond his technical skills, he is dedicated to continuous learning and staying at the forefront of technological advancements in AI and machine learning.

Qing Wei is a Senior Software Developer on the Amazon Q Business team in AWS, and is passionate about building modern applications using AWS technologies. He loves community-driven learning and sharing of technology, especially for machine learning hosting and inference-related topics. His main focus right now is on building serverless and event-driven architectures for RAG data ingestion.


Uncovering hidden and complex relations of pandemic dynamics using an AI driven system | Scientific Reports – Nature.com

This section presents the experimental results and comprehensive evaluations of BayesCovid. We will explicitly discuss the results of the algorithms applied to the clinical datasets to uncover the hidden patterns of COVID-19 symptoms.

We set up a Spark on Hadoop YARN cluster consisting of 4 EC2 machines (1 master and 3 workers) in AWS to deploy BayesCovid. We chose Ubuntu Server 20.04 LTS as the operating system for all the machines and installed Hadoop version 3.3.2 and Spark 3.3.1. All the nodes have 4 cores and 16 GB of memory.

The dataset, prepared by Carbon Health and Braid Health14, was obtained through RT-PCR tests from 11,169 individuals living in the United States, approximately 3% of whom had COVID-positive tests and 97% COVID-negative tests. This dataset, which Carbon Health began collecting in early April 2020, was collected under the anonymity standard of the Health Insurance Portability and Accountability Act (HIPAA) privacy rule. The dataset covers multiple physiognomies, including epidemiological (Epi) factors, comorbidity, vital signs, healthcare worker-identified, patient-reported, and COVID-19 symptoms. In addition, information about patients, such as heart rate, temperature, diabetes, cancer, asthma, smoking, and age, is also available. The Carbon Health team gathered additional datasets from the Braid Health team, which collected radiological information, including CXR information. This dataset includes data from patients with one or more symptoms and with no symptoms, and we only used the COVID-19 symptom information indicated in Fig. 1. Radiological information was not included in the analysis. Table 2 shows the statistical information of the COVID-19 dataset. We have 18,538 test results for 11 different COVID-19 symptoms and COVID severity values, belonging to 11,169 individuals. Moreover, Table 3 demonstrates the number of false (negative) and true (positive) values for each symptom.

Cross-validation is an important step in assessing the predictive power of models while mitigating the risk of overfitting15. To rigorously evaluate our models, we implemented ten-fold cross-validation by dividing the dataset into ten equal parts. During each iteration, one part served as the validation/test set, while the remaining nine were used for model training. This process was repeated ten times, and the resulting accuracies were averaged across all folds to assess each model's performance comprehensively. Importantly, using ten-fold cross-validation ensures that every instance in the dataset is used exactly once as a test sample, which minimises the risk of overfitting16.
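The ten-fold procedure described above can be sketched in a few lines of plain Python (model training is left as a placeholder; the 100-sample size is illustrative, not the study's dataset):

```python
import random

# Split 100 toy instances into ten equal folds, as in ten-fold cross-validation.
n_samples, k = 100, 10
indices = list(range(n_samples))
random.Random(0).shuffle(indices)

fold_size = n_samples // k
folds = [indices[i * fold_size:(i + 1) * fold_size] for i in range(k)]

test_counts = [0] * n_samples
for i in range(k):
    test_idx = folds[i]                                            # one part validates
    train_idx = [x for j in range(k) if j != i for x in folds[j]]  # nine parts train
    assert len(train_idx) == n_samples - fold_size
    # ...train on train_idx, evaluate on test_idx, record the fold accuracy...
    for idx in test_idx:
        test_counts[idx] += 1

# Each instance serves as a test sample exactly once across the ten folds.
print(all(c == 1 for c in test_counts))  # True
```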

This subsection explains three distinct Bayesian networks: Naïve Bayesian, Tree-Augmented Naïve Bayesian, and Complex Bayesian models. These models have unveiled intricate and concealed patterns within COVID-19, offering valuable insights into the complex dynamics and relationships underlying the disease.

Figure 3a depicts the dependencies for the Naïve Bayesian algorithm, where the class variable, COVID severity, is the only parent associated with each symptom, and there is no link between symptoms. Figures 4 and 5 show the probability percentages of the symptoms for their positive and negative values. For example, in Fig. 4, while the probability of diarrhea is around 3% for COVID severity level 1, the probability of this symptom for level 3 is about 95%. Moreover, the probabilities of shortness of breath for levels 1, 2, 3, and 4 are very low, about 5%, and the likelihood of having this symptom is very high for levels 5 and 6. In short, the distribution of symptoms differs according to the severity levels of COVID-19, and the probability of some increases as the COVID-19 severity level rises. When we compare Figs. 4 and 5, it is seen that there is an inverse relationship between the incidence and absence of symptoms.
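The per-severity symptom probabilities reported in Figs. 4 and 5 are conditional probabilities of the form P(symptom | COVID severity). As a toy illustration of how such estimates are computed from labeled records (the records below are invented for illustration, not taken from the study's dataset):

```python
from collections import defaultdict

# Invented (severity, diarrhea) records; the study estimates such
# conditional probabilities from 18,538 real test results.
records = [
    (1, False), (1, False), (1, True), (1, False),
    (3, True), (3, True), (3, True), (3, False),
]

counts = defaultdict(lambda: [0, 0])        # severity -> [positives, total]
for severity, diarrhea in records:
    counts[severity][0] += int(diarrhea)
    counts[severity][1] += 1

# P(diarrhea = positive | COVID severity level)
p_given_severity = {s: pos / total for s, (pos, total) in counts.items()}
print(p_given_severity)  # {1: 0.25, 3: 0.75}
```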

Conditional probability of symptoms with COVID-19 severity if symptoms are positive for Naïve Bayes.

Conditional probability of symptoms with COVID-19 severity if symptoms are negative for Naïve Bayes.

The dependency network built using the Tree-Augmented Naïve Bayesian network is depicted in Fig. 3b. COVID severity is the class variable, similar to the Naïve Bayesian network, but connections between the symptoms (features) are also available. As seen from the figure, for example, cough has an effect on both headache and fever, while muscle sore is affected by headache and affects fatigue. For the probabilities, Tables 4 and 5 show some results of the Conditional Probability Table (CPT). In Table 4, when shortness of breath and fever are negative (F), the probability of COVID severity level 1 is 92.95%. In contrast, when shortness of breath is positive (T) and fever is negative, the probability of COVID severity level 1 is 2.04%. When headache is positive but cough is negative, the probability of COVID severity level 4 is 65.92% (see Table 5).

Figure 3c shows the dependencies between all the symptoms (features) and COVID severity (class variable). Cough is the most affected by different symptoms and does not affect any features. While the class variable, COVID severity, has impacts on shortness of breath, fever, fatigue, and sore throat, interestingly, it is affected by diarrhea. Another interesting pattern, different from the Tree-Augmented Naïve Bayesian network, is that fever affects muscle sore. While Table 6 shows the CPT for three variables, namely COVID severity, shortness of breath, and fever, Table 7 displays the probabilities for four variables, namely diarrhea, fatigue, muscle sore, and headache. When shortness of breath is negative and fever is positive, the probability of COVID severity level 4 is 98.92%. In contrast, when shortness of breath is positive but fever is negative, the probability of COVID severity level 6 is 48.18% (see Table 6). For the probabilities based on the situation of four symptoms (see Table 7), for instance, when all three symptoms, diarrhea, fatigue, and muscle sore, are positive, the probability of having the headache symptom is 73.18%. Another remarkable finding in Table 7 is that if an individual has fatigue, muscle sore, and headache, the probability of not having diarrhea is 58.43%.

In this study, we have also investigated and implemented three distinct Bayesian models, each representing a unique intersection of deep learning and Bayesian inference. The first model, Deep Learning-based Naïve Bayes (DL-NB), is a deep learning-based Naïve Bayes structure that capitalises on the capacity of deep neural networks to refine the traditional Naïve Bayes model for enhanced feature learning and dependency representation. Additionally, we extended our exploration to traditional Bayesian network structures by implementing Deep Learning-based Tree-Augmented Naïve Bayes (DL-TAN), where deep learning principles are integrated to augment the classic Tree-Augmented Naïve Bayes algorithm, providing richer feature representations. Furthermore, our investigation includes Deep Learning-based Complex Bayesian (DL-CB), a model designed to overcome the limitations of traditional Complex Bayesian structures in modelling intricate relationships within high-dimensional data. This comprehensive analysis and implementation of DL-NB, DL-TAN, and DL-CB contribute to the broader understanding of the synergies between deep learning and Bayesian techniques in various Bayesian network architectures. Figure 6 demonstrates the network dependencies of the deep learning-based Bayesian network algorithms, which uncover the complex and hidden relationships between COVID symptoms. As illustrated in Fig. 6a–c, our Bayesian deep learning models, namely DL-NB, DL-TAN, and DL-CB, reveal a richer web of relationships among features compared to their traditional counterparts. The Bayesian deep learning models exhibit a higher density of connections, which indicates a more nuanced understanding of inter-feature dependencies. This heightened connectivity reflects the enhanced capacity of Bayesian deep learning to capture complex relationships within the data, providing a comprehensive and informative modelling of the underlying dynamics.

Bayesian deep learning dependency networks.

Figure 7 demonstrates the accuracies of the three different algorithms proposed in our system, namely the Naïve Bayesian Network, Tree-Augmented Naïve Bayesian Network, and Complex Bayesian Network. Although the overall accuracies of the algorithms are close to each other, there are apparent differences in the accuracies for individual symptoms. The algorithms perform poorly for the cough symptom, with accuracies between 60% and 68%, while they show high accuracies for COVID severity, ranging from 94% to 97%. The overall accuracies of these three algorithms are 83.52%, 87.35%, and 85.15%, respectively.

Total accuracies of the algorithms.

In the evaluation of the accuracy of deep learning-based Bayesian network algorithms, the results, as depicted in Fig. 8, showcase the performance of three distinct models: DL-NB, DL-TAN, and DL-CB. The overall accuracies reveal nuanced differences among the algorithms. DL-TAN emerges with the highest cumulative accuracy of 95.21%, which indicates its superior predictive capabilities across a spectrum of symptoms. DL-NB and DL-CB follow closely, exhibiting overall accuracies of 91.04% and 92.81%, respectively. These results underscore the efficacy of deep learning-based Bayesian approaches in capturing complex relationships within the dataset.

The comparative analysis of Bayesian deep learning algorithms against traditional Bayesian network algorithms elucidates a discernible advantage favouring the former. Notably, the Bayesian deep learning models, such as DL-NB, DL-TAN, and DL-CB, exhibit superior predictive performance across various symptoms.

Total accuracies of the deep learning-based Bayesian algorithms.

We have developed a web interface for the BayesCovid decision support system that can be used by any clinical practitioner or other users. It utilises Python libraries for probabilistic graphical models, data manipulation, network analysis, and data visualization. Additionally, tkinter is adopted for the graphical user interface, and PyMuPDF (fitz) is leveraged for PDF file handling. All the source code and accompanying documentation for the BayesCovid decision support system are available as open source on GitHub (https://github.com/umitdemirbaga/BayesCovid). A demonstration is also available online on YouTube (https://youtu.be/7j36HuC9Zto). The designed user interface provides the dual functionality highlighted below.

Dependency analysis: This component of the application ensures efficient and accurate analysis of the relationships between the symptoms and severity assessment, enhancing the decision-making process in clinical settings. Figure 9a depicts the user-friendly interface, where a data file can be uploaded using the Select CSV button. After the data file is uploaded, six radio buttons allow users to select one of the following Bayesian models: (a) Naïve Bayesian Network, (b) Tree-Augmented Naïve Bayesian Network, (c) Complex Bayesian Network, (d) Naïve Bayes Deep Learning, (e) Tree-Augmented Bayes Deep Learning, and (f) Complex Bayes Deep Learning. An Analyse button starts the processing of the selected model with the selected CSV file. A progress bar populates to show the processing status. After the model is processed, the dependency network plot is generated (see Fig. 9b) and the CPT output is saved as a file.

Severity analysis: This component of the application assists clinical staff in calculating the severity of COVID-19. The feature lets the user select the detected symptoms that the patient exhibits and subsequently determines the severity of COVID-19. As depicted in Fig. 9c, a clinician or user can select the visible symptoms and calculate severity. This outputs the COVID-19 severity level based on the input symptoms, as shown in Fig. 9c.


Accurate Prediction of Protein Structural Flexibility by Deep Learning Integrating Intricate Atomic Structures and Cryo … – Nature.com

Overview of RMSF-net procedure

We propose a deep learning approach named RMSF-net to analyze protein dynamics based on cryo-electron microscopy (cryo-EM) maps. The primary objective of this method is to predict the RMSF of local structures (residues, atoms) within proteins. RMSF is a widely used measure of the flexibility of molecular structures in MD analysis and is defined by the following equation:

$$\mathrm{RMSF}=\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(x(t)-\widetilde{x}\right)^{2}}$$

where \(x\) represents the real-time position of atoms or residues, \(t\) represents time, and \(\widetilde{x}\) represents the mean position over a period of time \(T\). In addition to the experimental cryo-EM maps, RMSF-net incorporates fitted PDB models, which represent the mean structures of fluctuating proteins. A schematic overview of RMSF-net is depicted in Fig. 1a. The cryo-EM map and PDB model are initially combined to create a dual feature pair. The PDB models are converted into voxelized density maps using the MOLMAP tool in UCSF Chimera23 to facilitate seamless integration with cryo-EM maps. Subsequently, both the density grids of the cryo-EM maps and the PDB simulated maps are divided into uniform-sized density boxes (40×40×40) with a stride of 10. The corresponding density boxes from the mapping pair are concatenated to form a two-channel feature input for the neural network (RMSF-net) to predict the RMSF of atoms within the central subbox (10×10×10). RMSF-net is a three-dimensional convolutional neural network comprising two interconnected modules. The primary module employs a Unet++ (L3) architecture24 for feature encoding and decoding on the input density boxes. The other module utilizes 1-kernel convolutions for regression on the channels of the feature map generated by the Unet++ backbone. A center crop is then applied to the regression module output to obtain the central RMSF subboxes, where the voxel values correspond to the RMSF of the atoms contained within them. Finally, the RMSF sub-boxes are spatially merged into an RMSF map using a merging algorithm.
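The RMSF definition can be computed directly from a trajectory. This sketch uses a one-dimensional toy trajectory of a single atom; in practice \(x(t)\) is a 3D position and the squared deviation is the squared Euclidean distance from the mean position:

```python
import math

# Toy trajectory of a single coordinate over T frames.
x = [1.0, 1.2, 0.8, 1.1, 0.9]
T = len(x)

mean_x = sum(x) / T                                        # the mean position
rmsf = math.sqrt(sum((xt - mean_x) ** 2 for xt in x) / T)  # RMSF definition
print(round(rmsf, 4))  # 0.1414
```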

a Overview of RMSF-net. The data preparation and RMSF inference for RMSF-net are illustrated in the upper section. Cryo-EM maps and their fitted atomic structure models were obtained from the EMDB and PDB databases. The PDB models were simulated as density maps resembling cryo-EM maps. Both the cryo-EM map and PDB simulated map were then segmented into 40×40×40 density boxes. The density boxes with matching positions from the pair were concatenated into a two-channel tensor and input to the 3D CNN of RMSF-net to infer the RMSF for atoms in the central 10×10×10 voxels (subboxes). The RMSF prediction across the entire map was obtained by combining predictions from these subboxes. The lower section depicts the RMSF-net supervised training process. The RMSF-net neural network architecture is shown in the lower right, with the number of channels indicated alongside the hidden feature maps. With Unet++ (L3) as the backbone, a regression head and crop operation were added. The ground truth RMSF for training RMSF-net was derived from MD simulations, as illustrated in the lower left. b Data processing of maps in RMSF-net.

RMSF-net incorporates several data processing strategies for maps, as illustrated in Fig. 1b. First, to ensure a consistent spatial scale, all the cryo-EM maps were resampled using the ndimage.zoom module from SciPy25 to obtain a uniform voxel size of 1.5 Å, which is approximately the size of a Cα atom. Second, a screening algorithm is applied to retain only those boxes that encompass atoms within the central subbox, to avoid processing unnecessary boxes located in structure-free regions. This strategy significantly improves the efficiency of RMSF-net when dealing with large cryo-EM maps. The retained box indices are recorded for the subsequent merging algorithm. In addition, the voxel densities within the boxes are normalized to a range of [0, 1] before being input into the network, thus mitigating density distribution variations across cryo-EM maps. Voxel density normalization is achieved through the following process: within each box, any density values less than 0 are set to 0, and the values are then divided by the maximum density value within the box, thus scaling the voxel density to a range from 0 to 1.
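The normalization step described above reduces to a clamp-and-scale over each box; a minimal sketch on a flattened toy box (the densities are invented for illustration):

```python
# Clamp negative densities to zero, then scale by the box maximum,
# mapping each box's densities into [0, 1].
box = [-0.2, 0.0, 0.5, 1.0, 2.0]   # a flattened toy density box

clamped = [max(d, 0.0) for d in box]
peak = max(clamped)
normalized = [d / peak for d in clamped] if peak > 0 else clamped
print(normalized)  # [0.0, 0.0, 0.25, 0.5, 1.0]
```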

We created a high-quality protein dataset with 335 entries for training and evaluating RMSF-net. The dataset was constructed by selecting data from EMDB26 and PDB27. As of November 2022, EMDB contained 23,593 deposited entries, more than half of which were high-resolution maps with resolutions in the 2–4 Å range. We focused on maps within this range. Initially, we included a high-resolution cryo-EM map dataset from EMNUSS12, which underwent rigorous screening in EMDB and PDB prior to October 2020 and consisted of 468 entries. In addition, we performed data selection on cryo-EM maps and PDB models deposited from October 2020 to November 2022 to incorporate newly deposited data. The selected data had to meet specific criteria: a well-fitting cryo-EM map and PDB model pair with a fitness above 0.7 (measured by the correlation between the PDB simulated map and the cryo-EM map), and proteins containing at least one alpha helix or beta strand, with no missing chains and no nucleic acids. We further filtered these data by applying a clustering procedure to remove redundancy. Using the K-Medoids28 algorithm with a k value of 50, we defined the distance between two proteins as the maximum distance between any two chains from the two proteins, where chain distances were determined by sequence identity. After clustering, we selected the 50 medoids and added them to the dataset. Finally, out of the resulting 518 entries, 335 were successfully subjected to MD simulations, yielding the RMSF-net dataset.
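The clustering distance described above (protein distance = maximum distance over chain pairs, chain distance derived from sequence identity) might be sketched like this; `difflib.SequenceMatcher` is only a stand-in for a proper sequence-identity computation, which the text does not specify:

```python
from difflib import SequenceMatcher
from itertools import product

def chain_distance(seq_a, seq_b):
    """1 - sequence identity; SequenceMatcher is a stand-in for a real aligner."""
    return 1.0 - SequenceMatcher(None, seq_a, seq_b).ratio()

def protein_distance(chains_a, chains_b):
    """Maximum distance over all chain pairs drawn from the two proteins."""
    return max(chain_distance(a, b) for a, b in product(chains_a, chains_b))
```

A k-medoids run (k = 50 in the text) would then consume the pairwise matrix built from `protein_distance` and keep the 50 medoid entries.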

RMSF-net employs a supervised training approach, requiring labeled RMSF values derived from MD simulations19. We conducted MD simulations on the PDB models of the dataset following a standardized procedure using Assisted Model Building with Energy Refinement (AMBER)20, which consists of four stages: energy minimization, heating, equilibration, and production runs. To focus on local structure fluctuations around specific protein conformations, we configured the production run for 30 ns. Specifically, the initial atomic coordinates of the proteins were set to the original PDB model coordinates. Small-molecule ligands in all complexes were removed to study the characteristics of the proteins alone. Each system was immersed in a truncated octahedron box filled with TIP3P29 water molecules (with at least a 12 Å buffer distance between the solute and the edge of the periodic box). Based on the charge carried by the protein, Na+ or Cl− ions were placed randomly in the simulation box to keep each system neutral. An additional 150 mM NaCl was added to all systems according to the screening layer tally by container average potential method30 to better match the experimental conditions. All MD simulations were performed using the AMBER 20 software package20,31 on NVIDIA Tesla A100 graphics cards. The parameters for Na+ and Cl− ions were derived from the previous work by Joung et al.32. The protein structure was described by the AMBER ff14SB force field33. Each system was energy minimized using the conjugate gradient method for 6000 steps. The systems were then heated from 0 to 300 K over 400 ps using the Langevin thermostat34, with position restraints of 1000 kcal mol⁻¹ Å⁻² applied to the protein structure (NVT ensemble, T = 300 K). Subsequently, each system was gradually released over 5 ns (spending 1 ns each with position restraints of 1000, 100, 10, 1, and 0 kcal mol⁻¹ Å⁻²) using the NPT ensemble (P = 1 bar, T = 300 K) before the production run.
Afterward, the final structure of each system was subjected to a 30 ns production MD simulation at constant temperature (300 K) and pressure (1 bar) with periodic boundary conditions and the particle mesh Ewald (PME) method35. We used the isotropic Berendsen barostat36 with a time constant of 2 ps to control the pressure. The protein structure was completely free in solution during the equilibration and production stages. Simulations were run with an integration step of 2 fs, and bonds involving hydrogen atoms were constrained using the SHAKE algorithm37. PME electrostatics were calculated with an Ewald radius of 10 Å, and the cutoff distance for the van der Waals potential was also set to 10 Å.
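For orientation, the production stage described above might correspond to an AMBER `mdin` input roughly like the following sketch. The output frequencies and the Langevin thermostat choice for production are our assumptions; the 30 ns length, 2 fs step, SHAKE, 10 Å cutoff, 300 K, 1 bar, and 2 ps barostat time constant come from the text:

```
 &cntrl
   imin=0, irest=1, ntx=5,            ! continue from the equilibrated restart
   nstlim=15000000, dt=0.002,         ! 30 ns at a 2 fs integration step
   ntc=2, ntf=2,                      ! SHAKE constraints on bonds to hydrogen
   cut=10.0,                          ! 10 A direct-space cutoff; PME beyond
   ntb=2, ntp=1, taup=2.0,            ! NPT, Berendsen barostat, 2 ps time constant
   temp0=300.0, ntt=3, gamma_ln=2.0,  ! 300 K; Langevin thermostat assumed here
   ntpr=5000, ntwx=5000,              ! energy/trajectory output every 10 ps
 /
```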

After the simulations, the trajectories were processed and analyzed using the built-in Cpptraj module of the AmberTools package38. We first removed the translational and rotational motion of all protein molecules to ensure a rigorous comparison between different trajectories. Then, the average structure of each protein (heavy atoms only) was calculated as a reference structure. Afterward, each conformation in the trajectory was aligned to the reference structure, and the RMSF of the protein molecule was computed. These RMSF values were subsequently mapped onto the voxels of the cryo-EM maps to serve as the ground truth for training and evaluating RMSF-net.
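The Cpptraj analysis described above (fit to remove global motion, heavy-atom average structure as reference, re-align, output RMSF) might look roughly like this hypothetical script; the file names and atom masks are placeholders:

```
parm system.prmtop
trajin prod.nc
# remove global translation/rotation by fitting to the first frame
rms first !@H=
# compute the average heavy-atom structure as the reference
average crdset AVG !@H=
run
# re-align every frame to the average structure, then write per-atom RMSF
rms ref AVG !@H=
atomicfluct out rmsf.dat !@H=
run
```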

For the training of RMSF-net, we utilized a masked mean squared error (MSE) loss function to compute the loss between the predicted RMSF and ground truth RMSF on labeled voxels of the output subboxes. The training spanned 100 epochs with a batch size of 32, and we employed the Adam optimizer39 with a learning rate of 0.004. Several techniques were implemented to mitigate overfitting, including Kaiming weight initialization40, learning rate decay, and early stopping. If the validation loss did not decrease for 10 consecutive epochs, the learning rate was halved. If it did not decrease for 30 epochs, training was terminated, and the model with the minimum validation loss was saved. We applied rotation and mirroring augmentation to the training set to account for the lack of rotational and mirror symmetry in convolutional networks, increasing the training data eightfold. The training of RMSF-net was conducted on two NVIDIA Tesla A100 graphics cards, typically lasting 5–8 h. Following training, we conducted RMSF predictions on the test set to evaluate the performance of RMSF-net.
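As a minimal sketch of two ingredients described above, the masked MSE over labeled voxels and the halve-after-10 / stop-after-30 plateau schedule can be written as follows; the class and function names are ours, and the actual model is a 3D CNN trained with Adam at a learning rate of 0.004:

```python
import numpy as np

def masked_mse(pred, target, mask):
    """MSE restricted to labeled voxels (mask == 1)."""
    mask = np.asarray(mask).astype(bool)
    if not mask.any():
        return 0.0
    d = np.asarray(pred)[mask] - np.asarray(target)[mask]
    return float(np.mean(d * d))

class PlateauSchedule:
    """Halve the LR after 10 stagnant epochs, stop after 30 (per the text)."""
    def __init__(self, lr=0.004, patience_halve=10, patience_stop=30):
        self.lr, self.best, self.stale = lr, float("inf"), 0
        self.ph, self.ps = patience_halve, patience_stop
        self.stop = False

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.stale = val_loss, 0
        else:
            self.stale += 1
            if self.stale % self.ph == 0:
                self.lr *= 0.5            # learning rate decay on plateau
            if self.stale >= self.ps:
                self.stop = True          # early stopping
        return self.lr
```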

We employed a five-fold cross-validation approach to assess the performance of this method. The dataset was randomly divided into five equal partitions, with one partition used as the test set each time, and the remaining four partitions serving as the training and validation sets. In particular, the division was based on the maps rather than the segmented boxes in order to ensure independence between these sets. The training and testing process was repeated five times, and every data entry was tested once to obtain the method's performance on the entire dataset. To prevent overfitting during model training, the training and validation sets were set at a ratio of 3:1. During testing, the correlation coefficients between the predicted RMSF and the ground truth (RMSF values derived from MD simulations) were computed as the evaluation metric. The correlation coefficients were computed at two levels: the voxel level, corresponding to RMSF on the map voxels, and the residue level, corresponding to RMSF on the PDB model residues (obtained by averaging RMSF over the corresponding atoms). We defaulted to the voxel-level correlation coefficient when analyzing and comparing model performance unless otherwise specified, and used the residue-level correlation coefficient when discussing the protein test cases.
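The map-level five-fold split and the correlation metric can be illustrated as below; splitting by entry IDs rather than by boxes mirrors the independence requirement in the text, and the helper names are ours:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def five_fold_splits(entry_ids, seed=0):
    """Split at the map level so boxes from one map never straddle train/test."""
    rng = np.random.default_rng(seed)
    ids = np.array(entry_ids)
    rng.shuffle(ids)
    folds = np.array_split(ids, 5)
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train.tolist(), test.tolist()
```

Each training list would be further divided 3:1 into training and validation subsets, as described above.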

As a baseline, we initially used only the cryo-EM map intensity as a single-channel input to the neural network, referred to as RMSF-net_cryo. Cross-validation of RMSF-net_cryo on the dataset revealed an average correlation coefficient of 0.649 with a deviation of 0.156. We also performed cross-validation using the prior DEFMap method for comparison. DEFMap reported a test correlation of approximately 0.7 on its own dataset. However, that dataset includes only 34 proteins, whereas the dataset used in our study is more diverse and significantly larger. We therefore applied the DEFMap pipeline to our dataset to ensure a fair comparison. Notably, DEFMap employs different data preprocessing strategies and neural networks. During its preprocessing, a low-pass filter is applied to standardize the resolution of the cryo-EM maps. In addition, its neural network takes 10³-voxel subboxes as input and outputs the RMSF of the central voxel. We strictly followed DEFMap's procedures and network for training and testing. The results showed an average correlation coefficient of 0.6 with a deviation of 0.171. This comparison makes it evident that RMSF-net_cryo outperforms DEFMap.

Although RMSF-net_cryo performed better than DEFMap with our designed network and data processing strategies, it still relies on neural networks to directly establish patterns between cryo-EM maps and flexibility. What role the structural information plays in this process remains unknown. This prompted us to divide dynamic prediction via cryo-EM maps into two sequential steps: first, structural information extraction, and second, dynamic prediction based on the extracted structural information.

To accomplish the extraction of structural information, as depicted in Fig. 2a, we introduced an Occ-net module. This module predicts the probabilities of structural occupancy on cryo-EM map voxels using a 3D convolutional network. Both the input and output dimensions were set to 40³. For training and evaluating Occ-net, we utilized PDB models to generate structure annotation maps as the ground truth, where voxels were categorized into two classes: occupied by structure and unoccupied by structure. The details of this network and the data annotation process are provided in the Supplementary Information (section Structure of Occ-net and the data annotation process). A cross-entropy loss function was employed during training, with class weights set to 0.05:0.95 to address class imbalance. Once this training stage was completed, the Occ-net parameters were fixed, and the second stage of training commenced. In the second stage, the two-channel classification probabilities output by Occ-net were input into the dynamics extraction module to predict the RMSF for the central 10³ voxels, consistent with the RMSF-net approach.
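The class-weighted cross-entropy used to train Occ-net (weights 0.05:0.95 per the text) can be sketched for a single-probability case as follows; this scalar formulation is ours, while the real network outputs two-channel probabilities over 40³ voxels:

```python
import numpy as np

def weighted_ce(probs, labels, w=(0.05, 0.95)):
    """Per-voxel cross-entropy with class weights 0.05 (background) : 0.95 (structure).

    probs  : predicted probability of the structure-occupied class per voxel
    labels : 1 for structure-occupied voxels, 0 otherwise
    """
    probs = np.clip(np.asarray(probs, float), 1e-7, 1 - 1e-7)
    labels = np.asarray(labels)
    loss = -np.where(labels == 1,
                     w[1] * np.log(probs),        # positive class, weight 0.95
                     w[0] * np.log(1.0 - probs))  # negative class, weight 0.05
    return float(loss.mean())
```

Up-weighting the rare structure-occupied class keeps the network from collapsing to an all-background prediction.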

a Overview of Occ2RMSF-net. In the first stage, the cryo-EM density (40³) is input into Occ-net to predict the probabilities (40³) of structure occupancy on the voxels, with Pu denoting the probability of a voxel being occupied by the protein structure and Po denoting that of not being occupied by the structure. Then, in the second stage, the two-channel probabilities are input into RMSF-net to predict the RMSF on the central 10³ voxels. b Test performance of Occ-net. For six classification thresholds from 0.3 to 0.8, the precisions, recalls, and F1-scores of the positive class (structure occupied) on the test set were computed and are shown in the plot. c Comparison of RMSF prediction performance between Occ2RMSF-net and RMSF-net_cryo on the dataset. CC is an abbreviation for correlation coefficient. d Count distribution of test correlation coefficients for DEFMap, RMSF-net_cryo, and RMSF-net on the dataset. e Data distribution of correlation coefficients for RMSF-net_cryo and RMSF-net_pdb relative to RMSF-net on the dataset. f Count distribution of test correlation coefficients for RMSF-net_pdb, RMSF-net_pdb01, and RMSF-net on the dataset. g Data distribution of correlation coefficients for RMSF-net and RMSF-net_pdb relative to RMSF-net_cryo on data points where the test correlation coefficients with RMSF-net_cryo are above 0.4. The color for each method in d, e, f and g is shown in the legend.

This model is referred to as Occ2RMSF-net, and cross-validation was conducted on it. After training, we first assessed the performance of Occ-net by calculating the precision, recall, and F1-score at the voxel level for the positive class (structure class) on the test set. This evaluation involved six classification thresholds, ranging from 0.3 to 0.8. As depicted in Fig. 2b, achieving high precision and recall simultaneously was challenging due to severe class imbalance and noise. A relative balance was achieved at a threshold of 0.7, where the F1-score reached its highest value of 0.581. Regarding the final output RMSF, the correlation between the Occ2RMSF-net predictions and the ground truth on the dataset is 0.662 ± 0.158, a slight improvement over RMSF-net_cryo. Figure 2c displays the scatter plot of the test correlations for Occ2RMSF-net and RMSF-net_cryo. The two models exhibited similar performance on most of the data points, with Occ2RMSF-net slightly outperforming RMSF-net_cryo overall. This highlights the critical role of structure information from cryo-EM maps in predicting the RMSF and enhances the interpretability of methods like DEFMap and RMSF-net_cryo.
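The threshold sweep described above reduces to computing precision, recall, and F1 for the positive class at each threshold; a plain implementation (names ours) looks like:

```python
import numpy as np

def prf1(prob, truth, thr):
    """Precision, recall, F1 for the positive (structure-occupied) class."""
    pred = np.asarray(prob) >= thr
    truth = np.asarray(truth, bool)
    tp = float(np.sum(pred & truth))
    fp = float(np.sum(pred & ~truth))
    fn = float(np.sum(~pred & truth))
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

Evaluating `prf1` at thresholds 0.3, 0.4, ..., 0.8 reproduces the sweep shown in Fig. 2b.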

Inspired by the above results, we incorporated PDB models, which represent precise structural information, into our method in a density-map-like manner, i.e., as simulated density maps generated from the PDB models. We employed two approaches to input the PDB simulated maps into the network. First, the PDB simulated map was taken as a single-channel feature input to the neural network, referred to as RMSF-net_pdb. Second, the PDB simulated map was transformed into a binary encoding map representing occupancy of the structure to highlight tertiary structural features: a threshold of 3σ (where σ denotes the r.m.s. deviation of the PDB simulated map density) was chosen, and voxels with densities above the threshold were encoded as 1 and the others as 0. This encoding map was then converted into a two-channel one-hot input to the network, known as RMSF-net_pdb01. The same cross-validation was applied to these two models. Results showed that RMSF-net_pdb achieved a test correlation coefficient of 0.723 ± 0.117, and RMSF-net_pdb01 achieved 0.712 ± 0.112. These two approaches performed significantly better than the cryo-EM map-based methods above, further demonstrating the strong correlation between protein structure topology and flexibility.
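The 3σ binary encoding and its two-channel one-hot form might be implemented as in this sketch (function name ours):

```python
import numpy as np

def onehot_encode_map(sim_map, k=3.0):
    """Threshold a PDB simulated map at k * r.m.s.d. of its density (3 sigma per the text)
    and return a two-channel one-hot volume: channel 0 = unoccupied, channel 1 = occupied."""
    thr = k * sim_map.std()
    occ = (sim_map > thr).astype(np.float32)
    return np.stack([1.0 - occ, occ])
```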

We further combined the information from the PDB structure and the cryo-EM map as input to the network; this is the main method proposed in this work. We refer to this method, along with the neural network it employs, simply as RMSF-net. As outlined in the Overview of RMSF-net procedure section, RMSF-net takes as input the dual-channel densities from the cryo-EM map and the PDB simulated map at the same spatial position, while the main part of the network remains the same as in RMSF-net_cryo and RMSF-net_pdb. Under the same cross-validation, RMSF-net achieved an average correlation coefficient of 0.746, with a median of 0.767 and a standard deviation of 0.127.

RMSF-net demonstrated an approximately 10% improvement over the RMSF-net_cryo baseline, and a 15% improvement over DEFMap. Figure 2d presents a comparison of the distribution of data quantities at different test correlation levels for the three methods. Overall, the two cryo-EM map-based methods (DEFMap and RMSF-net_cryo) exhibit similar distribution shapes, while the distribution of RMSF-net is more concentrated, focusing on the range between 0.6 and 0.9. Nearly half of the data points cluster around 0.7 to 0.8, and close to one-third fall between 0.8 and 0.9. In comparison, the two PDB-based methods in Fig. 2f exhibit distributions similar to that of RMSF-net. This suggests that the structure information from PDB models plays a primary role in the ability of RMSF-net to predict flexibility. On the other hand, RMSF-net further outperforms the PDB-based methods by incorporating information from cryo-EM maps, indicating that image features related to structural flexibility in the cryo-EM map serve as an effective auxiliary signal. Regarding the robustness of these approaches, Table 1 shows that RMSF-net_pdb and RMSF-net_pdb01 exhibited less deviation on the test set than RMSF-net, while RMSF-net_cryo displayed the highest deviation. This indicates that the flexibility-related information in cryo-EM maps is unstable compared to that in PDB models, which might be caused by noise and alignment errors in cryo-EM maps.

The experimental results above show that the combination of the cryo-EM map and PDB model underlies the superior performance of RMSF-net. As shown in Fig. 2e, the prediction of RMSF-net is better in most cases than that of models utilizing only the cryo-EM map or only the PDB model. Because the PDB models are built from the corresponding cryo-EM maps, their spatial coordinates are naturally aligned and their structural information is consistent. Moreover, the PDB model built from the cryo-EM map corresponds precisely to the average position of the structure, while the cryo-EM map, reconstructed from many particles in the sample, carries information from many instantaneous conformations. By combining the expectation and the conformational variance from these two sources, this structural consistency and complementarity create an alignment effect that promotes the superior performance of RMSF-net. However, in some instances structural deviations may exist between the PDB model and the cryo-EM map, or the PDB model may only partially occupy the cryo-EM map. These anomalies might lead to the subpar performance of RMSF-net_cryo compared to RMSF-net and RMSF-net_pdb. To exclude the influence of these factors, we filtered the dataset by excluding data points with test correlations below 0.4 for RMSF-net_cryo and compared the three models on the filtered dataset. Figure 2g shows that RMSF-net and RMSF-net_pdb still performed better overall than RMSF-net_cryo on the filtered dataset. The test correlations for the three models were 0.760 ± 0.084, 0.733 ± 0.083, and 0.684 ± 0.100, respectively. When the filtering threshold was increased to 0.5, the correlations were 0.761 ± 0.080, 0.734 ± 0.080, and 0.698 ± 0.084, respectively, showing consistent results.

Figure 3a showcases RMSF-net predictions for three relatively small proteins: the bluetongue virus membrane-penetration protein VP5 (EMD-6240/PDB 3J9E)41, the African swine fever virus major capsid protein p72 (EMD-0776/PDB 6KU9)42, and C-terminal truncated human Pannexin1 (EMD-0975/PDB 6LTN)43. Among these, 3J9E displays an irregular shape composed of loops and alpha helices, while 6KU9 and 6LTN exhibit good structural symmetry with beta sheets and alpha helices, respectively. The predictions by RMSF-net agree strongly with the ground truth for these proteins, yielding correlation coefficients of 0.887, 0.731, and 0.757, respectively, as depicted in Fig. 3b. Predictions by RMSF-net_cryo and RMSF-net_pdb are supplied in Supplementary Figs. S1–S3. On 3J9E and 6KU9, both RMSF-net_cryo and RMSF-net_pdb perform well, achieving correlations of 0.82 and 0.69 (RMSF-net_cryo) and 0.881 and 0.7 (RMSF-net_pdb), respectively. However, on 6LTN, RMSF-net_cryo exhibits a correlation of only 0.3 with the ground truth, possibly due to ring noise in the intermediate region of EMD-0975 leading to model misjudgment. In contrast, RMSF-net_pdb achieves a higher correlation of 0.767 on this protein, even surpassing RMSF-net, suggesting that instability factors in cryo-EM maps have a slight impact on RMSF-net's inference.

a, b Show RMSF-net performance on three small proteins (EMD-6240/3J9E, EMD-0776/6KU9, EMD-0975/6LTN). a Visual comparison of RMSF-net predictions and ground truth. The first column shows the cryo-EM map overlaid on the PDB model. The second and third columns depict the PDB model colored according to the normalized RMSF values, using colors indicated by the color bar on the right. The second column represents the RMSF predictions by RMSF-net, and the third column represents the ground truth RMSF values from MD simulations. b Correlation plots between normalized RMSF-net predicted values and normalized ground truth values at the residue level for each protein. The normalized RMSF for a residue is calculated as the average normalized RMSF of all atoms within that residue. Normalization is achieved by subtracting the mean and dividing by the standard deviation of the RMSF in the PDB model. CC is an abbreviation for correlation coefficient. c RMSF-net performance on large protein complexes (PDB entries 6FBV, 6K15, and 3JCL). The first and second rows display the PDB models of three protein complexes, with colors corresponding to the normalized RMSF values, indicated by the color bar on the right. The first row is colored based on the RMSF predictions by RMSF-net. The second row is colored according to the ground truth RMSF values from MD simulations. The third row demonstrates the profiles of normalized RMSF-net predicted values and normalized ground truth values along the residue sequences of the three proteins, where residue IDs correspond to the sequence order in the PDB models. CC is an abbreviation for correlation coefficient. d The dynamic change of the NTCP protein from the inward-facing conformation to the open-pore conformation (PDB entries 7PQG/7PQQ). From left to right, the first cartoon illustrates the conformational transition of 7PQG to 7PQQ.
The second, third, and fourth cartoons depict the dynamic changes in the conformational transition of this protein from the front, back, and top-down perspectives, respectively, where the RMSF difference is calculated and colored on the 7PQQ using the color bar provided in the upper right corner. The RMSF visualization was generated using PyMOL59 and UCSF ChimeraX60.

In addition to small proteins, Fig. 3c presents test examples of large protein complexes, including Mycobacterium tuberculosis RNA polymerase with Fidaxomicin44 (EMD-4230/PDB 6FBV), the RSC complex45 (EMD-9905/PDB 6K15), and the coronavirus spike glycoprotein trimer46 (EMD-6516/PDB 3JCL). RMSF-net also excels on these complex structures, achieving correlation coefficients of 0.902, 0.819, and 0.804, respectively. Remarkably, these proteins are associated with human diseases and drug development, emphasizing the potential value of RMSF-net in facilitating drug development efforts. Predictions by RMSF-net_cryo and RMSF-net_pdb for these proteins are provided in Supplementary Figs. S4–S6, with correlation coefficients of 0.759 and 0.859 for 6FBV, 0.661 and 0.774 for 6K15, and 0.635 and 0.784 for 3JCL, respectively. Comparing the model predictions in the Supplementary Figures shows that RMSF-net aligns more closely with RMSF-net_pdb, supporting the previous argument that information from the PDB model plays a primary role in RMSF-net's feature processing.

We further applied RMSF-net to investigate dynamic changes in the NTCP protein during its involvement in biochemical processes. NTCP (Na+/taurocholate co-transporting polypeptide)47 is a vital membrane transport protein predominantly located on the cell membrane of liver cells in the human body. It is responsible for transporting bile acids from the bloodstream into liver cells and plays a crucial role in the invasion and infection processes of liver viruses such as the hepatitis B virus. Therefore, structural and functional analysis of NTCP is crucial for liver disease treatment. In a previous study, two NTCP conformations, the inward-facing conformation (EMD-13593/PDB 7PQG) and the open-pore conformation (EMD-13596/PDB 7PQQ), were resolved using cryo-electron microscopy. We performed dynamic predictions using RMSF-net on these two conformations and revealed dynamic changes during the transition from the inward-facing to the open-pore state, as shown in Fig. 3d. Compared to the inward-facing state, the open-pore conformation displayed increased dynamics in the TM1 and TM6 regions of the panel domain, the TM7 and TM9 regions of the core domain, and the X motif in the center. Other regions maintained stability or exhibited enhanced stability. We hypothesize that the increased flexibility in these regions is associated with the relative motion between the panel and core domains in the open-pore state, facilitating the transport of bile acids and the binding of the preS1 domain of HBV in this conformation.

Despite its high performance, RMSF-net is trained and tested on relatively short (30 ns) MD simulations. To determine whether the structural fluctuation patterns obtained from a 30 ns simulation are stable enough for model training, we performed longer simulations on three proteins46,48,49 (PDB 3JCL, 6QNT, and 6SXA). The detailed setup and results are provided in the Supplementary Information (section MD simulations over longer time periods). In these cases, the results from the previous 30 ns simulations correlated strongly with the results up to 500 ns, indicating that a 30 ns simulation can effectively capture stable structural fluctuation modes and is thus qualified to serve as the foundation for our model training. The RMSF-net predictions also maintain high correlations with the long-term MD simulations, showing that the trained model has effectively absorbed the structural fluctuation patterns in the MD simulations.

In addition, to make large-scale simulations feasible, we removed small molecules and ionic ligands during the MD simulations, but ligands are included in the input density of RMSF-net. The simulation results may therefore be inaccurate regarding the flexibility of ligand-containing proteins, especially near the ligands. To assess the impact of this treatment, we performed MD simulations for two protein systems containing ligands: the cargo-bound AP-1:Arf1:tetherin-Nef closed trimer monomeric subunit50 (EMD-7537/PDB 6CM9) and the spastin hexamer in complex with substrate51 (EMD-20226/PDB 6P07). Configurations of the MD simulations are provided in the Supplementary Information (section MD simulation configurations for ligand-binding proteins and membrane proteins). The simulations with and without ligands exhibited high RMSF correlations of 0.748 and 0.859 for the two proteins, respectively (Table 2). The predictions of RMSF-net also maintain comparable correlations with the additional simulations, 0.757 and 0.859, respectively. This indicates that, overall, small-molecule ligands have little impact on protein structural flexibility. However, we observed that near the ligands, the RMSF obtained from simulations without ligands is indeed higher than that obtained from simulations with ligands, as shown in Fig. 4. In these regions, the values predicted by RMSF-net are closer to the simulations with ligands, i.e., the predicted RMSF is lower. Our understanding of this phenomenon is that, on one hand, the ligands are small relative to the protein structure, so their scope of influence is limited and does not greatly affect the global distribution of structural flexibility. On the other hand, the local structures near the ligands relaxed in the original, ligand-free simulations, whereas the deep model uses the learned relationship between protein internal structure and flexibility to infer the dynamics of the ligand-bound structure. Although this is only an approximation, it has a corrective effect.

a Results for 6CM9. The top panel shows scatter plots of RMSF on residues, with colors corresponding to the three approaches as indicated in the legend. The middle panels present the visualizations of RMSF from the three approaches on the PDB structure, with colors corresponding to the normalized RMSF values, indicated by the color bar on the right. Ligands (GTP) are shown as yellow sticks, residues within 5 Å of the ligands are shown as surfaces, and black boxes indicate their positions. The bottom panels display the RMSF of structures near the ligands separately, highlighting regions within the black boxes in the middle panels. b Results for 6P07. The top panel shows scatter plots of RMSF on residues, with colors corresponding to the three approaches as indicated in the legend. The middle panels present the visualizations of RMSF from the three approaches on the PDB structure, with colors corresponding to the normalized RMSF values, indicated by the color bar on the right. Ligands (ADP, ATP) are shown as yellow sticks, and residues within 5 Å of the ligands are shown as surfaces. The bottom panels display the RMSF of structures near the ligands separately.

Another aspect of MD simulation concerns membrane proteins. In cryo-EM sample preparation, membrane proteins are purified and separated from membrane structures (Van Heel et al., 2000), which means that the structure and dynamics of membrane proteins in cryo-EM reflect their free state in solution. Correspondingly, our dynamic simulations were also performed in the membrane-free state, so our model applies to proteins in their free state in solution. In vivo, however, membrane proteins are attached to the cell membrane, so including the membrane in the simulation environment will more accurately capture their dynamics in biological systems. To explore the differences introduced by membranes, we conducted MD simulations in a membrane environment for two membrane proteins: the cryo-EM structure of TRPV5 (1-660) in nanodisc52 (EMD-0593/PDB 6O1N) and the cryo-EM structure of the MscS channel YnaI53 (EMD-6805/PDB 5Y4O). The configurations of the MD simulations are provided in the Supplementary Information (section MD simulation configurations for ligand-binding proteins and membrane proteins). The results, shown in Table 3 and Fig. 5, demonstrate that the RMSF obtained from MD simulations with and without membranes remains consistent overall, with correlations of 0.767 and 0.678 on 6O1N and 5Y4O, respectively. The correlations between the RMSF-net predictions and the MD simulations with membranes are 0.804 and 0.675, respectively, for these two proteins. As shown in Fig. 5b, the presence of the membrane leads to some changes in the flexibility of 5Y4O: in the upper region of 5Y4O, the RMSF obtained from the MD simulation with the membrane is lower than that from the membrane-free simulation and from RMSF-net. We speculate that this region is influenced by membrane constraints, resulting in decreased flexibility, but the overall flexibility distribution remains largely unchanged. In addition, we observe that on these two highly symmetrical structures, the predictions of RMSF-net maintain a symmetry similar to that of the MD simulations.

a Results for 6O1N. b Results for 5Y4O. The top panels show scatter plots of RMSF on residues, with colors corresponding to the three approaches indicated in the legend. The bottom panels present the visualizations of RMSF from the three approaches on the PDB structure, with colors corresponding to the normalized RMSF values, indicated by the color bar in the middle.

The experimental results also indicate that our method performs consistently across cryo-EM maps of varying resolutions. The resolution of a cryo-EM map signifies the minimum scale at which credible structures are discernible within the map. In our dataset, there are more maps in the 3–4 Å resolution range than in the 2–3 Å range, as shown in Fig. 6a. Considering that our method takes cryo-EM maps of various resolutions into network training, concerns arise regarding potential model bias towards specific map resolutions. To address this concern, we analyzed the test performance of RMSF-net_cryo, Occ2RMSF-net, and RMSF-net compared to RMSF-net_pdb on maps of different resolutions. The results demonstrate that these models exhibit no significant performance differences across resolution ranges, as shown in Fig. 6b–d. Only a minor deviation is observed in the 2–2.5 Å range, which is statistically insignificant given the limited sample size of 7 data points. This underscores that the neural networks can fit data indiscriminately within the high-resolution range of 2–4 Å, without the need to process the maps to a uniform resolution at the preprocessing stage. The similar distributions of RMSF-net_cryo and Occ2RMSF-net across resolutions, shown in Fig. 6b, c, further support the conclusion that dynamic inference from cryo-EM maps relies on an intermediate process of structural information extraction. Furthermore, Fig. 6d demonstrates that, on average, RMSF-net outperforms RMSF-net_pdb across the resolution ranges, indicating that cryo-EM maps have an auxiliary effect on PDB models for dynamic analysis at all of these resolutions.

a Resolution distribution of cryo-EM maps in the dataset. b–d Show the performance of RMSF prediction methods on maps of different resolutions in the dataset. For resolution groups 2–2.5, 2.5–3, 3–3.5, and 3.5–4, the sample sizes are N = 7, 42, 120, and 166. b Distribution of correlation coefficients (CC) of RMSF-net_cryo on maps across the four resolution ranges. c Distribution of correlation coefficients of Occ2RMSF-net across the four resolution ranges. d Distribution of the correlation difference between RMSF-net and RMSF-net_pdb on the four resolution ranges. In b–d, the center white line and the lower and upper bounds of the box in each violin plot indicate the median, the first quartile (Q1), and the third quartile (Q3), respectively. The whiskers of the boxes indicate Q1 − 1.5*IQR and Q3 + 1.5*IQR, with IQR representing the interquartile range. The bounds of the violin plots show the minima and maxima, and the width indicates the density of the data. e, f Show the relationship between RMSF-net run time and data size. e The relationship between the RMSF-net run time on CPUs and the weighted sum of map size and PDB size among data points of the dataset. The map size and PDB size are weighted as 0.0015:0.9985 from linear regression, both taking k voxels as the units. The weighted sum range is set below 1000, which encompasses the majority of the data. The full range is presented in Supplementary Fig. S12a. f The relationship between the RMSF-net run time on GPUs and the map size among data points of the dataset. The map size is set below 300³, which encompasses the majority of the data. The full range is presented in Supplementary Fig. S12b.
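The box statistics described in the caption can be computed directly from a group of CC values; a minimal sketch (function name is illustrative) of the quartile and whisker rule stated above:

```python
import numpy as np

def box_stats(cc_values):
    """Median, quartiles, and whisker bounds as described in the caption:
    whiskers at Q1 - 1.5*IQR and Q3 + 1.5*IQR, clipped to the data range."""
    x = np.asarray(cc_values, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    lo = max(x.min(), q1 - 1.5 * iqr)   # lower whisker cannot extend past the minimum
    hi = min(x.max(), q3 + 1.5 * iqr)   # upper whisker cannot extend past the maximum
    return {"median": med, "Q1": q1, "Q3": q3,
            "whisker_low": lo, "whisker_high": hi}
```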

In addition to its superior performance, RMSF-net demonstrates rapid inference capabilities and minimal storage overhead, whether running on high-performance GPUs or CPUs. Using a computer equipped with 10 CPU cores and 2 NVIDIA Tesla A100 GPUs, we conducted runtime assessments on the dataset, revealing a strong linear relationship between the execution time of RMSF-net and the data size. Moreover, compared to conventional MD simulations and DEFMap, this approach achieves substantial acceleration in processing speed.

As shown in Fig. 6e, Supplementary Figs. S11c and S12a, when executed on CPUs, RMSF-net's runtime is directly proportional to the weighted sum of cryo-EM map size and PDB model size, with a weight ratio of map size to PDB size of 0.0015:0.9985, both measured in units of k voxels. For most data, the weighted sum of map size and PDB size is within 500 k voxels, and processing can be completed in under a minute. When executed on GPUs, since most of the time is spent on preprocessing, the total time is linearly related to the map size, as shown in Fig. 6f, Supplementary Figs. S11a and S12b. For most maps with sizes below 300³ voxels, computations can be completed within 30 s. Detailed information regarding the RMSF-net processing time is provided in the Supplementary Information (section Details of the RMSF-net processing time).
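A weight ratio like the 0.0015:0.9985 reported for CPU runs can be recovered by an ordinary least-squares fit of runtime against the two size measures. A hedged sketch under that assumption (synthetic interface, not the authors' code):

```python
import numpy as np

def fit_size_weights(map_sizes, pdb_sizes, runtimes):
    """Fit runtime ~ a*map_size + b*pdb_size + c by least squares, then
    normalize (a, b) to sum to 1, giving a weight ratio for the two sizes
    (sizes in k voxels, runtimes in seconds)."""
    A = np.column_stack([map_sizes, pdb_sizes, np.ones(len(runtimes))])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(runtimes, dtype=float), rcond=None)
    weights = (a / (a + b), b / (a + b))
    return weights, c

# Usage on synthetic timings generated with known weights 0.1 and 0.9:
w, intercept = fit_size_weights([100, 200, 300, 400],
                                [50, 80, 10, 200],
                                [60.0, 97.0, 44.0, 225.0])
```

On real measurements, plotting runtime against `w[0]*map_size + w[1]*pdb_size` would reproduce the linear relationship shown in Fig. 6e.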

For comparative analysis, we selected ten relatively small maps from our dataset and performed runtime assessments using RMSF-net and DEFMap. As presented in Supplementary Tables S1–S3, across these ten data points, DEFMap exhibited processing times of 45.94 ± 31.84 minutes on CPUs and 37.51 ± 10.51 minutes on GPUs, concurrently generating data files of size 11.98 ± 8.30 GB. In contrast, RMSF-net showcased remarkable efficiency, with runtime durations of 16.66 ± 9.60 s and 3.09 ± 1.45 s on CPUs and GPUs, respectively, and yielded data files of 66.30 ± 31.06 MB. Both in terms of storage occupancy and time consumption, RMSF-net demonstrates significant improvements over DEFMap. Furthermore, in contrast to extended MD simulations, which often require hours or even days to perform simulations of 30 ns on individual proteins, RMSF-net delivers predictions with an average correlation of up to 0.75 and saves time and resources significantly, making it an ultra-fast means of performing protein dynamics analysis.

Read this article:
Accurate Prediction of Protein Structural Flexibility by Deep Learning Integrating Intricate Atomic Structures and Cryo ... - Nature.com


Empowering Manufacturing Innovation: How AI & GenAI Centers of Excellence can drive Modernization | Amazon Web … – AWS Blog

Introduction

Technologies such as machine learning (ML), artificial intelligence (AI), and Generative AI (GenAI) unlock a new era of efficient and sustainable manufacturing while empowering the workforce. Areas where AI can be applied in manufacturing include predictive maintenance, defect detection, supply chain visibility, demand forecasting, product design, and many more. Benefits include improving uptime and safety, reducing waste and costs, improving operational efficiency, enhancing products and customer experience, and faster time to market. Many manufacturers have started adopting AI. Georgia-Pacific uses computer vision to reduce paper tears, improving quality and increasing profits by millions of dollars. Baxter was able to prevent 500 hours of downtime in just one facility with AI-powered predictive maintenance.

However, many companies struggle (per a recent World Economic Forum study) to fully leverage AI due to weak foundations in organization and technology. Reasons include lack of skills, resistance to change, lack of quality data, and challenges in technology integration. AI projects often get stuck at a pilot stage and do not scale for production use. Successfully leveraging AI and GenAI technologies requires a holistic approach across cultural and organizational aspects, in addition to technical expertise. This blog explores how an AI Center of Excellence (AI CoE) provides a comprehensive approach to accelerate modernization through AI and GenAI adoption.

The manufacturing industry faces unique challenges for AI adoption as it requires merging the traditional physical world (Operational Technology, or OT) and the digital world (Information Technology, or IT). Challenges include cultural norms, organizational structures, and technical constraints.

Factory personnel deal with mission-critical OT systems. They prioritize uptime and safety and perceive change as risky. Cybersecurity has historically not been a high priority, as these systems were isolated from the open internet. Traditional factory operators rely on experience gained through years of making operational decisions. Understanding how AI systems arrive at their decisions is crucial for gaining their trust and overcoming adoption barriers. Factory teams are siloed, autonomous, and operate under local leadership, making AI adoption challenging. The initial investment in AI systems and infrastructure can be substantial, depending on the approach, and many manufacturers may struggle to justify the expense.

AI relies on vast amounts of high-quality data, which may be fragmented, outdated, or inaccessible in many manufacturing environments. Legacy systems in manufacturing often run on vendor-dependent proprietary software that uses non-standard protocols and data formats, posing integration challenges with AI. Limited internet connectivity in remote locations requires overcoming latency challenges, as manufacturing systems rely on accurate and reliable real-time response. For example, an AI system needs to process sensor data and camera images in real time to identify defects as products move down the line. A slight delay in detection could lead to defective products passing through quality control. Additionally, manufacturing AI systems need to meet stringent regulatory requirements and industry standards, adding complexity to AI's development and deployment processes. The field of AI is still evolving, and there is a lack of standardization in tools, frameworks, and methodologies.

Transformative AI adoption requires commitment and alignment from both OT and IT senior leadership. OT leaders benefit by realizing that a connected, smart industrial operation simplifies work without compromising uptime, safety, security, and reliability. Likewise, IT leaders demonstrate business value through AI technologies when they understand the uniqueness of shop floor requirements. In fact, OT can be viewed as a business function enabled by IT. Integrating OT and IT perspectives is crucial for realizing AI's business value, such as revenue growth, new products, and improved productivity. Leadership must craft a clear vision linking AI to strategic goals and foster a collaborative culture to drive functional and cultural change.

While vision provides the "why" behind AI adoption, successful adoption requires that vision be translated into action. The AI CoE bridges the gap between vision and action.

Overview: The AI CoE is a multi-disciplinary team of passionate AI and manufacturing subject matter experts (SMEs) who drive responsible AI adoption. They foster human-centric AI, standardize best practices, provide expertise, upskill the workforce, and ensure governance. They develop a modernization roadmap focused on edge computing and modern data platforms. The AI CoE can start small with 2-4 members and scale as needed. For the AI CoE to be successful, it requires executive sponsorship and the ability to act autonomously. Figure 1 outlines the core capabilities of the AI CoE.

Figure 1 AI CoE capabilities

The AI CoE should champion explainable AI in manufacturing, where safety and uptime are critical. For example, when an AI model predicts equipment malfunction, a binary AI output such as "failure likely" or "failure unlikely" won't earn trust with factory personnel. Instead, an output such as "Failure likely due to a 15% increase in vibration detected in the bearing sensor, similar to historical bearing failure patterns" would make people more likely to trust the AI's advice. AWS provides multiple ways to enhance AI model explainability.
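The contrast between a bare binary flag and an explanation can be sketched in a few lines. The function name, threshold, and message format below are illustrative assumptions, not an AWS API:

```python
def explain_failure_risk(vibration_increase_pct, threshold_pct=10.0,
                         matched_pattern="historical bearing failure"):
    """Turn a raw anomaly signal into the kind of human-readable explanation
    described above, instead of a bare 'failure likely' flag."""
    if vibration_increase_pct >= threshold_pct:
        return (f"Failure likely: {vibration_increase_pct:.0f}% increase in "
                f"vibration detected in the bearing sensor, similar to "
                f"{matched_pattern} patterns.")
    return "Failure unlikely: sensor readings within normal range."
```

The design point is that the explanation carries the evidence (which sensor, how large a deviation, which historical pattern it resembles), which is what lets factory personnel verify the claim against their own experience.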

The AI CoE should partner with HR and leadership to upskill staff for the AI-powered workplace by developing career paths and training programs that leverage existing skills. GenAI solutions can help close the skills gap by showcasing how AI complements worker expertise. Leaders should emphasize how AI-enabled capabilities can free up time for complex problem-solving and interpreting AI insights. For example, Hitachi, Ericsson, and AWS demonstrated a defect-detection computer vision system, running over a private 5G wireless network, that could inspect 24 times more components simultaneously than manual inspection.

The AI CoE ensures collaboration and joint decision rights between AI solution builders and factory domain experts. Together, they work backwards from business goals, breaking down silos and converging on AI solutions to achieve desired results. Additionally, the CoE acts as a hub to pinpoint impactful AI use cases, evaluating factors such as data availability, quick success potential, and business value. For example, in a textile factory, the AI CoE can leverage data analysis to optimize energy-intensive processes, delivering cost savings and sustainability benefits. Explore additional use cases with the AWS AI Use Case explorer.

Governance and data platforms are critical for scaling manufacturing AI. The CoE establishes policies, standards, and processes for responsible, secure, and ethical AI use, including data governance and model lifecycle management. AWS offers several tools to build and deploy AI solutions responsibly. The CoE develops a secure data platform to connect diverse sources, enable real-time analysis and scalable AI, and achieve regulatory compliance. This data foundation lays the groundwork for broader AI adoption, as demonstrated by Merck's manufacturing data and analytics platform on AWS, which tripled performance and reduced costs by 50%.

The AI CoE evaluates and standardizes AI and GenAI technologies, tools, and vendors based on manufacturing needs, requirements, and best practices. AWS offers a comprehensive set of AI and Gen AI services to build, deploy, and manage solutions that reinvent customer experiences. Scaling AI requires automation. An AI CoE designs automated data and deployment pipelines that reduce manual work and errors, accelerating time-to-market. Toyota exemplifies AI deployment at scale by using AWS services to process data from millions of vehicles, enabling real-time responses in emergencies.

The value of the AI CoE should be measured in business terms. This requires a holistic approach that is a mix of both hard and soft metrics. Metrics should include business outcomes such as ROI, improved customer experience, efficiency, and productivity gains from manufacturing operations. Internal surveys can gauge employee and stakeholder sentiment towards AI. These metrics help stakeholders understand the value of the AI CoE and investments.

Figure 2 Steps for building AI CoE foundations

Setting up an AI CoE requires a phased approach, as illustrated in Figure 2. The first step is to secure executive support from both OT and IT leadership. The next step is to assemble a diverse team of experts consisting of shop floor personnel and AI IT experts. The team is trained in AI and defines the objectives of the CoE. They identify and deliver pilot use cases to demonstrate value. In parallel, they develop and enhance governance frameworks, provide training, foster collaboration, gather feedback, and iterate for continuous improvement. Integrating GenAI can further enhance the CoE's content creation and problem-solving abilities, accelerating AI adoption across the enterprise. An AI CoE evolves over time. Initially, it can take on a hands-on role, building expertise, setting standards, and launching pilot projects. Over time, it transitions to an advisory role, providing training, facilitating collaboration, and tracking success metrics. This empowers the workforce and ensures long-term AI adoption.

AI and GenAI technologies have the potential to create radical, new product designs, drive unprecedented levels of manufacturing productivity, and optimize supply chain applications. Adopting these technologies requires a holistic approach that addresses technical, organizational, and cultural challenges. The AI CoE acts as a catalyst by bridging the gap between business needs and responsible AI solutions. It fosters collaboration, training, and data solutions to optimize efficiency, cut costs, and spur innovation on the factory floor.

Artificial Intelligence and Machine Learning for Industrial

AWS Industrial Data Platform (IDP)

AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI

The organization of the future: Enabled by gen AI, driven by people

Deloitte: 2024 manufacturing industry outlook

World Economic Forum: Mastering AI quality for successful adoption of AI in manufacturing

Harnessing the AI Revolution in Industrial Operations: A Guidebook

Managing Organizational Transformation for Successful OT/IT Convergence

The Future of Industrial AI in Manufacturing

Digital Manufacturing escaping pilot purgatory

Nurani Parasuraman is part of the Customer Solutions team in AWS. He is passionate about helping enterprises succeed and realize significant benefits from cloud adoption by driving basic migration to large-scale cloud transformation across people, processes, and technology. Prior to joining AWS, he held multiple senior leadership positions and led technology delivery and transformation in financial services, retail, telecommunications, media, and manufacturing. He has an MBA in Finance and a BS in Mechanical Engineering.

Saurabh Sharma is a Technical and Strategic Sr. Customer Solutions Manager (CSM) at AWS. He is part of the account team that supports enterprise customers in their cloud transformation journey. In this role, Saurabh works with customers to drive cloud strategy and adoption, provides thought leadership on how to move and modernize workloads to help them move fast to the cloud, and drives a culture of innovation.

Matthew leads the Customer Solutions organization for our North American Automotive & Manufacturing division. He and his team focus on helping customers transform across people, process, and technology. Prior to joining AWS, Matthew led efforts for numerous organizations to transform their operational processes using automation and AI/ML technologies.

Go here to read the rest:
Empowering Manufacturing Innovation: How AI & GenAI Centers of Excellence can drive Modernization | Amazon Web ... - AWS Blog


10 Best Altcoins in July 2024 – Benzinga

For the month of July, Ethereum (ETH), Solana (SOL), and Chainlink (LINK) are our top three picks. If you are looking to buy them now, you can choose a top exchange such as eToro, Coinbase, or Kraken.

Besides Bitcoin (BTC), altcoins are also booming and striving to attract investors' attention. Since the reliability dynamics of cryptocurrencies change rapidly, investors have to improvise and adjust their crypto portfolios accordingly. Our experts work hard to find the best coins, with strong potential to perform well in both the short and long run. We have compiled a list of the top 10 altcoins to invest in for the month of July 2024.

Here is the list of our top altcoin picks for July 2024:

Our team is diligently working to keep up with trends in the crypto markets. Keep up to date on the latest news and up-and-coming coins.


Disclosure: eToro USA LLC; Investments are subject to market risk, including the possible loss of principal.

Market Cap: $415 billion

Ethereum (ETH) is a hot contender for investors as of July 2024, for a few reasons. First, Ethereum's technology is seen as innovative. It's like a robust computer network that allows people to build things on top of it, like apps and even special contracts that run automatically, called smart contracts. This opens up many possibilities for new and exciting cryptocurrency uses. Second, Ethereum is already established. Since its launch in 2015, it's become the second-biggest cryptocurrency by market value, which is a way of measuring its overall importance. It's also overcome challenges in the past, which shows its staying power. Third, recent news suggests even more interest in Ethereum. In May 2024, a big regulatory agency (the SEC) approved a spot ETF for Ethereum. This means it'll be easier for traditional investors to get involved with Ethereum, which could increase its price. On top of its technology and market position, Ethereum uses a more energy-efficient way of verifying transactions than Bitcoin. Plus, it's constantly being improved to handle more transactions at once. So, because of its cutting-edge tech, wide range of uses, and strong market presence, Ethereum is our top recommendation for altcoin investors in July 2024. It's a good choice for those looking for a cryptocurrency with growth potential and continued development.


Market cap: $68 billion

Solana (SOL) aims to provide a safe and user-friendly experience for Web3 users. After launching its own Saga smartphone, the project has decided to launch the second smartphone named Chapter 2. The network's strength is further bolstered by its thriving decentralized finance (DeFi) scene, with over $4.4 billion locked in DeFi projects, according to Defillama. This indicates a large and engaged community actively using the platform. The SEC's recent approval of spot Ethereum ETFs has some experts optimistic about similar approval for Solana ETFs. This could open the door for more traditional investors to enter the Solana market, potentially driving its price. Since its launch in 2020, Solana's fast and scalable blockchain platform has attracted significant investment and a wide range of decentralized applications. While Solana possesses impressive speed and processing power (low latency and high throughput), it has faced criticism for occasional network disruptions and concerns about centralization. Despite these drawbacks, Solana's solid technological foundation, thriving ecosystem, and potential for broader investment make it a cryptocurrency worth considering in July 2024.

VanEck Files Solana ETF Application


Market cap: $8.6 billion

Chainlink (LINK) is emerging as a top altcoin candidate for July 2024 due to its growing presence across the blockchain world. One key development is the Chainlink Cross-Chain Interoperability Protocol (CCIP). This increases Chainlink's revenue potential and expands its reach into connecting different blockchains (cross-chain functionality). Another noteworthy feature is Chainlink Functions, a serverless platform for developers to integrate real-world data (through Web2 APIs) into their smart contracts. Chainlink has been around since 2017, with its main network launching in 2019. Since then, it's achieved several milestones, including introducing SCALE, Chainlink Staking v0.1, and Chainlink Functions. Several key integrations solidify this potential. On May 29, the Celo network, a layer-2 solution built on Ethereum, announced it would integrate Chainlink's CCIP protocol, boosting its ability to connect different blockchains (cross-chain interoperability). Additionally, Gnosis developers can now leverage Chainlink's network to outsource complex computations, significantly reducing gas fees (officially launched on June 12th alongside Chainlink's Automation services). Finally, GMX, a platform for perpetual futures trading, is utilizing Chainlink's technology to improve its decentralized exchange. These developments showcase Chainlink's expanding influence and innovative capabilities, making it a strong contender for investment in July 2024.


Market cap: $9 billion

Polkadot (DOT) is making waves in the altcoin market (July 2024) thanks to its cutting-edge technology and a constant push for development. Recent innovations like the Join-Accumulate Machine (JAM) and Asynchronous Backing aim to boost the network's performance and efficiency, making it more scalable. Polkadot is also committed to fostering the future of Web3 by supporting education and startup growth, as seen in its collaboration with the Founder Institute for a Web3 cohort. The community actively engages in events like the Odyssey Program Trials on Moonbeam Network. Security and governance also improve with the Sinai Upgrade on the Acala Network. First proposed in 2016, Polkadot focused on solving scalability and security issues in the blockchain space. It achieved this through a multi-chain design and a nominated proof-of-stake (NPoS) consensus method, allowing for secure and efficient data transfer across blockchains (cross-chain). This multi-chain strategy is further highlighted by the seamless migration of the KILT Protocol to the Polkadot Relay Chain. Polkadot's proven track record, focus on development, and multi-chain approach solidify its position as a top altcoin contender in July 2024.


Market cap: ~$5.6 billion

Polygon (MATIC), launched initially as Matic Network in 2017, has become a top player in the cryptocurrency world. Polygon uses a multi-chain architecture, allowing for scalability and security through a Nominated Proof-of-Stake (NPoS) system. This technology makes it easy for other networks to migrate onto Polygon, like the recent successful migration of the KILT Protocol. Another strength of Polygon is its interoperability. Developers can easily build decentralized applications (dApps) that work seamlessly with existing Ethereum dApps using Polygon's Software Development Kit (SDK). This opens doors for broader adoption of Polygon's technology. Polygon's commitment to development goes beyond its own network. They're actively supporting the future of Web3 by partnering with the Founder Institute to educate and accelerate Web3 startups. Recent advancements like Polygon's Asynchronous Backing technology aim to improve scalability without sacrificing security. The Polygon network is also heavily focused on community engagement. Events like the Odyssey Program Trials by Moonbeam Network showcase this focus. Security and interoperability are also being prioritized, as seen with Sinai Upgrade on the Acala Network. With its impressive technology, focus on education and community, and a staggering total value locked (TVL) of $856.12 million (second only to Arbitrum in Layer 2 networks), Polygon is a strong contender in the altcoin market in July 2024.


Market Cap: $26 billion

Should you invest in Ripple (XRP) in July 2024? To answer that, let's look at Ripple's recent history. In 2023, a successful legal battle with the SEC boosted XRP's price, showing renewed confidence in the token. However, the price increase wasn't as widespread as some expected. So, is XRP for everyone? It depends on your investment style. XRP can be a good fit for general cryptocurrency enthusiasts as it is a well-known altcoin. Thematic investors focused on the financial sector might also find XRP appealing, as it targets mainstream financial institutions. Ripple (XRP) is a potential front-runner among altcoins for July 2024 due to recent developments. On June 27, Ripple's legal team highlighted a court decision criticizing the SEC, suggesting a potentially less favorable environment for the SEC's ongoing case against Ripple. This comes as the SEC's internal investigation into possible crypto conflicts of interest nears completion. Additionally, July will see Ripple unlock 1 billion XRP as part of its long-term decentralization plan. While this could put downward pressure on XRP's price, Ripple's plans for the RLUSD stablecoin offer a silver lining. This stablecoin, launching on the XRP Ledger, aims to improve cross-border transactions, potentially increasing XRP's utility and strengthening its market position. Finally, XRP's volatility makes it attractive to day traders who can capitalize on short-term price movements through technical analysis. Looking ahead, the outcome of the SEC case has the potential to impact the entire cryptocurrency industry significantly. The SEC's actions against Ripple could be the first step towards regulating all tokens as securities. This final decision could significantly affect XRP's value.


Market Cap: $10.9 billion

Avalanche (AVAX) is one of the hottest cryptocurrencies in July 2024, for two main reasons. First, Avalanche has advanced technology, making it a crypto leader. Second, more and more projects are being built on Avalanche, creating a whole ecosystem. Launched in late 2020, it offers features similar to Ethereum but works more energy-efficiently, combining a Proof-of-Stake system with its own Avalanche consensus protocol to make transactions quick. Avalanche is attracting attention as a potential July 2024 investment due to its technical strengths: faster transaction speeds and lower fees compared to Ethereum, a major player in smart contracts. However, Ethereum maintains an edge in decentralization and an established user base. While Avalanche (like Cardano and Solana) represents a new generation of blockchains with potential for future growth, it has recently experienced price dips. However, technical indicators suggest the asset might have been oversold in June and due for a rebound in July. These factors combine to make Avalanche a potentially promising investment opportunity for July.


Market Cap: $2.1 billion

Dogwifhat (WIF) is a memecoin that burst onto the scene in December 2023. Unlike many other cryptocurrencies with a specific purpose, Dogwifhat doesn't really do anything yet; its value relies purely on its popularity and on people buying and selling it like a trading card. Since it's relatively new, Dogwifhat's price has been volatile. In February 2024, it jumped from 30 cents to over $4 in just a few weeks, largely because it got listed on a giant crypto exchange, Binance. A few things affect Dogwifhat's price. First, a lot of fans online are excited about it. Second, because it's a memecoin, its price can be especially volatile, meaning it can swing up and down quickly. Finally, if the overall crypto market is doing well, Dogwifhat might go up in price, too. So, what does this all mean for July 2024? Because it's so new and unpredictable, it's hard to say what will happen to Dogwifhat's price. But with its strong community and the possibility of a good crypto market, it might be one to watch.

Market Cap: $18.9 billion

Toncoin, born from Telegram but redesigned after legal hurdles, is a fast and scalable blockchain network. Integrated with Telegram, it aims to smooth transactions and support decentralized applications (dApps). Toncoin has established itself as a player in the market, boasting a market cap of over $18.9 billion and strong trading activity.

So, is Toncoin a good investment? Its deep Telegram integration, fast and scalable network, and strong market traction are what make it attractive.

However, Toncoin also carries some risks. Its price can fluctuate significantly (volatility), cryptocurrency regulations are still evolving, and Toncoin isn't yet widely available on all major exchanges. Before investing, carefully research Toncoin, consider the risks involved, and seek reliable investment advice.


Market Cap: $18 billion

Based on a popular internet meme, Dogecoin (DOGE) started as a goof in 2013. Created by Billy Markus and Jackson Palmer, it quickly gained a passionate group of fans who liked its lighthearted approach to cryptocurrency. Dogecoin became known for its charitable giving and friendly online community, where tipping each other with DOGE became a popular trend. Unlike many cryptocurrencies with specific purposes, Dogecoin has no major use case yet. It might be used for payments someday, but for now, its value depends on its fans and the overall health of the crypto market. Whenever someone famous like Elon Musk tweets something positive about Dogecoin, its price often jumps. For example, Elon Musk said he'd eventually let people buy Tesla cars with Dogecoin! With its strong community and the possibility of a good crypto market in July 2024, Dogecoin might be a cryptocurrency to keep an eye on. However, it's important to remember it's still a bit unpredictable since it started as a joke.

In the world of cryptocurrency, Bitcoin reigns supreme, but it's not alone. There's a whole category of digital currencies known as altcoins, short for alternative coins. As the name suggests, altcoins are all the cryptocurrencies besides Bitcoin.

Some altcoins were created to address perceived weaknesses in Bitcoin's design, like scalability or transaction speeds. Others focus on entirely new applications for cryptocurrency technology, such as powering decentralized finance (DeFi) projects or enabling the creation and trading of Non-Fungible Tokens (NFTs). This variety makes the altcoin market a fascinating and ever-evolving space.

Altcoins offer a vast and colorful spectrum of cryptocurrencies, each with distinct purposes. Well-known examples include stablecoins pegged to real-world assets (like Tether or USD Coin), Layer 1 blockchains as platforms for other applications (like Ethereum or Solana), and tokens serving specific utilities or functions within a project. We can also find meme coins (like Dogecoin or Shiba Inu) driven by online trends and governance tokens that allow holders to influence decisions on a blockchain project.

Take a look at some pros and cons.

Pros

- A wider range of technologies and use cases than Bitcoin
- Potentially higher returns for early backers of successful projects

Cons

- Higher volatility and risk than Bitcoin
- Evolving regulation and potential security vulnerabilities

Take these actions into account before investing in altcoins:

1. Conduct thorough research on any project before investing. Key points to consider in any altcoin project are its use cases, road map, team, and technology used.

2. Assess the risk of your intended altcoins, considering factors such as market volatility, regulatory challenges, and potential security vulnerabilities.

3. Diversify by spreading your investment across several different altcoins to minimize risk.

4. Set a budget before you invest in altcoins, and risk only an amount you can afford to lose.

5. Choose a reliable exchange with a good track record, strong security measures, and a user-friendly interface.

6. Stay informed and conduct due diligence before investing in altcoins.

The IRS treats Bitcoin and other digital assets, including altcoins, as taxable property. Whether you are mining, selling, or buying crypto, keeping precise records and understanding how taxes apply is essential to avoid penalties.
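As an illustration of the record-keeping involved, the sketch below computes realized gains with FIFO lot matching, one common cost-basis method. The trades and prices are hypothetical, and this is a simplified sketch, not tax advice; it also assumes you never sell more than you hold.

```python
from collections import deque

def fifo_gain(buys, sells):
    """Realized gain using FIFO lot matching.
    buys and sells are lists of (quantity, price_per_unit) tuples.
    Assumes total sold never exceeds total bought."""
    lots = deque(buys)  # oldest lots are sold first under FIFO
    gain = 0.0
    for qty, sell_price in sells:
        while qty > 1e-12:
            lot_qty, cost = lots[0]
            used = min(qty, lot_qty)
            gain += used * (sell_price - cost)
            qty -= used
            if used >= lot_qty:
                lots.popleft()          # lot fully consumed
            else:
                lots[0] = (lot_qty - used, cost)  # partial lot remains
    return gain

# Hypothetical trades: buy 1 BTC at $30k, 1 BTC at $50k, then sell 1.5 BTC at $60k.
gain = fifo_gain([(1.0, 30000), (1.0, 50000)], [(1.5, 60000)])
print(gain)  # 1.0*(60000-30000) + 0.5*(60000-50000) = 35000.0
```

Note that the IRS also permits other lot-identification methods, which can yield a different taxable gain from the same trades.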

Choosing between Bitcoin and altcoins hinges on your investment goals and risk tolerance. Bitcoin, the well-established leader, offers stability and wider recognition but may have less room for explosive growth. In contrast, altcoins boast a wider range of technologies and potentially higher returns, but they're also riskier and more volatile.

Altcoins present a diverse and dynamic landscape for investors and users. From stablecoins pegged to real-world assets to utility tokens powering specific projects, thousands of altcoins offer unique functionalities. While the market's volatility poses risks, careful research can unlock significant returns. Building a portfolio with a mix of established and emerging altcoins helps mitigate risks and capture opportunities in this ever-evolving space.


There are many altcoins out there that could potentially deliver 100x returns. According to Reddit, ADA, XLM, AERO, ICP, SOL, ALGO, DOT, ARB, RIO, and LINK are the go-to coins that may pump in the near future.


According to our experts, ETH, LINK, and SOL are the coins that may explode in 2024. However, these are merely predictions, and it is impossible to know where the crypto market will stand tomorrow.


Ethereum is the top altcoin with a solid potential to reach $10,000.


According to our experts, DOT, ETH, SOL, LINK, and MATIC are the coins that may boom in 2025.

See original here:
10 Best Altcoins in July 2024 - Benzinga


Exploring why bitcoin's price surge to $62.5k affected altcoins – The National

A peek into the Bitcoin price recovery to $62.5k

The world of cryptocurrencies is always abuzz with developments, and Bitcoin, as the flagbearer of all cryptos, is no exception. News of Bitcoin's price recovery to $62.5k has been making the rounds, triggering conversations in trading circles about potential breakout points.

An unprecedented level of capital inflow from the United States has seen retail traders and institutional investors alike show increased interest in Bitcoin. This surge in demand, coupled with the US Federal Reserve's more sustainable and prudent approach to tackling inflation, has positively impacted the price of Bitcoin. On the back of these recent developments, Bitcoin's price jumped to a recovery high of $62,500.

The bullish prediction paints a positive picture suggesting that traders who had previously sold their Bitcoin holdings are now returning to make new investments, resulting in an impressive bullish trend. However, as always in the cryptocurrency market, it is essential to maintain a cautious stance and conduct thorough due diligence before making any investment decisions.

Apart from Bitcoin's price dynamics, there's a lot happening in the thriving world of altcoins. Bitcoin's recent price recovery seems to have had a ripple effect on the altcoin market, triggering notable price action in altcoins such as TON, AVAX, KAS, and XMR.

Toncoin (TON), Avalanche (AVAX), Kaspa (KAS), and Monero (XMR) all experienced significant price increases in direct correlation with Bitcoin's latest jump. The surge in these lesser-known coins showcases how Bitcoin's financial health can influence the overall well-being of the cryptocurrency sphere.

However, it's crucial to note that the relationship between Bitcoin and altcoins is not always direct. Sometimes a spike in Bitcoin's price can lead to a decrease in the value of altcoins, and vice versa. Therefore, potential investors should take time to understand these complexities before deciding to invest.

An understanding of the Bitcoin-altcoin relationship is vital for any investor looking to venture into the cryptocurrency market. Most altcoins exhibit a directly proportional relationship with Bitcoin: when Bitcoin's price surges, the altcoin market generally follows suit due to their interlinked financial ecosystems.

However, this correlation is not fixed as various factors can introduce unpredictability in this relationship. Therefore, the vigilant investor must always factor in the multifaceted dynamics of this correlation when assessing cryptocurrency investments.
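To make this correlation idea concrete, here is a minimal, illustrative sketch that computes the Pearson correlation between Bitcoin's and an altcoin's daily returns. The price series are synthetic, not real market data, and a handful of days is far too small a sample to draw real conclusions from:

```python
import statistics

def daily_returns(prices):
    """Simple day-over-day percentage returns from a price series."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def pearson_corr(xs, ys):
    """Pearson correlation between two equal-length return series (-1 to +1)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic, illustrative closing prices -- not real market data.
btc = [60000, 61200, 60500, 62500, 62100]
alt = [7.00, 7.20, 7.05, 7.45, 7.38]

corr = pearson_corr(daily_returns(btc), daily_returns(alt))
print(round(corr, 3))  # close to +1: the altcoin tracked Bitcoin in this sample
```

A value near +1 would suggest the altcoin moved with Bitcoin over the window, near 0 no linear relationship, and near -1 opposite moves; in practice the correlation drifts over time, which is exactly the unpredictability described above.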

While crypto investments promise exciting returns, they also come with their fair share of risks. Understanding these risks and manoeuvring through the volatile market requires knowledge, patience, and resilience. Cryptocurrencies like Bitcoin and the altcoins that follow it offer a golden opportunity, but one that needs to be approached with caution and foresight.

Being mindful of the shifts and trends in the industry and understanding the intricacies of these digital assets can indeed pave the way for informed decisions in crypto investments. Always remember: no investment is risk-free, and rigorous analysis is the key to potential success in this fascinating yet unpredictable world of cryptocurrencies.

Jake Morrison is an insightful cryptocurrency journalist and analyst, renowned for his deep understanding of the volatile and fascinating world of digital currencies. At 30 years old, Jake combines a background in Computer Science, with a degree from a reputable tech college, and a passion for decentralized finance, making him a prominent figure in the crypto journalism landscape.

Starting his career as a software developer with a focus on blockchain technologies, Jake quickly realized that his true calling lay in educating others about the potential and pitfalls of cryptocurrencies. Transitioning to journalism, he now serves as a leading voice for a major online financial news platform, specializing in the crypto category.

Jake's articles are a blend of technical analysis, market predictions, and feature stories on the latest in blockchain innovation. He has a talent for breaking down complex crypto concepts into understandable terms, making his writing accessible to both seasoned traders and crypto novices alike. His coverage spans a wide range, from Bitcoin and Ethereum to lesser-known altcoins, as well as the evolving regulatory landscape surrounding digital currencies.

What sets Jake apart is his critical approach to the hype that often surrounds the crypto space. He emphasizes the importance of due diligence and risk management, providing his readers with the tools they need to navigate the market intelligently. His investigative pieces on crypto scams and security breaches have been instrumental in raising awareness about the importance of security in digital asset investments.

Beyond his writing, Jake is an active participant in crypto conferences and online forums, where he shares his expertise and engages with the community. He also hosts a popular podcast that delves into the latest crypto trends, featuring interviews with leading figures in the blockchain space.

Jake's commitment to transparency and education in the cryptocurrency world has made him a trusted source of information and analysis. Through his work, he aims to foster a more informed and cautious approach to cryptocurrency investment, contributing to the maturity of the space.

Read more here:
Exploring why bitcoin's price surge to $62.5k affected altcoins - The National


Prominent Analyst Shares His Top Altcoin Picks for July 2024 – CryptoGlobe

In a recent video update, crypto analyst Miles Deutscher provides a comprehensive analysis of the current crypto market and highlights his top altcoin picks for July 2024.

Historical July Bitcoin Performance

Deutscher starts by examining the historical performance of Bitcoin in July. According to Deutscher, the month of June typically results in negative returns for Bitcoin, averaging -6.96% over the past five years. In contrast, July tends to be more favorable, with an average performance of +7.39%. Deutscher notes that while July has historically been better, the cyclical nature of the market suggests that August and September could be weaker months.

Quarterly Returns and Market Cycles

Deutscher highlights the seasonal trends in Bitcoin's performance, noting that Q3 is typically the weakest quarter, with an average return of only 5.87%. He points out that Q1 and Q4 are historically the strongest quarters. According to Deutscher, understanding these patterns can help investors position themselves better for future opportunities.

Bitcoin Price Analysis

Deutscher provides a technical analysis of Bitcoin, emphasizing the importance of the $60K support level. He explains that despite some liquidation wicks, Bitcoin has held this level, indicating strong buying interest. Deutscher advises that buying at range lows is typically profitable, though it requires mental fortitude to counter-trade market sentiment.

Upcoming Crypto Catalysts

Deutscher discusses several upcoming catalysts that could impact the crypto market. These include the potential listing of an Ethereum spot ETF, the distribution of $16 billion in cash to FTX customers, and the ongoing market activity surrounding meme coins. Deutscher suggests that these events could create opportunities for significant market movements.

Deutscher's Top Altcoin Picks for July 2024

Deutscher shares his top altcoin picks for July, each selected based on its narrative strength, catalyst potential, and current price levels.

1. Ethereum (ETH)

Deutscher likes Ethereum for its liquidity and relatively lower volatility. He mentions the upcoming Ethereum spot ETF as a potential catalyst.

2. Pendle (PENDLE)

Deutscher highlights Pendle as a strong play due to its involvement in real-world assets and liquid staking. He notes recent price drops due to market expiry but sees this as a buying opportunity.

3. Ondo (ONDO)

Deutscher sees potential in ONDO, particularly for scalping trades. He advises watching for key support levels around $1 and $3.20.

4. Pepe (PEPE)

Deutscher is bullish on Pepe, citing its strong meme value and potential for significant price appreciation during bull runs.

5. Solana (SOL)

Deutscher includes Solana as a major part of his portfolio, noting its role in the meme coin ecosystem and its strong performance during liquidation events.

6. Prime (PRIME)

Deutscher views Prime as an attractive long-term hold, especially due to its involvement in AI and gaming.

7. Bittensor (TAO)

Deutscher believes Bittensor could be one of the easiest 4x opportunities in the market due to its dominance in the AI sector.

Additional Insights

Deutscher also touches on several other narratives and altcoins, including Everclear (NEXT) and Lingo (LNGO). He discusses the importance of chain abstraction and the potential for utility-based coins to perform well in the future.

Market Strategy

Deutscher emphasizes the importance of maintaining a disciplined approach during periods of market chop. He advises accumulating during major sell-offs and taking profits during significant rallies. Deutscher's strategy revolves around counter-trading market sentiment and positioning for long-term gains.

View post:
Prominent Analyst Shares His Top Altcoin Picks for July 2024 - CryptoGlobe


Next Altcoin Season: Why Meme Coins Will Outperform? – BeInCrypto

As cryptocurrency markets evolve, meme coins are poised to lead the next altcoin season. With Bitcoin in a consolidation phase, analysts are shifting focus to these speculative yet popular assets.

Originally created as jokes or parodies, meme coins have evolved beyond their novelty origins to become significant profit generators.

In the first half of 2024, meme coins ranked among the top earners in the crypto market. This trend signifies a shift in investor sentiment and market dynamics, suggesting a sustainable movement rather than a temporary surge.

"If an altcoin season happens, memes will outperform the alts. If an altcoin season doesn't happen, memes will outperform the alts," meme coin analyst Murad Mahmudov predicted.

Read more: 7 Hot Meme Coins and Altcoins that are Trending in 2024

Mahmudov targets meme coins within the Ethereum (ETH) or Solana (SOL) ecosystems, specifically those with market caps between $5 million and $200 million. He looks for coins that have reached or are approaching critical mass, indicating a strong, cult-like community.

Mahmudov believes in a straightforward investment strategy: buy and hold for over a year, avoiding derivatives and volatile micro-cap meme coins.

Among his high-conviction picks is American Coin (USA), believed to benefit from national events like July 4, the Olympics, or the US Elections. These events could catalyze significant price movements.

"My thesis is simple: identify the number 1 coin in each meme coin sub-category and simply buy and hold. This is the number 1 country coin, whose citizens are the wealthiest and most represented on crypto Twitter," Mahmudov explained.

Other meme coins in his portfolio include Popcat (POPCAT), Retardio (RETARDIO), and GigaChad (GIGA), which highlight the diversity within the meme coin sector.

The cultural resonance of meme coins is a major draw. They reflect contemporary internet culture, mixing humor, critique, and community spirit. This cultural connection engages a broad demographic, particularly young, tech-savvy investors active on Crypto Twitter.

Miles Deutscher, another crypto influencer, supports this view. He notes Pepe (PEPE) and Foxy (FOXY) as top picks, praising their balance of return potential and risk, and observes that meme coins often outperform more fundamentally driven altcoins during major market downturns.

"Meme coins, in general, I don't think you can really fade in terms of having some positioning there in your portfolio. They are still the strongest coins in the market. They continue to exhibit relative strength," Deutscher said.

Read more: 11 Top Solana Meme Coins to Watch in July 2024

Investors are increasingly sophisticated about meme coins, seeking quick profits and looking for assets with longevity and cultural impact. This strategy reflects wider investment trends where narrative and community engagement can significantly influence an assets value.

However, market participants should understand that the current dominance of meme coins is driven more by culture than by the fundamentals of individual coins. A diversified portfolio and thorough research are essential for anyone considering meme coins as part of their investment strategy. It's important to note that retail investors' behavior can sometimes skew market sentiment regarding legitimate projects.

"Retail investors are drawn to meme coins because they offer the same opportunities as venture capitalists during seed, pre-seed, and private sale rounds. Platforms like Solana provide technological and affordable infrastructure for launching meme coins on decentralized exchanges. While meme coins have a higher failure rate compared to traditional investments, new coins keep entering the market due to the low development costs and sustained demand from contrarian financial investors. It's challenging to view them as long-term investments in the traditional sense, but the niche and industry are here for the long run. Long-term investments in meme coins do exist, like Dogecoin, but investors should recognize the speculative nature of these assets," Jonas Dovydaitis, Co-Founder & CEO at PAiT, told BeInCrypto.

Disclaimer

In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content. Please note that our Terms and Conditions, Privacy Policy, and Disclaimers have been updated.

More here:
Next Altcoin Season: Why Meme Coins Will Outperform? - BeInCrypto


OkayCoin Elevates Ethereum Staking Returns in Anticipation of Altcoin Surge – GlobeNewswire

Los Angeles, USA, July 04, 2024 (GLOBE NEWSWIRE) --

In a strategic move to capitalize on the anticipated surge in altcoin markets, OkayCoin, a premier platform in the cryptocurrency staking arena, today announced significant enhancements to its Ethereum staking yields. Under the leadership of CEO William Miller, OkayCoin is setting new standards for high-yield staking opportunities, positioning its users to maximize returns as market dynamics evolve.

"Anticipating a significant uptick in the altcoin sector, we have proactively optimized our Ethereum staking services to offer some of the highest yields available in the market today," said William Miller, CEO of OkayCoin. "This initiative is designed to equip our users with robust tools to increase their earnings potential ahead of expected market movements."

Strategically Increasing Staking Yields

The enhancements to OkayCoin's Ethereum staking program include increased interest rates and improved reward structures that are designed to attract both new and seasoned investors. These changes come at a crucial time when the cryptocurrency community is buzzing with predictions of a strong altcoin season, which typically sees increased activity and price spikes across alternative cryptocurrencies outside of Bitcoin.

Empowering Users with Competitive Staking Options

OkayCoin's revamped Ethereum staking strategy includes several key features:

Education and Support to Navigate Market Changes

Understanding the complexities of crypto market trends and their implications for staking can be daunting for many investors. To address this, OkayCoin has ramped up its educational initiatives, offering comprehensive resources and expert support to help users understand market trends and optimize their staking strategies.

"We are committed to providing our users with not only the tools but also the knowledge they need to succeed in the evolving crypto landscape," added Miller. "Our educational programs are designed to demystify market trends and empower investors to take strategic actions based on informed insights."

Enhancing Security in Volatile Markets

With increased yields and market activity comes the need for heightened security measures. OkayCoin has implemented advanced security protocols to ensure that all staking operations are protected against potential cyber threats and market volatility.

Future Outlook and Expansion Plans

As OkayCoin continues to anticipate and react to market trends, it is also expanding its service offerings beyond Ethereum to include a variety of other promising altcoins. This expansion is part of the company's broader strategy to offer diverse staking opportunities across a range of cryptocurrencies, thereby catering to a wider audience and fostering a more inclusive cryptocurrency ecosystem.

"We see this enhancement of our Ethereum staking yields as just the beginning," Miller concluded. "OkayCoin is poised to lead the charge in offering lucrative staking options that align with market trends and investor expectations. We look forward to continuing to innovate and expand our offerings to meet the needs of our global user base."

Expanding Staking Options

Adding to its robust suite of staking options, OkayCoin now offers a diverse range of packages designed to meet the needs of different investor types, including:

Each package ensures the return of principal post-staking, enhancing investor confidence, and is supported by OkayCoin's unwavering commitment to security, simplicity, and transparency.

Security and Compliance

Recognizing the importance of security, especially in a fluctuating market, OkayCoin has enhanced its platform with state-of-the-art security features that safeguard investor assets against potential threats. Additionally, OkayCoin adheres strictly to regulatory standards, ensuring that all staking activities are compliant with global financial regulations.

Looking Forward

With the crypto market's rapid growth and the increasing popularity of staking as a passive income stream, OkayCoin's strategic blueprint is timely. It positions the company to continue leading the charge in innovation and service excellence in the cryptocurrency staking space.

"We are committed to continuously evolving and adapting our services to meet the needs of our users," Miller concluded. "This strategic blueprint is just the beginning. We look forward to empowering our clients to achieve greater financial success and security through effective crypto staking strategies."

For more information about how to get started with OkayCoin and make the most of the crypto summer, visit https://okaycoin.com or use the media contacts below.

Media Contact Details
Contact Name: William Miller
Contact Email: william (at) okaycoin.com
Company Address: 525 Flower St, Los Angeles, CA 90071 USA
City/Country: Los Angeles, USA
Website: https://okaycoin.com

Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency & securities.

Link:
OkayCoin Elevates Ethereum Staking Returns in Anticipation of Altcoin Surge - GlobeNewswire
