
Applied Intuition Acquires the SceneBox Platform to Strengthen … – PR Newswire

MOUNTAIN VIEW, Calif., March 21, 2023 /PRNewswire/ -- Applied Intuition, Inc., a simulation and software provider for autonomous vehicle (AV) development, has acquired SceneBox, a data management and operations platform built specifically for machine learning (ML). The core team of Caliber Data Labs, Inc., the creator of SceneBox, will join the Applied team.

The SceneBox platform enables engineers to train better, more accurate ML models with a data-centric approach. To successfully train production-grade ML models, teams rely heavily on high-quality datasets. When working with enormous amounts of unstructured data, finding the right datasets can be difficult, time-consuming, and costly. SceneBox lets engineers explore, curate, and compare datasets rapidly, diagnose problems, and orchestrate complex data operations. The platform offers a rich web interface, extensive APIs, and advanced features such as embedding-based search.
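Embedding-based search of this kind is typically implemented by comparing vector representations of samples. The sketch below is purely illustrative and assumes nothing about SceneBox's actual API; the function name, embedding dimensions, and data are all hypothetical.

```python
# Illustrative sketch of embedding-based dataset search (not SceneBox's API):
# each image is represented by an embedding vector, and a query embedding
# retrieves the most similar samples by cosine similarity.
import numpy as np

def cosine_search(query_emb: np.ndarray, dataset_embs: np.ndarray, top_k: int = 5):
    """Return indices of the top_k dataset embeddings most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    d = dataset_embs / np.linalg.norm(dataset_embs, axis=1, keepdims=True)
    scores = d @ q                              # cosine similarity per sample
    return np.argsort(scores)[::-1][:top_k]

# Usage: retrieve dataset samples resembling a query scene, given precomputed
# embeddings (random vectors here stand in for real model outputs).
rng = np.random.default_rng(0)
dataset = rng.normal(size=(10_000, 512))
query = rng.normal(size=512)
print(cosine_search(query, dataset))
```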

"We are thrilled to welcome Yaser and the SceneBox team to Applied," said Qasar Younis, Co-Founder and CEO of Applied Intuition. "When we learned of Yaser's vision and our complementary product strategies, we immediately wanted to join forces. The SceneBox team brings a wealth of knowledge and experience in ML and data ops that will help strengthen our offerings. We look forward to working together and better serving our customers."

"We are proud to be a part of the Applied team and the company's mission to accelerate the world's adoption of safe and intelligent machines," said Yaser Khalighi, Founder and CEO of Caliber Data Labs. "Autonomy is a data problem. I am confident that our joint expertise will allow customers to spend less time wrangling data and more time building better ML models."

DLA Piper LLP (U.S.) served as legal counsel to Applied Intuition. Fasken served as legal counsel to Caliber Data Labs.

About Applied Intuition
Applied Intuition's mission is to accelerate the world's adoption of safe and intelligent machines. The company's suite of simulation, validation, and data management software makes it faster, safer, and easier to bring autonomous systems to market. Autonomy programs across industries and 17 of the top 20 global automotive OEMs rely on Applied's solutions to develop, test, and deploy autonomous systems at scale. Learn more at https://applied.co.

About SceneBox
SceneBox is a Software 2.0 data engine for computer vision engineers. The Caliber Data Labs team built SceneBox as a modular and scalable platform that enables engineers to quickly search, curate, orchestrate, visualize, and debug massive perception datasets (e.g., camera and lidar images, videos, etc.). Teams can measure the performance of their ML models and fix problems using the right data. By helping engineers spend more time building ML models and less time wrangling data, SceneBox aims to fundamentally change the way perception data is managed at a global scale.

SOURCE Applied Intuition

Visit link:
Applied Intuition Acquires the SceneBox Platform to Strengthen ... - PR Newswire

Read More..

AWS and NVIDIA Collaborate on Next-Generation Infrastructure for … – NVIDIA Blog

New Amazon EC2 P5 Instances Deployed in EC2 UltraClusters Are Fully Optimized to Harness NVIDIA Hopper GPUs for Accelerating Generative AI Training and Inference at Massive Scale

GTC - Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced a multi-part collaboration focused on building out the world's most scalable, on-demand artificial intelligence (AI) infrastructure optimized for training increasingly complex large language models (LLMs) and developing generative AI applications.

The joint work features next-generation Amazon Elastic Compute Cloud (Amazon EC2) P5 instances powered by NVIDIA H100 Tensor Core GPUs and AWS's state-of-the-art networking and scalability that will deliver up to 20 exaFLOPS of compute performance for building and training the largest deep learning models. P5 instances will be the first GPU-based instances to take advantage of AWS's second-generation Elastic Fabric Adapter (EFA) networking, which provides 3,200 Gbps of low-latency, high-bandwidth networking throughput, enabling customers to scale up to 20,000 H100 GPUs in EC2 UltraClusters for on-demand access to supercomputer-class performance for AI.

"AWS and NVIDIA have collaborated for more than 12 years to deliver large-scale, cost-effective GPU-based solutions on demand for various applications such as AI/ML, graphics, gaming, and HPC," said Adam Selipsky, CEO at AWS. "AWS has unmatched experience delivering GPU-based instances that have pushed the scalability envelope with each successive generation, with many customers scaling machine learning training workloads to more than 10,000 GPUs today. With second-generation EFA, customers will be able to scale their P5 instances to over 20,000 NVIDIA H100 GPUs, bringing supercomputer capabilities on demand to customers ranging from startups to large enterprises."

"Accelerated computing and AI have arrived, and just in time. Accelerated computing provides step-function speed-ups while driving down cost and power as enterprises strive to do more with less. Generative AI has awakened companies to reimagine their products and business models and to be the disruptor and not the disrupted," said Jensen Huang, founder and CEO of NVIDIA. "AWS is a long-time partner and was the first cloud service provider to offer NVIDIA GPUs. We are thrilled to combine our expertise, scale, and reach to help customers harness accelerated computing and generative AI to engage the enormous opportunities ahead."

New Supercomputing Clusters
New P5 instances are built on more than a decade of collaboration between AWS and NVIDIA delivering AI and HPC infrastructure, and build on four previous collaborations across P2, P3, P3dn, and P4d(e) instances. P5 instances are the fifth generation of AWS offerings powered by NVIDIA GPUs and come almost 13 years after AWS's initial deployment of NVIDIA GPUs, beginning with CG1 instances.

P5 instances are ideal for training and running inference for increasingly complex LLMs and computer vision models behind the most-demanding and compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.

Specifically built for both enterprises and startups racing to bring AI-fueled innovation to market in a scalable and secure way, P5 instances feature eight NVIDIA H100 GPUs capable of 16 petaFLOPS of mixed-precision performance, 640 GB of high-bandwidth memory, and 3,200 Gbps networking connectivity (8x more than the previous generation) in a single EC2 instance. The increased performance of P5 instances accelerates the time-to-train machine learning (ML) models by up to 6x (reducing training time from days to hours), and the additional GPU memory helps customers train larger, more complex models. P5 instances are expected to lower the cost to train ML models by up to 40% over the previous generation, providing customers greater efficiency over less flexible cloud offerings or expensive on-premises systems.

Amazon EC2 P5 instances are deployed in hyperscale clusters called EC2 UltraClusters, which comprise the highest performance compute, networking, and storage in the cloud. Each EC2 UltraCluster is one of the most powerful supercomputers in the world, enabling customers to run their most complex multi-node ML training and distributed HPC workloads. They feature petabit-scale non-blocking networking powered by AWS EFA, a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS. EFA's custom-built operating system (OS) bypass hardware interface and integration with NVIDIA GPUDirect RDMA enhance the performance of inter-instance communications by lowering latency and increasing bandwidth utilization, which is critical to scaling training of deep learning models across hundreds of P5 nodes. With P5 instances and EFA, ML applications can use the NVIDIA Collective Communications Library (NCCL) to scale up to 20,000 H100 GPUs. As a result, customers get the application performance of on-premises HPC clusters with the on-demand elasticity and flexibility of AWS.

On top of these cutting-edge computing capabilities, customers can use the industry's broadest and deepest portfolio of services, such as Amazon S3 for object storage, Amazon FSx for high-performance file systems, and Amazon SageMaker for building, training, and deploying deep learning applications. P5 instances will be available in the coming weeks in limited preview. To request access, visit https://pages.awscloud.com/EC2-P5-Interest.html.
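The paragraph above notes that ML applications use NCCL to scale training across P5 nodes. As a rough, generic illustration (assuming PyTorch with CUDA and a torchrun launch; this is not AWS- or P5-specific code), here is the minimal skeleton of a job that hands its gradient collectives to NCCL:

```python
# Minimal multi-GPU training skeleton: NCCL handles the inter-GPU collectives,
# and on EFA-equipped instances the same calls ride the OS-bypass fabric
# without application changes. Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")       # use NCCL for GPU collectives
local_rank = int(os.environ["LOCAL_RANK"])    # set by torchrun per process
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])   # gradients all-reduced via NCCL

x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
model(x).sum().backward()                     # backward triggers the all-reduce
dist.destroy_process_group()
```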

With the new EC2 P5 instances, customers like Anthropic, Cohere, Hugging Face, Pinterest, and Stability AI will be able to build and train the largest ML models at scale. The collaboration through additional generations of EC2 instances will help startups, enterprises, and researchers seamlessly scale to meet their ML needs.

Anthropic builds reliable, interpretable, and steerable AI systems that will have many opportunities to create value commercially and for public benefit. "At Anthropic, we are working to build reliable, interpretable, and steerable AI systems. While the large, general AI systems of today can have significant benefits, they can also be unpredictable, unreliable, and opaque. Our goal is to make progress on these issues and deploy systems that people find useful," said Tom Brown, co-founder of Anthropic. "Our organization is one of the few in the world that is building foundational models in deep learning research. These models are highly complex, and to develop and train these cutting-edge models, we need to distribute them efficiently across large clusters of GPUs. We are using Amazon EC2 P4 instances extensively today, and we are excited about the upcoming launch of P5 instances. We expect them to deliver substantial price-performance benefits over P4d instances, and they'll be available at the massive scale required for building next-generation large language models and related products."

Cohere, a leading pioneer in language AI, empowers every developer and enterprise to build incredible products with world-leading natural language processing (NLP) technology while keeping their data private and secure. "Cohere leads the charge in helping every enterprise harness the power of language AI to explore, generate, search for, and act upon information in a natural and intuitive manner, deploying across multiple cloud platforms in the data environment that works best for each customer," said Aidan Gomez, CEO at Cohere. "NVIDIA H100-powered Amazon EC2 P5 instances will unleash the ability of businesses to create, grow, and scale faster with its computing power combined with Cohere's state-of-the-art LLM and generative AI capabilities."

"Hugging Face is on a mission to democratize good machine learning. As the fastest growing open source community for machine learning, we now provide over 150,000 pre-trained models and 25,000 datasets on our platform for NLP, computer vision, biology, reinforcement learning, and more," said Julien Chaumond, CTO and co-founder at Hugging Face. "With significant advances in large language models and generative AI, we're working with AWS to build and contribute the open source models of tomorrow. We're looking forward to using Amazon EC2 P5 instances via Amazon SageMaker at scale in UltraClusters with EFA to accelerate the delivery of new foundation AI models for everyone."

"Today, more than 450 million people around the world use Pinterest as a visual inspiration platform to shop for products personalized to their taste, find ideas to do offline, and discover the most inspiring creators. We use deep learning extensively across our platform for use-cases such as labeling and categorizing billions of photos that are uploaded to our platform, and visual search that provides our users the ability to go from inspiration to action," said David Chaiken, Chief Architect at Pinterest. "We have built and deployed these use-cases by leveraging AWS GPU instances such as P3 and the latest P4d instances. We are looking forward to using Amazon EC2 P5 instances featuring H100 GPUs, EFA, and UltraClusters to accelerate our product development and bring new Empathetic AI-based experiences to our customers."

As the leader in multimodal, open-source AI model development and deployment, Stability AI collaborates with public- and private-sector partners to bring this next-generation infrastructure to a global audience. "At Stability AI, our goal is to maximize the accessibility of modern AI to inspire global creativity and innovation," said Emad Mostaque, CEO of Stability AI. "We initially partnered with AWS in 2021 to build Stable Diffusion, a latent text-to-image diffusion model, using Amazon EC2 P4d instances that we employed at scale to accelerate model training time from months to weeks. As we work on our next generation of open-source generative AI models and expand into new modalities, we are excited to use Amazon EC2 P5 instances in second-generation EC2 UltraClusters. We expect P5 instances will further improve our model training time by up to 4x, enabling us to deliver breakthrough AI more quickly and at a lower cost."

New Server Designs for Scalable, Efficient AI
Leading up to the release of H100, NVIDIA and AWS engineering teams with expertise in thermal, electrical, and mechanical fields collaborated to design servers that harness GPUs to deliver AI at scale, with a focus on energy efficiency in AWS infrastructure. GPUs are typically 20x more energy efficient than CPUs for certain AI workloads, and the H100 is up to 300x more efficient than CPUs for LLMs.

The joint work has included developing a system thermal design, integrated security and system management, security with the AWS Nitro hardware accelerated hypervisor, and NVIDIA GPUDirect optimizations for AWS custom-EFA network fabric.

Building on AWS and NVIDIA's work focused on server optimization, the companies have begun collaborating on future server designs to increase scaling efficiency with subsequent-generation system designs, cooling technologies, and network scalability.

Visit link:
AWS and NVIDIA Collaborate on Next-Generation Infrastructure for ... - NVIDIA Blog

Read More..

Making an Impact: IoT and Machine Learning in Business – Finextra

Two is better than one, isn't it? This is undoubtedly true in the case of IoT and machine learning. These two popular, fast-growing technologies offer a solid platform for growth when implemented together correctly. Combined, they help you unlock the true power of data and boost business efficiency, sales, and customer relationships.

As a result, IoT and machine learning are being incorporated into business on a wide scale. We are going to discuss some of the popular areas where these technologies are used. Before that, let's look at some statistics around them.

Statistics Showing the Trend of IoT and ML
According to IoT Analytics, the world will have 14.4 billion IoT-connected devices by the end of 2022, which is 10% more than the previous year.

By 2025, this number will reach approximately 27 billion, clearly indicating that businesses are quickly adopting the technology. The machine learning market, on the other hand, is expected to cross the $200 billion mark by 2025. These figures are enough to say with confidence that the markets for IoT and machine learning are not going to slow down anytime soon, but rather will grow over time.

Now, a question pops up: what are the benefits of using IoT and machine learning in business? First things first, knowing how they work together will help you understand the true value they add to your business.

How Do IoT and Machine Learning Work Together?
As the name suggests, the Internet of Things is a network of devices equipped with sensors and connected through the internet. This connection gives them the ability to communicate with any other device on the network.

What happens after that? How do you put that data to use? Machine learning is the answer. A subset of AI, it is the process of using data to develop mathematical models or algorithms that train a computer with little human intervention.

With that learning, the system can be used to anticipate the most likely outcome based on the data. The prediction can be right or wrong, and depending on the result, the algorithm updates itself to deliver a better prediction next time.
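This predict-then-update loop is the essence of online machine learning. Below is a minimal, self-contained sketch of the idea using scikit-learn; the data stream and labels are synthetic stand-ins for real sensor readings.

```python
# Predict the most likely outcome for each new observation, then update the
# model with the true outcome so the next prediction improves.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()                       # incremental linear classifier
classes = np.array([0, 1])

for step in range(100):                       # simulated stream of readings
    x = rng.normal(size=(1, 4))               # one new observation
    y = np.array([int(x.sum() > 0)])          # true outcome, revealed later
    if step > 0:
        _ = model.predict(x)                  # anticipate the likely outcome
    model.partial_fit(x, y, classes=classes)  # right or wrong, update the model
```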

Thus, the two complement each other, giving businesses a competitive advantage through data accumulation and analysis so they can decide what is best for their growth. This holds for every sector, be it healthcare, finance, automotive, agriculture, or manufacturing.

But there's more than the above-mentioned reason to use IoT and machine learning in business processes. Let's look more closely at their role in different businesses and the advantages they offer.

Benefits of IoT and Machine Learning for Businesses

It Automates Business Processes
For any organization, whether small or large, there is a certain set of business processes. Each one should be efficient to achieve the organization's goal. However, monotonous tasks like scheduling emails or record-keeping can cause unnecessary delays and hamper overall productivity.

Machine learning and IoT can automate those boring and repetitive tasks to streamline the business process. Beyond that, they reduce the chances of human error and inefficiency, and improve lead follow-up and the scheduling of marketing campaigns, events, and more.

Adds an Extra Layer of Security
No place is protected from accidents, fraud, and cyber-attacks. They are common in industry and, if not addressed immediately, can cause major losses to a business, its employees, and its customers.

But it is hard to keep an eye on every single area or device. Using IoT and machine learning in business not only helps in monitoring every aspect to identify loopholes and threats, but also lets you take the necessary preventive measures beforehand.

Helps Identify Productive Resources
Whether your business's resources are financial, human, physical, or technological, it is essential to identify the most productive ones and eliminate those that are rarely used. IoT and machine learning in business processes can assist you in analyzing this and prevent unnecessary expenses on unused and non-productive resources. They can also suggest where your company should deploy those resources.

Helps You Understand Customers
Customers are an important asset of any company. Keeping them satisfied is thus essential to succeeding and increasing revenue. Machine learning and IoT can help companies deliver what their customers want without guesswork. They can learn how customers interact with the brand and what they like or dislike the most.

With these valuable insights in hand, you can create the products and services customers expect most, or analyze which ones are doing well in the market. This way, brands benefit in two ways: delivering a better customer experience, and increasing revenue by delivering the right products to the audience. For e-commerce platforms, machine learning and IoT are the go-to technologies for achieving this.

Use Cases of IoT and Machine Learning in Various Businesses

Retail Industry: Supply Chain Management
The supply chain industry is data-reliant, which means wrong or incomplete data can cause several issues in the process. Cost inefficiency, technical downtime, problems in determining prices and transportation costs, and inventory theft and loss are a few such problems it faces.

Implementing IoT sensors on the devices involved to extract vital data, and then feeding that data to machine-learning models, can help in the following ways (a small sketch of one such application follows the list).

- Improve the quality of products
- Reduce operational costs
- Check the status of delivery
- Prevent inventory theft and fraud
- Maintain the balance between demand and supply
- Improve supply chain visibility to boost customer satisfaction
- Boost transportation of goods across borders
- Increase operational efficiency and revenue opportunities
- Check for any defects in the product or industrial equipment
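As a concrete illustration of the theft- and defect-related items above, here is a hedged sketch of flagging unusual shipment sensor readings with an unsupervised model; the sensor columns, units, and values are hypothetical.

```python
# Fit an anomaly detector on normal pallet-sensor readings, then flag
# shipments whose readings look unlike anything seen in normal operation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# columns: temperature (C), humidity (%), shock (g) from pallet sensors
normal = rng.normal([4.0, 60.0, 0.2], [0.5, 5.0, 0.1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

reading = np.array([[12.0, 58.0, 1.5]])      # warm pallet with a heavy shock
if model.predict(reading)[0] == -1:          # -1 means "anomalous"
    print("Flag shipment for inspection")
```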

Automotive Industry: Self-Driving Cars
IoT sensors are enhancing the capabilities of vehicles, making them smarter and more independent. We call them smart cars or self-driving cars, where a human driver is no longer even required. Together with artificial intelligence and machine learning, these vehicles can evaluate the situation on the road and make better decisions in real time.

They now have reliable cameras that give a clear view of the road. Radar detectors allow autonomous vehicles to see even at night, improving their visibility.

Healthcare Industry: Smart Healthcare Solutions
Patient monitoring has become easy with machine learning and IoT. Doctors can now get real-time data on patients' health conditions from connected gadgets and suggest tailored treatments.

Remote glucose monitoring is one such use case: doctors can monitor a patient's glucose level through CGM (continuous glucose monitoring) systems. If there is any anomaly in the glucose level, a warning notification is issued so that the patient can immediately contact the doctor and get the necessary treatment.
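The alerting logic described above reduces to a simple rule over streaming readings. The sketch below is illustrative only: the thresholds approximate a commonly cited 70-180 mg/dL target range and are not medical advice, and no real CGM vendor API is shown.

```python
# Issue a warning when a glucose reading falls outside the safe band.
from typing import Optional

LOW_MGDL, HIGH_MGDL = 70, 180                 # illustrative target range

def check_glucose(reading_mgdl: float) -> Optional[str]:
    if reading_mgdl < LOW_MGDL:
        return f"ALERT: low glucose ({reading_mgdl} mg/dL), contact your doctor"
    if reading_mgdl > HIGH_MGDL:
        return f"ALERT: high glucose ({reading_mgdl} mg/dL), contact your doctor"
    return None                               # in range: no notification

for value in [95, 110, 62, 210]:              # simulated CGM stream
    alert = check_glucose(value)
    if alert:
        print(alert)
```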

The AI-equipped Apple Watch is another notable use case of machine learning and IoT. The smartwatch is very useful for monitoring the heartbeat. According to a study by Cardiogram, the Apple Watch is 97 percent accurate at heart rate monitoring and can detect paroxysmal atrial fibrillation, a form of irregular heart rhythm.

Manufacturing Industry: Condition-Based Monitoring
Machines are undoubtedly not going to last forever; they continuously undergo wear and tear and ultimately reach a point where they need to be repaired or discarded. As the manufacturing industry is one of the sectors that depends heavily on machines, it needs to keep a strict eye on machine health.

Condition-based monitoring (CBM) is one of the most important predictive maintenance strategies in this case. By applying machine learning techniques to the information gathered from IoT sensors, the status of the equipment can be monitored and conclusions drawn about its condition.

For example, mechanical misalignment, short circuits, and wear-out conditions can be detected through this technique. This helps identify the root problem and how soon a machine needs maintenance.

Furthermore, this type of automated machine learning assistance decreases human engineering effort by 50%, reduces the maintenance budget, and boosts machine availability. False alarms, one of the main issues in condition monitoring, are also reduced by 90% with the help of machine learning models in CBM.
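To make the CBM idea concrete, here is a minimal sketch of classifying machine health from IoT sensor features; the vibration and temperature data, thresholds, and labels are synthetic inventions, not any vendor's system.

```python
# Train a classifier to predict "needs maintenance" from sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
vibration = rng.normal(1.0, 0.3, n)           # e.g., RMS vibration amplitude
temperature = rng.normal(60.0, 5.0, n)        # e.g., bearing temperature (C)
# synthetic ground truth: jointly elevated readings imply wear
y = ((vibration > 1.3) & (temperature > 63)).astype(int)
X = np.column_stack([vibration, temperature])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```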

Conclusion
No single technology can bring massive success to a business on its own. Businesses should therefore be flexible enough to incorporate several technologies together. The Internet of Things and machine learning are one such powerful combination that, when used correctly, can scale up the growth of a business.

They are reshaping almost every industry, from agriculture to IT, making them more efficient, scalable, and productive.

Read more here:
Making an Impact: IoT and Machine Learning in Business - Finextra

Read More..

Biological research and self-driving labs in deep space supported … – Nature.com

Visit link:
Biological research and self-driving labs in deep space supported ... - Nature.com

Read More..

How Deep Learning is Revolutionizing Biology – BBN Times

In recent years, the field of biology has been rapidly transformed by the use of deep learning technology.

Deep learning algorithms have revolutionized the way we understand and analyze biological data, providing powerful tools for drug discovery, genomics, disease diagnosis, and protein folding. With the ability to quickly and accurately analyze vast amounts of data, deep learning is helping researchers identify patterns, make predictions, and develop new treatments for a variety of diseases.

Drug discovery is one of the most promising applications of deep learning in biology. Traditionally, drug discovery has been a time-consuming, expensive, and often unreliable process, involving testing thousands of compounds for their potential to treat a specific disease. However, deep learning algorithms can analyze large amounts of data from drug trials, animal models, and clinical studies to identify promising drug candidates. For example, the pharmaceutical company Atomwise has developed a deep learning algorithm that can predict the efficacy of potential drug compounds by analyzing their chemical structures. By using deep learning, researchers can reduce the time and cost of drug discovery, while also increasing the chances of success.
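A common general recipe for this kind of structure-based screening (a hedged sketch, not Atomwise's actual system) is to encode each compound's chemical structure as a fingerprint vector and train a model to score activity. The sketch assumes RDKit and scikit-learn are installed; the SMILES strings and labels are toy examples.

```python
# Featurize molecules as Morgan fingerprints, then fit an activity classifier.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression

smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC"]   # toy compounds
labels = [0, 1, 1, 0]                                  # toy active/inactive labels

def featurize(smi: str) -> np.ndarray:
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(fp)

X = np.array([featurize(s) for s in smiles])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict_proba(X)[:, 1])                    # predicted activity scores
```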

Another area where deep learning is making a significant impact is in genomics. Genomics involves the analysis of the human genome, which is composed of over three billion base pairs. Traditional methods of analyzing this data are slow and inefficient, but deep learning algorithms can quickly and accurately analyze genomic data, allowing researchers to identify genetic mutations associated with diseases such as cancer. For example, a team of researchers from the University of California, San Francisco, and Stanford University used deep learning to identify a genetic mutation that increases the risk of breast cancer. Deep learning has the potential to transform our understanding of genetics, leading to new treatments and therapies for a variety of diseases.

Deep learning is also being used to improve disease diagnosis. Traditionally, doctors have relied on their experience and medical training to diagnose diseases. However, deep learning algorithms can analyze large amounts of patient data, including medical histories, laboratory results, and imaging scans, to identify patterns and make accurate diagnoses. For example, a team of researchers from Stanford University developed a deep learning algorithm that can diagnose skin cancer with the same accuracy as board-certified dermatologists. By using deep learning, doctors can make more accurate diagnoses, leading to better patient outcomes and improved healthcare.

One of the most challenging problems in biology is predicting the three-dimensional structure of proteins. The shape of a protein determines its function, and understanding protein structure is critical for developing new drugs and understanding disease. Deep learning is being used to tackle this problem by analyzing large datasets of protein structures to identify patterns and predict the structure of unknown proteins. For example, Google's DeepMind developed a deep learning algorithm called AlphaFold that can accurately predict protein structure, outperforming traditional methods. By using deep learning, researchers can accelerate their understanding of protein folding, leading to new treatments and therapies for a variety of diseases.

Deep learning has many benefits for the field of biology. It allows researchers to analyze vast amounts of data quickly and accurately, leading to breakthroughs in our understanding of complex biological problems. By using deep learning, researchers can develop new treatments and therapies for a variety of diseases, leading to improved healthcare and better patient outcomes. Deep learning also has the potential to accelerate our understanding of genetics and protein folding, leading to new discoveries and innovations in the field of biology.

While deep learning has many benefits for the field of biology, there are also limitations and challenges in implementing this technology. One of the main challenges is the need for large amounts of high-quality data to train deep learning models. Biological data is often noisy and incomplete, which can make it difficult to train accurate deep learning models. Additionally, deep learning algorithms are often considered "black boxes" because they can be difficult to interpret, making it challenging to understand how they arrived at their conclusions. This can make it difficult for researchers to replicate results and ensure that deep learning models are making accurate predictions.

The future of deep learning in biology looks promising. As technology continues to improve, we can expect to see even more powerful deep learning algorithms developed specifically for biological applications. With the continued growth of big data and advances in machine learning algorithms, we can expect deep learning to become an increasingly important tool for researchers in the field of biology. Deep learning has the potential to revolutionize our understanding of complex biological problems, leading to new treatments and therapies for a variety of diseases.

Deep learning is revolutionizing the field of biology, providing researchers with powerful tools for drug discovery, genomics, disease diagnosis, and protein folding. By analyzing vast amounts of data quickly and accurately, deep learning is helping researchers identify patterns, make predictions, and develop new treatments for a variety of diseases. While there are limitations and challenges in implementing deep learning in biology, the future looks promising. With continued advancements in technology and machine learning algorithms, we can expect deep learning to become an increasingly important tool for researchers in the field of biology.

Read the original:
How Deep Learning is Revolutionizing Biology - BBN Times

Read More..

AI finds the first stars were not alone – Science Daily

By using machine learning and state-of-the-art supernova nucleosynthesis, a team of researchers have found the majority of observed second-generation stars in the universe were enriched by multiple supernovae, reports a new study in The Astrophysical Journal.

Nuclear astrophysics research has shown that the elements carbon and heavier are produced in stars. But the first stars, born soon after the Big Bang, did not contain such heavy elements, which astronomers call 'metals'. The next generation of stars contained only a small amount of heavy elements produced by the first stars. Understanding the universe in its infancy therefore requires researchers to study these metal-poor stars.

Luckily, these second-generation metal-poor stars are observed in our Milky Way Galaxy, and have been studied by a team of Affiliate Members of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) to close in on the physical properties of the first stars in the universe.

The team, led by Kavli IPMU Visiting Associate Scientist and The University of Tokyo Institute for Physics of Intelligence Assistant Professor Tilman Hartwig, including Visiting Associate Scientist and National Astronomical Observatory of Japan Assistant Professor Miho Ishigaki, Visiting Senior Scientist and University of Hertfordshire Professor Chiaki Kobayashi, Visiting Senior Scientist and National Astronomical Observatory of Japan Professor Nozomu Tominaga, and Visiting Senior Scientist and The University of Tokyo Professor Emeritus Ken'ichi Nomoto, used artificial intelligence to analyze elemental abundances in more than 450 extremely metal-poor stars observed to date. Based on the newly developed supervised machine learning algorithm trained on theoretical supernova nucleosynthesis models, they found that 68 per cent of the observed extremely metal-poor stars have a chemical fingerprint consistent with enrichment by multiple previous supernovae.
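To make the method concrete, here is a schematic sketch of a supervised classifier of this kind; the abundance features, class-specific patterns, and numbers are invented stand-ins for the study's theoretical nucleosynthesis training data, not the authors' actual pipeline.

```python
# Train on simulated abundance patterns labeled "one supernova" vs. "multiple",
# then score an observed metal-poor star.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 2000
# features: toy abundance ratios, e.g. [C/Fe], [Mg/Fe], [Si/Fe]
single = rng.normal([0.8, 0.4, 0.3], 0.15, size=(n, 3))  # mono-enriched pattern
multi = rng.normal([0.3, 0.5, 0.45], 0.25, size=(n, 3))  # averaged multi-SN pattern
X = np.vstack([single, multi])
y = np.array([0] * n + [1] * n)                          # 1 = multiple supernovae

clf = RandomForestClassifier(random_state=0).fit(X, y)
observed_star = np.array([[0.35, 0.5, 0.4]])             # one star's measured ratios
print("P(multiple SNe):", clf.predict_proba(observed_star)[0, 1])
```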

The team's results give the first quantitative constraint based on observations on the multiplicity of the first stars.

"Multiplicity of the first stars were only predicted from numerical simulations so far, and there was no way to observationally examine the theoretical prediction until now," said lead author Hartwig. "Our result suggests that most first stars formed in small clusters so that multiple of their supernovae can contribute to the metal enrichment of the early interstellar medium," he said.

"Our new algorithm provides an excellent tool to interpret the big data we will have in the next decade from on-going and future astronomical surveys across the world" said Kobayashi, also a Leverhulme Research Fellow.

"At the moment, the available data of old stars are the tip of the iceberg within the solar neighborhood. The Prime Focus Spectrograph, a cutting-edge multi-object spectrograph on the Subaru Telescope developed by the international collaboration led by Kavli IPMU, is the best instrument to discover ancient stars in the outer regions of the Milky Way far beyond the solar neighborhood.," said Ishigaki.

The new algorithm invented in this study opens the door to make the most of diverse chemical fingerprints in metal-poor stars discovered by the Prime Focus Spectrograph.

"The theory of the first stars tells us that the first stars should be more massive than the Sun. The natural expectation was that the first star was born in a gas cloud containing the mass million times more than the Sun. However, our new finding strongly suggests that the first stars were not born alone, but instead formed as a part of a star cluster or a binary or multiple star system. This also means that we can expect gravitational waves from the first binary stars soon after the Big Bang, which could be detected future missions in space or on the Moon," said Kobayashi.

Original post:
AI finds the first stars were not alone - Science Daily

Read More..

7 free learning resources to land top data science jobs – Cointelegraph

Data science is an exciting and rapidly growing field that involves extracting insights and knowledge from data. To land a top data science job, it is important to have a solid foundation in key data science skills, including programming, statistics, data manipulation and machine learning.

Fortunately, there are many free online learning resources available that can help you develop these skills and prepare for a career in data science. These resources include online learning platforms such as Coursera, edX and DataCamp, which offer a wide range of courses in data science and related fields.

Data science and related subjects are covered in a variety of courses on the online learning platform Coursera. These courses frequently involve subjects such as machine learning, data analysis and statistics and are instructed by academics from prestigious universities.

Here are some examples of data science courses on Coursera:

One can apply for financial aid to earn these certifications for free. However, doing a course just for certification may not land a dream job in data science.

Kaggle is a platform for data science competitions that provides a wealth of resources for learning and practicing data science skills. One can refine their skills in data analysis, machine learning and other branches of data science by participating in the platform's challenges and exploring its host of datasets.

Here are some examples of free courses available on Kaggle:

Related: 9 data science project ideas for beginners

EdX is another online learning platform that offers courses in data science and related fields. Many of the courses on edX are taught by professors from top universities, and the platform offers both free and paid options for learning.

Some of the free courses on data science available on edX include:

All of these courses are free to audit, meaning that you can access all the course materials and lectures without paying a fee. Nevertheless, there will be a cost if you wish to access further course features or receive a certificate of completion. In addition to these courses, edX also offers a comprehensive selection of paid courses and programs in data science, machine learning and related topics.

DataCamp is an online learning platform that offers courses in data science, machine learning and other related fields. The platform offers interactive coding challenges and projects that can help you build real-world skills in data science.

The following courses are available for free on DataCamp:

All of these courses are free and can be accessed through DataCamp's online learning platform. In addition to these courses, DataCamp also offers a wide range of paid courses and projects that cover topics such as data visualization, machine learning and data engineering.

Udacity is an online learning platform that offers courses in data science, machine learning and other related fields. The platform offers both free and paid courses, and many of the courses are taught by industry professionals.

Here are some examples of free courses on data science available on Udacity:

Related: 5 high-paying careers in data science

MIT OpenCourseWare is an online repository of course materials from courses taught at the Massachusetts Institute of Technology. The platform offers a variety of courses in data science and related fields, and all of the materials are available for free.

Here are some of the free courses on data science available on MIT OpenCourseWare:

GitHub is a platform for sharing and collaborating on code, and it can be a valuable resource for learning data science skills. However, GitHub itself does not offer free courses. Instead, one can explore the many open-source data science projects that are hosted on GitHub to find out more about how data science is used in practical situations.

Scikit-learn is a popular Python library for machine learning, which provides a range of algorithms for tasks such as classification, regression and clustering, along with tools for data preprocessing, model selection and evaluation. The project is open-source and available on GitHub.
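For readers new to the library, here is a short, self-contained example of the scikit-learn workflow just described: load a dataset, split it, fit a classifier and evaluate it. (The dataset and model choices are arbitrary illustrations.)

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```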

Jupyter is an open-source web application for creating and sharing interactive notebooks. Jupyter notebooks provide a way to combine code, text and multimedia content in a single document, making it easy to explore and communicate data science results.

These are just a few examples of the many open-source data science projects available on GitHub. By exploring these projects and contributing to them, one can gain valuable experience with data science tools and techniques, while also building their portfolio and demonstrating their skills to potential employers.

Read the original here:
7 free learning resources to land top data science jobs - Cointelegraph

Read More..

Autonomous shuttle gets new capabilities through machine learning … – Fleet World

Autonomous transport company Aurrigo has improved its driverless vehicles' capabilities in a project with Aston University.

Aurrigos airport Auto-Dolly is now able to differentiate between many different objects

The two-year Knowledge Transfer Partnership (KTP) with the university developed a new machine vision solution, using machine learning and artificial intelligence, that means the Coventry-based company's driverless vehicles are now able to see and recognise objects in greater detail. This results in improved performance across a wider spectrum of test situations.

Previously the companys driverless vehicles were only capable of detecting that there was an object in their path, not the type of object, so would just stop when they encountered something in their way.

The new computer vision systems, coupled with machine learning and artificial intelligence, can now tell objects apart, enabling Aurrigo's airport Auto-Dolly to differentiate between the many different objects it encounters airside.

Professor David Keene, CEO of Aurrigo, said: "This partnership has allowed us to produce a system which has resulted in our vehicles becoming smarter and more capable, and enabled us to expand our operations, particularly with baggage handling in airports worldwide."

Dr George Vogiatzis, senior lecturer in computer science at Aston University, added: "This KTP has been a great way for us to work with a new industrial partner whilst applying our expertise in deep learning and robotics to the exciting field of autonomous vehicles.

"It is very rewarding to see the success of this collaboration."

The project findings will also be applied to other vehicles in the Aurrigo product range.

Read the original:
Autonomous shuttle gets new capabilities through machine learning ... - Fleet World

Read More..

Data Annotation and Labeling Global Market Report 2023 … – GlobeNewswire

Dublin, March 23, 2023 (GLOBE NEWSWIRE) -- The "Data Annotation and Labeling Market Component, Data Type, Application (Dataset Management, Sentiment Analysis), Annotation Type, Vertical (BFSI, IT and ITES, Healthcare and Life Sciences) and Region - Global Forecast to 2027" report has been added to ResearchAndMarkets.com's offering.

The global data annotation and labeling market is projected to grow from USD 0.8 billion in 2022 to USD 3.6 billion by 2027, at a CAGR of 33.2% during the forecast period.

Any model or system that relies on computer-driven decision-making must have its data annotated and labeled in order to guarantee that the decisions are accurate and pertinent. Businesses use massive datasets when building an ML model, carefully customizing them according to the model's training needs.

As a result, machines can detect annotated data in a variety of comprehensible formats, including images, text, and video. This explains why AI and ML firms seek out this type of annotated data to feed into their ML algorithms, training them to learn and detect recurrent patterns and ultimately employing them to create accurate estimates and predictions.
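For a sense of what such annotated data looks like in practice, here is a minimal sketch of an image annotation record as it might be fed to a training pipeline; the schema, field names, and URI are hypothetical, not any vendor's format.

```python
# One annotated image: a URI plus labeled bounding boxes drawn by annotators.
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    label: str            # class name assigned by the annotator
    x: float              # top-left corner, pixels
    y: float
    width: float
    height: float

@dataclass
class ImageAnnotation:
    image_uri: str
    boxes: list[BoundingBox] = field(default_factory=list)

record = ImageAnnotation(
    image_uri="s3://example-bucket/frames/000123.jpg",
    boxes=[BoundingBox("pedestrian", 412.0, 188.0, 64.0, 142.0)],
)
print(record)
```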

The major market players, such as Google, Appen, IBM, Oracle, TELUS International, Adobe, and AWS, have adopted numerous growth strategies, including acquisitions, new product launches, product enhancements, and business expansions, to enhance their market shares.

By organization size, SMEs are anticipated to grow at the highest CAGR during the forecast period

The increasingly competitive market scenario is expected to prompt SMEs to invest in cutting-edge technologies and adopt go-to-market strategies for making informed business decisions. SMEs are more open to adopting new technology to improve and streamline business operations as well as to expand their presence in the global economy. During the forecast period, SMEs are anticipated to grow at the highest CAGR.

By application, catalogue management segment to register the highest CAGR during the forecast period

The catalogue management tool helps businesses handle enormous amounts of unstructured data across many AI and ML projects. Teams that work on data annotation need strong tools that can gather all kinds of data and information from various sources into a single, searchable database. Companies such as Labelbox have developed data annotation tools powered by catalogue management, with the intent of filtering unstructured data based on metadata properties. Among applications, catalogue management is projected to register the highest CAGR during the forecast period.

Asia Pacific market to register highest CAGR during the forecast period

The data annotation and labeling market is projected to register the highest CAGR in the Asia Pacific region during the forecast period. The rapid industrialization of countries across the Asia Pacific and the increasing digitalization trend are producing a bulk of unstructured data. Since the Asia Pacific region has shown untapped potential in its increased adoption of data annotation and labeling solutions, many organisations are moving there to extend their market reach. Due to growing corporate awareness of productivity and the competently designed data annotation and labeling solutions offered by vendors in this market, the Asia Pacific has emerged as a very promising region.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Attributes:

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

5 Market Overview and Industry Trends

6 Data Annotation and Labeling Market, By Component
6.1 Introduction
6.2 Solutions
6.3 Services
6.3.2 Professional Services
6.3.2.1 Training and Consulting
6.3.2.2 System Integration and Implementation
6.3.2.3 Support and Maintenance
6.3.3 Managed Services

7 Data Annotation and Labeling Market, By Data Type
7.1 Introduction
7.2 Text
7.3 Image
7.4 Video
7.5 Audio

8 Data Annotation and Labeling Market, By Deployment Type
8.1 Introduction
8.2 On-Premises
8.3 Cloud

9 Data Annotation and Labeling Market, By Organization Size
9.1 Introduction
9.2 Small and Medium-Sized Enterprises
9.3 Large Enterprises

10 Data Annotation and Labeling Market, By Annotation Type
10.1 Introduction
10.2 Manual
10.3 Automatic
10.4 Semi-Supervised

11 Data Annotation and Labeling Market, By Application
11.1 Introduction
11.2 Dataset Management
11.3 Security and Compliance
11.4 Data Quality Control
11.5 Workforce Management
11.6 Content Management
11.7 Catalogue Management
11.8 Sentiment Analysis
11.9 Other Applications

12 Data Annotation and Labeling Market, By Vertical
12.1 Introduction
12.2 BFSI
12.3 Healthcare and Life Sciences
12.4 Telecom
12.5 Government, Defense, and Public Agencies
12.6 IT and ITES
12.7 Retail and Consumer Goods
12.8 Automotive
12.9 Other Verticals

13 Market, By Region

14 Competitive Landscape

15 Company Profiles

16 Adjacent and Related Markets

17 Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/j2el21

About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Read the original post:
Data Annotation and Labeling Global Market Report 2023 ... - GlobeNewswire

Read More..

ECU resolution on players participation after the Russian Chess Federation joining the Asian Chess Federation – European Chess Union

ECU notes that chess is by definition an individual sport, and that all players have the right to participate in FIDE world championships or continental championships under the flag of a new federation, even having already represented a national team at the highest level.

ECU decides as follows on players formerly belonging to the Chess Federation of Russia (CFR) who move to a European federation under the special resolution of the FIDE Council dated 22.2.2023 (https://fide.com/news/2247), hereinafter the 22.2.2023 resolution, given that the Asian Chess Federation (ACF) accepted the CFR as a member of the ACF as of May 1, 2023:

For the ECU Individual Championships:

The FIDE resolution defines that all these players (22.2.2023 resolution) have the right to represent the new federation in all official individual events of FIDE from the next day of submitting their application, without any restrictions.

ECU clarifies that, from 1 May 2023, players who belong to the CFR and players who have moved to the FIDE flag from the CFR cannot compete in European Individual Chess Championships.

Exceptions

In good faith and in the spirit of sportsmanship, two senior players playing under the FIDE flag who registered for the European Senior Championship in Italy (May 25th) prior to the Asian Chess Federation's decision to admit the CFR can still compete, but they have no right to be awarded any European title or medal.

For the European Team Chess Championship 2023:

For the year 2025 onwards:

Any federation can enlist any player who has moved under its flag according to the 22.2.2023 resolution. The ECU notes that, according to the current FIDE Handbook B.04, any player formerly belonging to the CFR may play in any official FIDE event free of any transfer or compensation fee after a term of one to two years, depending on residence.

For any other ECU Team Competition:

See the rest here:
ECU resolution on players participation after the Russian Chess Federation joining the Asian Chess Federation - European Chess Union

Read More..