
This Week’s Awesome Tech Stories From Around the Web (Through September 10) – Singularity Hub

This Jellyfish Can Live Forever. Its Genes May Tell Us How. (Veronique Greenwood | The New York Times) "When their bodies are damaged, the mature adults, known as medusas, can turn back the clock and transform back into their youthful selves. While a predator or an injury can kill T. dohrnii, old age does not. They are, effectively, immortal. Now, in a paper published Monday in The Proceedings of the National Academy of Sciences, scientists have taken a detailed look at the jellyfish's genome, searching for the genes that control this remarkable process."

Crypto's Core Values Are Running Headfirst Into Reality (Will Gottsegen | The Atlantic) "The cat's out of the bag on crypto regulations, forcing some companies to choose between their principles and their profits. For all the talk of crypto as a slick new alternative to a corrupt and outmoded banking system, companies have now found themselves backed into a corner: Either they can comply with regulations that could essentially defang the promise of the technology, or they can stay the course, at great cost to their bottom lines."

Scientists Have Made a Human Microbiome From Scratch (Carl Zimmer | The New York Times) "When the researchers gave the concoction to mice that did not have a microbiome of their own, the bacterial strains established themselves and remained stable, even when the scientists introduced other microbes. The new synthetic microbiome can even withstand aggressive pathogens and cause mice to develop a healthy immune system, as a full microbiome does."

Limitations of DeepMind's AlphaFold Detailed in MIT Study (Katyanna Quach | The Register) "Essentially, the AI software is useful in one step of the [drug discovery] process, structure prediction, but can't help in other stages, such as modeling how drugs and proteins would physically interact. 'Breakthroughs such as AlphaFold are expanding the possibilities for in silico (computer simulation) drug discovery efforts, but these developments need to be coupled with additional advances in other aspects of modeling that are part of drug discovery efforts,' James Collins, lead author of the study published in Molecular Systems Biology and a bioengineering professor at MIT, said in a statement."

Uber Eats to Use Autonomous Electric Vehicles for Deliveries (Meara Isenberg | CNET) "Uber is teaming up with Nuro to use the latter company's autonomous, electric vehicles for food deliveries in a multiyear partnership, the companies announced Thursday. Deliveries begin this fall in Mountain View, California, and Houston, Texas, and the plan is for the service to expand to the greater Bay Area, according to a release. Nuro's autonomous delivery vehicles are built specifically to carry food and other goods, the release says. They don't contain drivers or passengers, and they run on public roads."

This Follicle-Hacking Drug Could One Day Treat Baldness (Simar Bajaj | Wired) "With its roughly half a million hair follicles, you can think of your scalp as a gigafactory of 3D printers. According to Plikus, nearly all these follicles need to be constantly printing in order to create a full mop of hair. But in common baldness, these printers start shutting down, leading to hair thinning (if roughly 50 percent have switched off) and balding (when more than 70 percent are off). By activating stem cells present in people's scalps, [the protein] SCUBE3 hacks hair follicles to restart the production line and promote rapid growth."

Black Hole's Ring of Light Could Encrypt Its Inner Secrets (Thomas Lewton | Quanta) "These findings imply to [Harvard's Andrew] Strominger that the photon ring, rather than the event horizon, is a natural candidate for part of the holographic plate of a spinning black hole. If so, there may be a new way to picture what happens to information about objects that fall into black holes, a long-standing mystery known as the black hole information paradox."

United Airlines Invests $15 Million in Electric Aviation Startup, Orders 200 Air Taxis (Andrew J. Hawkins | The Verge) "This is the second major investment from United in the nascent world of electric air mobility after investing an undisclosed amount of money in Archer last year. These companies propose to develop small, electric vertical takeoff and landing (eVTOL) aircraft that can fly from rooftop to rooftop in a dense city as a taxi service. But so far, none have received clearance from federal aviation regulators to fly passengers."

The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn (Kyle Wiggers | TechCrunch) "'This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI,' Alex Engler, the analyst at Brookings who published the piece, wrote. 'In the end, the [EU's] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.'"


What 5 Benefits a Machine Learning Developer Can Bring to Your Business – Business Review

Machine learning (ML) is at the peak of popularity right now. This branch of artificial intelligence is attracting more and more investments, and its market value is growing at a rapid pace.

As a result, the machine learning market is expected to grow from over $21 billion in 2022 to more than $209 billion in 2029, an annual growth rate of almost 39%.

Even though many businesses are actively adopting this innovation to keep up with the competition, some companies are still hesitant. Today we'll explain why business owners should step out of their comfort zone and hire a machine learning developer.

We'll discuss ML in general and cover the reasons for and benefits of implementing this innovation in your business processes. Today's article will also explain what skills a decent machine learning engineer should have.

Machine learning is often confused with artificial intelligence, or the terms are used interchangeably. They certainly relate to the same field, but let's figure out what ML actually means.

Machine learning is a branch of artificial intelligence and the most common application of AI. It consists of algorithms that enable software products to perform a given task more precisely or forecast outcomes more accurately. This is possible thanks to the technology's ability to process data: the more data available to ML, the more accurate its predictions and task performance.

ML is the technology that makes artificial intelligence work in practice. Machine learning is a way to teach machines to process and analyze information, and to draw conclusions based on it.

A recommendation engine is the most typical use case for ML. It's software that analyzes data and makes suggestions. In addition, machine learning is used for detecting software threats and fraud, filtering spam, and automating business processes.

Machine learning is gradually becoming the innovation that drives companies in various industries.

Organizations implement this technology to speed up routine tasks, help with data analysis, and search for more effective business solutions. So, if you hire a dedicated machine learning developer, you can improve the productivity and scalability of your company.

On the other hand, organizations that continue to ignore ML implementation for their processes are sliding backward. As a result, they perform more manual work, complete their tasks slower, and experience an increased risk of human error.

Instead, with machine learning, companies can avoid all these limitations. By leveraging this technology, you begin to understand your customers better, get tools to improve your existing products, create new ones, and boost the competitiveness of your business.

Yet, properly implementing ML in your processes requires an experienced IT specialist who is well versed in this innovation. Only such an expert will help you maximize the benefits of machine learning for your business.

To choose the most suitable specialist to handle your machine learning tasks, you must have a good understanding of the basic skills of such an expert. Here are the most fundamental of them:

Statistics

Skills in statistics are a must for an ML developer. In particular, this includes an understanding of analysis of variance and hypothesis testing. Such knowledge is critical because machine learning algorithms are based on statistical models.

Probability

Knowledge of mathematical probability equations is also significant for a machine learning engineer. Such a skill will help a specialist to predict future results and train artificial intelligence for this.

Data modeling

Knowledge of data models is critical for a developer. With a deep understanding of this process, a specialist can identify data structures, discover patterns between them, and fill in the gaps where data is missing.

Machine learning data labeling

Data labeling in machine learning is directly responsible for teaching machines and software. Your specialist should be able to process raw data such as images, video, and audio and give them meaningful labels. That is why machine learning labeling skills are necessary.

Programming skills

Since machine learning works through algorithms, your expert must have programming skills. In particular, this is knowledge of such languages as Python, R, C, or C++. With these tools, IT specialists can create algorithms, scenarios, etc.

Source: Mobilunity

ML libraries and algorithms knowledge

Knowledge of existing machine learning libraries and algorithms is helpful for your specialist. These are, for example, such tools as Microsoft Cognitive Toolkit, MLlib, or Google TensorFlow.

Now that you know what an ideal ML developer should be like and what skills they should possess, let's see what this expert can offer your business.

We have collected five main advantages that you will get by hiring a machine learning engineer:

The success of any company depends on the ability to plan and make balanced business decisions carefully. An expert in machine learning will help leverage this technology to process large amounts of data. As a result of analyzing this information, you will be able to find efficient solutions, minimize risks, and receive accurate forecasts for your company.

A machine learning specialist can help you streamline your business processes. In the ML industry, this method is also called intelligent process automation. Your IT expert can not only transfer routine tasks to automatic mode but also do it with more complex duties. For example, ML can automate even data entry.

A machine learning expert will help your business gain significantly more loyal customers. It will be possible by analyzing your client data and their behavior. Based on your audience research, youll offer them exactly what they need.

Along with personalizing the customer experience, you get many more benefits. Specifically, these include increased revenue from a growing audience, improved overall customer satisfaction, and faster customer data analysis.

Based on the forecasts that machine learning algorithms will prepare for you, you will be able to evaluate the resources of your business more reasonably. As a result, you will always be ready for the changing demand for your products and know what customers expect from you.

Machine learning experts will also help you with inventory, save on materials, understand the exact scope of work, and reduce company waste.

As already mentioned, machine learning technologies help detect malicious attacks and fraudulent activities. By hiring an in-house or nearshore IT team in ML, you get the opportunity to implement advanced security standards into your business. Machine learning algorithms will gather data about cyber threats and immediately respond to suspicious activity.

Now, machine learning technologies are the latest trend that more and more businesses are chasing. Companies are implementing ML to automate their processes, improve customer experience, optimize costs and resources, and enhance data security.

Moreover, by leveraging machine learning, you can set your business apart from the crowd and offer your customers something unique.

When you hire an experienced ML developer, all these benefits will be available. You can do it directly in your country or try to find experts abroad, for example, opting for a nearshore team in Portugal.

Regardless of your choice, a machine learning engineer is a valuable asset to your business. So dont neglect innovation.


Collaborative machine learning that preserves privacy | MIT News | Massachusetts Institute of Technology – MIT News

Training a machine-learning model to effectively perform a task, such as image classification, involves showing the model thousands, millions, or even billions of example images. Gathering such enormous datasets can be especially challenging when privacy is a concern, such as with medical images. Researchers from MIT and the MIT-born startup DynamoFL have now taken one popular solution to this problem, known as federated learning, and made it faster and more accurate.

Federated learning is a collaborative method for training a machine-learning model that keeps sensitive user data private. Hundreds or thousands of users each train their own model using their own data on their own device. Then users transfer their models to a central server, which combines them to come up with a better model that it sends back to all users.

A collection of hospitals located around the world, for example, could use this method to train a machine-learning model that identifies brain tumors in medical images, while keeping patient data secure on their local servers.

But federated learning has some drawbacks. Transferring a large machine-learning model to and from a central server involves moving a lot of data, which has high communication costs, especially since the model must be sent back and forth dozens or even hundreds of times. Plus, each user gathers their own data, so those data don't necessarily follow the same statistical patterns, which hampers the performance of the combined model. And that combined model is made by taking an average, so it is not personalized for each user.

The researchers developed a technique that can simultaneously address these three problems of federated learning. Their method boosts the accuracy of the combined machine-learning model while significantly reducing its size, which speeds up communication between users and the central server. It also ensures that each user receives a model that is more personalized for their environment, which improves performance.

The researchers were able to reduce the model size by nearly an order of magnitude when compared to other techniques, which led to communication costs that were between four and six times lower for individual users. Their technique was also able to increase the model's overall accuracy by about 10 percent.

"A lot of papers have addressed one of the problems of federated learning, but the challenge was to put all of this together. Algorithms that focus just on personalization or communication efficiency don't provide a good enough solution. We wanted to be sure we were able to optimize for everything, so this technique could actually be used in the real world," says Vaikkunth Mugunthan PhD '22, lead author of a paper that introduces this technique.

Mugunthan wrote the paper with his advisor, senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The work will be presented at the European Conference on Computer Vision.

Cutting a model down to size

The system the researchers developed, called FedLTN, relies on an idea in machine learning known as the lottery ticket hypothesis. This hypothesis says that within very large neural network models there exist much smaller subnetworks that can achieve the same performance. Finding one of these subnetworks is akin to finding a winning lottery ticket. (LTN stands for lottery ticket network.)

Neural networks, loosely based on the human brain, are machine-learning models that learn to solve problems using interconnected layers of nodes, or neurons.

Finding a winning lottery ticket network is more complicated than a simple scratch-off. The researchers must use a process called iterative pruning. If the model's accuracy is above a set threshold, they remove nodes and the connections between them (just like pruning branches off a bush) and then test the leaner neural network to see if the accuracy remains above the threshold.
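To make the pruning loop concrete, here is a toy sketch in Python. It prunes the smallest-magnitude weights of a simple linear classifier rather than a deep network, so it only illustrates the prune-retrain-check pattern described above; it is not the researchers' FedLTN code, and the threshold and pruning fraction are arbitrary.

```python
# Toy illustration of iterative pruning (not the researchers' FedLTN code):
# repeatedly drop the smallest-magnitude weights while accuracy stays above a threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=40, n_informative=8, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

mask = np.ones(X.shape[1], dtype=bool)          # which weights/features are still "alive"
threshold, prune_fraction = 0.85, 0.2           # arbitrary values for the sketch

while True:
    if clf.score(X[:, mask], y) < threshold or mask.sum() <= 1:
        break                                   # stop once quality (or size) drops too far
    weights = np.abs(clf.coef_[0])              # magnitude of each surviving weight
    n_drop = max(1, int(prune_fraction * mask.sum()))
    weakest = np.argsort(weights)[:n_drop]      # positions of the smallest weights
    alive = np.flatnonzero(mask)
    mask[alive[weakest]] = False                # "prune the branches"
    clf = LogisticRegression(max_iter=1000).fit(X[:, mask], y)  # retrain the leaner model

print(f"kept {mask.sum()} of {X.shape[1]} weights, accuracy {clf.score(X[:, mask], y):.2f}")
```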

Other methods have used this pruning technique for federated learning to create smaller machine-learning models which could be transferred more efficiently. But while these methods may speed things up, model performance suffers.

Mugunthan and Kagal applied a few novel techniques to accelerate the pruning process while making the new, smaller models more accurate and personalized for each user.

They accelerated pruning by avoiding a step where the remaining parts of the pruned neural network are rewound to their original values. They also trained the model before pruning it, which makes it more accurate so it can be pruned at a faster rate, Mugunthan explains.

To make each model more personalized for the user's environment, they were careful not to prune away layers in the network that capture important statistical information about that user's specific data. In addition, when the models were all combined, they made use of information stored in the central server so it wasn't starting from scratch for each round of communication.

They also developed a technique to reduce the number of communication rounds for users with resource-constrained devices, like a smart phone on a slow network. These users start the federated learning process with a leaner model that has already been optimized by a subset of other users.

Winning big with lottery ticket networks

When they put FedLTN to the test in simulations, it led to better performance and reduced communication costs across the board. In one experiment, a traditional federated learning approach produced a model that was 45 megabytes in size, while their technique generated a model with the same accuracy that was only 5 megabytes. In another test, a state-of-the-art technique required 12,000 megabytes of communication between users and the server to train one model, whereas FedLTN only required 4,500 megabytes.

"With FedLTN, the worst-performing clients still saw a performance boost of more than 10 percent. And the overall model accuracy beat the state-of-the-art personalization algorithm by nearly 10 percent," Mugunthan adds.

Now that they have developed and finetuned FedLTN, Mugunthan is working to integrate the technique into a federated learning startup he recently founded, DynamoFL.

Moving forward, he hopes to continue enhancing this method. For instance, the researchers have demonstrated success using datasets that had labels, but a greater challenge would be applying the same techniques to unlabeled data, he says.

Mugunthan is hopeful this work inspires other researchers to rethink how they approach federated learning.

"This work shows the importance of thinking about these problems from a holistic aspect, and not just individual metrics that have to be improved. Sometimes, improving one metric can actually cause a downgrade in the other metrics. Instead, we should be focusing on how we can improve a bunch of things together, which is really important if it is to be deployed in the real world," he says.


Ilya Feige Joins Cerberus Technology Solutions as Global Head of Artificial Intelligence and Machine Learning – Business Wire

NEW YORK & LONDON--(BUSINESS WIRE)--Cerberus Capital Management, L.P. (together with its affiliates, Cerberus) today announced that Ilya Feige, Ph.D., has joined as Global Head of Artificial Intelligence and Machine Learning for Cerberus Technology Solutions (CTS).

Launched in 2018, CTS is an operating subsidiary of Cerberus focused exclusively on applying leading technologies and advanced analytics to drive business transformations. Today, CTS has more than 80 in-house and partner technologists organized across practice areas, including technology strategy, digital and e-Commerce, solutions architecture, data management and operations, advanced analytics and business intelligence, and cyber security. Dr. Feige will lead the platform's artificial intelligence (AI) and machine learning (ML) practice to apply data-driven solutions across Cerberus' portfolio of investments, as well as to analyze value creation opportunities during diligence processes.

"Our platform brings together top experts across the technology and data domains that are fundamental to an organization's operations and growth," said Ben Sylvester, Chief Executive Officer of CTS. "Beyond his expertise, Ilya has an impressive track record of harnessing data to apply innovative solutions. We are excited for the global impact he will have on our partners in helping to unlock value across their businesses."

Dr. Feige was an executive with Faculty, one of Europe's leading AI companies, and most recently served as Director of AI. During his tenure, he founded the company's AI research lab and subsequently built and led a team of 25 applied AI and ML practitioners. In this role, he spearheaded the expansion of Faculty's AI platform and go-to-market strategy. Dr. Feige graduated from McGill University with the highest honors and received a Ph.D. in Theoretical Physics from Harvard University, where he was awarded the Goldhaber Prize as the top Ph.D. student in physics. He has authored several peer-reviewed publications on AI safety, ML, and physics.

Dr. Feige commented: "The use of technology is only becoming more critical to companies across all industries and I've seen firsthand how the right technical solutions can be transformative to an organization's performance. CTS is a world-class platform that is truly unique. Their integration and deployment of technology expertise at scale helps partners not only improve their businesses, but also become more competitive. I'm looking forward to joining this great team and the broader Cerberus family."

John Tang, Head of EMEA for CTS, added: "We are thrilled to welcome Ilya to our CTS team. This addition underscores our platform's commitment to integrating cutting-edge capabilities, including in next generation AI/ML technologies."

About Cerberus: Founded in 1992, Cerberus is a global leader in alternative investing with approximately $60 billion in assets across complementary credit, private equity, and real estate strategies. We invest across the capital structure where our integrated investment platforms and proprietary operating capabilities create an edge to improve performance and drive long-term value. Our tenured teams have experience working collaboratively across asset classes, sectors, and geographies to seek strong risk-adjusted returns for our investors. For more information about our people and platforms, visit us at http://www.cerberus.com.


The Worldwide Artificial Intelligence Industry is Expected to Reach $1811 Billion by 2030 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence Market Size, Share & Trends Analysis Report by Solution, by Technology (Deep Learning, Machine Learning, Natural Language Processing, Machine Vision), by End Use, by Region, and Segment Forecasts, 2022-2030" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence market size is expected to reach USD 1,811.8 billion by 2030. The market is anticipated to expand at a CAGR of 38.1% from 2022 to 2030.

Companies Mentioned

Artificial Intelligence (AI) denotes the concept and development of computing systems capable of performing tasks customarily requiring human assistance, such as decision-making, speech recognition, visual perception, and language translation. AI uses algorithms to understand human speech, visually recognize objects, and process information. These algorithms are used for data processing, calculation, and automated reasoning.

Artificial intelligence researchers continuously improve algorithms in various respects, as conventional algorithms have drawbacks regarding accuracy and efficiency. These advancements have led manufacturers and technology developers to focus on developing standard algorithms. Recently, several developments have been carried out to enhance artificial intelligence algorithms. For instance, in May 2020, International Business Machines Corporation announced a wide range of new AI-powered services and capabilities, namely IBM Watson AIOps, for enterprise automation. These services are designed to help automate IT infrastructures, making them more resilient and reducing costs.

Various companies are implementing AI-based solutions such as RPA (Robotic Process Automation) to enhance process workflows and to handle and automate repetitive tasks. AI-based solutions are also being coupled with the IoT (Internet of Things) to provide robust results for various business processes. For instance, Microsoft announced an investment of USD 1 billion in OpenAI, a San Francisco-based company. The two businesses teamed up to create AI supercomputing technology on Microsoft's Azure cloud.

The COVID-19 pandemic has emerged as an opportunity for AI-enabled computer systems to fight against the epidemic as several tech companies are working on preventing, mitigating, and containing the virus. For instance, LeewayHertz, a U.S.-based custom software development company, offers technology solutions using AI tools and techniques, including the Face Mask Detection System to identify individuals without a mask and the Human Presence System to monitor patients remotely. Besides, Voxel51 Inc., a U.S.-based artificial intelligence start-up, has developed Voxel51 PDI (Physical Distancing Index) to measure the impact of the global pandemic on social behavior across the world.

AI-powered computer platforms or solutions are being used to fight against COVID-19 in numerous applications, such as early alerts, tracking and prediction, data dashboards, diagnosis and prognosis, treatments and cures, and maintaining social control. Data dashboards that can visualize the pandemic have emerged with the need for coronavirus tracking and prediction. For instance, Microsoft Corporation's Bing AI tracker gives a global overview of the pandemic's current statistics.

Artificial Intelligence Market Report Highlights

Key Topics Covered:

Chapter 1 Methodology and Scope

Chapter 2 Executive Summary

Chapter 3 Market Variables, Trends & Scope

3.1 Market Trends & Outlook

3.2 Market Segmentation & Scope

3.3 Artificial Intelligence Size and Growth Prospects

3.4 Artificial Intelligence-Value Chain Analysis

3.5 Artificial Intelligence Market Dynamics

3.5.1 Market Drivers

3.5.1.1 Economical parallel processing set-up

3.5.1.2 Potential R&D in artificial intelligence systems

3.5.1.3 Big data fuelling AI and Machine Learning profoundly

3.5.1.4 Increasing Cross-Industry Partnerships and Collaborations

3.5.1.5 AI to counter unmet clinical demand

3.5.2 Market Restraint

3.5.2.1 Vast demonstrative data requirement

3.6 Penetration & Growth Prospect Mapping

3.7 Industry Analysis-Porter's

3.8 Company Market Share Analysis, 2021

3.9 Artificial Intelligence-PEST Analysis

3.10 Artificial Intelligence-COVID-19 Impact Analysis

Chapter 4 Artificial Intelligence Market: Solution Estimates & Trend Analysis

Chapter 5 Artificial Intelligence Market: Technology Estimates & Trend Analysis

Chapter 6 Artificial Intelligence Market: End-Use Estimates & Trend Analysis

Chapter 7 Artificial Intelligence Market: Regional Estimates & Trend Analysis

Chapter 8 Competitive Landscape

For more information about this report visit https://www.researchandmarkets.com/r/ykyt2m


Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine – Cureus

In women, the most common neoplasm in terms of malignancy is breast cancer. Also, among deaths due to cancer, breast cancer is the second leading cause [1]. By using ultrasound and X-ray, one can diagnose breast cancer. Other significant techniques are mammography and magnetic resonance imaging (MRI), which successfully help make the appropriate diagnosis. First preference in imaging is given to ultrasound for the depiction and categorization of breast lesions as it is non-invasive, feasible, and cost-effective. Along with these, its availability is high and shows acceptable diagnostic performance. Those mentioned above are the basic techniques used as diagnostic tools. Besides these, some newer techniques are available, including color Doppler and contrast-enhanced ultrasound. Spectral Doppler, as well as elastography, also contributes to the diagnosis. These newer techniques support ultrasound doctors in obtaining more precise information. However, the drawback is that it does suffer from operator dependence [2]. Deep learning (DL) algorithms, which are precisely a part of artificial intelligence (AI) in particular, have received considerable attention in the past few years due to their outstanding performance in imaging tasks. Technology inbuilt in AI makes better evaluation of the appreciated data related to imaging [3]. AI in ultrasound lays significant focus on distinguishing between benign and malignant masses related to the breast. Radiologists nowadays interpret, analyze, and detect breast images. With a heavy and long-term volume of work, radiologists are more likely to make errors in image interpretation due to exhaustion, which is likely to result in a misidentification or failed diagnosis, which AI can prevent. Humans make various errors in the diagnosis part. To reduce those errors, there is the implementation of a technique known as computer-aided diagnosis (CAD). In this, an algorithm is present that completes the processing of the image along with analysis [4]. Convolutional neural networks (CNNs), a subset of DL, are the most recent technology used in medical imaging [5,6]. AI has the potential to enhance breast imaging interpretation accuracy, speed, and quality. By standardizing and upgrading workflow, minimizing monotonous tasks, and identifying issues, AI has the potential to revolutionize breast imaging. This will most likely free up physician resources that could be used to improve communication with patients and integrate with colleagues. Figure 1 shows the relation between the various subsets of AI in the article.

Traditional machine learning is the basis and the area of focus that is included under early AI. It deals with the problems in a stepwise manner. It involves a two-step procedure that is object detection followed by object recognition. The first step is object detection, in which case there exists an algorithm for bounding box detection that the machine uses in scanning the image to locate the appropriate object area. The other step, which is the second step, includes the object recognition algorithm that is based on the initial step. Identifying certain characteristic features and encoding the same into a data type are the tasks that experts perform in the identification process. The advantage of a machine is that it extracts the characteristic features, which is followed by performing quantitative analysis, processes the information, and gives the final judgment. In this way, it provides assistance to radiologists in detecting the lesions and analyzing them [5]. Through this, both the efficiency and the accuracy of the diagnosis can be enhanced and improved. In the previous few decades, the popularity of CAD is prevailing in terms of development as well as advancement. CAD includes machine learning methodologies along with multidisciplinary understanding as well as techniques. Analyzing the information of the patient is done by using these techniques. Similarly, the results can provide assistance to clinicians in the process of making an accurate diagnosis [7]. CAD could very well evaluate imaging data. It directly provides the information after analyzing it to the clinician and also correlates the results with some diseases that involve the use of statistical modelling of previous cases in the population. It has many other applications, such as lesion detection along with characterization and staging of cancer, including the enactment of a proper treatment plan with the assessment of its response. Adding to it, prediction of prognosis and recurrence are some more applications. DL has transformed computer vision [8,9].

DL, which is an advanced form of machine learning, does not depend solely on features and ROIs (region of interest) that are preset by humans, which is the opposite of traditional machine learning algorithms [9,10]. Along with this, it prefers to complete all the targeted processes independently. CNNs are the evolving configuration in healthcare, which is a part of DL. It can be explained by an example. Majorly, the model consists of three layers: input layers, hidden layers, and output layers. In this case, the hidden layer is the most vital determinant in achieving recognition. Being the most crucial determinant of achieving recognition, a significant number of convolutional layers, along with a fully connected layer, are encompassed in the hidden layers. The various massive problems generated by the machine based on input activity are handled by the convolutional layers. They are connected to form a complex system with the help of convolution layers, and, hence, it can easily output the results [11,12]. DL methods have an excessive dependence on data and hardware. In spite of this, it has easily defeated other frameworks in computer vision completion [13]. Furthermore, DL methods perform flawlessly not only in ultrasound but also in computed tomography (CT) [14,15]. According to certain studies, it has also been shown that there has been an adequate performance of DL methods in MRI [16]. DL uses a deep neural network architecture to transform the input information into multiple layers of abstraction and thereby discover the data representations [17]. The deep neural network's multiple layers of weights are iteratively updated with a large dataset with effective functioning. This yields a mathematical model of a complex type that is capable of extracting relevant features from input data showing high selectivity. DL has made major advances in many tasks such as target identification, including characterization, speech and text recognition, and face recognition. Some other advancements are smart devices and robotics.
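As a rough, generic illustration of the layered structure described above (an input, convolutional hidden layers, and a fully connected output layer), a minimal CNN for a single-channel image might be sketched in PyTorch as follows; the layer sizes and the 224x224 input are arbitrary assumptions, not a model from any of the cited studies.

```python
# Minimal, generic CNN sketch (illustrative only; sizes are arbitrary assumptions)
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(                 # convolutional "hidden" layers
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(               # fully connected output layer
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, n_classes),        # assumes a 224x224 grayscale input
        )

    def forward(self, x):                              # x: (batch, 1, 224, 224)
        return self.classifier(self.features(x))
```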

An ultrasonic machine is used to upload images taken to the workstation, where they are reprocessed. The DL technique (S-detect) can, on the other hand, directly pinpoint breast lesions on the ultrasound. It is also used in segmentation, feature analysis, and depictions. The BI-RADS (Breast Imaging-Reporting and Data System) 2013 lexicon may also be used for the same. It can provide instantaneous results in the form of a frozen image on an ultrasound machine to detect any malignancy. This is performed by selecting ROI automatically or by manual means [18]. They assessed the performance of S-detect in terms of diagnosis so as to confirm whether the breast lesion was benign or malignant. On setting the cutoff at category 4a in BI-RADS, it was observed that the accuracy, along with specificity and PPV (positive predictive value), was high in S-detect in comparison with the radiologist (p = 0.05 for all). Ultrasonologists typically use macroscopic and microscopic features of breast images to recognize and segment potentially malicious lesions. Shape and edge, including orientation and accurate location of calcification, can be detected. Certain features, such as rear type and echo type, along with hardness, can also be identified. Following that, suspicious masses are classified using the BI-RADS scale so as to assess and estimate the level of cancer speculation in breast lesions. However, its macroscopic and microscopic characteristics are critical in distinguishing whether the masses are of malignant type. As a result, ultrasound experts are in high demand for correctly obtaining these features.

Mammography is a non-invasive technique with high resolution that is commonly used. It also shows good repeatability. Mammography detects those masses that doctors fail to palpate in the breasts and can reliably distinguish whether the lesions are benign or malignant. Mammograms are retrieved from digital mammography (DM). They are possibly provided to process (raw imaging data) as well as to present (a post-treated form of the raw data) image layouts using DM systems [19]. Breast calcification appears on mammography as narrow white spots, which are breast calcifications caused by narrow deposits of calcium salts in the tissues of the breast. Calcification is classified into two types: microcalcifications and macrocalcifications. The large, along with rough, are macrocalcifications; they are usually benign and depend on the age group. Microcalcifications, which range in size from 0.1 mm to 1 mm, can be found within or outside visible masses and may act as early warning signs of breast cancer [20]. Nowadays, significant CAD systems are progressing to detect calcifications in mammography.

DL, like DM, DBT (digital breast tomosynthesis), and USG (ultrasonography), is primarily utilized in MRI to conduct or assist in the categorization and identification of breast lesions. The other modalities and MRI. differ in their dimensions. MRI produces 3D scans; unlike it, 2D images are formed by other modalities such as DM, DBT, and USG. Furthermore, MRI observes the input along with the outflow of contrast agents (dynamic contrast-enhanced MRI) and changes its pre-existing dimensions to 4D. Moreover, hurdles are created while applying DL models on the 3D or 4D scans because the majority of models are designed to function on 2D pictures. To address these issues, various ways have been proposed. The most frequent method is to convert 3D photos to 2D images. It is accomplished by means of slicing, in which the 3D image is sliced into 2D, or by applying the highest intensity projection (MIP) [21,22]. DL is utilized to classify the axillary group of lymph node metastases in addition to lesion categorization [23-25]. Instead of biopsy data, positron emission tomography (PET) is used as the gold standard. The reason is that, while a biopsy is conclusive as truth, it leaves artifacts such as needle marks along with biopsy clips, which may unintentionally lead to shifting of the DL algorithm toward a malignant categorization [23,24].

PET or scintigraphy is a nuclear medicine imaging technique. They are predicted to be not much more suitable than the other previously stated imaging modalities, namely DM, digital tomosynthesis, USG, and MRI, for evaluating early-stage of cancerous lesions in the breast. The nuclear techniques, on the other hand, provide added utility for detecting and classifying axillary lymph nodes along with distant staging [26]. As a result, it is not surprising that DL is being used in this imaging field, albeit in a limited capacity. PET/CT assessment of whole-body metabolic tumor volume (MTV) could provide a measure of tumor burden. If a DL model could handle this operation, it would considerably minimize manual labor because, in practical application, for acquiring MTV, all tumors must be identified [27,28]. Weber et al. investigated whether a CNN trained to detect and segment the lesions in the breast with whole-body PET/CT scans of patients who have cancer could also detect and segment lesions in lymphoma and lung cancer patients. Moreover, the technique of DL, along with nuclear medicine techniques, are used parallelly in improving the tasks that are similarly used in other imaging approaches. Li et al. developed a 3D CNN model to help doctors detect axillary lymph node metastases on PET/CT scans [28]. Because of their network, clinicians' sensitivity grew by 7.8% on average, while their specificity remained unchanged (99.0%). However, both clinicians outscored the DL model on its own.

Worldwide, it has been observed that in women, there is a higher incidence and fatality rate of breast cancer; hence, many countries have implemented screening centers for women of the appropriate age group for detection of breast cancer. The ideology behind the implementation of screening centers is to distinguish between benign breast lesions and malignant breast lesions. The primary classification system used is BI-RADS for classifying lesions in breast ultrasound. AI systems have been developed with equipped features for classifying benign and malignant breast lesions to assist clinicians in making consistent and accurate decisions. Ciritsis et al. categorized breast ultrasound images into BI-RADS 2-3 and BI-RADS 4-5 using a deep convolution neural network (dCNN) with an internal data set and an external data set. The dCNN had a classification accuracy of 93.1% (external 95.3%), whereas radiologists had a classification accuracy of 91.65% (external 94.1 2%). This indicates that deep neural networks (dCNNs) can be utilized to simulate human decision-making. Becker et al. analyzed 637 breast ultrasound pictures using DL software (84 malignant and 553 benign lesions). The software was trained on a randomly chosen subset of the photographs (n=445, 70%), with the remaining samples (n=192) used to validate the resulting model during the training process. The findings demonstrated that the neural network, which had only been trained on a few hundred examples, had the same accuracy as a radiologist's reading. The neural network outperformed a trained medical student with the same training data set [29-31]. This finding implies that AI-assisted classification and diagnosis of breast illnesses can significantly cut diagnostic time and improve diagnostic accuracy among novice doctors. Table 1 shows BIRADS scoring.

AI is still struggling to advance to a higher level. Although it is tremendously progressing in the healthcare fraternity, it still has to cover a long journey to properly blend into clinicians' work and be widely implemented around the world. Many limitations have been reported for CAD systems for breast cancer screening, including a global shortage of public datasets, a high reliance on ROI annotation, increased image standards in terms of quality, regional discrepancies, and struggles in binary classification. Furthermore, AI is designed for single-task training and cannot focus on multiple tasks at once, and, hence, it is one of the significant obstacles to the advancement of DL associated with breast imaging. CAD systems are progressively evolving in ultrasound elastography [32]. Similarly, it is also progressing in the technology related to contrast-enhanced mammography as well as MRI [33,34]. AI in breast imaging can be used to not only detect but also classify breast diseases and anticipate lymph node tumor progression [35]. Moreover, it can also predict disease recurrence. As the technology of AI advances, there will be higher accuracy, along with greater efficiency, and a more precise plan of treatment for breast ailments, enabling them to achieve early detection with accurate diagnosis among radiologists. Moreover, it can also predict disease recurrence. The lack of a consistent strategy for segmentation (2D vs. 3D), feature extraction, and selection and categorization of significant radiomic data is a common limitation shared by all imaging modalities. Future studies with greater datasets will allow for subgroup analysis by patient group and tumor type [36].

Without a crystal ball, it is impossible to predict whether further advances in AI will one day start replacing radiologists or other functions in diagnostics reportedly performed by humans, but AI will undeniably play a major role in radiology, one that is currently unfolding rapidly. When compared to traditional clinical models, AI has the added benefit of being able to pinpoint distinctive features, textures, and details that radiologists seem unable to appreciate, as well as quantitatively define image explicit details, making its evaluation more objective. Moreover, AI in breast imaging can be used to not only detect but also classify breast diseases. As a result, greater emphasis needs to be placed on higher-quality research studies that have the potential to influence treatment, outcomes for patients, and social impact.


The Adoption of Artificial Intelligence And Machine Learning In The Music Streaming Market Is Gaining Popularity As Per The Business Research…

LONDON, Sept. 07, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the music streaming market, artificial intelligence and machine learning in music streaming devices are the key trends in the music streaming market. Technologies like artificial intelligence and machine learning enhance the music streaming experience by increasing storage and improving search recommendations, improving the overall experience.

For instance, in January 2022, Gaana, an India-based music streaming app introduced a new product feature using artificial intelligence to enhance the music listening experience for its listeners. The app will modify music preferences using artificial intelligence to suit a person's particular occasion or daily mood.

Request for a sample of the global music streaming market report

The global online music streaming market size is expected to grow from $24.09 billion in 2021 to $27.24 billion in 2022 at a compound annual growth rate (CAGR) of 13.08%. The global music streaming market is expected to grow to $45.31 billion in 2026 at a compound annual growth rate (CAGR) of 13.57%.

The increasing adoption of smart devices is expected to propel the growth of the music streaming market. Smart devices such as smartphones and smart speakers have changed the way of listening to music. They include smart features like the ability to set alarms, play music on voice command, control smart devices in-home, and stream live music, as they are powered by a virtual assistant. For instance, according to statistics from Amazon Alexa 2020, nearly 53.6 million Amazon Echo speakers (smart speakers) were sold in 2020, which increased to 65 million in 2021. Therefore, the increasing adoption of smart devices will drive the music streaming market growth.

Major players in the music streaming market are Amazon, Apple, Spotify, Gaana, SoundCloud, YouTube Music, Tidal, Deezer, Pandora, Sirius XM Holdings, iHeartRadio, Aspiro, Tencent Music Entertainment, Google, Idagio, LiveXLive, QTRAX, Saavn, Samsung, Sony Corporation, TuneIn, JOOX, NetEase, Kakao and Times Internet.

The global music streaming market is segmented by service into on-demand streaming, live streaming; by content into audio, video; by platform into application-based, web-based; by revenue channels into non-subscription, subscription; by end-use into individual, commercial.

North America was the largest region in the music streaming market in 2021. The regions covered in the global music streaming industry analysis are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Music Streaming Global Market Report 2022: Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide music streaming market overviews, analyze and forecast market size and growth for the whole market, music streaming market segments and geographies, music streaming market trends, music streaming market drivers, music streaming market restraints, and leading competitors' revenues, profiles, and market shares in over 1,000 industry reports, covering over 2,500 market segments and 60 geographies.

The report also gives an in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Music Recording Global Market Report 2022 By Type (Record Production, Music Publishers, Record Distribution, Sound Recording Studios), By Application (Mechanical, Performance, Synchronization, Digital), By End-User (Individual, Commercial), By Genre (Rock, Hip Hop, Pop, Jazz) Market Size, Trends, And Global Forecast 2022-2026

Content Streaming Global Market Report 2022 By Platform (Smartphones, Laptops & Desktops, Smart TVs, Gaming Consoles), By Type (On-Demand Video Streaming, Live Video Streaming ), By Deployment (Cloud, On-Premise), By End User (Consumer, Enterprise) Market Size, Trends, And Global Forecast 2022-2026

Smart Home Devices Global Market Report 2022 By Technology (Wi-Fi Technology, Bluetooth Technology), By Application (Energy Management, Climate Control System, Healthcare System, Home Entertainment System, Lighting Control System, Security & Access Control System), By Sales Channel (Online, Offline) Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.

Blog: http://blog.tbrc.info/


Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier | Scientific Reports – Nature.com

In this subsection, we present details on how we process the dataset, turn it into a network graph and finally how we produce, and process features that belong to the graph. Topics to be covered are:

splitting the data,

preprocessing,

feature importance and selection,

computation of similarity between samples, and

generating of the raw graph.

After preprocessing the data, the next step is to split the dataset into training and test samples for validation purposes. We selected cross-validation (CV) as the validation method since it is the de facto standard in ML research. For CV, the full dataset is split into k folds; the classifier model is trained using data from (k-1) folds and then tested on the remaining kth fold. Eventually, after k iterations, average performance scores (like the F1 measure or ROC) of all folds are used to benchmark the classifier model.

A crucial step of CV is selecting the right proportion between the training and test subsamples, i.e., the number of folds. Determining the most appropriate number of folds k for a given dataset is still an open research question17, though the de facto standard is to select k=2, k=5, or k=10. To address the selection of the right fold size, we have identified two priorities:

Priority 1 (Class Balance): We need to consider that every split of the dataset needs to be class-balanced. Since the number of class types has a restrictive effect on selecting enough similar samples, detecting the effective number of folds depends heavily on this parameter. As a result, whenever we deal with a problem which has low-represented class(es), we select k=2.

Priority 2 (High Representation): In our model, briefly, we build a network from the training subsamples. Efficient network analysis depends on the size (i.e., number of nodes) of the network. Thus, maximizing the training subsamples with enough representatives from each class (diversity) is our priority, as far as possible, when splitting the dataset. This way we can have more nodes. In brief, whenever we do not violate Priority 1, we select k=5.

By balancing these two priorities, we select an efficient CV fold size by evaluating the characteristics of each dataset in terms of its sample size and number of different classes. The selected fold value for each dataset is specified in the Experiments and results section. To fulfill the class-balancing priority, we employed stratified sampling. In this model, each CV fold contains approximately the same percentage of samples of each target class as the complete set.
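As an illustrative sketch (not the authors' implementation), the stratified k-fold split described above can be produced with scikit-learn's StratifiedKFold; the toy data and the choice of k=5 here are assumptions.

```python
# Illustrative sketch of the stratified k-fold split described above
# (not the authors' code; k is chosen per dataset as discussed).
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(100, 8)                       # toy feature matrix
y = np.random.randint(0, 2, size=100)            # toy class labels

k = 5                                            # or k = 2 for datasets with rare classes
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
for train_idx, test_idx in skf.split(X, y):
    X_train, X_test = X[train_idx], X[test_idx]  # each fold keeps the class proportions
    y_train, y_test = y[train_idx], y[test_idx]
```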

Preprocessing starts with the handling of missing data. For this part, we preferred to omit all samples which have one or more missing feature(s). By doing this, we have focused merely on developing the model, skipping trivial concerns.

As stated earlier, GSNAc can work on datasets that have both numerical and categorical values. To ensure proper processing of these data types, as a first step, we separate numerical and categorical features18. In order to process categorical (string) features mathematically, they are transformed into unique integers for each unique category, a technique called labelization. It is worth noting that, contrary to the common approach, we do not use the one-hot-encoding technique for transforming categorical features, which is the method of creating dummy binary-valued features. Labelization does not generate extra features, whereas one-hot-encoding extends the number of features.

For the numerical part, as a very important stage of preprocessing, scaling19 of the features follows. Scaling is beneficial since the features may have very different ranges, and this might affect scale-dependent processes like distance computation. There are two generally accepted scaling techniques: normalization and standardization. Normalization transforms features linearly into a closed range like [0, 1], which does not affect the variation of values among features. On the other hand, standardization transforms the feature space into a distribution of values that is centered around the mean with a unit standard deviation. This way, the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation. Since GSNAc is heavily dependent on vectorial distances, we prefer not to lose the structure of the variation within features, and so our choice for scaling the features is normalization. Here, it is worth mentioning that all the preprocessing is fitted on the training part of the data and then applied to the test data, ensuring no data leakage occurs.
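A minimal sketch of this preprocessing stage, assuming a small toy table with one categorical and one numerical column (the column names and values are hypothetical, and this is not the authors' code): samples with missing values are dropped, categorical values are labelized into integers, and numerical values are normalized to [0, 1], with both transformers fitted only on the training split.

```python
# Sketch: drop missing rows, labelize categoricals, normalize numericals (fit on train only)
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

train = pd.DataFrame({"habitat": ["sky", "sea", "sky"], "wingspan": [0.2, 0.0, 0.3]})
test = pd.DataFrame({"habitat": ["sea"], "wingspan": [0.1]})

train = train.dropna()                                      # omit samples with missing features

le = LabelEncoder().fit(train["habitat"])                   # labelization: category -> integer
train["habitat"] = le.transform(train["habitat"])
test["habitat"] = le.transform(test["habitat"])

scaler = MinMaxScaler().fit(train[["wingspan"]])            # normalization to [0, 1]
train["wingspan"] = scaler.transform(train[["wingspan"]]).ravel()
test["wingspan"] = scaler.transform(test[["wingspan"]]).ravel()  # no fitting on test data
```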

Feature Importance (FI) broadly refers to the scoring of features based on their usefulness in prediction. It is obvious that in any problem some features might be more definitive in terms of their predictive capability for the class. Moreover, a combination of some features may have a higher total effect than the sum of their individual capacities in this sense. FI models, in general, address this type of concern. Indeed, almost all ML classification algorithms use a FI algorithm under the hood, since this is required for the proper weighting of features before feeding data into the model. It is part of any ML classifier, and of GSNAc. As a scale-sensitive model, GSNAc's vectorial similarity benefits greatly from more distinctive features.

For computing feature importance, we use an off-the-shelf algorithm, the supervised k-best feature selection [18] method. The k-best feature selection algorithm ranks all features by an ANOVA analysis against the class labels. The ANOVA F-value analyzes the variance between each feature and its respective class: it is the ratio of the variation between sample means to the variation within the samples. These F-values serve as the features' importance scores. Our general strategy is to keep all features; the exception is genomic datasets, which contain thousands of features and for which we do omit some. Apart from that, instead of selecting a subset of features, we keep them all and use the importance scores learned at this step as the weight vector in the similarity calculation.
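A minimal sketch of deriving such a weight vector (assuming scikit-learn and the scaled training arrays from the earlier sketches; scaling the F-scores so they sum to one is our illustrative choice, not necessarily the authors' exact scheme):

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# score every feature with the ANOVA F-value against the class labels
selector = SelectKBest(score_func=f_classif, k="all")
selector.fit(X_train_scaled, y_train)

# keep all features and turn their F-scores into a weight vector
f_scores = np.nan_to_num(selector.scores_)
feature_weights = f_scores / f_scores.sum()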

In this step, we generate an undirected network graph G whose nodes are the samples and whose edges are constructed using distance metrics [20] computed between the samples' feature values. Distances are converted to similarity scores to generate an adjacency matrix, from which the raw graph is built. As a crucial note, since we aim to predict test samples by using G, in each batch we only process the training samples.

In our study, to construct a graph from a dataset, we define edge weights based on the Euclidean distance between the sample vectors: the closer two samples are, the larger the weight of the edge between them. Simply put, the Euclidean distance (also known as the L2 norm) gives the unitless straight-line (shortest) distance between two vectors in space. In formal terms, for f-dimensional vectors u and v, the Euclidean distance is defined as:

$$d\left(u,v\right)=\sqrt{\sum_{i=1}^{f}\left(u_{i}-v_{i}\right)^{2}}$$

A slightly modified use of the Euclidean distance introduces weights for the dimensions. Recall from the discussion of feature importance in the previous sections that some features may carry more information than others. We address this by computing a weighted form of the L2-norm-based distance, presented as:

$${dist\_L2}_{w}\left(u,v\right)=\sqrt{\sum_{i}\left(w_{i}\left(u_{i}-v_{i}\right)\right)^{2}}$$

where w is the feature importance (weight) vector and i iterates over the numerical dimensions.

The Euclidean distance is not appropriate for categorical variables; for example, it is ambiguous how far a canary's habitat 'sky' is from a shark's habitat 'sea'. Accordingly, whenever the data contains categorical features, we switch the distance metric for those features to the L0 norm. The L0 norm is 0 if the categories are the same and 1 whenever they differ; between 'sky' and 'sea' the L0 norm is therefore 1, the maximum value. Following the discussion of feature weights, the L0 norm is also computed in a weighted form as \({dist\_L0}_{w}(u,v)=\sum_{j}w_{j}\,\mathbb{1}(u_{j}\ne v_{j})\), where \(\mathbb{1}(\cdot)\) is 1 when its condition holds and 0 otherwise, and j iterates over the categorical dimensions.

After computing the weighted pairwise distances between all the training samples, we combine the numerical and categorical parts as \({{dist}_{w}(u,v)}^{2}={{dist\_L2}_{w}(u,v)}^{2}+{{dist\_L0}_{w}(u,v)}^{2}\). With a pairwise distance for each pair of samples, we obtain an n x n square, symmetric distance matrix D, where n is the number of training samples and each element gives the distance between the corresponding sample vectors.
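A minimal sketch of this combined weighted distance in plain NumPy (all array names are ours: X_num and X_cat hold normalized numerical and label-encoded categorical features, w_num and w_cat the corresponding feature weights):

import numpy as np

rng = np.random.default_rng(0)
X_num = rng.random(size=(30, 4))           # illustrative normalized numerical features
X_cat = rng.integers(0, 3, size=(30, 2))   # illustrative label-encoded categorical features
w_num = np.full(4, 0.25)                   # illustrative feature weights
w_cat = np.full(2, 0.5)

def combined_distance(u_num, u_cat, v_num, v_cat, w_num, w_cat):
    # weighted L2 part over the numerical dimensions
    d_num_sq = np.sum((w_num * (u_num - v_num)) ** 2)
    # weighted L0 part over the categorical dimensions (1 where the categories differ)
    d_cat_sq = np.sum(w_cat * (u_cat != v_cat)) ** 2
    return np.sqrt(d_num_sq + d_cat_sq)

# n x n symmetric distance matrix over the training samples
n = X_num.shape[0]
D = np.zeros((n, n))
for a in range(n):
    for b in range(a + 1, n):
        D[a, b] = D[b, a] = combined_distance(X_num[a], X_cat[a], X_num[b], X_cat[b], w_num, w_cat)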

$$D=\left[\begin{array}{ccc}0 & \cdots & d(1,n)\\ \vdots & \ddots & \vdots \\ d(n,1) & \cdots & 0\end{array}\right]$$

We aim to obtain a weighted network in which edge weights represent the closeness of the connected nodes, so we first need to convert distance scores into similarity scores. We do this simply by subtracting each distance from the maximum distance found in D:

$$s\left(u,v\right)=\max(D)-{dist}_{w}\left(u,v\right)$$

Finally, after removing self-loops (i.e., setting the diagonal elements of A to zero), we use the adjacency matrix A to generate an undirected network graph G. In this step we also discard the lower triangular part (which mirrors the upper triangular part) to avoid redundancy. Note that, in the transition from the adjacency matrix to a graph, a (positive) similarity score between two samples u and v creates an edge between them, and that similarity score serves as the weight of this particular edge in graph G.

$$A=\left[\begin{array}{ccc}- & \cdots & s(1,n)\\ \vdots & \ddots & \vdots \\ - & \cdots & -\end{array}\right]$$

The raw graph generated in this step is a complete graph: every node is connected to every other node by an edge carrying some weight. Complete graphs are very dense and can be impractical to analyze; for instance, some SNA metrics, such as betweenness centrality, become uninformative or prohibitively expensive to compute on this kind of graph.
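A minimal sketch of turning the distance matrix into the raw similarity graph (assuming NetworkX and the distance matrix D from the sketch above; variable names are ours):

import numpy as np
import networkx as nx

# convert distances to similarities and remove self-loops
A = D.max() - D                  # similarity = maximum distance minus distance
np.fill_diagonal(A, 0.0)         # zero the diagonal: no self-loops

# build the undirected weighted raw graph; only the upper triangle is needed
G = nx.Graph()
G.add_nodes_from(range(A.shape[0]))
rows, cols = np.triu_indices_from(A, k=1)
for a, b in zip(rows, cols):
    if A[a, b] > 0:              # a positive similarity creates a weighted edge
        G.add_edge(a, b, weight=A[a, b])

# the result is (nearly) complete: almost every pair of nodes is directly connected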

View post:
Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier | Scientific Reports - Nature.com


Update your domain’s name servers | Cloud DNS | Google Cloud

After you create a managed zone, you must change the name servers that are associated with your domain registration to point to the Cloud DNS name servers. The process differs by domain registrar provider. Consult the documentation for your provider to determine how to make the name server change.

If you don't already have a domain name, you can create and register a new domain name at Google Domains or Cloud Domains, or you can use a third-party domain name registrar.

If you are using Cloud Domains, see Configure DNS for the domain in the Cloud Domains documentation.

If you are using Google Domains, follow these instructions to update your domain's name servers.

For Cloud DNS to work, you must determine the name servers that have been associated with your managed zone and verify that they match the name servers for your domain. Different managed zones have different name servers.

In the Google Cloud console, go to the Cloud DNS zones page.


Under Zone name, select the name of your managed zone.

On the Zone details page, click Registrar setup at the top right of the page.

To return the list of name servers that are configured to serve DNS queries for your zone, run the dns managed-zones describe command:
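The basic form of this command is sketched below (assuming the gcloud CLI is installed and authenticated):

gcloud dns managed-zones describe ZONE_NAME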

Replace ZONE_NAME with the name of the managed zone for which you want to return a list of name servers.

The IP addresses of your Cloud DNS name servers change, and may be different for users in different geographic locations.

To find the IP addresses for the name servers in a given name server shard, run the following command:

For private zones, you can't query name servers on the public internet. Therefore, it's not necessary to find their IP addresses.

To find all the IP address ranges used by Google Cloud, see Where can I find Compute Engine IP ranges?

Verify that the name servers for the domain match the name servers listed in the Cloud DNS zone.

To look up name servers that are currently in use, run the dig command:
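For example, a typical invocation looks like the following, where DOMAIN_NAME is a placeholder for your domain:

dig +short NS DOMAIN_NAME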

Now that you have the list of Cloud DNS name servers hosting your managed zone, use your domain registrar to update the name servers for your domain. Your domain registrar might be Google Domains, Cloud Domains, or a third-party registrar.

Typically, you must provide at least two Cloud DNS name servers to the domain registrar. To benefit from Cloud DNS's high availability, you must use all the name servers.

After changing your domain registrar's name servers, it can take a while for resolver traffic to be directed to your new Cloud DNS name servers. Resolvers could continue to use your old name servers until the TTL on the old NS records expires.

More here:
Update your domain's name servers | Cloud DNS | Google Cloud


Cloud servers are proving to be an unfortunately common entry route for cyberattacks – TechRadar

Cloud servers are now the number one entry route for cyberattacks, new research has claimed, with 41% of companies reporting them as the first point of entry.

The problem is only getting worse: the number of attacks using cloud servers as their initial point of entry rose 10% year-on-year, and cloud servers have also leapfrogged corporate servers as the main way for criminals to find their way into organizations.

The data, collected by cyber insurer Hiscox from a survey of 5,181 professionals from eight countries, found it's not just cloud servers that are letting hackers in, as 40% of businesses highlighted business emails as the main access point for cyberattacks.

Other common entry methods included remote access servers (RAS), which were cited by 31% of respondents, and employee-owned mobile devices, which were cited by 29% (a 6% rise from the year before).

Distributed denial of service (DDoS) attacks were also a popular method, cited by 26% of those surveyed.

The data also provided some insight into how cyberattacks are impacting different countries.

Businesses in the United Kingdom were found to be the least likely of all the countries surveyed to have experienced a cyberattack in the last year, at 42%, well below the Netherlands and France at 57% and 52% respectively.

However, on the flip side, the UK had the highest median cost for cyberattacks out of all the countries looked at, coming in at $28,000.

It's not just the smaller, underfunded firms that can fall victim to cloud server-based attacks.

Accenture, one of the world's largest IT consultancy firms, recently suffered an attack involving the LockBit ransomware strain which impacted a cloud server environment.

View original post here:
Cloud servers are proving to be an unfortunately common entry route for cyberattacks - TechRadar
