
Yo-Yo DDoS Cyber Attacks; What they Are and How You Can Beat Them – Geektime

Typically, DDoS (Distributed Denial of Service) attacks flood a target with massive volumes of HTTP, DNS, TCP, and other traffic, allowing attackers to disrupt even the most well-defended networks or servers. But Yo-Yo DDoS is an entirely different animal.

Yo-Yo attacks are a much more innovative way to attack public cloud infrastructure. In today's cloud architectures, almost every resource can scale quickly: nodes, Kubernetes Pods, load balancers, and so on. You have effectively unlimited resources when it comes to scaling in the public cloud. Cyber attackers turn those auto-scaling capabilities against you to hurt you financially; this can literally destroy small organizations with limited cloud budgets. This article sheds more light on these attacks to help you increase your cyber readiness.


Yo-Yo DDoS attacks can be tricky to identify because they are brief and don't necessarily result in denial-of-service (DoS) conditions. When carrying out a Yo-Yo attack, hackers flood their target with so much traffic that cloud resources such as load balancers and front-end services automatically scale up. Then they suddenly halt the traffic, leaving the application over-provisioned; once the autoscaler decides that traffic volume has decreased, it scales the resources back down. The attacker then turns the DDoS traffic on anew, and the cycle repeats, hence the name Yo-Yo attack.
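To make the cycle concrete, here is a minimal Python simulation with made-up numbers (the traffic rates, per-node capacity, and scale-down lag are all illustrative, not measurements of any real autoscaler):

```python
# Toy model of a Yo-Yo attack: the autoscaler scales up instantly under load
# but waits a few minutes before scaling down, so the victim pays for idle
# capacity after every burst stops.
BASELINE, BURST = 100, 5000       # requests per minute: real users vs. attack
CAPACITY_PER_NODE = 200           # requests per minute one node can serve
SCALE_LAG = 3                     # minutes of low load before scaling down

nodes, low_minutes, cost = 1, 0, 0
for minute in range(60):
    attacking = (minute // 10) % 2 == 0          # attacker toggles every 10 min
    traffic = BURST if attacking else BASELINE
    needed = -(-traffic // CAPACITY_PER_NODE)    # ceiling division
    if needed > nodes:
        nodes, low_minutes = needed, 0           # scale up immediately
    elif needed < nodes:
        low_minutes += 1
        if low_minutes >= SCALE_LAG:             # cautious scale-down after a lag
            nodes, low_minutes = needed, 0
    cost += nodes                                # node-minutes billed

print(f"node-minutes billed: {cost} (real users alone would need about 60)")
```

In this toy run, the victim is billed roughly fifteen times the capacity its legitimate traffic requires, which is exactly the financial drain described below.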

Constantly scaling up and down can be a financial drain on the application's owners, who must pay a lot of money to the hyperscalers. In some cases, this behaviour can be difficult or impossible to differentiate from legitimate requests. Unlike other forms of DDoS attacks, Yo-Yos have no centralized source; they often originate from many different machines across the Internet.

You should control your cloud scaling behaviour by setting limits for every cloud resource you scale, to avoid large financial outlays. If you don't set a maximum scaling limit, you could waste a lot of money on compute and cloud-native services. Monitor your compute autoscaling groups and use anomaly detection to recognize unusual scaling patterns automatically. You can then create alerts for unusual scaling patterns and further investigate your infrastructure's scaling and spending.
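As a sketch of those two tips, assuming an AWS Auto Scaling group managed with boto3 (the group name, alarm name, and numbers are hypothetical placeholders), you might cap the group's maximum size and attach a CloudWatch anomaly-detection alarm to its instance count:

```python
import boto3

# Cap how far the group can scale out, so an attack has a hard cost ceiling.
autoscaling = boto3.client("autoscaling")
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-frontend-asg",   # hypothetical group name
    MinSize=2,
    MaxSize=10,                                # the maximum scaling limitation
)

# Alarm when the in-service instance count leaves its learned "normal" band.
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="asg-unusual-scaling",           # hypothetical alarm name
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=3,
    ThresholdMetricId="ad1",
    Metrics=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/AutoScaling",
                    "MetricName": "GroupInServiceInstances",
                    "Dimensions": [{"Name": "AutoScalingGroupName",
                                    "Value": "web-frontend-asg"}],
                },
                "Period": 300,
                "Stat": "Average",
            },
        },
        # Band of expected values learned from history, 2 standard deviations wide.
        {"Id": "ad1", "Expression": "ANOMALY_DETECTION_BAND(m1, 2)"},
    ],
)
```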

Although they're difficult to detect, Yo-Yo attacks can be mitigated by hiding traffic-scaling configuration. Attackers need to know how much scaling has taken place to stop the DDoS attack and eventually turn it on again once traffic returns to a predetermined average level. If the website or service owner can hide scaling information, this would help mitigate any preparations attackers might have made before launching the attack.

To improve your cloud's security against such attacks, it's worth exploring the hyperscalers' own cloud-native services, such as AWS Shield and Google Cloud Armor, which can help you mitigate complex attacks. Alternatively, you can pick third-party solutions from specialized security companies such as Cloudflare or Imperva Incapsula.

Another way to mitigate Yo-Yo DDoS attacks is to avoid the default values for downscaling and upscaling in the cloud service provider's load-balancing mechanism. Doing so also disrupts any plan attackers might have made about when to stop sending extra junk traffic and when to start again.
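A minimal sketch of that advice, again assuming AWS with boto3 and hypothetical names: replace the commonly used default target value and warmup period so the scale-up/scale-down rhythm is harder for an attacker to time.

```python
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-frontend-asg",   # hypothetical group name
    PolicyName="non-default-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 62.0,       # anything but the 50% everyone uses
    },
    EstimatedInstanceWarmup=420,   # non-default warmup changes the scaling rhythm
)
```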

The general tips for guarding against DDoS attacks include keeping everything on the system updated. Fix all security issues and bugs, and develop a plan to identify such problems quickly. It's also important to emphasize that Yo-Yo DDoS attacks are a relatively recent development, and mitigation is generally available only within the best web security platforms. For example, the native security tools included in the top-tier cloud platforms are usually not adequate for defeating these attacks.

Some of the more common Yo-Yo mitigation techniques are summarized in the quick takeaways below.

Quick Takeaways to Defend Against Yo-Yo DDoS Cyber Attacks

DDoS and Yo-Yo DDoS attacks happen all the time, and they are getting more innovative and more frequent. In general, Yo-Yo DDoS attacks are meant to hurt companies and countries financially.

In the end, the best way to beat a Yo-Yo DDoS attack is to stay vigilant. You don't want to be the next victim of such an attack. To ensure that doesn't happen, use multiple layered defences, keep your systems up to date, and stay on top of threats.

Written by Ido Vapner, CTO and Chief Architect at Kyndryl


Securing Digital Transformation with PowerEdge and VMware – Marketscreener.com

With the explosive growth in the amount, and value, of data fueling today's global environment, the need for businesses to readily adapt is critical for survival. IT managers are under enormous pressure to deliver applications and services that not only innovate and transform the business, but also keep up with the ongoing security threats to their data. The number one barrier to transformation is data privacy and security concerns. How IT managers protect and secure infrastructure against escalating threats can make or break digital transformation initiatives.

Today, security threats can come from all different directions and could include things like identity theft, theft of IP, extortion, viruses, worms, etc. The security threat landscape has grown over the past few years, with 81% of businesses experiencing a security breach, and it continues to evolve rapidly, making it challenging for organizations to keep up with, manage and forecast threats. This requires periodically monitoring the threat landscape and assessing the organization's resiliency to potential threats. With the unpredictability of threats, a practical approach is to adopt a security framework that addresses not just traditional threats but adapts to today's growing and ever-changing threat landscape.

Securing the data center has become crucial to securing business, as it has become a valuable target for malicious attackers seeking access to the information, applications, and services that organizations rely on every day. While the move to a software-defined data center (SDDC) - where compute, storage and networking are virtualized - can improve agility and support digital transformation efforts, virtualized data centers create a bigger need for security at the infrastructure layer. Security integrated within the hardware and software enables a more extensive security approach, along with greater agility and flexibility when dealing with security threats. As servers become more central in an SDDC architecture, server security becomes the foundation of overall enterprise security.

Together, Dell PowerEdge and VMware solutions enable simpler, scalable and more agile IT that is secure by default and flexible to meet the needs of diverse workloads. By automating and protecting hybrid cloud environments - from chip to firmware to virtual machine to container - Dell Technologies and VMware maximize your ability to withstand cyberattacks, protect information and transform IT. Our flexible, purpose-built solutions are designed to provide optimal performance across the edge, core and cloud with automation and consistency for physical and virtual environments - all backed by industry-leading support and the ability to leverage existing investments in joint solutions and services.

Server security is vital to securing IT infrastructure - it allows you to protect against, detect and recover from malicious attacks. Unfortunately, while security teams often focus on protecting the operating system and applications, less attention is given to the underlying server infrastructure, including hardware and firmware.

The Dell Technologies approach to security is built-in, with security integrated into every step of the Dell Secure Development Lifecycle. The Cyber Resilient Architecture spans the embedded server firmware, the OS, peripheral devices, and the management operations within them, promoting effective and reliable protection from attacks and providing for rapid recovery with little to no business interruption.

* Protect servers during every aspect of the lifecycle, including BIOS, firmware, data and physical hardware.

* Detect malicious cyberattacks and unapproved changes; engage IT administrators proactively.

* Recover BIOS, firmware and OS to a known good state; securely retire or repurpose servers.

Organizations can build a process to protect valuable server infrastructure and the data within it by detecting abnormalities, breaches and unauthorized operations and recovering from unintended or malicious events.

Just as security is built into PowerEdge, VMware leverages infrastructure to protect apps and data from endpoint to cloud, in real time, across any cloud or device. As an integral and distributed part of the enterprise, the software stack incorporates all aspects of the technology ecosystem to deliver more effective security. The result is security that's built-in and distributed across your control points of users, devices, workloads, and networks, with fewer tools and silos and better context.

For example, given the intrinsic nature of security in VMware software, the following features can help protect you from threats.

VMware vSphere with Tanzu:

* Easily enable VM encryption and advanced security with vSphere Native Key Provider.

* Ease compliance audits with vSphere Product Audit Guides and FIPS validation.

* Deliver seamless enterprise and multifactor authentication with Identity Federation.

* Get intrinsic security and control with remote verification using vSphere Trust Authority.

* Apply security policies and storage limits to virtual machines and Kubernetes clusters with vSphere Pod Services.

VMware vSAN:

* vSAN encryption provides data-at-rest and data-in-transit security at the cluster level, while preserving deduplication and compression.

* Over-the-wire encryption for data-in-transit between vSAN nodes.

* FIPS 140-2 validated encryption modules that meet U.S. federal requirements.

* New monitoring and analysis tools plus root cause analysis help customers rapidly diagnose and treat underlying issues.

VMware NSX:

* Attainable and efficient Zero-trust security because critical apps are locked down.

* Leverage IDS/IPS to defend against lateral threats.

* Complete L1-L7 controls with NSX Micro-segmentation.

* Logical DMZ created in software.

Dell PowerEdge and VMware automate and protect hybrid cloud environments to maximize your ability to endure cyberattacks, protect information, and transform IT. Jointly tested and backed by industry-leading support, our flexible, purpose-built solutions provide optimal performance across the edge, core and cloud to leverage existing investments in joint solutions and services.

You can learn more about Cybersecurity with PowerEdge and VMware here, and Securing Business with PowerEdge and VMware here.

VMware, VMware Global Insights Security Report, June 2021



Artificial intelligence in factory maintenance is no longer a matter of the future – ReadWrite

Undetected machine failures are the most expensive ones. That is why many manufacturing companies are looking for solutions that automate maintenance and reduce its costs. Traditional vibrodiagnostic methods can come too late in many cases: occasional readings taken in the presence of a diagnostician may not detect a fault in advance. A 2017 position paper from Deloitte (Deloitte Analytics Institute, 7/2017) examined maintenance in the environment of Industry 4.0. The benefits of predictive maintenance depend on the industry or the specific processes it is applied to, but Deloitte's analyses at the time already concluded that material cost savings amount to 5 to 10% on average, equipment uptime increases by 10 to 20%, overall maintenance costs are reduced by 5 to 10%, and maintenance planning time is reduced by as much as 20 to 50%! Neuron Soundware has developed an artificial-intelligence-powered technology for predictive maintenance.

Stories from companies that have embarked on the digital journey are no longer just science fiction. They are real examples of how companies are coping with the lack of skilled labor on the market, typically the mechanic-maintainer who regularly goes around all the machines and diagnoses their condition by listening to them. Some companies are now looking for new maintenance technologies to replace this role.

A failure without early identification can mean replacing an entire piece of equipment or one of its parts, then waiting for a spare that may not be in stock right now, because it is expensive to keep replacement equipment on hand. It can also mean the devaluation of components currently in production and thus the discarding of an entire production run. Last but not least, it can represent up to XY hours of production downtime. The losses might run into tens of thousands of euros.

Such a critical scenario can be avoided if the maintenance technology is equipped with artificial intelligence in addition to mechanical knowledge of the machines. It applies this knowledge itself to the current state of the machine, recognizes which anomalous behavior is occurring, and based on that sends the corresponding alert with precise maintenance instructions. Manufacturers of mechanical equipment such as lifts, escalators, and mobile equipment use this today, for example.

However, predictive maintenance technologies have much wider applications. Thanks to the learning capabilities of artificial intelligence, they are very versatile. For example, the technology is able to assist in end-of-line testing, identifying defective parts of produced goods that are invisible to the eye and appear randomly.

The second area of application lies in the monitoring of production processes. We can imagine this with the example of a gravel crusher. A conveyor delivers different-sized pieces of stone into grinders, which are to yield a given granularity of gravel. Previously, the manufacturer would run the crusher for a predetermined amount of time to make sure that, even in the presence of the largest pieces of rock, sufficient crushing occurred. With artificial intelligence listening to the crusher, the system can infer the size of the gravel and stop the crushing process at the right point. This means not only saving wear and tear on the crushing equipment but, more importantly, saving time and increasing the volume of gravel delivered per shift. This brings great financial benefit to the producer.

When implementing predictive maintenance technology, it does not matter how big the company is. The most common decision criterion is the scalability of the deployed solution. In companies with a large number of mechanically similar devices, it is possible to quickly collect samples that represent individual problems and from which the neural network learns. It can then handle any number of machines at once. The more machines, the more opportunities for the neural network to learn and apply detection of unwanted sounds.
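As an illustration only, and not Neuron Soundware's actual method, the following Python sketch shows the general idea: train an off-the-shelf anomaly detector on simple spectral features from known-healthy machine sound, then flag recordings whose acoustic signature deviates. The synthetic signals stand in for real recordings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def spectral_features(signal: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Average magnitude-spectrum energy in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

rng = np.random.default_rng(0)
healthy = [rng.normal(size=4096) for _ in range(200)]   # stand-in recordings
features = np.stack([spectral_features(s) for s in healthy])

# Learn the "normal" sound signature from healthy machines only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(features)

# A new recording with an added tonal component, mimicking a developing fault.
new_recording = rng.normal(size=4096) + np.sin(np.arange(4096) * 0.5)
verdict = detector.predict([spectral_features(new_recording)])  # -1 = anomalous
print("anomaly" if verdict[0] == -1 else "normal")
```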

Condition monitoring technologies are usually designed for larger plants rather than for workshops with a few machine tools. However, as hardware and data transmission and processing get progressively cheaper, the technology is getting there too. So even a home marmalade maker will soon have the confidence that their machines will produce enough, deliver orders to customers on time, and not ruin their reputation.

In the future, predictive maintenance will be a necessity, not only in industry but also in larger electronic appliances such as refrigerators and coffee machines, or in cars. For example, we can all recognize a damaged exhaust or an unusual-sounding engine, but by then it is often too late to drive the car safely home from a holiday without a visit to the workshop. With an AI-driven detection device installed, we will know about an impending breakdown in time and be able to resolve the problem before the engine seizes up and we have to call a towing service.

Pavel is a tech visionary, speaker, and founder of AI and IoT startup Neuron Soundware. He started his career at Accenture, where he took part in 35+ technology and strategy projects on 3 continents over 11 years. He got into entrepreneurship in 2016 when he founded a company focused on predictive machine maintenance using sound analysis.


Schlumberger Expands Global AI Innovation Network with Opening of Artificial Intelligence Center in Europe – Yahoo Finance

Expanding the Benefits of Enterprise-Scale AI: Agile, Collaborative Development to Extract Maximum Value from Data

HOUSTON, April 29, 2022--(BUSINESS WIRE)--Schlumberger today announced it has expanded its global INNOVATION FACTORI network with the inauguration of a new center in Oslo, Norway.

"At INNOVATION FACTORI, customer teams will benefit from an agile, collaborative development approach with our domain and data science experts to address their strategic demands, such as drilling automation, digital twins for production optimization, and carbon capture and storage modeling," said Rajeev Sonthalia, president, Digital & Integration, Schlumberger. "Through INNOVATION FACTORI, customers can turn promising concepts into fully deployed digital solutions that extract maximum value from data to drive a major leap in business performance and, in turn, sustainability."

Schlumberger customers will gain access to a powerful machine learning platform with market-leading AI capabilities. Through its partnership with Dataiku, a world leader in "Every Day AI," Schlumberger will empower its customers to leverage a single, centralized platform to design, deploy, govern, and manage AI and analytics applications.

Schlumberger's INNOVATION FACTORI network expansion comes after the successful inauguration of two AI centers in the Americas: one in Rio de Janeiro, Brazil, and a recently opened AI center in Houston, Texas. These centers complement the global network of experts in Abu Dhabi, Beijing and Kuala Lumpur.

About Schlumberger

Schlumberger (NYSE: SLB) is a technology company that partners with customers to access energy. Our people, representing over 160 nationalities, are providing leading digital solutions and deploying innovative technologies to enable performance and sustainability for the global energy industry. With expertise in more than 120 countries, Schlumberger collaborates to create technology that unlocks access to energy for the benefit of all.


Find out more at http://www.slb.com.

Cautionary Statement Regarding Forward-Looking Statements

This press release contains "forward-looking statements" within the meaning of the federal securities laws, that is, any statements that are not historical facts. Such statements often contain words such as "expect," "may," "can," "believe," "plan," "estimate," "intend," "anticipate," "should," "could," "will," "likely," "goal," "objective," "aspire," "aim," "potential," "projected" and other similar words. Forward-looking statements address matters that are, to varying degrees, uncertain, such as forecasts or expectations regarding the deployment of, or anticipated benefits of, digital technologies and partnerships. These statements are subject to risks and uncertainties, including, but not limited to, the inability to recognize intended benefits from digital strategies, initiatives or partnerships; and other risks and uncertainties detailed in Schlumberger's most recent Forms 10-K, 10-Q, and 8-K filed with or furnished to the U.S. Securities and Exchange Commission. If one or more of these or other risks or uncertainties materialize (or the consequences of any such development changes), or should underlying assumptions prove incorrect, actual results or outcomes may vary materially from those reflected in our forward-looking statements. The forward-looking statements speak only as of the date of this press release; Schlumberger disclaims any intention or obligation to update publicly or revise such statements, whether as a result of new information, future events or otherwise.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220429005452/en/

Contacts

Giles Powell, Director of Corporate Communication, Schlumberger Limited
Tel: +1 (713) 375-3494
communication@slb.com


Artificial Intelligence and Chemical and Biological Weapons – Lawfare

Sometimes reality is a cold slap in the face. Consider, as a particularly salient example, a recently published article concerning the use of artificial intelligence (AI) in the creation of chemical and biological weapons (the original publication, in Nature, is behind a paywall, but this link is a copy of the full paper). Anyone unfamiliar with recent innovations in the use of AI to model new drugs will be unpleasantly surprised.

Here's the background: In the modern pharmaceutical industry, the discovery of new drugs is rapidly becoming easier through the use of artificial intelligence/machine learning systems. As the authors of the article describe their work, they have spent decades "building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery."

In other words, computer scientists can use AI systems to model what new beneficial drugs may look like for specifically targeted afflictions and then task the AI to work on discovering possible new drug molecules to use. Those results are then given to the chemists and biologists who synthesize and test the proposed new drugs.

Given how AI systems work, the benefits in speed and accuracy are significant. As one study put it:

The vast chemical space, comprising more than 10^60 molecules, fosters the development of a large number of drug molecules. However, the lack of advanced technologies limits the drug development process, making it a time-consuming and expensive task, which can be addressed by using AI. AI can recognize hit and lead compounds, and provide a quicker validation of the drug target and optimization of the drug structure design.

Specifically, AI gives society a guide to the quicker creation of newer, better pharmaceuticals.

The benefits of these innovations are clear. Unfortunately, the possibilities for malicious uses are also becoming clear. The paper referenced above is titled "Dual Use of Artificial-Intelligence-Powered Drug Discovery." And the "dual use" in question is the creation of novel chemical warfare agents.

One of the factors investigators use to guide AI systems and narrow down the search for beneficial drugs is a toxicity measure known as LD50 (where LD stands for "lethal dose" and the 50 indicates the dose that would kill half of a test population). For a drug to be practical, designers need to screen out new compounds that might be toxic to users and thus avoid wasting time trying to synthesize them in the real world. And so, drug developers can train and instruct an AI system to work with a very low LD50 threshold and have the AI screen out and discard possible new compounds that it predicts would have harmful effects. As the authors put it, the normal process is to use "a generative model [that is, an AI system, which] penalizes predicted toxicity and rewards predicted target activity." When used in this traditional way, the AI system is directed to generate new molecules for investigation that are likely to be safe and effective.
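A minimal sketch of that screening step, with toy numbers standing in for the outputs of trained activity and toxicity models (the higher toxicity score here means more toxic, as would be derived from a predicted LD50):

```python
# Keep only candidates that the models predict to be both active and safe.
candidates = {
    "mol-001": {"activity": 0.91, "toxicity": 0.12},
    "mol-002": {"activity": 0.78, "toxicity": 0.85},  # potent but predicted toxic
    "mol-003": {"activity": 0.64, "toxicity": 0.08},
}

ACTIVITY_MIN, TOXICITY_MAX = 0.6, 0.3  # illustrative thresholds

safe_hits = [
    name for name, scores in candidates.items()
    if scores["activity"] >= ACTIVITY_MIN and scores["toxicity"] <= TOXICITY_MAX
]
print(safe_hits)  # ['mol-001', 'mol-003']: mol-002 is penalized and discarded
```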

But what happens if you reverse the process? What happens if instead of selecting for a low LD50 threshold, a generative model is created to preferentially develop molecules with a high LD50 threshold?

One rediscovers VX gas, one of the most lethal substances known to humans. And one predictively creates many new substances that are even worse than VX.

One wishes this were science fiction. But it is not. As the authors put the bad news:

In less than 6 hours ... our model generated 40,000 [new] molecules ... In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents.

In other words, the developers started from scratch and did not artificially jump-start the process by using a training dataset that included known nerve agents. Instead, the investigators simply pointed the AI system in the general direction of looking for effective lethal compounds (with standard definitions of effectiveness and lethality). Their AI program then discovered a host of known chemical warfare agents and also proposed thousands of new ones for possible synthesis that were not previously known to humankind.

The authors stopped at the theoretical point of their work. They did not, in fact, attempt to synthesize any of the newly discovered toxins. And, to be fair, synthesis is not trivial. But the entire point of AI-driven drug development is to point drug developers in the right direction: toward readily synthesizable, safe and effective new drugs. And while synthesis is not easy, it is a pathway that is well trod in the market today. There is no reason, none at all, to think that the synthesis path is not equally feasible for lethal toxins.

And so, AI opens the possibility of creating new catastrophic biological and chemical weapons. Some commentators condemn new technology as inherently evil tech. However, the better view is that all new technology is neutral and can be used for good or ill. But that does not mean nothing can be done to avoid the malignant uses of technology. And there is a real risk when technologists run ahead with what is possible, before human systems of control and ethical assessment catch up. Using artificial intelligence to develop toxic biological and chemical weapons would seem to be one of those use-cases where severe problems may lie ahead.


Global Telecommunications Artificial Intelligence of Things Market Report 2022: TSPs Increasingly Offer Industry Vertical Solutions as Part of Their…

Dublin, April 29, 2022 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence of Things (AIoT) in Telecommunications Growth Opportunities" report has been added to ResearchAndMarkets.com's offering.

This report examines the strategic position of telecommunication service providers (TSPs) in using artificial intelligence (AI) and the Internet of Things (IoT) to offer enterprises Artificial Intelligence of Things (AIoT) solutions. TSPs play a vital role in deploying enterprise AIoT solutions amid the increasing deployment of 5G networks, edge infrastructure capabilities, and location-based data at their disposal.

Given their network and connectivity capabilities and AI and services focus, TSPs are in a unique position to monetize AIoT opportunities. They increasingly offer solutions by industry vertical as part of their AIoT focus.

The report highlights TSPs' role as system integrators to provide value-added solutions and services to progress beyond connectivity and move up the value chain.

The report provides stakeholders with insights by identifying AI growth drivers that will facilitate AIoT solution deployment, along with opportunities in AI advisory and consulting services, edge infrastructure adoption, and building industry-vertical solutions.

Key Topics Covered:

1. Strategic Imperatives

2. Growth Environment

3. Growth Opportunity Analysis

4. Growth Opportunity Universe

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/o5a043


The Business Case For AI Is A Good Management Introduction To Real-World Artificial Intelligence – Forbes

Artificial Intelligence

Too many technologists, in every generation of technology, state that management needs to think more like programmers. That's not the case. Rather, technology professionals need to learn to speak to management. The Business Case for AI, by Kavita Ganesan, PhD, is a good overview for managers wishing to understand and control the complexities of implementing artificial intelligence (AI) systems in businesses.

I'm always skeptical of self-published books. Usually that means the books just aren't that good. However, sometimes, especially in non-fiction, it means that publishers are clueless about the subject and hesitant to work with people who aren't names. This book is an example of the second option, and it will give management an introduction to the concepts surrounding AI and how to address implementation in a way that will increase the odds of success for AI initiatives.

The indication that the author mostly lives in the real world comes quickly. The first chapter is a good introduction to what matters for business about AI. Forget the technical focus; it's about solving problems in an efficient and cost-effective way.

Chapter 2, "What is AI?", isn't bad either, though I disagree with the idea that machine learning (ML) is part of AI. Business Intelligence (BI) has advanced, along with computing performance, to the point that standard analytics provide insight that can be termed ML, so ML and AI overlap. That, however, is a religious argument, and what Ganesan has to say about AI is at a good level for management understanding.

The weakest chapter in the introduction is the fourth, where the science fiction addict in me had to sigh at "movies such as I, Robot." Ummm, check your library.

That chapter's list of myths is also a bit problematic. The first, about job loss, is the one area where it shows that the part of the real world in which the author exists isn't the one most people are in. The AI revolution is very different from the industrial revolution and earlier technology revolutions. She talks about artificial general intelligence (AGI) and says that since it's still far away, a lot of jobs won't be lost. We don't need AGI to replace jobs.

The next couple of chapters are good for setting up examples of business processes that could be impacted by AI. I do have an issue with which companies she decides to name and which remain anonymous, as that seems to imply protecting customers. The best part was a good discussion of IT & manufacturing operations, but that could have been improved by discussing infrastructure operations such as pipelines and the electrical grid.

Part 3 (chapters 7-9) is very good but, again, has a few things to keep in mind. On page 117, six phases of the development lifecycle are defined. I agree with them, but want to point out that data acquisition and model development, phases 2 & 3, can be done somewhat in parallel; the things you learn from each can impact the other. The other nit is that the author seems to use "warehouse" improperly. Data warehouses have a specific, narrower purpose; when she uses the term, think "data lake." The importance of logging, transactions and more, is often ignored, and the end of this section of the book has a good explanation of its importance.

The fourth part of the book is a set of chapters that drills down into the finding AI projects portion of the analysis process, and is well laid out.

The final section has two chapters. The first is about build vs. buy. It is no surprise that a consultant leans towards build; that's her livelihood. What managers need to understand is that businesses aren't as unique as they wish to think. There are unique things, but the vast majority of business is like other businesses. AI is a new technology and there aren't enough easy-to-use tools for a buy decision in many areas, but that will change over time. Managers need to have a flexible understanding of the equation and balance it in the real world.

The final chapter is, as expected, a good summation and a return to focusing on business results. It continues the author's use of good, simple graphics to illustrate the points of her arguments. Regardless of the issues I've mentioned above, the book does a great job of laying out the challenges of artificial intelligence from a business perspective. It doesn't delve deeply into algorithms or other details that don't matter to management, while it does provide a framework to look at AI projects through a business lens that integrates the technology into organizations in a way that doesn't leave everything to technologists. The Business Case for AI is a good introduction for IT and line managers to think about how to integrate artificial intelligence into their organizations.


Viewpoint: Artificial intelligence poised to play greater role in science – Science Business

COVID-19 changed the way we all work, live and socialise, with technology and communication tools more important than ever. The world of scientific research was no exception, with the use of technology, and specifically artificial intelligence (AI), increasing. AI is now increasingly relied upon to speed up research and generate new insights across all of science.

The positive endorsement of AI was noted in the second iteration of Elsevier's global research project, Research Futures, which aims to gather the views and opinions of researchers across the world to help us, as science publishers, better understand the challenges and opportunities they face.

Forty-seven percent of researchers believe that a long-lasting impact of the pandemic will be a greater dependency on technology and AI in their work, underlining the importance of AI for the future of research.

The study shows the number of researchers using AI extensively has increased from 12% in 2020 to 16% in 2021. In materials science, which covers the structure and properties of materials, the discovery of new materials and how they are made, 18% of researchers are now likely to be extensive AI users, up from zero a year ago. In chemistry, the number has grown from 2% to 19% and in mathematics from 4% to 13%. Unsurprisingly, 64% of computer scientists say they are heavy users of AI.

Most often researchers who use AI do so to analyse research results (66%) or to spot defects or issues with data (49%), while a minority are using it to help generate new hypotheses (17%).

As we note in our Research Futures Report 2.0, AI has been crucial to healthcare throughout the pandemic. We have seen hospitals use it to help predict which patients would be most severely affected by COVID-19, as well as manage their resources.

Attitudes toward the use of AI in peer review have also changed. Around one in five researchers (21%) agree they would read papers that rely on AI for peer review instead of humans, a 5% increase on 2020. Looking at the results by age, those aged 36 and under have increased their willingness to read such articles the most, compared to a year ago (21%, up from 14% prior year). But while attitudes are changing, most researchers continue to be reticent about AI in peer review, with 58% saying they are unwilling to read such articles.

It's clear that the place of AI in research is evolving and it is gradually becoming a crucial and trustworthy tool. However, not all reservations surrounding AI have been answered by the accelerated reliance on it during the pandemic.

Nonetheless, the technological strides made, especially in the fields of material science, medicine and chemistry, show the crucial role AI will play in the future.

The Elsevier Research Futures Report 2.0 is free to download here. It builds on the first Research Futures Report (2019), which considered what the world of research might look like in 10 years' time. The new data highlights mounting pressure across publishing and funding, while pointing to new opportunities in funding sources, technology, and collaboration.

Adrian Mulligan is Research Director for Customer Insights at the science publisher Elsevier


Endoluxe and Optimus-ISE Enter Marketing and Development Agreement to Realize Advanced Imaging and Artificial Intelligence in Advanced Operating Rooms…

MANHATTAN BEACH, Calif.--(BUSINESS WIRE)--Endoluxe and Optimus-ISE are proud to announce that they have entered into a co-marketing and development agreement to realize the advanced technology synergies of both organizations. With Optimus-ISE focused on safer, more efficient, and improved financial performing operating rooms, the Endoluxe platform fits perfectly into these guiding principles to provide an optimal clinical environment.

"We are thrilled to enter this new global partnership. The Endoluxe platform, consisting of a wireless camera, cloud-based storage, and AI/ML clinical applications, is a fantastic fit with the vision of the Optimus operating room," says Devon Bream, CEO of Endoluxe. "Our product eliminates the cables and cords of legacy camera platforms, which aligns with the clutter-free design of the Optimus-ISE operating room. Additionally, Endoluxe provides a cloud-based storage solution that eliminates antiquated recording boxes and seamlessly connects to hospital EMRs. The Endoluxe cloud lets clinicians immediately share images from a procedure with patients and family, increasing patient satisfaction. But one of the most exciting opportunities to collaborate with Optimus-ISE is through our novel AI/ML Endoluxe applications that provide clinicians with insights that legacy camera platforms simply cannot offer."

The Endoluxe EVS is the perfect camera system for all endoscopic procedures that utilize industry standard rigid and flexible analogue scopes such as urology, gynecology, ENT, general surgery, and orthopedics. The handheld Orb replaces the legacy endoscopic tower with advanced, portable technology at 1/6 the cost.

"We are excited to enter into a partnership with Endoluxe," states Bill Passmore, CCO of Optimus. "The Endoluxe co-founders are both practicing surgeons, which adds yet another validation that Optimus-ISE is designing advanced solutions that are meaningful to those who will ultimately be using them. While Optimus remains vendor agnostic, the advantage of collaborating with innovative technologies like Endoluxe allows us to provide our customers integrated options that no other providers can. The potential for collaboration and co-development is vast, with both organizations benefiting from shared resources and sales platforms, and becoming greater than the sum of the parts with a great cultural fit."

Endoluxe is a world-class endoscopic video imaging organization based in the United States with worldwide distribution of its medical industry design award-winning Endoluxe Orb. The company is focused on reducing costs of legacy video platforms, enhancing procedure adoption, and improving patient outcomes through better therapy application. Endoluxe is committed to being a vendor agnostic platform that allows customers to utilize their existing investment in traditional scopes and supporting devices, while taking advantage of future technological advancements utilizing our portable, integrated, and feature-laden platform at 1/6th the cost of legacy products. More information can be found at Endoluxe.com.

Optimus Integrated Surgical Environment AG is a Swiss-based company that delivers a holistic solution for the entire operating room and surrounding support services. Optimus integrates all vendors by acting as the single supplier for planning, installation and maintenance services for new hospital and refurbished operating room facility builds. The company provides services for the entire lifecycle of the operating sector of hospitals: from blue-sky phase of new operating room build planning, installation, and project management, through the total time of ownership including maintenance, servicing, and technology updates. More information can be found at Optimus-ISE.com.


Are machine-learning tools the future of healthcare? – Cosmos

Terms like machine learning, artificial intelligence and deep learning have all become science buzzwords in recent years. But can these technologies be applied to saving lives?

The answer to that is a resounding yes. Future developments in health science may actually depend on integrating rapidly growing computing technologies and methods into medical practice.

Cosmos spoke with researchers from the University of Pittsburgh, in Pennsylvania, US, who have just published a paper in Radiology on the use of machine-learning techniques to analyse large data sets from brain trauma patients.

Co-lead author Shandong Wu, associate professor of radiology, is an authority on the use of machine learning in medicine. "Machine-learning techniques have been around for several decades already," he explains. "But it was in about 2012 that the so-called deep learning technique became mature. It attracted a lot of attention from the research field, not only in medicine or healthcare, but in other domains, such as self-driving cars and robotics."


So, what is deep learning? "It's a kind of multi-layered, neural network-based model that is constantly mimicking how the human brain works to process a large set of data to learn or distill information," explains Wu.
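As a toy illustration of that idea, and not the models used in the Pittsburgh study, the following Python sketch trains a small multi-layer network on synthetic data with scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # stand-in feature vectors
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)    # a nonlinear pattern to learn

# Two hidden layers of 32 units each: a (very) small multi-layered network.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X[:400], y[:400])                     # learn from the first 400 examples
print("held-out accuracy:", model.score(X[400:], y[400:]))
```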

The increased maturity of machine-learning techniques in recent years is due to three interrelated developments, he says: technical improvements in machine-learning algorithms; developments in the hardware being used, such as improved graphics processing units; and the large volumes of digitised data readily available.

That data is key. Lots of it.


Machine-learning techniques use data to train the model to function better, and the more data the better. "If you only have a small set of data, then you don't have a very good model," Wu explains. "You may have very good questioning or good methodology, but you're not able to get a better model, because the model learns from lots of data."

Even though the available medical data is not as large as, say, social media data, there is still plenty to work with in the clinical domain.

"Machine-learning models and algorithms can inform clinical decision-making, rapidly analysing massive amounts of data to identify patterns," says the paper's other co-lead author, David Okonkwo.

"Human beings can only process so much information. Machine learning permits orders of magnitude more information than what an individual human can process," Okonkwo adds.

Okonkwo, a professor of neurological surgery, focuses on caring for patients with brain and spinal cord injuries, particularly those with traumatic brain injuries.

"Our goal is to save lives," says Okonkwo. "Machine-learning technologies will complement human experience and wisdom to maximise the decision-making for patients with serious injuries."

"Even though today you don't see many examples, this will change the way that we practise medicine. We have very high hopes for machine learning and artificial intelligence to change the way that we treat many medical conditions, from cancer, to making pregnancy safer, to solving the problems of COVID."

But important safeguards must be put in place. Okonkwo explains that institutions such as the US Food and Drug Administration (FDA) must ensure that these new technologies are safe and effective before being used in real life-or-death scenarios.

Wu points out that the FDA has already approved about 150 artificial intelligence or machine learning-based tools. "Tools need to be further developed, evaluated and used with physicians in clinical settings to really examine their benefit for patient care," he says. "The tools are not there to replace your physician, but to provide the tools and information to better inform physicians."
