
Best TunnelBear Alternative in 2022 [Paid & Free Alternatives] – Cloudwards

TunnelBear is an excellent VPN with a great free plan. However, since McAfee acquired TunnelBear in 2018, there have been concerns that it would start logging user data given that it's a U.S. company. If you're not a fan of TunnelBear, you might want to look for a TunnelBear alternative.

The privacy issue isn't the only concern with TunnelBear, though. If you need a VPN to access geoblocked streaming platforms, TunnelBear offers little help, if any. It often fails to get into major streaming platforms (though it does unblock HBO Max and Netflix U.S.) and has slow connection speeds.

If you've made up your mind to switch your VPN service provider, we've listed five of the best TunnelBear alternatives below to help you decide on your new VPN, whether it's a free option like Windscribe or a premium one like ExpressVPN.

Many VPNs are better than TunnelBear. A free VPN like Windscribe or Speedify can help you improve your streaming experience with better speeds. If you're looking for a premium service, ExpressVPN is the best alternative to TunnelBear.

TunnelBear still offers a free plan. The free plan lets you use the VPN on one device and offers 500MB of free data (1GB if you tweet about the service).

You can't get unlimited data for free on the TunnelBear VPN. You need to switch to a paid plan for unlimited data. If you want unlimited data for free, consider ProtonVPN.

TunnelBear is largely popular because of its free plan. The VPN offers 500MB of free data, which can be increased to 1GB by tweeting about TunnelBear. For that reason, we've put particular emphasis on trustworthy VPNs with free plans, of which there aren't many.

You may need a TunnelBear alternative for three reasons. First, there are security concerns given its U.S.-based parent company. Second, it has limited ability to access blocked content on major streaming platforms. Third, even if you manage to bypass geo-restrictions on streaming websites, the slow connection speeds will deliver a terrible streaming experience.

When you select a TunnelBear alternative, you want a service that has none of those issues. All the services we discuss in this guide offer excellent speeds. Of course, you should also look for other features before settling on a VPN provider, such as unlimited bandwidth, DNS leak protection and the ability to access all the major platforms.

We've included free and paid alternatives on the list. Some of the paid alternatives to TunnelBear listed below also offer free service, but if the free version is inadequate for your needs, you can always upgrade to a paid plan.

Windscribe is the best free VPN.


Windscribe is the best free VPN, and per our recent VPN speed comparison, it's also one of the fastest. If you've used TunnelBear for a while, you'll find Windscribe's performance a lot better, even on the free version.

The free service offers up to 10GB of monthly data and access to server locations in 10 countries. Unlike TunnelBear, Windscribe can also access almost all major streaming platforms.

10GB might get you through a month if you stream a few times a week, but if you stream in 4K regularly, you might need more data. Fortunately, you can get unlimited data and access to Windscribe's R.O.B.E.R.T. (learn more about R.O.B.E.R.T. and other Windscribe features in our Windscribe review) using the build-a-plan option.

The minimum checkout value for the option is $2. You can add server locations for $1 per month each. If you also spend $1 per month on getting unlimited data and access to R.O.B.E.R.T., you'll meet the minimum checkout value (or you can just add another server location). Unlimited data on a VPN that offers excellent security and online privacy for $2 is a terrific deal.

Windscribe offers unlimited simultaneous connections.

If you need access to more server locations, you might want to consider the full-access plan. The one-year subscription is the best value at $4.08 per month. Considering Windscribe's performance, the price seems reasonable. However, if you want to test Windscribe's paid service, you can use its three-day money-back guarantee.


ProtonVPN offers unlimited free data.


ProtonVPN is another reliable free service that offers unlimited bandwidth, a strong lineup of features and an intuitive interface. It also has one of the most transparent privacy policies of any VPN.

You only get access to three free servers on ProtonVPN's free plan (compared to TunnelBear's 48), but that's a small trade-off given the slew of extra features you get.

You can get into almost all major streaming platforms, including Netflix, Amazon Prime Video and Hulu (find it in our best VPN for Hulu guide). It's not the best VPN for 4K streaming, though, since it's slower than most services on the list.

However, you can stream in SD or HD without a lot of buffering, especially after you switch to the IKEv2 security protocol (though IKEv2 isn't available in the desktop app). Learn more about the service in our ProtonVPN review.

ProtonVPN offers many features, from basics like a kill switch (which blocks internet traffic when the VPN connection drops abruptly) and custom DNS settings to Secure Core servers that work like the double-hop VPN servers you find on NordVPN.

ProtonVPN unblocks almost all major streaming websites.

While ProtonVPN is popular because it's a free VPN, it also has several paid plans. Its two-year plan for $4.99 per month is the best value. While that doesn't make ProtonVPN the cheapest VPN service, it's still a great service. Plus, you can try it risk-free with its 30-day money-back guarantee.

ExpressVPN offers lightning-fast connection speeds.


ExpressVPN is our best VPN service overall. It offers an unbeatable feature set and delivers excellent performance with fast connection speeds and the ability to bypass geoblocks on all major streaming platforms.

With ExpressVPN, you get all the basic features, like unlimited bandwidth, a kill switch, military-grade encryption and app-based split tunneling, but that's barely scratching the surface. ExpressVPN is miles ahead of TunnelBear in almost any comparison between the two services.

For example, ExpressVPN is the best VPN for streaming and the most secure VPN. Learn more about the service in our ExpressVPN review.

ExpressVPN's streaming capabilities are unmatched, making it the best alternative to TunnelBear for cord-cutters. While TunnelBear has a tough time accessing geo-restricted content, ExpressVPN can unblock all major platforms, including Netflix, Amazon Prime Video and BBC iPlayer.

When you use TunnelBear, you'll also likely spend a few minutes waiting for the videos to load. This isn't an issue with ExpressVPN because it's one of the fastest VPNs. Even when you stream content in 4K, you'll experience no buffering, just as if it were a local session.

ExpressVPN is the best VPN overall.

ExpressVPN doesn't come cheap. Its best value is a one-year plan that costs $6.67 per month. However, if you don't mind spending a little more for top-notch performance, ExpressVPN is the best VPN provider you'll find. If you don't like the service, you can claim a full refund using the 30-day money-back guarantee.

PIA offers an excellent and detailed user interface.


Private Internet Access (PIA) offers a detailed but user-friendly interface. Most VPNs keep the app's home screen clean for ease of use. However, PIA takes a contrarian approach and adds details like the protocol you're using, bandwidth usage and connection time right on the main screen of its docked UI.

PIA also offers 10 simultaneous connections, compared to only five on TunnelBear's paid plans. With PIA, you also get better connection speeds and security features. For example, you can use a proxy on PIA to make your connection more secure. Learn more about everything that PIA offers in our comprehensive Private Internet Access review.

PIA has more security features that deserve a mention, too. It has two kill switches: a regular kill switch and an advanced kill switch that blocks all internet traffic unless your VPN connection is active.

PIA also comes with a powerful ad blocker called MACE that blocks DNS requests to known ad and tracker domains. Learn how it works in our DNS records guide.

PIA is one of the most cost-effective VPNs.

PIA is the lowest-cost VPN provider on the list, except for the free VPN services, of course. The best value PIA offers is a two-year plan that costs $2.03 per month. That's an attractive price given PIA's feature set and performance. However, if you're not entirely confident, you can test PIA with its 30-day money-back guarantee.

Speedify is one of the fastest free VPNs.


Unlike most free VPNs, Speedify offers a good feature set, has no cap on speed and supports streaming and P2P sharing on select servers. The free version gets you access to servers in over 30 countries, which is more than our best free VPN, Windscribe, offers, but fewer than TunnelBear.

However, you only get 2GB of free data, meaning you can browse the internet, but streaming might require switching to a paid plan.

Speedify is one of the few free VPNs that supports the ChaCha20 cipher. You can use AES-128 GCM, too, but ChaCha20 can help improve speeds on a mobile device. Speedify also lets you split your internet traffic into unencrypted and encrypted data with split tunneling, or set up the kill switch to ensure complete security if you're on a paid plan.
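To make the cipher comparison concrete, here is a minimal, hedged sketch of how the two AEAD ciphers mentioned above (ChaCha20-Poly1305 and AES-128-GCM) are typically used, written with Python's cryptography package. It is a generic illustration of the algorithms, not Speedify's actual implementation.

# Generic illustration of the two AEAD ciphers named above; not Speedify's code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

plaintext = b"example VPN payload"
header = b"packet header"  # authenticated but not encrypted

# ChaCha20-Poly1305: 256-bit key, 96-bit nonce; often faster on mobile CPUs
# that lack AES hardware acceleration.
chacha = ChaCha20Poly1305(ChaCha20Poly1305.generate_key())
nonce = os.urandom(12)
sealed = chacha.encrypt(nonce, plaintext, header)
assert chacha.decrypt(nonce, sealed, header) == plaintext

# AES-128-GCM: 128-bit key; typically fastest where AES hardware support exists.
aesgcm = AESGCM(AESGCM.generate_key(bit_length=128))
nonce = os.urandom(12)
sealed = aesgcm.encrypt(nonce, plaintext, header)
assert aesgcm.decrypt(nonce, sealed, header) == plaintext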

However, Speedify isn't the best VPN if you want to ensure privacy. It claims to have a no-logs policy, but its privacy policy states that Speedify collects your IP address, device ID and connection timestamps. Learn more about Speedify in our Speedify review.

As the name suggests, Speedify puts a lot of emphasis on fast speeds. It uses channel bonding technology, which lets it use multiple internet channels (WiFi, wired and cellular connections) simultaneously. This technology makes Speedify one of the fastest VPNs, matching the download speeds of the best VPNs, like ExpressVPN and NordVPN.

Speedify uses channel bonding technology for fast speeds.

Speedify is a free VPN service, but if you need unlimited data and access to more servers, you can upgrade. Its best value is a three-year plan that costs $4.99 per month. That's more expensive than some premium VPNs, like NordVPN's Standard plan, but if you like Speedify's free version and want to test the paid plan, you can use its 30-day money-back guarantee.


Yes, Windscribe is better than TunnelBear. It's actually the best free VPN. Windscribe offers excellent connection speeds, more security options, a better server spread and unlimited simultaneous connections.

It's hard to beat, especially when compared with other free VPNs, including TunnelBear. Learn more about the difference between the two services in our Windscribe vs TunnelBear comparison.

Since TunnelBear is slow and fails to unblock some popular streaming websites, you might want to look for a better alternative to TunnelBear. The five VPNs listed here include excellent free and paid alternatives that beat TunnelBear in multiple aspects.

Have you switched from TunnelBear to a different service before? If yes, which one, and how was your experience? Let us know in the comments below. As always, thank you for reading.



Sophos links three expert security teams together with X-Ops – SecurityBrief Australia

Sophos has announced Sophos X-Ops, a new cross-operational unit linking SophosLabs, Sophos SecOps and Sophos AI, three established teams of cybersecurity experts at Sophos, to help organisations better defend against constantly changing and increasingly complex cyber attacks.

Sophos X-Ops leverages the predictive, real-time, real-world and researched threat intelligence from each group, which, in turn, collaborate to deliver stronger, more innovative protection, detection and response capabilities.

In addition to this announcement, Sophos is issuing 'OODA: Sophos X-Ops Takes on Burgeoning SQL Server Attacks', research about increased attacks against unpatched Microsoft SQL servers, and how attackers used a fake downloading site and grey-market remote access tools to distribute multiple ransomware families.

Sophos X-Ops identified and thwarted the attacks because the Sophos X-Ops teams combined their respective knowledge of the incidents, jointly analysed them, and took action to quickly contain and neutralise the adversaries, the company states.

Joe Levy, chief technology and product officer at Sophos, says, "Modern cybersecurity is becoming a highly interactive team sport, and as the industry has matured, necessary analysis, engineering and investigative specialisations have emerged.

"Scalable end-to-end operations now need to include software developers, automation engineers, malware analysts, reverse engineers, cloud infrastructure engineers, incident responders, data engineers and scientists, and numerous other experts, and they need an organisational structure that avoids silos.

"Weve unified three globally recognised and mature teams within Sophos to provide this breadth of critical, subject matter and process expertise. Joined together as Sophos X-Ops, they can leverage the strengths of each other, including analysis of worldwide telemetry from more than 500,000 customers, industry-leading threat hunting, response and remediation capabilities, and rigorous artificial intelligence to measurably improve threat detection and response.

"Attackers are often too organised and too advanced to combat without the unique combined expertise and operational efficiency of a joint task force like Sophos X-Ops.

Speaking in March 2022 to the Detroit Economic Club about the FBI partnering with the private sector to counter the cyber threat, FBI Director Christopher Wray said, "We're disrupting three things: the threat actors, their infrastructure and their money. And we have the most durable impact when we work with all of our partners to disrupt all three together."

"Sophos X-Ops is taking a similar approach: gathering and operating on threat intelligence from its own multidisciplinary groups to help stop attackers earlier, preventing or minimising the harms of ransomware, espionage or other cyber crimes that can befall organisations of all types and sizes, and working with law enforcement to neutralise attacker infrastructure.

"While Sophos internal teams already share information as a matter of course, the formal creation of Sophos X-Ops drives forward a faster, more streamlined process necessary to counter equally fast-moving adversaries."

Michael Daniel, president and CEO of the Cyber Threat Alliance, comments, "Effective cybersecurity requires robust collaboration at all levels, both internally and externally; it is the only way to discover, analyse and counter malicious cyber actors at speed and scale. Combining these separate teams into Sophos X-Ops shows that Sophos understands this principle and is acting on it."

Sophos X-Ops also provides a stronger cross-operational foundation for innovation, an essential component of cybersecurity due to the aggressive advancements in organised cyber crime, the company states.

By intertwining the expertise of each group, Sophos states the company is pioneering the concept of an artificial intelligence (AI) assisted Security Operations Centre (SOC), which anticipates the intentions of security analysts and provides relevant defensive actions. In the SOC of the future, Sophos states this approach can dramatically accelerate security workflows and the ability to more quickly detect and respond to novel and priority indicators of compromise.

Craig Robinson, IDC research vice president, Security Services, says, "The adversary community has figured out how to work together to commoditise certain parts of attacks while simultaneously creating new ways to evade detection and taking advantage of weaknesses in any software to mass exploit it.

"The Sophos X-Ops umbrella is a noted example of stealing a page from the cyber miscreants tactics by allowing cross-collaboration amongst different internal threat intelligence groups. Combining the ability to cut across a wide breadth of threat intelligence expertise with AI assisted features in the SOC allows organisations to better predict and prepare for imminent and future attacks.


A technique to improve both fairness and accuracy in artificial intelligence – MIT News

For workers who use machine-learning models to help them make decisions, knowing when to trust a model's predictions is not always an easy task, especially since these models are often so complex that their inner workings remain a mystery.

Users sometimes employ a technique, known as selective regression, in which the model estimates its confidence level for each prediction and will reject predictions when its confidence is too low. Then a human can examine those cases, gather additional information, and make a decision about each one manually.

But while selective regression has been shown to improve the overall performance of a model, researchers at MIT and the MIT-IBM Watson AI Lab have discovered that the technique can have the opposite effect for underrepresented groups of people in a dataset. As the model's confidence increases with selective regression, its chance of making the right prediction also increases, but this does not always happen for all subgroups.

For instance, a model suggesting loan approvals might make fewer errors on average, but it may actually make more wrong predictions for Black or female applicants. One reason this can occur is that the model's confidence measure is trained using overrepresented groups and may not be accurate for these underrepresented groups.

Once they had identified this problem, the MIT researchers developed two algorithms that can remedy the issue. Using real-world datasets, they show that the algorithms reduce performance disparities that had affected marginalized subgroups.

"Ultimately, this is about being more intelligent about which samples you hand off to a human to deal with. Rather than just minimizing some broad error rate for the model, we want to make sure the error rate across groups is taken into account in a smart way," says senior MIT author Greg Wornell, the Sumitomo Professor in Engineering in the Department of Electrical Engineering and Computer Science (EECS) who leads the Signals, Information, and Algorithms Laboratory in the Research Laboratory of Electronics (RLE) and is a member of the MIT-IBM Watson AI Lab.

Joining Wornell on the paper are co-lead authors Abhin Shah, an EECS graduate student, and Yuheng Bu, a postdoc in RLE; as well as Joshua Ka-Wing Lee SM '17, ScD '21, and Subhro Das, Rameswar Panda, and Prasanna Sattigeri, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented this month at the International Conference on Machine Learning.

To predict or not to predict

Regression is a technique that estimates the relationship between a dependent variable and independent variables. In machine learning, regression analysis is commonly used for prediction tasks, such as predicting the price of a home given its features (number of bedrooms, square footage, etc.). With selective regression, the machine-learning model can make one of two choices for each input: it can make a prediction or abstain from a prediction if it doesn't have enough confidence in its decision.

When the model abstains, it reduces the fraction of samples it is making predictions on, which is known as coverage. By only making predictions on inputs that it is highly confident about, the overall performance of the model should improve. But this can also amplify biases that exist in a dataset, which occur when the model does not have sufficient data from certain subgroups. This can lead to errors or bad predictions for underrepresented individuals.
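To illustrate the abstention mechanic and the coverage metric described above, here is a minimal selective regression sketch in Python. It uses the spread across a random forest's trees as a stand-in confidence measure and a simple threshold for abstaining; it is a generic illustration, not the MIT team's algorithms.

# Minimal selective regression sketch: predict only when uncertainty is low.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=1000)
X_train, y_train, X_test, y_test = X[:800], y[:800], X[800:], y[800:]

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# The spread across trees serves as a crude per-sample uncertainty estimate.
per_tree = np.stack([tree.predict(X_test) for tree in forest.estimators_])
pred, uncertainty = per_tree.mean(axis=0), per_tree.std(axis=0)

# Abstain on the 30% least-certain inputs; those would go to a human reviewer.
accept = uncertainty <= np.quantile(uncertainty, 0.7)

coverage = accept.mean()  # fraction of samples the model still predicts on
selective_mse = np.mean((pred[accept] - y_test[accept]) ** 2)
overall_mse = np.mean((pred - y_test) ** 2)
print(f"coverage={coverage:.2f}  selective MSE={selective_mse:.3f}  overall MSE={overall_mse:.3f}")

On real data with underrepresented subgroups, the error rate computed only over the accepted samples can improve overall while still worsening for some groups, which is the failure mode the researchers describe.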

The MIT researchers aimed to ensure that, as the overall error rate for the model improves with selective regression, the performance for every subgroup also improves. They call this monotonic selective risk.

"It was challenging to come up with the right notion of fairness for this particular problem. But by enforcing this criteria, monotonic selective risk, we can make sure the model performance is actually getting better across all subgroups when you reduce the coverage," says Shah.

Focus on fairness

The team developed two neural network algorithms that impose this fairness criterion to solve the problem.

One algorithm guarantees that the features the model uses to make predictions contain all information about the sensitive attributes in the dataset, such as race and sex, that is relevant to the target variable of interest. Sensitive attributes are features that may not be used for decisions, often due to laws or organizational policies. The second algorithm employs a calibration technique to ensure the model makes the same prediction for an input, regardless of whether any sensitive attributes are added to that input.

The researchers tested these algorithms by applying them to real-world datasets that could be used in high-stakes decision making. One, an insurance dataset, is used to predict total annual medical expenses charged to patients using demographic statistics; another, a crime dataset, is used to predict the number of violent crimes in communities using socioeconomic information. Both datasets contain sensitive attributes for individuals.

When they implemented their algorithms on top of a standard machine-learning method for selective regression, they were able to reduce disparities by achieving lower error rates for the minority subgroups in each dataset. Moreover, this was accomplished without significantly impacting the overall error rate.

"We see that if we don't impose certain constraints, in cases where the model is really confident, it could actually be making more errors, which could be very costly in some applications, like health care. So if we reverse the trend and make it more intuitive, we will catch a lot of these errors. A major goal of this work is to avoid errors going silently undetected," Sattigeri says.

The researchers plan to apply their solutions to other applications, such as predicting house prices, student GPA, or loan interest rate, to see if the algorithms need to be calibrated for those tasks, says Shah. They also want to explore techniques that use less sensitive information during the model training process to avoid privacy issues.

And they hope to improve the confidence estimates in selective regression to prevent situations where the model's confidence is low, but its prediction is correct. "This could reduce the workload on humans and further streamline the decision-making process," Sattigeri says.

This research was funded, in part, by the MIT-IBM Watson AI Lab and its member companies Boston Scientific, Samsung, and Wells Fargo, and by the National Science Foundation.


How AI and decision intelligence are changing the way we work – VentureBeat


In a digital world fueled by a steady influx of data, achieving organizational excellence depends on giving everyone immediate access to accurate, up-to-date information. Organizational-wide communication and collaboration are vital. Mission-critical decisions, in particular, depend on the timely sharing of lessons learned and insights from all departments.

With new technology based on artificial intelligence (AI) and machine learning (ML), it's easier than ever to share data effectively and consistently. Significantly, technology now enables businesses to tap the knowledge stored in one person's mind and translate it into actionable data that anyone can leverage whenever they need it. This technology takes organizational communication and collaboration to the next level, an invaluable advantage in the pursuit of excellence.

The trend toward decision intelligence brings together processes and technologies to support continuous, connected, contextual decision automation and augmentation, according to Gartner senior research director Pieter J. den Hamer. Decision intelligence encompasses data-driven decision-making and the growing use of AI and ML to accelerate and improve data analysis. With the advantage of more data that's also more relevant and trustworthy, data analysts can produce stronger insights, resulting in faster and more confident decisions, greater efficiency and productivity, and a sound foundation for business strategies.

Indeed, companies now have the power to leverage data from anywhere using new technology platforms made possible by AI. The in-depth and diverse knowledge derived from external data, internal data, or the minds of individual employees supports data-driven decisions and can be shared company-wide.

Technology can also provide a simple yet powerful AI tool for employees to use during their day-to-day activities. They can capture lessons learned as they work in real time, and adjust their actions when a corrective action is needed, also in real time. Throughout this process, AI defines actionable takeaways, shares insights and offers concise lessons learned (suggesting corrective actions, for example), all of which can boost the entire team's performance.

Since AI turns the data collected from daily work into actionable lessons learned, every team member can contribute to and draw on their team's collective knowledge and the entire company's collective knowledge as well. The technology prompts them to capture their work, and it knows when a team member should see information relevant to their current task. AI ensures everyone has the right data at the right time, exactly when they need it.

"In this vision of a data-driven environment, access to data liberates and empowers employees to pursue new ideas," Harvard Business Review writes. In some companies, however, a cultural shift might be needed first. Organizations must elevate their data strategy so it is intertwined with their business strategy; both strategies need to be given the same weight and importance and become embedded in the culture.

All data is crucial to the data-driven organization: not just information that flows in from countless sources outside the company, but information generated inside it as well. Unfortunately, separate departments and teams often isolate their own data in silos, making it inaccessible to others.

It's easy to see why data silos represent possibly the most common obstacle to communication and collaboration. If teams don't know what's happening in other departments, they may never realize that information they need already exists within the enterprise. Continuous improvement and knowledge-sharing processes allow data to flow freely throughout the enterprise, putting an end to disruptive silos.

Organizations may contain another kind of silo as well, in the form of employees who alone hold unique information and expertise in their heads. Artificial intelligence can quantify individual knowledge and experience and convert it into data-driven insights, ready to be leveraged on demand across the organization.

The sophisticated technology behind constant learning, communication and collaboration enables organizations to leverage their most valuable asset, data, even down to the highly specific knowledge and expertise of individual team members.

In the important practice of decision intelligence, AI gives people tools to capture and share their own knowledge instantly and learn from one another's experiences, helping businesses expand their institutional knowledge simply, organically and successfully.

Ofir Paldi is founder and CEO at Shamaym.



Kurt Vonnegut, the visionary writer who predicted the rise of artificial intelligence – Express

The visionary Slaughterhouse-Five author predicted the rise of artificial intelligence back in the 1950s and became convinced computers would replace the human workforce.

The science fiction legend also tells how large US corporations such as General Electric felt guilty about the tech revolution.

The biographical film reveals that he took a job at GE writing press releases when he could not make enough money from his short stories.

He said: "GE showed me a milling machine that was being run by punch cards. They could do it better than a man could but they were ashamed.

"They felt guilty about what they'd done. The whole emphasis now is throwing people out of work."

It would inspire his first novel Player Piano, in which humans were replaced by machines. It was published in 1952 for an advance of $2,500, but achieved little success.

For his entire writing career the chain-smoking author used a typewriter and never sent an email.

He said: "We're here on earth to fart around. And, of course, the computers will do us out of that. What the computer people don't realise, or don't care [about], is we're dancing animals. We love to move around."

Vonnegut, who died in 2007 aged 84, lectured on the value of the "extended family". He said: "We need more people in our lives. What we need is numbers."

After his sister and her husband died a month apart from each other, he took in their four teenage boys to add to his own three children.

Fame as an author arrived in 1969 with the publication of his modern classic, Slaughterhouse-Five, about his wartime experiences.

Vonnegut fought at the Battle of the Bulge in the Ardennes region of Belgium. He was captured by the Germans and taken to Dresden, where he was kept in a "meat locker" while the Allies bombed the city.

When released he saw the devastation first-hand and helped to bury the dead. When asked if he would rather not have seen what he did, he replied, with his trademark wit, "I wouldn't have missed it for anything!"

He continued to rail against computers throughout his life.

"All the new technology seems redundant to me. I was quite happy with the United States mail service - I don't even have an answering machine."

His parting shot to mankind was: "Life is no way to treat an animal."


Artificial Intelligence in Personalized Medicine, Genomic Sequencing Advances, Human Brain Organogenesis, Building Trust with Patients, Guiding…

CHICAGO, July 24, 2022 /PRNewswire/ -- At the 2022 AACC Annual Scientific Meeting & Clinical Lab Expo, laboratory medicine experts will present the cutting-edge research and technology that is revolutionizing clinical testing and patient care. From July 24-28 in Chicago, the meeting's 250-plus sessions will deliver insights on a broad range of timely healthcare topics. Highlights include discussions exploring the use of artificial intelligence (AI) in personalized medicine, advances in multiplexed genomic sequencing and imaging, real-life applications of human brain organogenesis, how to build trust with patients, and guiding clinical decisions with mass spectrometry.

AI in Personalized Medicine. Precision medicine involves tailoring treatments to individual patients and, increasingly, clinicians are using AI in their clinical prediction models to do this. In the meeting's opening keynote, Dr. Lucila Ohno-Machado, health associate dean of informatics and technology at the University of California San Diego, will introduce how AI models are developed, tested, and validated as well as performance measures that may help clinicians select these models for routine use.

Multiplexed Genomic Sequencing and Imaging. Thanks to advances in multiplexed genomic sequencing and imaging, we can identify small but crucial differences in DNA, RNA, proteins, and more. These techniques have also undergone a 50-million-fold reduction in cost and comparable improvements in quality since they first emerged. In spite of this, healthcare is just beginning to catch up with the implications of these technologies. Dr. George Church, AACC's 2022 Wallace H. Coulter Lectureship Awardee and founding core faculty and lead of synthetic biology at the Wyss Institute at Harvard University, will discuss advances and implications of multiplex technologies at this plenary session.

Applications of Human Brain Organoid Technology. The human brain is a very complex biological system and is susceptible to several neurological and neurodegenerative disorders that affect millions of people worldwide. In this plenary session, Dr. Alysson R. Muotri, professor of cellular and molecular medicine at the University of California San Diego School of Medicine, will explore the concept of human brain organogenesis, or how to recreate the human brain in a dish. Several applications of this technology in neurological care will be discussed.

Building Trust in Healthcare. The world is having a trust crisis that is affecting healthcare delivery across the globe. Dr. Thomas Lee, chief medical officer of Press Ganey Associates and professor of health policy and management at the Harvard T.H. Chan School of Public Health, will describe the importance of building trust among patients and healthcare workers in this plenary session. He will explore a three-component model for building trust, and the types of interventions most likely to be effective.

Guiding Clinical Decisions with Mass Spectrometry. In this, the meeting's closing keynote, Dr. Livia Schiavinato Eberlin, associate professor of surgery and director of translational and innovations research at Baylor College of Medicine, will discuss the development and application of direct mass spectrometry techniques used in clinical microbiology labs, clinical pathology labs, and the operating room. The presentation will focus on results obtained in ongoing clinical studies employing two direct mass spectrometry techniques, desorption electrospray ionization mass spectrometry imaging and the MasSpec Pen technology.

Additionally, at the Clinical Lab Expo, more than 750 exhibitors will display innovative technologies that are just coming to market in every clinical lab discipline.

"Laboratory medicine's capacity to adapt to changing healthcare circumstances and use the field's scientific insights to improve quality of life is unparalleled. This capacity is constantly growing, with cutting-edge diagnostic technologies emerging every day in areas as diverse as mass spectrometry, artificial intelligence, genomic sequencing, and neurology," said AACC CEO Mark J. Golden. "The 2022 AACC Annual Scientific Meeting will shine a light on the pioneers in laboratory medicine who are mobilizing these new advances to enhance patient care."

Session Information

AACC Annual Scientific Meeting registration is free for members of the media. Reporters can register online here: https://www.xpressreg.net/register/aacc0722/media/landing.asp

AI in Personalized Medicine

Session 11001: Biomedical Informatics Strategies to Enhance Individualized Predictive Models
Sunday, July 24, 5-6:30 p.m. U.S. Central Time

Multiplexed Genomic Sequencing and Imaging

Session 12001: Multiplexed and Exponentially Improving Technologies
Monday, July 25, 8:45-10:15 a.m. U.S. Central Time

Applications of Human Brain Organoid Technology

Session 13001: Applications of Human Brain Organoid Technology
Tuesday, July 26, 8:45-10:15 a.m. U.S. Central Time

Building Trust in Healthcare

Session 14001: Building Trust in a Time of Turmoil
Wednesday, July 27, 8:45-10:15 a.m. U.S. Central Time

Guiding Clinical Decisions with Mass Spectrometry

Session 15001: Guiding Clinical Decisions with Molecular Information provided by Direct Mass Spectrometry Technologies
Thursday, July 28, 8:45-10:15 a.m. U.S. Central Time

All sessions will take place in Room S100 of the McCormick Place Convention Center in Chicago.

About the 2022 AACC Annual Scientific Meeting & Clinical Lab Expo
The AACC Annual Scientific Meeting offers 5 days packed with opportunities to learn about exciting science from July 24-28. Plenary sessions will explore artificial intelligence-based clinical prediction models, advances in multiplex technologies, human brain organogenesis, building trust between the public and healthcare experts, and direct mass spectrometry techniques.

At the AACC Clinical Lab Expo, more than 750 exhibitors will fill the show floor of the McCormick Place Convention Center in Chicago with displays of the latest diagnostic technology, including but not limited to COVID-19 testing, artificial intelligence, mobile health, molecular diagnostics, mass spectrometry, point-of-care, and automation.

About AACC
Dedicated to achieving better health through laboratory medicine, AACC brings together more than 70,000 clinical laboratory professionals, physicians, research scientists, and business leaders from around the world focused on clinical chemistry, molecular diagnostics, mass spectrometry, translational medicine, lab management, and other areas of progressing laboratory science. Since 1948, AACC has worked to advance the common interests of the field, providing programs that advance scientific collaboration, knowledge, expertise, and innovation. For more information, visit http://www.aacc.org.

Christine DeLong, AACC Senior Manager, Communications & PR, (p) 202.835.8722, [emailprotected]

Molly Polen, AACC Senior Director, Communications & PR, (p) 202.420.7612, (c) 703.598.0472, [emailprotected]

SOURCE AACC


The Future Is Bright For Artificial Intelligence In The Middle East – OilPrice.com

As part of ongoing efforts to diversify their economies and build a platform for sustainable future growth, MENA nations are increasingly turning towards artificial intelligence (AI). A slew of recent investments and initiatives, primarily in academia and the government but also in the private sector, has reinvigorated interest from industry leaders around the globe in the potential for AI to strengthen the efficiency and sustainability of MENA economies.

According to a report from the Economist Impact Unit (EIU) and Google published earlier this year, AI could bring about an additional $320bn in economic growth in the MENA region by 2030.

Many long-term economic strategies in the region target high-value sectors with the potential to benefit from the Fourth Industrial Revolution, a raft of technological advancements in AI, data and cloud computing that merge the physical, digital and biological worlds.

In recent years the UAE, Saudi Arabia, Qatar and Egypt have published ambitious, government-driven strategies to develop AI. However, much of their momentum was derailed in the Covid-19 pandemic's early months, as attention turned to dealing with the unfolding health situation, the broader economic downturn and the collapse in oil prices.

Despite the temporary setback, the pandemic has underscored the urgency of economic diversification, and several MENA nations have accelerated investment in non-hydrocarbons sectors where AI could play a key role.

Global private sector investment in AI, largely driven by companies in China and the US, increased by 40% in 2020, according to research from Stanford University, underscoring the surging interest in the field and its potential applications, especially in high-value-added sectors.

A March report from Saudi management consultancy Strategic Gears recommended that the country focus on harnessing AI to boost three sectors (oil and gas, government services and financial services) that already contribute more than 50% of GDP. Manufacturing, health care, education, automotive, retail and e-commerce, and transport are also positioned to benefit from the technology.

Rather than being restricted to ICT and tech-based fields, AI is expected to have a far-reaching impact across broader economies and will be key to realising long-term economic plans.

"The implementation of AI is helping businesses become more customer-centric, efficient, productive and competitive in both local and regional markets," Said bin Abdullah Al Mandhari, CEO of ITHCA Group, an Omani ICT company, told OBG.

"This is already the case in Oman's oil and gas industry, and it will be particularly important moving forwards for priority sectors like fisheries, tourism and logistics. AI can ultimately help unlock these sectors' potential, see them become significant contributors to national GDP and help achieve their targets under Oman Vision 2040."

Cybersecurity is another area where AI can add value. As OBG recently detailed, cyberattacks have been on the rise since Russia's invasion of Ukraine, presenting an elevated threat to emerging markets.

According to media reports, an extensive phishing campaign that involved the impersonation of the UAE's Ministry of Human Resources was recently discovered with the help of an AI digital risk-monitoring platform from India's CloudSEK.

In a region where several countries derive sizeable portions of GDP and export revenue from hydrocarbons, it is unsurprising that the energy sector has attracted significant AI investment from governments and companies looking not only to diversify away from oil and gas, but also to bolster the sector's efficiency and reduce its carbon emissions.


Abu Dhabi National Oil Company (ADNOC) has already deployed machine learning to mine its historical and current data, which has helped generate scenarios and forecast operations that have, in ADNOC's estimation, generated $1bn in business value over three years.

AI is also expected to be highly valuable in enabling the transition to green energy by managing the decentralised electricity systems that renewable sources rely upon and by monitoring carbon emissions.

To this end, in May London-based AI start-up Arloid Automation announced three new partnerships across the Middle East to track and reduce emissions.

Given their large youth populations, many MENA nations are making significant investments in AI education, training and research to ensure that such technologies play a key role in the future economy and workforce.

Of the $320bn the EIU-Google report estimates that MENA nations will generate by 2030 thanks to the adoption of AI, Strategic Gears expects Saudi Arabia to yield 42%, partly due to its investment in education. Roughly three-quarters of Saudi Vision 2030 goals involve data and AI, and the Kingdom plans to train 20,000 data and AI specialists by the end of the decade.

Highlighting this focus, in April national energy major Aramco signed a memorandum of understanding with King Abdullah University of Science and Technology to establish a new research centre to advance AI technological development.

Among the UAE's largest investments in AI education was the establishment of the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in 2019. Located in the smart city and innovation cluster of Masdar City, MBZUAI ranks 30th globally among institutions that conduct research in AI, computer vision, machine learning and natural language processing, according to computer science metrics-based platform CSRankings.

Elsewhere, Qatar has several branch campuses of renowned universities such as Carnegie Mellon University in the US, where students can pursue AI-related degrees and research. The country is also home to the Qatar Centre for Artificial Intelligence, which is working to attract talent to its AI faculty and establish a research and policy centre.

Given that AI's benefits are multi- and intersectoral, MENA countries can craft strategies and build AI-tailored ecosystems to suit their respective economic and social structures.

For example, as part of the Egyptian government's efforts to harness AI for economic growth and quality-of-life improvements, it is allocating funds for teacher training programmes and other AI-related vocational initiatives.

As MENA nations and other emerging markets continue to invest in AI education, some industry figures say they may have a distinct advantage over developed nations by leveraging local talent.

"With the drive towards affordability, a defining trait in developing markets, now also a feature of more advanced markets, software engineers in developing markets are gaining a competitive advantage based on the combination of their inherent affinity for cost-effective solutions and the possibilities opened up by AI," Soham Chokshi, CEO and co-founder of logistics software provider Shipsy, told OBG.

However, to realise this competitive advantage and achieve significant improvements in domestic AI capacity, countries in the region will also need to incentivise investment.

"In order for Oman Vision 2040 to become a reality and accelerate economic development, the country needs to work on creating a business environment conducive to greater investment in advanced technology, particularly in the area of AI and data analytics," Maqbool Al Wahaibi, CEO of Oman Data Park, told OBG. "In this context, local IT companies will need to prepare to compete against global players that are expanding their presence in the local market."

By Oxford Business Group


Google Is Selling Advanced AI to Israel, Documents Reveal – The Intercept

Training materials reviewed by The Intercept confirm that Google is offering advanced artificial intelligence and machine-learning capabilities to the Israeli government through its controversial Project Nimbus contract. The Israeli Finance Ministry announced the contract in April 2021 for a $1.2 billion cloud computing system jointly built by Google and Amazon. The project is intended to provide the government, the defense establishment and others with an all-encompassing cloud solution, the ministry said in its announcement.

Google engineers have spent the time since worrying whether their efforts would inadvertently bolster the ongoing Israeli military occupation of Palestine. In 2021, both Human Rights Watch and Amnesty International formally accused Israel of committing crimes against humanity by maintaining an apartheid system against Palestinians. While the Israeli military and security services already rely on a sophisticated system of computerized surveillance, the sophistication of Google's data analysis offerings could worsen the increasingly data-driven military occupation.

According to a trove of training documents and videos obtained by The Intercept through a publicly accessible educational portal intended for Nimbus users, Google is providing the Israeli government with the full suite of machine-learning and AI tools available through Google Cloud Platform. While they provide no specifics as to how Nimbus will be used, the documents indicate that the new cloud would give Israel capabilities for facial detection, automated image categorization, object tracking, and even sentiment analysis that claims to assess the emotional content of pictures, speech, and writing. The Nimbus materials referenced agency-specific trainings available to government personnel through the online learning service Coursera, citing the Ministry of Defense as an example.

A slide presented to Nimbus users illustrating Google image recognition technology.

Credit: Google

"The former head of Security for Google Enterprise who now heads Oracle's Israel branch has publicly argued that one of the goals of Nimbus is preventing the German government from requesting data relating to the Israel Defence Forces for the International Criminal Court," said Poulson, who resigned in protest from his job as a research scientist at Google in 2018, in a message. "Given Human Rights Watch's conclusion that the Israeli government is committing crimes against humanity of apartheid and persecution against Palestinians, it is critical that Google and Amazon's AI surveillance support to the IDF be documented to the fullest."

Though some of the documents bear a hybridized symbol of the Google logo and Israeli flag, for the most part they are not unique to Nimbus. Rather, the documents appear to be standard educational materials distributed to Google Cloud customers and presented in prior training contexts elsewhere.

Google did not respond to a request for comment.

The documents obtained by The Intercept detail for the first time the Google Cloud features provided through the Nimbus contract. With virtually nothing publicly disclosed about Nimbus beyond its existence, the system's specific functionality had remained a mystery even to most of those working at the company that built it. In 2020, citing the same AI tools, U.S. Customs and Border Protection tapped Google Cloud to process imagery from its network of border surveillance towers.

Many of the capabilities outlined in the documents obtained by The Intercept could easily augment Israels ability to surveil people and process vast stores of data already prominent features of the Israeli occupation.

"Data collection over the entire Palestinian population was and is an integral part of the occupation," Ori Givati of Breaking the Silence, an anti-occupation advocacy group of Israeli military veterans, told The Intercept in an email. "Generally, the different technological developments we are seeing in the Occupied Territories all direct to one central element, which is more control."

The Israeli security state has for decades benefited from the country's thriving research and development sector, and its interest in using AI to police and control Palestinians isn't hypothetical. In 2021, the Washington Post reported on the existence of Blue Wolf, a secret military program aimed at monitoring Palestinians through a network of facial recognition-enabled smartphones and cameras.

"Living under a surveillance state for years taught us that all the collected information in the Israeli/Palestinian context could be securitized and militarized," said Mona Shtaya, a Palestinian digital rights advocate at 7amleh-The Arab Center for Social Media Advancement, in a message. "Image recognition, facial recognition, emotional analysis, among other things will increase the power of the surveillance state to violate Palestinian right to privacy and to serve their main goal, which is to create the panopticon feeling among Palestinians that we are being watched all the time, which would make the Palestinian population control easier."

The educational materials obtained by The Intercept show that Google briefed the Israeli government on using what's known as sentiment detection, an increasingly controversial and discredited form of machine learning. Google claims that its systems can discern inner feelings from one's face and statements, a technique commonly rejected as invasive and pseudoscientific, regarded as being little better than phrenology. In June, Microsoft announced that it would no longer offer emotion-detection features through its Azure cloud computing platform, a technology suite comparable to what Google provides with Nimbus, citing the lack of scientific basis.

Google does not appear to share Microsoft's concerns. One Nimbus presentation touted the "Faces, facial landmarks, emotions"-detection capabilities of Google's Cloud Vision API, an image analysis toolset. The presentation then offered a demonstration using the enormous grinning face sculpture at the entrance of Sydney's Luna Park. An included screenshot of the feature ostensibly in action indicates that the massive smiling grin is very unlikely to exhibit any of the example emotions. And Google was only able to assess that the famous amusement park is an amusement park with 64 percent certainty, while it guessed that the landmark was a place of worship or Hindu temple with 83 percent and 74 percent confidence, respectively.
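For reference, this is roughly what calls to the Vision API features named in the Nimbus materials look like through Google's publicly documented Python client library. The image file here is a hypothetical placeholder, and the sketch says nothing about how these tools are actually configured or used inside Nimbus itself.

# Hedged sketch using Google's public Cloud Vision client; the image path is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("landmark_photo.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

# Label detection: descriptions such as "amusement park" with confidence scores.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))

# Face detection, including per-face likelihood ratings for emotions like joy or anger.
for face in client.face_detection(image=image).face_annotations:
    print("joy:", face.joy_likelihood, "anger:", face.anger_likelihood)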

A slide presented to Nimbus users illustrating Google AIs ability to detect image traits.

Credit: Google

"Vision API is a primary concern to me because it's so useful for surveillance," said one worker, who explained that the image analysis would be a natural fit for military and security applications. "Object recognition is useful for targeting, it's useful for data analysis and data labeling. An AI can comb through collected surveillance feeds in a way a human cannot to find specific people and to identify people, with some error, who look like someone. That's why these systems are really dangerous."

A slide presented to Nimbus users outlining various AI features through the companys Cloud Vision API.

Credit: Google

Training an effective model from scratch is often resource intensive, both financially and computationally. This is not so much of a problem for a world-spanning company like Google, with an unfathomable volume of both money and computing hardware at the ready. Part of Google's appeal to customers is the option of using a pre-trained model, essentially getting this prediction-making education out of the way and letting customers access a well-trained program that's benefited from the company's limitless resources.


Custom models generated through AutoML, one presentation noted, can be downloaded for offline "edge" use, unplugged from the cloud and deployed in the field.
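As a rough sketch of what running such an exported model offline can look like, the snippet below loads a TensorFlow Lite file on a local device. The assumption that the export is in TFLite format, along with the file name and dummy input, are illustrative placeholders rather than details taken from the Nimbus materials.

# Hedged sketch of offline "edge" inference with an exported TensorFlow Lite model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="exported_model.tflite")  # hypothetical file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input purely to show the call pattern; a real deployment would feed
# camera frames or other sensor data with no connection back to the cloud.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
print("raw model output:", interpreter.get_tensor(output_details[0]["index"]))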

That Nimbus lets Google clients use advanced data analysis and prediction in places and ways that Google has no visibility into creates a risk of abuse, according to Liz O'Sullivan, CEO of the AI auditing startup Parity and a member of the U.S. National Artificial Intelligence Advisory Committee. "Countries can absolutely use AutoML to deploy shoddy surveillance systems that only seem like they work," O'Sullivan said in a message. "On edge, it's even worse: think bodycams, traffic cameras, even a handheld device like a phone can become a surveillance machine, and Google may not even know it's happening."

In one Nimbus webinar reviewed by The Intercept, the potential use and misuse of AutoML was exemplified in a Q&A session following a presentation. An unnamed member of the audience asked the Google Cloud engineers present on the call if it would be possible to process data through Nimbus in order to determine if someone is lying.

"I'm a bit scared to answer that question," said the engineer conducting the seminar, in an apparent joke. "In principle: Yes. I will expand on it, but the short answer is yes." Another Google representative then jumped in: "It is possible, assuming that you have the right data, to use the Google infrastructure to train a model to identify how likely it is that a certain person is lying, given the sound of their own voice." Noting that such a capability would take a tremendous amount of data for the model, the second presenter added that one of the advantages of Nimbus is the ability to tap into Google's vast computing power to train such a model.


A broad body of research, however, has shown that the very notion of a lie detector, whether the simple polygraph or AI-based analysis of vocal changes or facial cues, is junk science. While Googles reps appeared confident that the company could make such a thing possible through sheer computing power, experts in the field say that any attempts to use computers to assess things as profound and intangible as truth and emotion are faulty to the point of danger.

One Google worker who reviewed the documents said they were concerned that the company would even hint at such a scientifically dubious technique. "The answer should have been no, because that does not exist," the worker said. "It seems like it was meant to promote Google technology as powerful, and it's ultimately really irresponsible to say that when it's not possible."

Andrew McStay, a professor of digital media at Bangor University in Wales and head of the Emotional AI Lab, told The Intercept that the lie detector Q&A exchange was disturbing, as is Google's willingness to pitch pseudoscientific AI tools to a national government. "It is [a] wildly divergent field, so any technology built on this is going to automate unreliability," he said. "Again, those subjected to them will suffer, but I'd be very skeptical for the citizens it is meant to protect that these systems can do what is claimed."

According to some critics, whether these tools work might be of secondary importance to a company like Google that is eager to tap the ever-lucrative flow of military contract money. Governmental customers too may be willing to suspend disbelief when it comes to promises of vast new techno-powers. "It's extremely telling that in the webinar PDF that they constantly referred to this as magical AI goodness," said Jathan Sadowski, a scholar of automation technologies and research fellow at Monash University, in an interview with The Intercept. "It shows that they're bullshitting."

Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif. Google pledges that it will not use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI. Those principles, released by Pichai, commit Google to building AI applications that are socially beneficial, that avoid creating or reinforcing bias and that are accountable to people.

Photo: Jeff Chiu/AP

Israel, though, has set up its relationship with Google to shield it from both the company's principles and any outside scrutiny. Perhaps fearing the fate of the Pentagon's Project Maven, a Google AI contract felled by intense employee protests, Israel arranged for the data centers that power Nimbus to reside on Israeli territory, subject to Israeli law and insulated from political pressures. Last year, the Times of Israel reported that Google would be contractually barred from shutting down Nimbus services or denying access to a particular government office, even in response to boycott campaigns.

Google employees interviewed by The Intercept lamented that the company's AI principles are at best a superficial gesture. "I don't believe it's hugely meaningful," one employee told The Intercept, explaining that the company has interpreted its AI charter so narrowly that it doesn't apply to companies or governments that buy Google Cloud services. Asked how the AI principles are compatible with the company's Pentagon work, a Google spokesperson told Defense One, "It means that our technology can be used fairly broadly by the military."

Moreover, this employee added that Google lacks both the ability to tell if its principles are being violated and any means of thwarting violations. "Once Google offers these services, we have no technical capacity to monitor what our customers are doing with these services," the employee said. "They could be doing anything." Another Google worker told The Intercept, "At a time when already vulnerable populations are facing unprecedented and escalating levels of repression, Google is backsliding on its commitments to protect people from this kind of misuse of our technology. I am truly afraid for the future of Google and the world."

Ariel Koren, a Google employee who claimed earlier this year that she faced retaliation for raising concerns about Nimbus, said the company's internal silence on the program continues. "I am deeply concerned that Google has not provided us with any details at all about the scope of the Project Nimbus contract, let alone assuage my concerns of how Google can provide technology to the Israeli government and military (both committing grave human rights abuses against Palestinians daily) while upholding the ethical commitments the company has made to its employees and the public," she told The Intercept in an email. "I joined Google to promote technology that brings communities together and improves people's lives, not service a government accused of the crime of apartheid by the world's two leading human rights organizations."

Sprawling tech companies have published ethical AI charters to rebut critics who say that their increasingly powerful products are sold unchecked and unsupervised. The same critics often counter that the documents are a form of ethics-washing: essentially toothless self-regulatory pledges that provide only the appearance of scruples, pointing to examples like the provisions in Israel's contract with Google that prevent the company from shutting down its products. "The way that Israel is locking in their service providers through this tender and this contract," said Sadowski, the Monash University scholar, "I do feel like that is a real innovation in technology procurement."

To Sadowski, it matters little whether Google believes what it peddles about AI or any other technology. What the company is selling, ultimately, isn't just software, but power. And whether it's Israel and the U.S. today or another government tomorrow, Sadowski says that some technologies amplify the exercise of power to such an extent that even their use by a country with a spotless human rights record would provide little reassurance. "Give them these technologies, and see if they don't get tempted to use them in really evil and awful ways," he said. "These are not technologies that are just neutral intelligence systems, these are technologies that are ultimately about surveillance, analysis, and control."

Read the original post:
Google Is Selling Advanced AI to Israel, Documents Reveal - The Intercept

Artificial Intelligence and Machine Learning in Healthcare | JHL – Dove Medical Press

Innovative scientific and technological developments have ushered in a remarkable transformation in medicine that continues to impact virtually all stakeholders from patients to providers to Healthcare Organizations (HCOs) and the community in general.1,2 Increasingly incorporated into clinical practice over the past few decades, these innovations include widespread use of Electronic Health Records (EHR), telemedicine, robotics, and decision support for surgical procedures. Ingestible microchips allow healthcare providers to monitor patient compliance with prescribed pharmacotherapies and their therapeutic efficacy through big data analysis,1–5 as well as streamlining drug design, screening, and discovery.6 Adoption of novel medical technologies has allowed US healthcare to maintain its vanguard position in select domains of clinical care such as improving access by reducing wait times, enriching patient-provider communication, enhancing diagnostic accuracy, improving patient satisfaction, augmenting outcome prediction, decreasing mortality, and extending life expectancy.3–5,7

Yet despite the theoretical advantages of these innovative medical technologies, many issues remain that require careful consideration as we integrate these novel technologies into our armamentarium. This descriptive, literature-based article explicates the advantages, future potential, challenges, and caveats associated with the predictable and impending importation of AI and ML into all facets of healthcare.

By far the most revolutionary of these novel technologies is Artificial Intelligence (AI), a branch of computer science that attempts to construct intelligent entities via Machine Learning (ML), which is the ability of computers to learn without being explicitly programmed.8 ML utilizes algorithms to identify patterns, and its subspecialty Deep Learning (DL) employs artificial neural networks with intervening frameworks to identify patterns in data.1,8 Although ML was first conceived by computer scientist Arthur Samuel as far back as 1956, applications of AI have only recently begun to pervade our daily life, with computers simulating human cognition (eg, visual perception, speech recognition, decision-making, and language translation).8 Everyday examples of AI include smart phones, autonomous vehicles, digital assistants (eg, Siri, Alexa), chatbots and auto-correcting software, online banking, facial recognition, and transportation (eg, Uber, air traffic control operations, etc.). The iterative nature of ML allows the machine to adapt its systems and outputs following exposure to new data, either with supervised learning (ie, utilizing training algorithms to predict future events from historical data inputs) or with unsupervised learning, whereby the machine explores the data and attempts to develop patterns or structures de novo. The latter methodology is often used to determine and distinguish outliers. Neural networks in AI utilize an adaptive system comprised of an interconnected group of artificial neurons and mathematical or computational modeling for processing information from input and output data via pattern recognition.9 Through predictive analytics, ML has demonstrated its effectiveness in the realm of finance (eg, identifying credit card fraud) and in the retail industry to anticipate customer behavior.1,10,11
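
To make the supervised/unsupervised distinction concrete, the toy sketch below uses scikit-learn on synthetic data (nothing here is drawn from the cited studies): the supervised model learns from labeled examples and predicts a new case, while the unsupervised model is given no labels and finds structure on its own.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))              # two measured features per case
    y = (X[:, 0] + X[:, 1] > 0).astype(int)    # known outcomes (labels)

    # Supervised learning: fit rules from historical inputs and known outcomes,
    # then predict the outcome for a new, unseen case.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[0.5, -0.1]]))

    # Unsupervised learning: no outcomes given; the algorithm groups the data
    # into clusters de novo (useful, for example, for spotting outliers).
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_[:10])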

Extrapolation of AI to medicine and healthcare is expected to increase exponentially in the three principal domains of research, teaching, and clinical care. With improved computational efficiencies, common applications of ML in healthcare will include enhanced diagnostic modalities, improved therapeutic interventions, augmenting and refining workflow by processing large amounts of hospital and national EHR data, more accurate prediction of clinical course through precision and personalized medicine, and genome interpretation. ML can provide basic clinical triage in geographical areas inaccessible to specialty care. It can also detect treatable psychiatric conditions via analysis of affective and anxiety disorders using speech patterns and facial expressions (eg, bipolar disorder, major depression, anxiety spectrum and psychotic disorders, attention deficit hyperactivity disorder, addiction disorders, Tourette's Syndrome, etc.)12,13 (Figure 1). Deep learning algorithms are highly effective compared to human interpretation in medical subspecialties where pattern recognition plays a dominant role, such as dermatology, hematology, oncology, histopathology, ophthalmology, radiology (eg, programmed image analyses), and neurology (eg, analysis for seizures utilizing electroencephalography). Artificial neural networks are being developed and employed for diagnostic accuracy, timely interventions, outcomes and prognostication of neurosurgical conditions, such as spinal stenosis, traumatic brain injury, brain tumors, and cerebral vasospasm following aneurysmal subarachnoid hemorrhage.14 Theoretically, ML can improve triage by directing patients to proper treatments at lower cost and by keeping those with chronic conditions out of costly and time-intensive emergency care centers. In clinical practice, ~5% of all patients account for 50% of healthcare costs, and those with chronic medical conditions comprise 85% of total US healthcare costs.3

Figure 1 Potential Applications of Machine Learning.

Patients can benefit from ML in other ways. For follow-up visits, not having to arrange transportation or take time off work for face-to-face interaction with healthcare providers may be an attractive alternative to patients and to the community, even more so in restricted circumstances like the recent COVID-19 pandemic-associated lockdowns and social distancing.

Ongoing ML-related research and its applications are robust. Companies developing automation, topological data analysis, genetic mapping, and communications systems include Pathway Genomics, Digital Reasoning Systems, Ayandi, Apixio, Butterfly Network, Benevolent AI, Flatiron Health, and several others.1,10

Despite the many theoretical advantages and potential benefits of ML in healthcare, several challenges (Figure 2) must be met15 before it can achieve broader acceptance and application.

Figure 2 Caveats and Challenges with use of Machine Learning.

Frequent software updates will be necessary to ensure continued improvement in ML-assisted models over time. Encouraging the use of such software, the Food and Drug Administration has recommended a pre-certified approach for agility.1,2 To be of pragmatic clinical import, high-quality input data is paramount for validating and refining diagnostic and therapeutic procedures. At present, however, there is a dearth of robust comparative data validating ML outputs (typically reported as area-under-the-curve analyses) against the commonly accepted gold standard of blinded, placebo-controlled randomized clinical trials.1,7 Clinical data generated from ML-assisted calculations and more rigorous multivariate analysis will entail integration with other relevant patient demographic information (eg, socio-economic status, including values, social and cultural norms, faith and belief systems, social support structures in-situ, etc.).16

All stakeholders in the healthcare delivery system (HCOs, providers, patients, and the community) will have to adjust to the paradigm shift away from traditional in-person interactions. Healthcare providers will have to surmount actual or perceived added workload to avoid burnout, especially during the initial adaptive phase. They will also have to cope with increased ML-generated false-positive and -negative alerts. The traditional practice of clinical medicine is deeply entrenched in the framework of formulating a clinical hypothesis via rigorous history-taking and physical examination followed by sequential confirmation through judicious ancillary and diagnostic testing. Such traditional in-person interactions have underscored the importance of an empathetic approach to the provider-patient relationship. This traditional view has been characterized as archaic, particularly by those with a futuristic mindset, who envision an evolutionary change leading to whole body scans that deliver a more accurate assessment of health and diagnosis of disease. However, incidental findings not attributable to symptoms may lead to excessive ancillary tests, underscoring the adage "testing begets more testing."17

Healthcare is one of the fastest growing segments of the world economy and is presently at a crossroads of unprecedented transformation. As an example, US healthcare expenditure has accelerated dramatically over the past several decades (~19% of Gross National Product; exceeding $4.1 trillion, or $12,500 per person per year)18 with widespread ramifications for all stakeholders including patients and their families, healthcare providers, government, community, and the US economy.1,3–5 A paradigm shift from volume-based to performance-based reimbursements from third-party payers warrants focus on some of the most urgent issues in healthcare including cost containment, access, and providing low-cost, high-value healthcare commensurate with the proposed six-domain framework (safe, effective, patient-centered, timely, efficient, and equitable) articulated by the Institute of Medicine in 2001.3–5,19 Of note, uncontrolled use of expensive technology and excessive ancillary testing account for ~25–30% of total healthcare costs.17 While technologies will probably never completely replace the function of healthcare providers, they will definitely transform healthcare, benefiting both providers and patients. However, there is a paucity of cost-benefit data and analysis of the use of these innovative emerging medical technologies. All stakeholders should remain cost-conscious as the newer technological diagnostic approaches may further drive up the already rising costs of healthcare. Educating and training the next generation of healthcare providers in the context of AI will also require transformation with simulation approaches and inter-professional education. Therefore, the value proposition of novel technologies must be critically appraised via longitudinal and continuous valuations and patient outcomes in terms of its impact on health and disease management.13 To mitigate healthcare costs, we must control the technological imperative: the overuse of technology because of easy availability without due consideration to disease course or outcomes and irrespective of cost-benefit ratio.3

Issues surrounding consumer privacy and proprietorship of colossal quantities of healthcare data under an AI regime are legitimate concerns. Malicious or unintentional breaches may result in financial or other harm. Akin to the challenges encountered with EHR, easy access to data and interoperability with broader compatibility of interfaces by healthcare providers spread across space and time will present unique challenges. Databases will likely be owned by large profit-oriented technology companies who may decide to dispense data to third parties. Additional costs are predictable as well, particularly during the early stages of development of ML algorithms, and are likely to be more bearable for large HCOs. Smaller organizations are likely to delay adopting such processes, with the resulting potential for mergers and acquisitions or even failure of smaller hospitals and clinics. Concerns regarding ownership, responsibility, and accountability of ML algorithms may arise owing to the probability of detrimental outcomes; such responsibility ideally should be apportioned among developer, interpreter, healthcare provider, and patient.1 Simulation techniques can be preemptively utilized for ML training for clinical scenarios; practice runs may require formal certification courses and workshops. Regulations must be developed by policymakers and legislative bodies to delineate the role of third-party payers in ML-assisted healthcare financing. Finally, education and training via media outlets, internet, and social media will be necessary to address public opinion, misperceptions, and naïve expectations about ML-assisted algorithms.7

For centuries, the practice of medicine has been deeply embedded in a tradition of meticulous history-taking, physical examination, and thoughtful ancillary investigations to confirm clinical hypotheses and diagnoses. The great physician Sir William Osler (1849–1919)14,20 encapsulated the desired practice of good medicine with his famous quotes, "Listen to your patient, he is telling you the diagnosis," "The good physician treats the disease; the great physician treats the patient who has the disease," and "Medicine is a science of uncertainty and an art of probability." With rapid technological advances, we are at the crossroads of practicing medicine that would be distinctly different from the traditional approach and practice(s), a change that may be characterized as evolutionary.

AI and ML have enormous potential to transform healthcare and the practice of medicine, although these modalities will never substitute for an astute and empathetic bedside clinician. Furthermore, several issues remain as to whether their value proposition and cost-benefit are complementary to the overarching focus on providing low-cost, high-value healthcare to the community at large. While innovative technological advances play a critical role in the rapid diagnosis and management of disease, the phenomenon of the technological imperative3–5,17 deserves special consideration among both public and providers for the future use of AI and ML in delivering healthcare.

The author reports no conflicts of interest in this work.

1. Bhardwaj R, Nambiar AR, Dutta D. A Study of Machine Learning in Healthcare. 2017 IEEE 41st Annual Computer Software and Applications Conference. 236–241. Available from: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8029924. Accessed March 30, 2022.

2. Deo RC. Machine Learning in Medicine. Circulation. 2015;132:1920–1930. doi:10.1161/CIRCULATIONAHA.115.001593

3. Shi L, Singh DA. Delivering Health Care in America: A Systems Approach. 7th ed. Burlington, MA: Jones & Bartlett Learning; 2019.

4. Barr DA. Introduction to US Health Policy: The Organization, Financing, and Delivery of Health Care in America. 4th ed. Baltimore, MD: Johns Hopkins University Press; 2016.

5. Wilensky SE, Teitelbaum JB. Essentials of Health Policy and Law. Fourth ed. Burlington, MA: Jones & Bartlett Learning; 2020.

6. Gupta R, Srivastava D, Sahu M, Tiwan S, Ambasta RK, Kumar P. Artificial intelligence to deep learning: machine intelligence approach for drug discovery. Mol Divers. 2021;25:1315–1360. doi:10.1007/s11030-021-10217-3

7. Dabi A, Taylor AJ. Machine Learning, Ethics and Brain Death Concepts and Framework. Arch Neurol Neurol Disord. 2020;3:19.

8. Handelman GS, Kok HK, Chandra RV, Razavi AH, Lee MJ, Asadi H. eDoctor: machine learning and the future of medicine. J Intern Med. 2018;284:603–619. doi:10.1111/joim.12822

9. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982;79:2554–2558. doi:10.1073/pnas.79.8.2554

10. Ghassemi M, Naumann T, Schulam P, Beam AL, Ranganath R Opportunities in Machine Learning for Healthcare. 2018. Available from: https://pdfs.semanticscholar.org/1e0b/f0543d2f3def3e34c51bd40abb22a05937bc.pdf. Accessed March 30, 2022.

11. Jnr YA Artificial Intelligence and Healthcare: a Qualitative Review of Recent Advances and Predictions for the Future. Available from: https://pimr.org.in/2019-vol7-issue-3/YawAnsongJnr_v3.pdf. Accessed March 30, 2022.

12. Chandler C, Foltz PW, Elvevag B. Using machine learning in psychiatry: the need to establish a framework that nurtures trustworthiness. Schizophr Bull. 2019;46:11–14.

13. Ray A, Bhardwaj A, Malik YK, Singh S, Gupta R. Artificial intelligence and Psychiatry: an overview. Asian J Psychiatr. 2022;70:103021. doi:10.1016/j.ajp.2022.103021

14. Ganapathy K Artificial intelligence in neurosciences-are we really there? Available from: https://www.sciencedirect.com/science/article/pii/B9780323900379000084. Accessed June 10, 2022.

15. Sunarti S, Rahman FF, Naufal M, Risky M, Febriyanto K, Mashina R. Artificial intelligence in healthcare: opportunities and risk for future. Gac Sanit. 2021;35(S1):S67–S70. doi:10.1016/j.gaceta.2020.12.019.

16. Yu B, Beam A, Kohane I. Artificial Intelligence in Healthcare. Nature Biomed Eng. 2018;2:719–731. doi:10.1038/s41551-018-0305-z

17. Bhardwaj A. Excessive Ancillary Testing by Healthcare Providers: reasons and Proposed Solutions. J Hospital Med Management. 2019;5(1):16.

18. NHE Fact Sheet. Centers for Medicare and Medicaid Services. Available from: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NHE-Fact-Sheet. Accessed April 14, 2022.

19. Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C: National Academy Press; 2001.

20. Bliss M. William Osler: A Life in Medicine. New York, NY: Oxford University Press; 1999.

Read the original here:
Artificial Intelligence and Machine Learning in Healthcare | JHL - Dove Medical Press

Artificial Intelligence and Renewables: The rise of renewables in Australia – Lexology

What role will machine learning, a subset of AI, play in Australia's transition to a zero carbon energy future, and what are some of the implications and trade-offs that come with its increased use in our energy infrastructure?

The rise of renewables

Over the past decade, renewable energy consumption has grown globally at an average annual rate of 13.7%. In Australia, 2020 saw more than a quarter of the country's total electricity generation coming from renewable sources for the first time. Tasmania, the Australian island state, currently runs on 100% renewable energy.

This is good news for the climate. However, integrating these renewable energy sources into existing electricity networks poses challenges. A possible solution lies in artificial intelligence (AI) technologies, which are becoming increasingly sophisticated at the same time as renewable energy is growing.

Applying AI to renewable energy technologies

Over the next 10-15 years, more intermittent distributed energy sources (such as wind and solar) will come online as dispatchable centralised generators such as coal are retired. This will require a delicate balancing act to match supply with demand without collapsing the grid. Machine learning technologies can help tackle this challenge.

Machine learning uses algorithms to analyse huge data sets to identify trends and patterns. The algorithm is tasked with making predictions about a target variable, based on the rules programmed into the algorithm and the data it receives. The algorithm will seek to identify patterns in the data, use those findings to modify its rules, and optimise to better undertake further analysis, i.e. the algorithm learns as it works.

Machine learning technologies are increasingly being used in climate science to improve climate modelling and track the effects of climate change.

In the electricity sector, machine learning can also be applied to improve the way energy is distributed and used in the grid. With more renewable energy sources coming online all the time, utilities need better ways of predicting how much energy is needed, in real time and over the long term. Algorithms already exist that can forecast energy demand, but they could be improved by taking into account finer local weather and climate patterns or household behaviour.
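
To make that forecasting idea concrete, here is a minimal sketch of a demand forecaster that folds local weather and time-of-day features into its predictions; the data file, column names, and model choice are illustrative assumptions rather than anything described in the article.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("demand_history.csv")  # hypothetical historical load and weather data
    features = ["temperature_c", "cloud_cover", "hour_of_day", "day_of_week"]

    # Hold out the most recent observations (no shuffling) to mimic forecasting ahead in time.
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["demand_mw"], test_size=0.2, shuffle=False
    )

    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("Held-out R^2:", model.score(X_test, y_test))
    print("Next-period forecast:", model.predict(X_test.tail(1)))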

Resilience and reliability

The electricity grid must be able to meet customer needs, with minimal power interruptions. When disruptions occur, the grid needs to be able to quickly recover.

Instead of waiting for grid assets to break down, AI is being used to predict problems before they occur. Given the interconnected nature of an electric grid such as the National Electricity Market in Australia, preventing just one equipment failure can avoid colossal cascading blackouts, such as the blackouts in South Australia in 2016. Utilities and generators are using algorithms that analyse industry-wide early failure rates for equipment in order to predict the probability of failure.
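
As a rough illustration of that predictive-maintenance approach, the sketch below scores assets by estimated probability of failure so the riskiest can be inspected first; the data file and feature names are hypothetical.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    assets = pd.read_csv("transformer_fleet.csv")  # hypothetical fleet condition records
    features = ["age_years", "load_factor", "oil_temperature", "fault_count_12m"]

    # Learn from fleet-wide records of which units eventually failed.
    model = LogisticRegression(max_iter=1000).fit(assets[features], assets["failed"])
    assets["failure_probability"] = model.predict_proba(assets[features])[:, 1]

    # Surface the assets most likely to fail so maintenance can be scheduled
    # before a single failure cascades through the grid.
    print(assets.sort_values("failure_probability", ascending=False).head(10))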

Decentralised generation

The distributed nature of many renewable energy sources, such as rooftop solar, is also creating challenges for the grid. An increasingly renewable energy grid has many moving parts, requiring coordination, forecasting and optimisation to keep it in balance. Utilities are struggling to manage distributed resources, many of which are not owned, managed, or even visible to the grid managers.

AI software can help utilities direct electricity arriving into the grid to where it is most needed. However, if machine learning is going to play a greater and more integral role in our energy systems, utilities, policy makers and regulatory bodies need to start thinking about what role they want to play in a much more decentralised energy grid.

A new role for utilities?

The patchwork of distributed energy producers will still need coordination and management. These advancements require a shift in thinking from legacy models of capital investment in a few large energy generation assets to demand management of a growing number of privately owned assets. The protection of customer data and privacy and ensuring cybersecurity of grid management will also need to be considered.

This could mean a new role for utilities as they face a shrinking pool of customers purchasing electricity, as more homes and businesses become energy producers themselves. Utilities may have to decide whether to work with software companies to implement AI technologies in their business, or whether to become software companies in their own right.

What are the implications and trade-offs?

Some herald AI technologies as a silver-bullet solution to all manner of complex and pressing human challenges, from tackling food security to climate change.

What are some of the implications and trade-offs that come with the increased use of machine learning models in our energy infrastructure?

Environmental cost

It is somewhat ironic that, for all their potential to reduce electricity usage and support the mass take-up of renewable energy generation, machine learning models can also be large consumers of energy.

Training an algorithm can use vast amounts of computing power. For example, natural language processing (a machine learning technique that helps machines interpret and generate text) is especially power-hungry. Training a single large natural language processing model may consume as much energy as a car over its entire lifetime, including the energy needed to build it.1

If artificial intelligence is to be used to support the mass adoption of renewable energy power generation, we need to ensure that the negative environmental impacts of artificial intelligence are outweighed by its positive ones.

To do this, greater transparency and disclosure of the environmental impact of artificial intelligence technologies, particularly machine learning models, is critical.2

Steps are being taken to address this. The Allen Institute for AI has proposed a certification for AI practices, labelling carbon-neutral AI as green and non-carbon-neutral AI as red in a bid to help companies navigate this complex area. Research is also being carried out to reduce the carbon footprint of machine learning models.

Data stewardship

Another significant issue that arises across all applications of artificial intelligence is the privacy implications of mass data collection and sharing.

With the widespread adoption of smart grid infrastructure, utilities and network operators have unprecedented levels of consumer data. As machine learning processes rely on vast amounts of data to achieve more accurate and reliable results, data sharing will be critical to successfully implementing machine learning driven technologies in the energy sector.

However, this gives rise to privacy concerns about the information that can be gleaned about an individual from that data and the potential for accidental or malicious surveillance, profiling, behaviour tracking, or even identity theft.

This places additional responsibilities primarily on utilities as guardians of this data and raises important legal questions about customer consent, and the storage, use, transfer, ownership and disposal of customer data.

Ethical issues

Algorithmic decision making is not neutral. There is increasing concern from those working in proximity to the technology sector around bias in algorithmic decision-making and the opacity of the decision making process.

While the primary goal of smart grids is to avoid energy scarcity, balancing this goal against competing priorities will become a source of political and societal debate. For example, how should AI prioritise energy distribution between domestic uses, industrial uses and electric car uses, particularly in cases of energy scarcity? These are not the types of questions that can be resolved by AI alone.

If energy companies and governments are to use AI to make important decisions in our critical national infrastructure, it is imperative that there is public trust in the system.

In order to build public trust, those who are using AI systems must be able to understand and explain why the algorithm made a particular decision and be accountable for the consequences of these decisions. This means remaining open to criticism, disclosing unknowns, and allowing issues with poor training data to be fixed.

We wrote an article about the Australian Human Rights Commission's recent report on Human Rights and Technology. The report looks closely at the implications of the use of decision-making artificial intelligence in the public and private sector in Australia and highlights potential areas of law reform to ensure decision making is lawful, transparent, explainable, used responsibly, and subject to human oversight, review and intervention.3 Although much of the current guidance on the ethical use of artificial intelligence around the world is not legally binding, energy companies adopting machine learning technologies would be well advised to keep abreast of the fast-moving regulatory landscape in their jurisdiction and follow best-practice guidance on how such models should be designed and deployed.

CONCLUSION

The Australian Energy Market Operator (AEMO) has recognised the opportunities for AI and machine learning in this space and the need to keep pace with the private sector, which has been embracing machine learning technology for some time. In its 2021 corporate plan, AEMO commits to developing new capabilities enabled by technology such as artificial intelligence and machine learning in order to develop strategic forecasting capabilities over the next three years.

This commitment is welcome. We are at a moment of systemic change in the energy industry with the opportunity to build the foundations of a clean energy future. AI technologies have a meaningful role to play as well as the potential to reshape the operation of the energy market more broadly.

AI technology is undoubtedly a driving force behind this transition. However, careful consideration must be given to how such technology is designed and deployed, and to its impact on society, to ensure that, holistically, we are building a better future for generations to come.

The rest is here:
Artificial Intelligence and Renewables: The rise of renewables in Australia - Lexology
