
As machine learning becomes standard in military and politics, it needs moral safeguards | TheHill – The Hill

Over the past decade, the world has experienced a technological revolution powered by machine learning (ML). Algorithms remove the decision fatigue of purchasing books and choosing music, and the work of turning on lights and driving, allowing humans to focus on activities more likely to optimize their sense of happiness. Futurists are now looking to bring ML platforms to more complex aspects of human society, specifically warfighting and policing.

Technology moralists and skeptics aside, this move is inevitable, given the need for rapid security decisions in a world with information overload. But as ML-powered weapons platforms replace human soldiers, the risk of governments misusing ML increases. Citizens of liberal democracies can and should demand that governments pushing for the creation of intelligent machines for warfighting include provisions maintaining the moral frameworks that guide their militaries.

In his popular book The End of History, Francis Fukuyama summarized debates about the ideal political system for achieving human freedom and dignity. From his perspective in the middle of 1989, months before the unexpected fall of the Berlin Wall, no other system could match democracy and capitalism in generating wealth, pulling people out of poverty and defending human rights; both communism and fascism had failed, creating cruel autocracies that oppressed people. Without realizing it, Fukuyama prophesied democracy's proliferation across the world. Democratization soon occurred through grassroots efforts in Asia, Eastern Europe and Latin America.

These transitions, however, wouldn't have been possible unless the military acquiesced to these reforms. In Spain and Russia, the military attempted a coup before recognizing the dominant political desire for change. China instead opted to annihilate reformers.

The idea that the military has veto power might seem incongruous to citizens of consolidated democracies. But in transitioning societies, the military often has the final say on reform due to its symbiotic relationship with the government. In contrast, consolidated democracies benefit from the logic of Clausewitz's trinity, where there is a clear division of labor between the people, the government and the military. In this model, the people elect governments to make decisions for the overall good of society while furnishing the recruits for the military tasked with executing government policy and safeguarding public liberty. The trinity, though, is premised on a human military with a moral character that flows from its origins among the people. The military can refuse orders that harm the public or represent bad policy that might lead to the creation of a dictatorship.

ML risks destabilizing the trinity by removing the human element of the armed forces and subsuming them directly into the government. Developments in ML have created new weapons platforms that rely less and less on humans, as new warfighting machines are capable of provisioning security or assassinating targets with only perfunctory human supervision. The framework of machines acting without human involvement risks creating a dystopian future where political reform will become improbable, because governments will no longer have human militaries restraining them from opening fire on reformers. These dangers are evident in China, where the government lacks compunction in deploying ML platforms to monitor and control its population while also committing genocide.

In the public domain, there is some recognition of the dangers of misusing ML for national security. But there hasn't been a substantive debate about how ML might shape democratic governance and reform. There isn't a nefarious reason for this. Rather, it's that many of those who develop ML tools have STEM backgrounds and lack an understanding of broader social issues. From the government side, leaders in agencies funding ML research often don't know how to consume ML outputs, relying instead on developers to explain what they're seeing. The government's measure for success is whether it keeps society safe. Throughout this process, civilians operate as bystanders, unable to interrogate the design process for ML tools used for war.

In the short term, this is fine because there arent entire armies made of robots, but the competitive advantage offered by mechanized fighting not limited by frail human bodies will make intelligent machines essential to the future of war. Moreover, these terminators will need an entire infrastructure of satellites, sensors, and information platforms powered by ML to coordinate responses to battlefield advances and setbacks, further reducing the role of humans. This will only amplify the power governments have to oppress their societies.

The risk that democratic societies might create tools that lead to this pessimistic outcome is high. The United States is engaged in an ML arms race with China and Russia, both of which are developing and exporting their own ML tools to help dictatorships remain in power and freeze history.

There is space for civil society to insert itself into ML, however. ML succeeds and fails based on the training data used for algorithms, and civil society can collaborate with governments to choose training data that optimizes the warfighting enterprise while balancing the need to sustain dissent and reform.

By giving machines moral safeguards, the United States can create tools that instead strengthen democracy's prospects. Fukuyama's thesis is only valid in a world where humans can exert their agency and reform their governments through discussion, debate and elections. The U.S., in the course of confronting its authoritarian rivals, shouldn't create tools that hasten democracy's end.

Christopher Wall is a social scientist for Giant Oak, a counterterrorism instructor for Naval Special Warfare, a lecturer on statistics for national security at Georgetown University and the co-author of the recent book, The Future of Terrorism: ISIS, al-Qaeda, and the Alt-Right. Views of the author do not necessarily reflect the views of Giant Oak.

Read the original:
As machine learning becomes standard in military and politics, it needs moral safeguards | TheHill - The Hill


Leveraging AI and machine learning in RAN automation – Ericsson

The left side of Figure 3 illustrates how the task of efficiently operating a RAN to best utilize the deployed resources (base stations or frequencies) can be divided into different control loops acting according to different time scales and with different scopes. A successful RAN automation solution will require the use of AI/ML technologies [6] in all of these control loops to ensure functionality that can work autonomously in different deployments and environments in an optimal way.

The two fastest control loops (purple and orange) are related to traditional RRM. Examples include scheduling and link adaptation in the purple (layer 1 and 2) control loop and bearer management and handover in the orange (layer 3) control loop. Functionality in these control loops has already been autonomous for quite some time, with the decision-making based on internal data for scheduling and handover in a timeframe ranging from milliseconds (ms) to several hundred ms, for example. From an architecture perspective, these control loops are implemented in the RAN network function domain shown in Figure 3.

The slower control loops shown on the left side of Figure 3 represent network design (dark green) and network optimization and assurance (light green). In contrast to the two fast control loops, these slower loops are to a large degree manual at present. Network design covers activities related to the design and deployment of the full RAN, while network optimization and assurance covers observation and optimization of the deployed functionality. Network optimization and assurance is done by observing the performance of a certain functionality and changing the exposed configuration parameters to alter the behavior of the deployed functionality, so that it assures the intents in the specific environment where it has been deployed. From an architecture perspective, these control loops are implemented in the RAN automation application domain [7].

The green control loops encompass the bulk of the manual work that will disappear as a result of RAN automation, which explains why AI/ML is already being implemented in those loops [8]. It would, however, be a mistake to restrict the RAN automation solution to just the green control loops. AI/ML also makes it possible to enhance the functionality in the purple and orange control loops to make them more adaptive and robust for deployment in different environments. This, in turn, minimizes the amount of configuration optimization that is needed in the light-green control loop.

While the control loops in Figure 3 are all internal to the RAN domain, some of the functionality in a robust RAN automation solution will depend on resources from other domains. That functionality would be implemented as part of the RAN automation application domain. The RAN automation platform domain will provide the services required for cross-domain interaction.

One example of RAN automation functionality in the RAN automation application domain is the automated deployment and configuration of ERAN. In ERAN deployments, AI/ML is used to cluster basebands that share radio coverage and therefore should be configured to coordinate functionality such as scheduling [8]. To do this, data from several network functions needs to be clustered to understand which of them share radio coverage. This process requires topology and inventory information that will be made available to the rApps through the services exposed by the network automation platform over R1.

The outcome of the clustering is a configuration of the basebands that should coordinate, as well as a request for resources from the transport domain. This information can also be obtained from services provided by transport automation applications exposing services through the R1 framework. When designing the rApp for clustering, it is beneficial to have detailed knowledge about the implementation of coordination functionality in the RAN network function to understand how the clustering analysis in the rApp should be performed.
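As an illustration only (this is not Ericsson's implementation, which the article does not detail), the clustering step might look like the following sketch. The pairwise coverage-overlap matrix stands in for whatever measurements the rApp would obtain via R1 services, and the threshold is an invented parameter:

```python
# Hypothetical sketch: group basebands that share radio coverage so they can
# be configured for coordinated scheduling. The overlap metric stands in for
# real measurements (e.g. inter-cell handover statistics); all numbers are
# illustrative.

import numpy as np

def cluster_basebands(overlap: np.ndarray, threshold: float = 0.3) -> list[set[int]]:
    """Union-find clustering: basebands i and j end up in the same cluster
    when their pairwise coverage overlap exceeds `threshold`."""
    n = overlap.shape[0]
    parent = list(range(n))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if overlap[i, j] > threshold:
                parent[find(i)] = find(j)  # merge the two clusters

    clusters: dict[int, set[int]] = {}
    for i in range(n):
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

# Toy overlap matrix for four basebands: 0 and 1 overlap strongly,
# as do 2 and 3; the two pairs do not overlap with each other.
overlap = np.array([
    [1.0, 0.6, 0.1, 0.0],
    [0.6, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.5],
    [0.0, 0.1, 0.5, 1.0],
])
print(cluster_basebands(overlap))  # [{0, 1}, {2, 3}]
```

In a real rApp the overlap data would come from the topology and inventory services exposed over R1, and the clustering method itself would be whatever ML technique the vendor has trained; the union-find pass above is only a stand-in to make the data flow concrete.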

An example of RAN automation functionality in the network function domain is AI/ML-based link adaptation, where AI/ML-based functionality optimizes the selection of the modulation and coding scheme for either maximum throughput or minimum delay, removing the block error rate target parameter and thereby the need for configuration-based optimization. Another example is secondary carrier prediction [8], where AI/ML is used to learn coverage relations between different carriers for a certain deployment. Both of these examples use data that is internal to the network function.
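To make the link-adaptation example concrete, here is a toy sketch (not the product functionality the article describes) of how a learned success-probability model can replace a configured block-error-rate target: the scheduler simply picks the MCS with the highest expected throughput. The rates, SNR thresholds and logistic "model" are all invented stand-ins for an estimator trained on real link data:

```python
# Toy ML-driven link adaptation: predict decode-success probability per
# modulation-and-coding scheme (MCS) from the channel state, then choose the
# MCS maximizing expected throughput. No BLER target parameter is needed.

import numpy as np

MCS_RATES = np.array([0.5, 1.0, 2.0, 3.0, 4.5])        # spectral efficiency, bits/s/Hz
MCS_SNR_REQ = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # rough SNR needed, dB

def success_prob(snr_db: float) -> np.ndarray:
    """Pretend-learned model: P(decode success) per MCS given current SNR."""
    return 1.0 / (1.0 + np.exp(-(snr_db - MCS_SNR_REQ)))

def select_mcs(snr_db: float) -> int:
    """Pick the MCS that maximizes expected throughput, rate * P(success)."""
    return int(np.argmax(MCS_RATES * success_prob(snr_db)))

for snr in (3.0, 12.0, 22.0):
    print(f"SNR {snr} dB -> MCS {select_mcs(snr)}")   # 0, 2, 4
```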

As the objective of RAN automation is to replace the manual work of developing, installing, deploying, managing, optimizing and retiring RAN functions, it is certain to have a significant impact on the way that the lifecycle management (LCM) of RAN software works. Specifically, as AI/ML has proven to be an efficient tool to develop functionality for RAN automation, different options for training and inference of ML models will drive corresponding options for the LCM of software with AI/ML-based functionality.

Figure 4 presents a process view of the LCM of RAN components, ranging from the initial idea for a RAN component to its eventual retirement. A RAN component is defined as either a pure software entity or a hardware/software (physical network function) entity. As the different steps in the LCM structure include the manual work associated with RAN operations, it is a useful model to describe how RAN automation changes the processes, reduces the manual effort and improves the quality and performance of the RAN.

Read more from the original source:
Leveraging AI and machine learning in RAN automation - Ericsson


Board of the International Organisation of Securities Commissions (IOSCO) publishes final guidance report for artificial intelligence and machine…

Subsequent to the consultation report published by IOSCO in June 2021, the final guidance report (IOSCO Report) for artificial intelligence (AI) and machine learning (ML) entitled The use of artificial intelligence and machine learning by market intermediaries and asset managers was released on 7 September 2021.

Per the IOSCO Report, market intermediaries and asset managers tend to achieve cost reductions and improve efficiency through the use of AI and ML. While market intermediaries, asset managers and investors receive benefits including efficiency enhancement, cost reduction and resource sparing, a concern is the amplification of risks that affect the interests of consumers and other market participants.

In light of the above, the IOSCO Report sets out some recommended measures to ensure that the interests of investors and other relevant stakeholders are protected. Further, Annex 1 and Annex 2 to the IOSCO Report outline the regulators' responses to the challenges arising from AI and ML, and the guidance issued by supranational bodies, respectively.

Read the rest here:
Board of the International Organisation of Securities Commissions (IOSCO) publishes final guidance report for artificial intelligence and machine...


Publisher Discovery relaunches affiliate analysis tools with AI and machine learning – Pro News Report

(ProNewsReport Editorial):- London, United Kingdom, Oct 26, 2021 (Issuewire.com): Publisher Discovery is excited to announce the launch of a brand new AI-driven platform incorporating advanced machine learning technology. The development of Publisher Discovery's competitor analysis tools followed the initial trialling of the in-network application in conjunction with Affiliate Future, a leading UK affiliate network. This gave their merchants access to the first application of AI and machine learning in the management and recruitment of new affiliates.

Results from the in-network technology have shown hugely increased user efficiency in internal affiliate recruitment, saving hours of account management time each week. The effectiveness of the technology led to the award of Highly Commended in the Best Use of AI category at this year's Performance Marketing Awards in London.

"This really proved the power of AI in affiliate recruitment," Publisher Discovery CEO Tom Bourne explained. "The ability to use AI to analyse the network data and, from that, to match the best affiliates to the right merchant programmes has proven its worth in recruitment time savings as well as in commercial terms."

John Vickers of Affiliate Future, which trialled the initial installation of Publisher Discovery's Cloudfind app, said: "This has been a great proof of concept for us and has helped our clients to achieve some impressive results really quickly. We've been really looking forward to adding the new tools analysing the external competitors, and initial tests have shown the same impressive results."

The new platform uses AI and machine learning to help advertisers analyse their competitors' affiliate programmes, understand more about the publishers and find their best affiliates to recruit. This will provide a much simplified and far more intuitive platform, enabling quick searches to add to your recruitment process.

The new technology launch has been complemented by an upgraded UI in the platform and is reflected in the new branding and website launched just recently.

Publisher Discovery showed these new technologies at Affiliate Summit East in New York last week and will be giving live demonstrations on their stand at PerformanceIn.live later this year. You can read more on the website at publisherdiscovery.com.

See the original post:
Publisher Discovery relaunches affiliate analysis tools with AI and machine learning - Pro News Report


Humans in the loop: it takes people to ensure artificial intelligence success – ZDNet

When it comes to artificial intelligence, don't try to go it alone. IT departments, no matter how skilled and ready developers and data scientists may be, can only go so far past proofs of concept. It takes people -- from all corners of the enterprise, working collaboratively -- to deliver AI success.

In discussing lessons learned about AI in recent years, industry experts point to the need to get the people from across the enterprise on board. "A copious amount of training data and elastic compute power are not the cornerstones for successful AI implementations," says Sreedhar Bhagavatheeswaran, global head of Mindtree Consulting.

That cornerstone of AI success is people -- not only AI skills, but involvement from all disciplines, from marketing to supply chain management. In recent years -- and especially over the past year, as the need for automated or unattended processes accelerated -- "enterprises learned that they must get stakeholder buy-in, with a true champion for AI within the organization's leadership team," says Dan Simion, VP of AI and analytics at Capgemini Americas.

A concerted AI development and deployment effort also needs "strong governance, internal marketing within the company, and proper training to fuel further adoption of the AI initiatives across the business' functional areas," he adds. The key is being able to showcase the valuable insights being generated by these models.

In efforts to make AI pervasive, "enterprises are now conscious of critical factors such as identifying the right journeys and use cases where AI intervention can make a business impact, operationalizing AI by establishing AI operations and governance mechanisms, and blending the right proportion of data engineering and AI talent," says Bhagavatheeswaran.

The catch, of course, is many of these efforts get undermined by organizational politics, or simple inertia. AI seems glamorous and promising, but acceptance and adoption takes time. "Companies should plan for the time and effort needed to conduct training sessions, and continuously reinforce the use and benefits of the AI system over the traditional methods," advises Nitin Aggarwal, vice president data analytics at The Smart Cube. "Sharing and celebrating small and frequent wins is a proven catalyst."

AI also needs to have a friendly face, rather than perceptions of robots, software or otherwise, taking the reins of the company. "Make the end user interface business-friendly and intuitive," Aggarwal suggests. "The lower the burden on the end user to understand the insights in terms of 'so what,' the higher the chances of them actually using the system." If possible, he advises having an MLOps team on hand "to ensure the deployed solutions continue to work as expected."

To date, the areas of the business having the most success with AI "are those with direct connections to customer interactions -- such as marketing and sales," says Simion. "These areas are constantly looking to drive revenue, and are more open to innovative new methods and tactics to improve efficiencies, which AI offers." Aggarwal agrees, noting that areas seeing the most initial success with AI include "marketing mix optimization, pricing and promotions ROI improvement, demand forecasting, CRM and hyper-personalization." Lately however, AI's power has also been turned on areas such as supply chain risk management, he adds.

AI is more than technology -- it's new ways of thinking about problems and opportunities. Everyone needs to have access to this powerful new tool, Simion urges. "Make sure everyone across the enterprise is using the same technology stack, so each functional area can have access to the same lessons and insights. Consistency of the technology and the value it can bring is what makes the most difference."

AI adoption also hinges on perceptions that it is fair and accurate, making the fight against AI bias another challenge proponents need to address head-on. Start with the data, Aggarwal states. "As AI algorithms learn from data, make a conscious effort to collect and feed richer data that is corrected for bias and is fairly representative of all classes," he advises.
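As a rough illustration of that representativeness check (not from the article; the group names and shares are invented), one common approach is to compare each group's share of the training set against a reference population and derive reweighting factors:

```python
# Hypothetical sketch: weight each group by (share in population) /
# (share in training data), so under-represented groups count more during
# training. All figures below are illustrative.

from collections import Counter

def representation_weights(train_groups: list[str],
                           population_share: dict[str, float]) -> dict[str, float]:
    """Return a per-group weight that corrects the training set's skew
    toward or away from each group's share of the reference population."""
    counts = Counter(train_groups)
    n = len(train_groups)
    return {g: population_share[g] / (counts[g] / n) for g in population_share}

train_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50   # skewed sample
population = {"A": 0.5, "B": 0.3, "C": 0.2}             # reference shares

for group, w in representation_weights(train_groups, population).items():
    print(f"group {group}: weight {w:.2f}")
# group A: weight 0.71, group B: weight 1.20, group C: weight 4.00
```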

In most cases, "when you deploy AI models into production at scale, you have automatic tools to monitor the results in real-time," says Simion. "When the AI models are outside of their pre-set boundaries and limits, human intervention is necessary. This is done to ensure AI is performing as expected to drive efficiencies for the business, and it also is done to ensure any issues with AI bias or trust are caught and corrected."
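A minimal sketch of that kind of boundary monitoring, assuming an invented model name, metric and thresholds, might look like this:

```python
# Hypothetical sketch of "pre-set boundaries" monitoring: track a live metric
# for each deployed model and flag it for human review when the metric leaves
# its allowed band. Thresholds and names are invented for illustration.

from dataclasses import dataclass

@dataclass
class ModelMonitor:
    name: str
    lower: float   # lowest acceptable value of the tracked metric
    upper: float   # highest acceptable value

    def check(self, metric: float) -> bool:
        """Return True while the model stays inside its boundaries;
        otherwise signal that human intervention is needed."""
        ok = self.lower <= metric <= self.upper
        if not ok:
            print(f"[ALERT] {self.name}: metric {metric:.3f} outside "
                  f"[{self.lower}, {self.upper}] -- route to human review")
        return ok

monitor = ModelMonitor("churn-model", lower=0.70, upper=0.95)  # e.g. daily AUC
monitor.check(0.82)   # fine
monitor.check(0.55)   # triggers the alert
```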

It's critical that humans be kept in the loop, says Aggarwal. "Sometimes human decision making alongside the algorithm is helpful to understand different responses and identify any inherent errors or biases. Human judgement can bring in more awareness, context, understanding and research ability to guide fair decision making. Debiasing should be looked at as an ongoing commitment."

As part of this, companies may benefit by establishing an "AI governance council that reviews not only the business results influenced by their AI initiatives, but is also responsible for explaining the results of specific use cases when needed," says Bhagavatheeswaran.

IT leaders and staff need to receive more training and awareness to alleviate AI bias as well. "It also ties into how staff performance is evaluated and how incentives are aligned," says Aggarwal. "If creating the most accurate AI system is the key result area for a data scientist, chances are that you will get a highly accurate system, but one which may not be the most responsible. Similarly, for all staff, an important training should be on where to look for and how to detect biases in AI, and then reward teams who are able to find and recognize flaws."

Go here to see the original:
Humans in the loop: it takes people to ensure artificial intelligence success - ZDNet


Vlodomyr Kindratenko Named Director of the Center for Artificial Intelligence Innovation at NCSA – HPCwire

Oct. 29, 2021 NCSA is pleased to announce the appointment of Dr. Vlodomyr Kindratenko as Director of the Center for Artificial Intelligence Innovation (CAII). In this new role, he will be responsible for providing the overall leadership, oversight and management of the center, including developing partnerships and projects at regional and national levels, and overseeing day-to-day operations. Dr. Kindratenko will also be fostering and actively participating in a vigorous research program, with responsibilities for which he is especially adept thanks to his prior experience.

NCSA Engagement Director John Towns says Dr. Kindratenko, a seasoned NCSA researcher with prior NCSA leadership experience, is poised to expand the CAII's ability to forge new relationships while strengthening existing ones.

"Vlad comes into this role transitioning from another NCSA leadership role, and as the former lead for the NCSA Innovative Systems Lab," Towns says. "He has deep connections with campus, with a history of collaboration and teaching. We look forward to expanding and deepening those relationships with a focus on the development of AI methods and applications of them to academic and industry challenges."

In addition to his role at NCSA, Dr. Kindratenko maintains positions as adjunct associate professor in the Department of Electrical and Computer Engineering and a research associate professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. He currently serves as a department editor of IEEE Computing in Science and Engineering magazine and an associate editor of the International Journal of Reconfigurable Computing. Dr. Kindratenko's work has been funded by the National Science Foundation, NASA, Office of Naval Research, Department of Energy, and industry. He has published over 70 papers in peer-reviewed scientific journals and conference proceedings and holds five U.S. patents. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).

Dr. Kindratenko's research interests include high-performance computing, special-purpose computing architectures, cloud computing and machine learning. His combined interest and experience will contribute to his ability to facilitate collaboration across multiple disciplines and support advancements in AI, leveraging NCSA's cutting-edge technology and expertise.

"I am very excited to become the CAII Director, and I am looking forward to growing the Center within NCSA and connecting it with the AI-related activities carried out by the U of I faculty," Kindratenko says. "Main goals for the Center include finding ways to bring together the U of I AI research community for a chance to collaborate while aligning academic research with industry challenges and opportunities, and providing students with ways to learn and work in the AI domain. The Center will also partner with leading researchers and technology developers to bring state-of-the-art AI capabilities to the U of I research community."

About NCSA

The National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation's science enterprise. At NCSA, University of Illinois faculty, staff, students and collaborators from around the globe use these resources to address research challenges for the benefit of science and society. NCSA has been advancing many of the world's industry giants for over 35 years by bringing industry, researchers and students together to solve grand challenges at rapid speed and scale.

Source: NCSA

Read more:
Vlodomyr Kindratenko Named Director of the Center for Artificial Intelligence Innovation at NCSA - HPCwire


'Yeah, we're spooked': AI starting to have big real-world impact, says expert – The Guardian

A scientist who wrote a leading textbook on artificial intelligence has said experts are spooked by their own success in the field, comparing the advance of AI to the development of the atom bomb.

Prof Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said most experts believed that machines more intelligent than humans would be developed this century, and he called for international treaties to regulate the development of the technology.

"The AI community has not yet adjusted to the fact that we are now starting to have a really big impact in the real world," he told the Guardian. "That simply wasn't the case for most of the history of the field. We were just in the lab, developing things, trying to get stuff to work, mostly failing to get stuff to work. So the question of real-world impact was just not germane at all. And we have to grow up very quickly to catch up."

Artificial intelligence underpins many aspects of modern life, from search engines to banking, and advances in image recognition and machine translation are among the key developments in recent years.

Russell, who in 1995 co-authored the seminal book Artificial Intelligence: A Modern Approach, and who will be giving this year's BBC Reith lectures, entitled Living with Artificial Intelligence, which begin on Monday, says urgent work is needed to make sure humans remain in control as superintelligent AI is developed.

"AI has been designed with one particular methodology and sort of general approach. And we're not careful enough to use that kind of system in complicated real-world settings," he said.

For example, asking AI to cure cancer as quickly as possible could be dangerous. "It would probably find ways of inducing tumours in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs," said Russell. "And that's because that's the solution to the objective we gave it; we just forgot to specify that you can't use humans as guinea pigs and you can't use up the whole GDP of the world to run your experiments and you can't do this and you can't do that."

Russell said there was still a big gap between the AI of today and that depicted in films such as Ex Machina, but a future with machines that are more intelligent than humans was on the cards.

"I think numbers range from 10 years for the most optimistic to a few hundred years," said Russell. "But almost all AI researchers would say it's going to happen in this century."

One concern is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. "It's something that's unfolding now," he said. "If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input."

The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.

Have AI researchers become spooked by their own success? "Yeah, I think we are increasingly spooked," Russell said.

"It reminds me a little bit of what happened in physics, where the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms," he said, noting that the experts always stressed the idea was theoretical. "And then it happened and they weren't ready for it."

The use of AI in military applications such as small anti-personnel weapons is of particular concern, he said. "Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city," said Russell.

Russell believes the future for AI lies in developing machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans, rather like a butler, on any decision. But the idea is complex, not least because different people have different and sometimes conflicting preferences, and those preferences are not fixed.

Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias. He said EU legislation that would ban impersonation of humans by machines should be adopted around the world.

Russell said he hoped the Reith lectures would emphasise that there is a choice about what the future holds. "It's really important for the public to be involved in those choices, because it's the public who will benefit or not," he said.

But there was another message, too. "Progress in AI is something that will take a while to happen, but it doesn't make it science fiction," he said.

See the original post:
'Yeah, we're spooked': AI starting to have big real-world impact, says expert - The Guardian


Expert warns that artificial intelligence could soon be able to ‘hack’ human beings | TheHill – The Hill

A world-renowned historian and philosopher is warning that humanity needs to begin regulating artificial intelligence and data collection globally or risk being hacked.

In an upcoming interview with CBS's 60 Minutes, bestselling author Yuval Harari said the nations and large corporations that control the biggest share of data on consumers will control the world in the future, noting that the raw data is worth much more than money.

"The world is increasingly kind of cut up into spheres of data collection, of data harvesting. In the Cold War, you had the Iron Curtain. Now we have the Silicon Curtain, that the world is increasingly divided between the USA and China," Harari told Anderson Cooper.

Harari said the increasing sophistication of AI used in algorithms concentrated in the hands of a powerful few could ultimately result in the manipulation of people.


"Netflix tells us what to watch and Amazon tells us what to buy. Eventually within 10 or 20 or 30 years such algorithms could also tell you what to study at college and where to work and whom to marry and even whom to vote for," Harari told Cooper.

"To hack a human being is to get to know that person better than they know themselves. And based on that, to increasingly manipulate you," he said.

Harari emphasized the need for countries to work together to put concrete regulations in place to avoid such a scenario and ensure data and AI aren't used to exercise control over the public.

One of his recommendations is to make sure the data isn't consolidated in one place, adding, "That's a recipe for a dictatorship."

The author noted the rise of AI can also be incredibly beneficial to society, but the question is what is being done with it and who regulates it.

Harari is the author of the 2014 global bestseller Sapiens.


Visit link:
Expert warns that artificial intelligence could soon be able to 'hack' human beings | TheHill - The Hill


Artificial Intelligence in Healthcare Market worth $67.4 billion by 2027 – Exclusive Report by MarketsandMarkets – Yahoo Finance

CHICAGO, Oct. 29, 2021 /PRNewswire/ -- According to the new market research report "Artificial Intelligence in Healthcare Market by Offering (Hardware, Software, Services), Technology (Machine Learning, NLP, Context-aware Computing, Computer Vision), Application, End User and Geography - Global Forecast to 2027", published by MarketsandMarkets, the market is projected to grow from USD 6.9 billion in 2021 to USD 67.4 billion by 2027; it is expected to grow at a CAGR of 46.2% from 2021 to 2027.
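As a quick sanity check (the calculation is ours, not part of the release), the stated growth rate follows directly from the endpoint figures:

```python
# USD 6.9B (2021) growing to USD 67.4B (2027) implies a compound annual
# growth rate of about 46%, matching the release's stated 46.2%.
start, end, years = 6.9, 67.4, 2027 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR = {cagr:.1%}")   # CAGR = 46.2%
```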


Ask for PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=54679303

The key factors fueling the growth of the market include the influx of large and complex healthcare datasets, growing need to reduce healthcare costs, improving computing power and declining hardware cost, rising number of partnerships and collaborations among different domains in the healthcare sector, and surging need for improved healthcare services due to the imbalance between the health workforce and patients. Additionally, the growing potential of AI-based tools for elderly care, increasing focus on developing human-aware AI systems, and rising potential of AI technology in genomics, drug discovery, and imaging & diagnostics to fight COVID-19 are expected to create growth opportunities for the artificial intelligence in healthcare market.

The software segment is projected to account for the largest share of the artificial intelligence in healthcare market during the forecast period.

Many companies are developing software solutions for various healthcare applications; this is the key factor complementing the growth of the software segment. Strong demand among software developers (especially in medical centers and universities) and widening applications of AI in the healthcare sector are among the prime factors complementing the growth of the AI platform within the software segment. Google AI Platform, TensorFlow, Microsoft Azure, Premonition, Watson Studio, Lumiata, and Infrrd are some of the top AI platforms.


Browse in-depth TOC on "Artificial Intelligence in Healthcare Market"

163 Tables | 52 Figures | 252 Pages

Inquiry Before Buying: https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=54679303

The machine learning segment is expected to grow at the highest CAGR during the forecast period

The increasing adoption of machine learning technology (especially deep learning) in various healthcare applications such as inpatient monitoring & hospital management, drug discovery, medical imaging & diagnostics, and cybersecurity is driving the adoption of machine learning technology in the AI in healthcare market.

The medical imaging & diagnostics segment is expected to grow at the highest CAGR of the artificial intelligence in healthcare market during the forecast period.

The high growth of the medical imaging and diagnostics segment can be attributed to factors such as the presence of a large volume of imaging data, advantages offered by AI systems to radiologists in diagnosis and treatment management, and the influx of a large number of startups in this segment.

The North America region is expected to hold the largest share of the artificial intelligence in healthcare market during the forecast period.

Increasing adoption of AI technology across the continuum of care, especially in the US, and high healthcare spending combined with the onset of COVID-19 pandemic accelerating the adoption of AI in hospital and clinics across the region are the major factors driving the growth of the North American market.

The key players operating in the artificial intelligence in healthcare market include Intel (US), Koninklijke Philips (Netherlands), Microsoft (US), IBM (US), and Siemens Healthineers (Germany).

Related Reports:

Artificial Intelligence in Manufacturing Market by Offering (Hardware, Software, and Services), Technology (Machine Learning, Computer Vision, Context-Aware Computing, and NLP), Application, End-user Industry and Region - Global Forecast to 2026

Artificial Intelligence in Cybersecurity Market by Offering (Hardware, Software, and Service), Deployment Type, Security Type, Technology (ML, NLP, and Context-Aware), Application (IAM, DLP, and UTM), End User, and Geography- Global Forecast to 2026

About MarketsandMarkets

MarketsandMarkets provides quantified B2B research on 30,000 high-growth niche opportunities/threats which will impact 70% to 80% of worldwide companies' revenues. Currently servicing 7,500 customers worldwide, including 80% of global Fortune 1000 companies as clients. Almost 75,000 top officers across eight industries worldwide approach MarketsandMarkets for their pain points around revenue decisions.

Our 850 full-time analysts and SMEs at MarketsandMarkets are tracking global high-growth markets following the "Growth Engagement Model GEM". The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write "Attack, avoid and defend" strategies, and identify sources of incremental revenues for both the company and its competitors. MarketsandMarkets is now coming up with 1,500 MicroQuadrants (positioning top players across leaders, emerging companies, innovators, and strategic players) annually in high-growth emerging segments. MarketsandMarkets is determined to benefit more than 10,000 companies this year for their revenue planning and help them take their innovations/disruptions early to the market by providing them research ahead of the curve.

MarketsandMarkets's flagship competitive intelligence and market research platform, "Knowledge Store" connects over 200,000 markets and entire value chains for deeper understanding of the unmet insights along with market sizing and forecasts of niche markets.

Contact:

Mr. Aashish Mehra
MarketsandMarkets INC.
630 Dundee Road, Suite 430
Northbrook, IL 60062
USA: +1-888-600-6441
Email: sales@marketsandmarkets.com
Research Insight: https://www.marketsandmarkets.com/ResearchInsight/artificial-intelligence-healthcare-market.asp
Visit Our Web Site: https://www.marketsandmarkets.com
Content Source: https://www.marketsandmarkets.com/PressReleases/artificial-intelligence-healthcare.asp


View original content:https://www.prnewswire.com/news-releases/artificial-intelligence-in-healthcare-market-worth-67-4-billion-by-2027--exclusive-report-by-marketsandmarkets-301411884.html

SOURCE MarketsandMarkets

Follow this link:
Artificial Intelligence in Healthcare Market worth $67.4 billion by 2027 - Exclusive Report by MarketsandMarkets - Yahoo Finance


Machine learning can revolutionize healthcare, but it also carries legal risks – Healthcare IT News

As machine learning and artificial intelligence have become ubiquitous in healthcare, questions have arisen about their potential impacts.

And as Matt Fisher, general counsel for the virtual care platform Carium, pointed out, those potential impacts can, in turn, leave organizations open to possible liabilities.

"It's still an emerging area," Fisher explained in an interview with Healthcare IT News. "There are a bunch of different questions about where the risks and liabilities might arise."

Fisher, who is moderating a panel on the subject at the HIMSS Machine Learning & AI for Healthcare event this December, described two main areas of legal concern: cybersecurity and bias. (HIMSS is the parent organization of Healthcare IT News.)

When it comes to cybersecurity, he said, the potential issues are not so much with the consequence of using the model as with the process of training it. "If big companies are contracting with a healthcare system, we're going to be working to develop new systems to analyze data and produce new outcomes," he said.

And all that data could represent a juicy target for bad actors. "If a health system is transferring protected health information over to a big tech company, not only do you have the privacy issue, there's also the security issue," he said. "They need to make sure their systems are designed to protect against attack."

Some hospitals that are victimized by ransomware have faced the double whammy of lawsuits from affected patients who say health systems should have taken more action to protect their information.

And a breach is a matter of when, not if, Fisher said. Synthetic or de-identified data are options to help alleviate the risk, he said, if the sets are sufficient for training.

"Anyone working with sensitive information needs to be aware of and thinking about that," he said.

Meanwhile, if a device relies on a biased algorithm and results in a less than ideal outcome for a patient, that could possibly lead to claims against the manufacturer or a health organization. Research has shown, for instance, that biased models may worsen the disproportionate impact the COVID-19 pandemic has already had on people of color.

"You've started to see electronic health record-related claims come up in malpractice cases," Fisher pointed out. If a patient experiences a negative result from a device at home, they could bring the claim against a manufacturer, he said.

And a clinician relying on a device in a medical setting who doesn't account for varied outcomes for different groups of people might be at risk of a malpractice lawsuit. "When you have these types of issues widely reported and talked about, it presents more of a favorable landscape to try and find people who have been harmed," said Fisher.

In the next few years, he said, "We'll start to see those claims arise."

Addressing and preventing such legal risks depends on the situation, said Fisher. When an organization is going to subscribe to or implement a tool, he said, it should screen the vendor: Ask questions about how an algorithm was developed and how the system was trained, including whether it was tested on representative populations.

"If it's going to be directly interacting with patient care, consider building [the device's functionality] into informed consent if appropriate," he said.

Fisher said he hopes panel attendees leave the discussion inspired to engage in discourse about the legal risks at their own organizations. "I hope it spurs people to think about it and to start a dialogue," he said.

Ultimately, he said, while an organization can take steps to reduce liability, it's not possible to fully shield yourself from the threat of legal action. "You can never prevent a case from being brought," he said, but "you can try to set yourself up for the best footing."

At the HIMSS Machine Learning & AI for Healthcare event, Fisher will continue the discussion with Baker and McKenzie LLP's Bradford Newman and Dianne Bourque of Mintz Levin Cohn Ferris Glovsky and Popeo PC. Their virtual panel,"Sharing Data and Ethical Challenges: AI and Legal Risks," is scheduled for 2:30 p.m. ET on Tuesday, December 14.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.

See the rest here:
Machine learning can revolutionize healthcare, but it also carries legal risks - Healthcare IT News
