
Bitcoin’s role in the evolution of money with University of Exeter’s Dr. Jack Rogers – CoinGeek

Is Bitcoin just another kind of money? An undergraduate module dedicated to Bitcoin at the University of Exeter sheds light on the question by delving into the history of money. The module, called Bitcoin, Money and Trust, was launched in 2018 in response to high demand from students to learn about Bitcoin.

Dr. Jack Rogers, a senior lecturer in economics at the University of Exeter, launched the Bitcoin module as a precursor to an MSc Fintech course which he leads. He says participation has increased tenfold in recent years, from fewer than 50 students when it launched to 700 this year.

Despite the impressive turnout, Jack believes the big number is probably partly driven for the wrong reasons: all the various hype and the sense that you could get rich from this.

On this week's episode of CoinGeek Conversations, Charles Miller talks to Jack about the University of Exeter's Bitcoin teaching, the evolution of money and the role Bitcoin plays.

Jack points to the emergence of central banking as a step change in history. He quotes Felix Martin, the author of Money: The Unauthorized Biography, whom he recalls describing a compromise power structure between the central bank and the government. For the first time in centuries, a decoupling between money and the state is happening before our eyes, Charles suggests. Jack agrees, saying a new technology that allows people to potentially pay each other without using existing fiat-based systems has indeed raised fundamental questions. He refers to the author's view on cryptocurrency and how it has led to a disruption in the payment system that central banks have controlled for a long time.

Jack believes the disruption in central banking was inevitable eventually: "I think this stuff, central bank digital currencies and things that you see now, maybe it was coming anyway. I think the emergence of Bitcoin and all the hype and everything has kind of brought that forward."

Based on Jack's comments, it's safe to say that the future of money will depend on the outcome of the competition between blockchain-based payment systems. For now, he admits no one is certain as to where Bitcoin is heading.

"One of my students did a great dissertation on this, speculating that in 20 years' time, will there be loads of different types of money? What does it look like? I mean, no one can really say," Jack said.

Dr. Jack Rogers is co-authoring a textbook alongside Brendan Lee and Neil Smith. The book will be out by the end of 2023.

Hear the whole of Dr. Jack Rogers' interview in this week's CoinGeek Conversations podcast or catch up with other recent episodes:

You can also watch the podcast video on YouTube.

Please subscribe to CoinGeek Conversations; this is part of the podcast's ninth season. If you're new to it, there are plenty of previous episodes to catch up with.

Here's how to find them:

Search for CoinGeek Conversations wherever you get your podcasts

Subscribe on iTunes

Listen on Spotify

Visit the CoinGeek Conversations website

Watch on the CoinGeek Conversations YouTube playlist

New to Bitcoin? Check out CoinGeek's Bitcoin for Beginners section, the ultimate resource guide to learn more about Bitcoin, as originally envisioned by Satoshi Nakamoto, and blockchain.

Continue reading here:
Bitcoin's role in the evolution of money with University of Exeter's Dr. Jack Rogers - CoinGeek

Read More..

Why It's Important To Build Stablecoins On Bitcoin – Bitcoin Magazine

This is a transcribed excerpt of the Bitcoin Magazine Podcast, hosted by P and Q. In this episode, they are joined by Lightning Labs head of business development, Ryan Gentry, to talk about how the company is building the ability to use stablecoins on the Lightning Network with their new Taro protocol.

Watch This Episode On YouTube Or Rumble

Listen To The Episode Here:

Q: I want to talk about stablecoins with you and have this conversation of: Are they necessary within the Bitcoin ecosystem and why or why not?

Ryan Gentry: It's a great question, and it's one that really drove our decision to focus on Taro this year.

Rewind to Bitcoin 2021, when we got news of the El Salvador bitcoin legal tender law and we got Jack Mallers' amazing presentation. That kick-started this massive wave of emerging markets adoption of Lightning, Lightning apps and Lightning wallets everywhere from Brazil to Argentina to El Salvador, Nigeria, Ghana, South Africa, Vietnam, like all around the world.

I think the coolest part of my job is that I get to work with Bitcoin entrepreneurs and Bitcoin developers all around the world who are all trying to get Lightning adopted. In discussing with them all of last year as they were getting hit with tons of new signups, tons of new adoption, we were very excited.

As the year started coming to a close, we kept hearing this repeated thing from these entrepreneurs in emerging markets that was like, OK, this has been the best year ever, huge adoption, numbers all up and to the right and I have now successfully acquired all the Bitcoiners, like in Chiang Mai, Vietnam. Neutron Pay: We got all the Bitcoiners. We have acquired all of 'em. They're all using our app. It's amazing. This is great. The next tier of users that we're looking to acquire, they want the dollar.

That was just something that we kept hearing from all around the world, from South America to Africa to Southeast Asia: that there's this next group of users that we want to onboard into the Bitcoin ecosystem, but using bitcoin for everyday payments was a little too much and they really wanted to use the dollar.

Of course, being at Lightning Labs, by definition, you're a Bitcoin Maximalist. I think everybody on the team is extremely bullish on bitcoin. We wouldn't be building a payments protocol on top of Bitcoin if we weren't bullish on bitcoin the asset. But we just kept hearing from these real people out in the world, trying to solve real problems and trying to grow adoption of their apps, that they really need the dollar.

I think that that's just one of those things where if we can provide the same Lightning experience, we can onboard more users to the Lightning Network, we can help out all of the startups that are pushing Bitcoin infrastructure and bringing users in and trying to educate users on why bitcoin is important, if we can give them this tool that allows them to reach the next 50,000 users, 100,000 users, million users, I think that's an absolute win. I think that's a huge, huge boom to the ecosystem and it's just following user demand, which I think is really important.

One side benefit of this that I think is not discussed very much is because Taro is running on Bitcoin rails, because it requires a full Bitcoin node, because it requires a Lightning node as well, if we give the market what it wants in stablecoins, we are getting the benefit as these new companies adopt of spreading Bitcoin infrastructure and spreading Bitcoin nodes and spreading Lightning nodes and Lightning channels into all these places that maybe they wouldn't necessarily adopt if it was just bitcoin only.

I think that's an underappreciated point, just the spread of Bitcoin infrastructure. Because if we're right about what bitcoin the asset is, then over time, demand for the dollar will decline anyway and this Bitcoin infrastructure will be in place for users to switch their demand from USD to BTC. I think that's a moment that we're all really excited for and really pushing for, but there's just this bridge step in the middle where we gotta give the people what they want.

Follow this link:
Why It's Important To Build Stablecoins On Bitcoin - Bitcoin Magazine

Read More..

Hacker Forfeits $21,849,087 Worth of Bitcoin to Feds, Sentenced to 20 Years in Prison – The Daily Hodl

A Canadian hacker is forfeiting millions of dollars worth of Bitcoin (BTC) to the U.S. government while facing a multi-decade prison sentence.

According to a new press release from the Department of Justice (DOJ), 35-year-old Sebastian Vachon-Desjardins has been sentenced to 20 years in prison for his role in the NetWalker ransomware attacks and also ordered to relinquish $21.5 million worth of the leading digital asset.

Authorities allege that Vachon-Desjardins was the mastermind behind the NetWalker hacks, which targeted victims all over the world by encrypting and exfiltrating information from Windows-based systems and demanding a ransom of BTC in exchange for the decryption of the data.

Among the bad actor's victims were companies, municipalities, law enforcement agencies, and education providers such as colleges, universities and school districts.

The hacker also specifically targeted healthcare providers during the Covid-19 pandemic to take advantage of the troubling times to extort victims, according to the press release.

As stated by Assistant Attorney General Kenneth A. Polite, Jr. of the Justice Department's Criminal Division,

"The defendant identified and attacked high-value ransomware victims and profited from the chaos caused by encrypting and stealing the victims' data.

"Today's sentence demonstrates that ransomware actors will face significant consequences for their crimes and exemplifies the Department's steadfast commitment to pursuing actors who participate in ransomware schemes."

At the time of his arrest in January 2021, authorities searched Vachon-Desjardins' home and seized roughly $544,000 and 719 BTC, worth about $14.5 million at the time of writing.

Featured Image: Shutterstock/Ahmad Kurnia Sandy

Read more here:
Hacker Forfeits $21,849,087 Worth of Bitcoin to Feds, Sentenced to 20 Years in Prison - The Daily Hodl

Read More..

US Senator Says ‘I Love That Bitcoin Can’t Be Stopped’ Citing Concerns About National Debt and Inflation Regulation Bitcoin News – Bitcoin News

U.S. Senator Cynthia Lummis says she loves that bitcoin cannot be stopped and that governments cannot just confiscate the cryptocurrency. "It's actually comforting to know that bitcoin is there," she said, citing concerns about the national debt and inflation.

U.S. Senator Cynthia Lummis (R-WY) talked about bitcoin in an interview with Hard Money's Natalie Brunell, published last week.

Discussing the merits of bitcoin, including how the cryptocurrency cannot be stopped, the senator from Wyoming said:

"I love that it can't be stopped, especially because I'm concerned about our national debt. I'm concerned about inflation."

She continued: "I see people in my home state of Wyoming that are going to food banks now because they need fuel, they need gasoline, to get to their jobs, and they have to choose now between high-priced gasoline and food, so they are going to food banks for their food."

The senator further detailed: "So when we see things that are inflationary, when we see the value of a dollar drop when you go to the grocery store and you come out with one sack of food and used to, for the same price, come out with two, we really need to look at assets that are going to be there for the long term." She noted:

"That's why, to me, it's actually comforting to know that bitcoin is there."

She further explained that in some countries where the government is unstable, it can come to take people's homes and property. The senator stressed:

"Bitcoin is something the government cannot take."

"For people in foreign countries that are living in places that are very insecure, that is definitely a backstop and something that they can comfortably go to bed at night and know it's going to be there in the morning," the senator opined.

The senator from Wyoming introduced a crypto bill titled Lummis-Gillibrand Responsible Financial Innovation Act in June with Senator Kirsten Gillibrand (D-NY).

Providing an update on the bill, Lummis described it as "a very comprehensive piece of legislation, probably too comprehensive given the time remaining in 2022 for the bill to pass." She added: "But what that does is give us more time to get more input on the bill, and we want to embrace that. We want people to provide additional input and ideas and thoughts."

What do you think about the comments by Senator Cynthia Lummis? Let us know in the comments section below.

A student of Austrian Economics, Kevin found Bitcoin in 2011 and has been an evangelist ever since. His interests lie in Bitcoin security, open-source systems, network effects and the intersection between economics and cryptography.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

See original here:
US Senator Says 'I Love That Bitcoin Can't Be Stopped' Citing Concerns About National Debt and Inflation Regulation Bitcoin News - Bitcoin News

Read More..

How Artificial Intelligence Testing is Changing the Cyberworld? – ReadWrite

In the cybersecurity sector, artificial intelligence testing is crucial. This is because AI has the potential to help cybersecurity overcome some of its major obstacles. And there are many obstacles, including the incapacity of many organizations to stay on top of the numerous new risks and attacks that emerge as the internet and technological usage increase.

AI-powered cybersecurity is expected to change how we respond to cyberattacks. Because of its capacity to study and learn from enormous volumes of data, artificial intelligence will be crucial in identifying sophisticated threats. Moreover, as new technologies and gadgets continually come to market, AI testing offers an all-in-one answer to safeguarding them from malicious actors.

This blog will walk you through the difficulties that the cybersecurity sector is now facing, the significance of employing Artificial Intelligence testing to overcome those difficulties and some of the drawbacks of doing so. Finally, we shall examine some actual applications of AI in this area before we conclude.

Cybersecurity describes the processes followed by people or organizations to safeguard their online-connected computer hardware and software against cyberattacks.

The proliferation of emerging digital technologies like the Internet of Things (IoT), the rising frequency and intricacy of cyberattacks, rigorous data protection laws, and an uptick in attacks that target software supply chains are the key drivers of the cybersecurity market.

In addition, the COVID-19 pandemic has increased the incidence of malicious attacks on databases in large enterprises. These attacks necessitate tighter database protection and are fostering the expansion of the cybersecurity industry. Adoption of enterprise security solutions is growing in healthcare, banking, insurance, manufacturing, and financial services.

You may be surprised to learn that human error accounts for 95% of cybersecurity breaches, according to a Google survey. These mistakes might include everything from downloading a virus-filled email attachment to using a weak password to access an unsafe website. According to studies, phishing attacks are among the most common cyber incidents, alongside CEO fraud, stolen computers, and ransomware attacks. The effects of these attacks are stunning, even though they may seem easy to handle. In small and medium businesses (SMBs), data breaches cost, on average, $3.9 million. The top four challenges are large-scale data monitoring, slow turnaround, a lack of threat understanding, and organizational compliance standards.

Cybercrime is always changing, with hackers constantly refining their tactics to cause the most harm, complicating the issues outlined in the previous section. Malware that could modify its source code to evade detection made up 93.67% of the malware observed in 2019. Additionally, within the same year, 53% of consumer PCs and 50% of commercial computers were reinfected. Action and awareness are vital to eradicating such malware at its source.

We should all be aware of the following examples of the typical cybersecurity threats that clever hackers have cleverly created.

When a hacker uses the social engineering technique of phishing, they send you an email that contains a dangerous link. By clicking the link, you could give them access to your computer so they can infect it with a bug and steal all of your personal data.

If your system's hardware and software are not updated to the most recent versions, missing critical security updates becomes a risk. Attackers can introduce backdoors or trojans and gain access to the system.

Data going to and from a network endpoint can be intercepted by malicious actors and decrypted. If they aren't caught in time, they might alter it, tamper with it, or use it illegally.

Since more people are using private and public clouds, unencrypted data stored there is an open invitation to malicious hackers. Data saved in the cloud can also be compromised due to unreliable interfaces or APIs, insufficient access control, and inadequate security architecture.

Mobile devices' internal operating systems may become unreliable due to this dangerous malware, which can reduce their functionality. This frequently occurs as a result of visiting insecure URLs; downloaded applications with security flaws also contribute to mobile malware problems.

One of the most common types of cyberattacks is ransomware, in which the attackers send a virus into people's personal laptops and smartphones to access and use the data on those devices. They then demand a ransom to give you access to it again.

A notable benefit of AI testing is that it significantly reduces some labor-intensive jobs known to be time-consuming, such as security monitoring, which is unquestionably a significant time-sink for IT security experts. AI testing can do this repetitious labor instead of humans having to keep an eye on numerous devices. To enforce proper cybersecurity, decrease attack surfaces, and detect malicious behavior, AI and machine learning testing need to work in concert.

Let's look at some additional crucial areas where AI testing proves to be of the utmost significance:

Over 2.5 quintillion bytes of data are produced each day. Artificial intelligence (AI) technologies can assist in automating data processing and make sense of vast amounts of data that would be impossible for humans to understand in a usable manner. Security experts cannot evaluate and classify every piece of information because firms face millions of risks. As a result, it is tough for security specialists to foresee dangers before they destroy IT systems. Artificial intelligence testing can identify numerous cybersecurity threats and issues without human analysts.

By analyzing how users typically interact with their devices, ML algorithms are intelligent enough to learn and create a pattern of user behavior.

AI testing flags the user as suspicious and can block them if it notices behaviors that are out of the ordinary, such as a change in the user's typing speed or attempts to access the system at odd times.
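
To make the idea concrete, here is a minimal sketch of behavior-based anomaly flagging, assuming two hypothetical features (typing speed and login hour) and an off-the-shelf isolation-forest detector; the feature set, thresholds and library choice are illustrative, not any particular vendor's implementation.

```python
# A rough sketch of behavior-based anomaly flagging. The two features
# (typing speed, login hour) and the model choice are illustrative
# assumptions, not any particular product's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Typical sessions: ~60 wpm typing speed, logins during working hours.
normal_sessions = np.column_stack([
    rng.normal(60, 8, 500),    # typing speed (words per minute)
    rng.integers(8, 18, 500),  # login hour of day
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A new session with a very different typing speed at 3 a.m.
suspect = np.array([[25, 3]])
if detector.predict(suspect)[0] == -1:  # -1 means the sample looks anomalous
    print("Flag session for review or step-up authentication")
```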

AI testing analyzes millions of events and detects a wide range of threats. These threats include malware that exploits zero-day vulnerabilities, phishing attempts, and malicious code downloads. As a result, AI and ML have emerged as essential information security technologies. Companies may better understand dangers and respond to them faster thanks to these insights. It also helps them adhere to the best security procedures.

Natural language processing (NLP), a subfield of deep learning, aids spam detection as well as the detection of other types of social engineering.

In general, NLP employs a variety of statistical techniques and extensively learns typical verbal and nonverbal communication patterns to identify and prevent spam content.
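
As a rough illustration of that statistical approach, the sketch below trains a TF-IDF plus Naive Bayes pipeline on a tiny made-up corpus; real spam filters train on far larger labeled datasets and richer features.

```python
# A minimal sketch of statistical spam filtering: TF-IDF features plus a
# Naive Bayes classifier. The four-message corpus is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "Your invoice for September is attached",
    "Team meeting moved to 3pm tomorrow",
    "You have WON a FREE prize, click here now",
    "Urgent: verify your account password immediately",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = spam/phishing

spam_filter = make_pipeline(TfidfVectorizer(lowercase=True), MultinomialNB())
spam_filter.fit(texts, labels)

# Likely flagged as spam (1) because of the overlap with the spam examples.
print(spam_filter.predict(["Claim your FREE prize now, click this link"]))
```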

These systems can detect harmful network activity, guard against intrusions, and warn users of potential dangers. Intrusion detection (ID) and intrusion prevention (IP) systems frequently prove useful in addressing data breaches and improving the security of user information.

Furthermore, it is feasible to guarantee more effective operation of ID/IP systems by utilizing deep learning with recurrent and convolutional neural networks. These methods make it easier for security teams to distinguish between safe and risky network activity, improve traffic analysis accuracy, and decrease false-alarm frequency.
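
A simplified stand-in for such a learned detector is sketched below: a small feedforward network (rather than the deeper recurrent or convolutional models mentioned above) separates benign from suspicious flows, with the three flow features and the synthetic training data being illustrative assumptions.

```python
# A simplified stand-in for a learned intrusion detector: a small feedforward
# network classifies network flows as benign (0) or suspicious (1). The three
# flow features and the synthetic training data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
benign = np.column_stack([rng.normal(5e4, 1e4, 300),   # bytes per flow
                          rng.normal(40, 10, 300),     # packets per flow
                          rng.normal(3, 1, 300)])      # distinct destination ports
attack = np.column_stack([rng.normal(5e5, 1e5, 300),
                          rng.normal(400, 80, 300),
                          rng.normal(60, 15, 300)])    # e.g. scanning behavior

X = np.vstack([benign, attack])
y = np.array([0] * 300 + [1] * 300)

ids_model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=1),
)
ids_model.fit(X, y)

# A flow that resembles the attack profile should be classified as 1.
print(ids_model.predict([[4.8e5, 380, 55]]))
```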

When it comes to hacking networks, cybercriminals are becoming more skilled and quick. The use of cutting-edge technology, such as machine learning, makes it easier to detect cyberattacks, since it is hard for humans to keep track of every connected system for every possible hazard. Data from the real and digital worlds is used to train AI-powered systems, which can then keep learning from it.

Given the rising interest in AI in cybersecurity, it's realistic to assume that in the future we'll see even more sophisticated solutions capable of resolving even more difficult and complex challenges in the industry. By automating threat detection, artificial intelligence testing will help protect cybersecurity and contribute to internet safety.

IT security professionals now utilize AI to reinforce sound cybersecurity procedures. It reduces the attack surface and tracks malicious activity. In addition, it evaluates and deals with massive volumes of data and assesses human behavior.

This is by no means a comprehensive list of its functions. It's preferable to embrace the technology today and keep up with the times if you want to be more prepared for the AI-testing cybersecurity future.

Featured Image Credit: Provided by the Author; Thank you!

I am Timothy Joseph, a testing expert with over 10 years of experience in QASource. In a nutshell, a techie who enjoys studying the pinnacles of current technology & creativity!

Continue reading here:
How Artificial Intelligence Testing is Changing the Cyberworld? - ReadWrite

Read More..

The White House moves to hold artificial intelligence accountable with AI Bill of Rights – VentureBeat


Responsible artificial intelligence (AI), ethical AI, trustworthy AI. Call it what you want; it's a concept that's impossible to ignore if you pay attention to the tech industry.

As AI has rapidly advanced, more voices have joined in the cry to ensure that it remains safe. The near-unanimous consensus is that AI can easily become biased, unethical and even dangerous.

To address this ever-growing issue, today the White House released a Blueprint for an AI Bill of Rights. This outlines five principles that should guide the design, use and deployment of automated systems to protect Americans in this age of AI.

The issues with AI are well-documented, the Blueprint points out, from unsafe systems in patient care to discriminatory algorithms used for hiring and credit decisions.


"The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats and uses technologies in ways that reinforce our highest values," it reads.

The EU has led the way in realizing an ethical AI future with its proposed EU AI act. Numerous organizations have also broached the concept of developing an overarching framework.

The U.S. has notably lagged in the discussion. Prior to today, the federal government had not provided any concrete guidance on protecting citizens from AI dangers, even as President Joe Biden has called for protections around privacy and data collection.

Many still say that the Blueprint, while a good start, doesn't go far enough and doesn't have what it takes to gain true traction.

"It is exciting to see the U.S. joining an international movement to help understand and control the impact of new computing technologies, and especially artificial intelligence, to make sure the technologies enhance human society in positive ways," said James Hendler, chair of the Association for Computing Machinery (ACM) technology policy council.

Hendler, a professor at Rensselaer Polytechnic Institute and one of the originators of the Semantic Web, pointed to recent statements including the Rome Call for AI Ethics, the proposed EU regulations on AI and statements from the UN committee.

"They are all calling for more understanding of the impacts of increasingly autonomous systems on human rights and human values," he said. "The global technology council of the ACM has been working with our member committees to update earlier statements on algorithmic accountability, as we believe regulation of this technology needs to be a global, not just national, effort."

Similarly, the Algorithmic Justice League posted on its Twitter page that the Blueprint is a step in the right direction in the fight toward algorithmic justice.

The League combines art and research to raise public awareness of the racism, sexism, ableism and other harmful forms of discrimination that can be perpetuated by AI.

Others point to the fact that the Blueprint doesn't include any recommendations for restrictions on the use of controversial forms of AI, such as those that can identify people in real time via biometric data or facial images. Some also point out that it does not address the critical issue of autonomous lethal weapons or smart cities.

The White House's Office of Science and Technology Policy (OSTP), which advises the president on science and technology, first talked of its vision for the Blueprint last year.

The five identified principles are safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

The Blueprint is accompanied by a handbook, From Principles to Practice, with detailed steps toward actualizing these principles in the technological design process.

It was framed based on insights from researchers, technologists, advocates, journalists and policymakers, and notes that, while automated systems have brought about extraordinary benefits, they have also caused significant harm.

It concludes that "these principles help provide guidance whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs."


Visit link:
The White House moves to hold artificial intelligence accountable with AI Bill of Rights - VentureBeat

Read More..

Artificial Intelligence in Genomics Market worth $1,671 million by 2025 says MarketsandMarkets – Yahoo Finance

MarketsandMarkets Research Pvt. Ltd.

Chicago, Oct. 06, 2022 (GLOBE NEWSWIRE) -- According to the new market research report by MarketsandMarkets, the Artificial Intelligence in Genomics Market is projected to reach USD 1,671 million by 2025 from USD 202 million in 2020, at a CAGR of 52.7% between 2020 and 2025. The need to control drug development and discovery costs and time, increasing public and private investments in AI in genomics, and the adoption of AI solutions in precision medicine are driving the growth of this market. However, the lack of a skilled AI workforce and ambiguous regulatory guidelines for medical software are expected to restrain the market growth during the forecast period.
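
For readers who want to verify the stated growth rate, a quick back-of-the-envelope check of the reported figures follows; it simply applies the standard CAGR formula to the two revenue endpoints.

```python
# Sanity check of the reported growth: CAGR = (end / start) ** (1 / years) - 1,
# using the report's rounded revenue figures (USD millions).
start, end, years = 202, 1671, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~52.6%, consistent with the reported 52.7% after rounding
```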

Browse in-depth TOC on "Artificial Intelligence (AI) in Genomics Market": 141 Tables, 24 Figures, 154 Pages

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=36649899

List of Key Players in Artificial Intelligence in Genomics Industry:

IBM (US),

Microsoft (US),

NVIDIA Corporation (US),

Deep Genomics (Canada),

BenevolentAI (UK),

Fabric Genomics Inc. (US),

Verge Genomics (US),

Freenome Holdings, Inc. (US),

MolecularMatch Inc. (US),

Cambridge Cancer Genomics (UK),

SOPHiA GENETICS (US),

Data4Cure Inc. (US),

PrecisionLife Ltd (UK),

Genoox Ltd. (US),

Lifebit (UK),

Diploid (Belgium),

FDNA Inc. (US),

DNAnexus Inc. (US),

Empiric Logic (Ireland),

Engine Biosciences Pte. Ltd. (US)

Drivers, Restraints, Challenges and Opportunities in Artificial Intelligence in Genomics Industry:

Drivers: Need to control the time and cost of drug discovery and development

Restraints: Lack of skilled AI workforce and ambiguous regulatory guidelines for medical software

Challenges: Lack of curated genomics data

Opportunities: Focus on developing human-aware AI systems

Key Findings of Artificial Intelligence in Genomics Market Study:

Machine learning to dominate the AI in Genomics market in 2019

Diagnostics segment accounted for the largest share of the AI in Genomics market, by end user, in 2019

Pharmaceutical & biotechnology companies accounted for the largest market share in 2019

North America accounted for the largest share of the global AI in genomics market in 2019


Request Free Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=36649899

Based on offering, the AI in genomics market is segmented into software and services. The software and services segment accounted for the largest share of the global artificial intelligence in genomics market in 2019. Software is needed to generate new insights from large-scale datasets and help understand genomic variations, thus enhancing the search for disease-causing variants and reducing clinical analysis times. The benefits offered by AI-enabled software are driving its adoption among end users.

Based on functionality, the AI in genomics market is segmented into genome sequencing, gene editing, clinical workflows, and predictive genetic testing & preventive medicine. Genome sequencing was the largest functionality segment in this market in 2019 and is estimated to grow at the highest CAGR in the coming years. The large share of this segment can be attributed to the use of AI solutions to identify chromosomal disorders, dysmorphic syndromes, teratogenic disorders, and single-gene disorders.

Geographical Growth Scenario:

The global AI in Genomics market is segmented into North America, Asia Pacific, Europe, and the Rest of the World. North America (comprising the US and Canada) is expected to account for the largest share of the global AI in Genomics market in 2020, followed by Europe. The large share of North America can be attributed to increasing research funding and government initiatives for promoting precision medicine in the US.

Speak to Analyst: https://www.marketsandmarkets.com/speaktoanalystNew.asp?id=36649899

Browse Adjacent Markets: Healthcare IT Market Research Reports & Consulting

Browse Related Reports:

Genomics Market

Artificial Intelligence (AI) in Drug Discovery Market

Spatial Genomics & Transcriptomics Market

Research Antibodies & Reagents Market

Human Microbiome Market

Bioinformatics Market

Metabolomics Market

Genotyping Assay Market

Sample Preparation Market

Read the rest here:
Artificial Intelligence in Genomics Market worth $1,671 million by 2025 says MarketsandMarkets - Yahoo Finance

Read More..

CSU hopes artificial intelligence can teach us more about the atmosphere – 9News.com KUSA

A CSU professor says artificial intelligence can be a good atmospheric science teacher.

FORT COLLINS, Colo. – The science of weather prediction improves every year, but there are still so many mysteries to solve.

Colorado State University (CSU) professor Elizabeth Barnes believes that some of those answers might come from artificial intelligence (AI), also known as machine learning. Essentially, that's when a computer program makes a prediction based on patterns that it finds in huge amounts of data.

"It can sort through data much faster than we can and in most cases it can also do it better," said Barnes. "And sometimes it might even find relationships that we didn't know were there. We can learn new science."

Barnes said she is often impressed with the accuracy of an AI-driven climate forecast, but she is more interested in learning how the machine got that answer in the first place.

What Barnes and her colleagues are working on at CSU is called Explainable Artificial Intelligence (XAI). Barnes said it's like cracking the lid of the so-called "black box" that seals the methods behind the machine.

"We take that forecast or that prediction, and the idea is that you push that information back through your machine learning model," said Barnes. "And it gives you a map of what was important for it to make its decision. What were the ingredients it used?"
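
As a rough illustration of that idea (and not the specific attribution techniques Barnes's group uses), the sketch below fits a simple model to synthetic gridded data, then nudges each input of one example to see how much the prediction moves, producing a crude map of which inputs mattered.

```python
# A toy version of the "push it back through the model" idea: fit a model,
# then nudge each input of one example and record how much the prediction
# moves. The 16 "grid cells" and the ridge-regression model are made up for
# illustration; they are not the XAI methods Barnes's group actually uses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))              # e.g. 16 grid cells of ocean data
true_w = np.zeros(16)
true_w[[2, 7]] = [1.5, -2.0]                # only cells 2 and 7 actually matter
y = X @ true_w + rng.normal(scale=0.1, size=200)

model = Ridge(alpha=1.0).fit(X, y)

def importance_map(model, x, eps=1e-3):
    """Finite-difference sensitivity of the prediction to each input cell."""
    base = model.predict(x[None, :])[0]
    sens = np.empty_like(x)
    for i in range(x.size):
        nudged = x.copy()
        nudged[i] += eps
        sens[i] = (model.predict(nudged[None, :])[0] - base) / eps
    return sens

# Cells 2 and 7 should dominate the importance map for any example.
print(np.argsort(-np.abs(importance_map(model, X[0])))[:4])
```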

Barnes said that road map of information has already led to a new understanding of how the ocean conditions impact long-range weather more than a month in advance.

"It's also helping us learn more about our climate models," Barnes said "In the insides of the models, pieces are actually being replaced with machine learning algorithms to do a better job."

Barnes said one of the beauties of machine learning is that you can keep the rules very simple and can use almost any type of data, even maps, words and images, instead of just numbers and statistics.

It's a straight data-driven approach to prediction modeling; AI doesn't need any equations to find a solution, unlike numerical weather forecast models, which take a more physical approach and use things like Newtonian and thermodynamic equations to make a weather prediction.

"Machine learning tools allow us to be creative about how we do science," said Barnes. "This has allowed me to think about how I ask questions and what kinds of questions I ask, without barriers in the way I think a lot of climate science had in the past."


Read the original:
CSU hopes artificial intelligence can teach us more about the atmosphere - 9News.com KUSA

Read More..

Artificial Intelligence (AI) in Cybersecurity Market to be Worth $93.75 Billion by 2030: Grand View Research, Inc. – PR Newswire

SAN FRANCISCO, Oct. 5, 2022 /PRNewswire/ -- The global artificial intelligence in cybersecurity market size is estimated to reach USD 93.75 billion by 2030, expanding at a CAGR of 24.3% from 2022 to 2030, according to a new study by Grand View Research, Inc. An unprecedented spike in cyber incidents has fostered the demand for AI, cloud, and machine learning for seamless operations, data safety and prompt response to cyber threats. Some factors, such as soaring internet penetration, expanding footfall of connected devices, and escalating data protection concerns, have triggered the need for advanced cybersecurity solutions.

Key Industry Insights & Findings from the report:

Read 200-page full market research report, "Artificial Intelligence In Cybersecurity Market Size, Share & Trends Analysis Report By Type (Cloud Security, Network Security), By Offering, By Technology, By Application, By Vertical, By Region, And Segment Forecasts, 2022 - 2030", published by Grand View Research.

Artificial Intelligence In Cybersecurity Market Growth & Trends

Artificial intelligence (AI) in cybersecurity has leveraged a faster response to breaches and propelled the efficiency of cyber analysts. AI is likely to be sought for vulnerability management, threat hunting, and boosting network security. In doing so, emphasis on natural language processing, machine learning, deep learning, and neural networks could gain ground during the assessment period. For instance, deep learning has become trendier to track transactions, logs, and real-time data to detect threats. AI is highly sought-after to secure cloud services and on-premises architecture and spot abnormal user behavior.

Natural language processing could remain a value proposition to foster the penetration of AI technologies in cyberspace. The trend for natural language inference, sentiment analysis, and text summarization will bode well for major companies gearing to reinforce artificial intelligence in the cybersecurity market share. Prominently, NLP has received impetus for fake news detection, clickbait detection, and rumor detection. Leading companies are likely to bank on NLP to detect malicious language and domain names produced for phishing scams.

Stakeholders predict North America will witness investments galore, on the heels of the high footprint of connected devices, IoT, and 5G. Moreover, the possibility of DDoS attacks and the growing prominence of IoT-enabled activities have prompted major players to bank on cutting-edge technologies to deter cyber incidents. To illustrate, in August 2019, Microsoft reported that Russian hackers had allegedly used IoT devices to breach enterprise networks. Industry participants expect bullish investments in machine learning platforms, threat hunting, and advanced analytics. Besides, Microsoft Security blocked over 35.7 billion phishing and malicious emails and more than 9.6 billion malware threats in 2021.

The competitive landscape alludes to an increased emphasis on organic and inorganic growth strategies, including mergers & acquisitions, product offerings, technological advancements, collaborations, and innovations. For instance, in July 2022, Darktrace rolled out Darktrace PREVENT to assist organizations in pre-empting cyber-attacks. Meanwhile, in August 2022, it was reported that Thoma Bravo was contemplating acquiring Darktrace. In February 2019, BlackBerry completed the acquisition of Cylance to bolster its footprint in AI cybersecurity.

Artificial Intelligence In Cybersecurity Market Segmentation

Grand View Research has segmented the global artificial intelligence in cybersecurity market in terms of type, offering, technology, application, vertical, and region:

AI In CybersecurityMarket - Type Outlook (Revenue, USD Billion, 2017 - 2030)

AI In CybersecurityMarket - Offering Outlook (Revenue, USD Million, 2017 - 2030)

AI In Cybersecurity Market - Technology Outlook (Revenue, USD Billion, 2017 - 2030)

AI In CybersecurityMarket - Application Outlook (Revenue, USD Billion, 2017 - 2030)

AI In CybersecurityMarket - Vertical Outlook (Revenue, USD Billion, 2017 - 2030)

AI In CybersecurityMarket - Regional Outlook (Revenue, USD Billion, 2017 - 2030)

List of Key Players of Artificial Intelligence In Cybersecurity Market

Check out more related studies published by Grand View Research:

Browse through Grand View Research's Next Generation Technologies Industry Research Reports.

About Grand View Research

Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1,200 market research reports to its vast database each year. These reports offer in-depth analysis of 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.

Contact: Sherry James, Corporate Sales Specialist, USA, Grand View Research, Inc. | Phone: 1-415-349-0058 | Toll Free: 1-888-202-9519 | Email: [emailprotected] | Web: https://www.grandviewresearch.com | Grand View Compass | Astra ESG Solutions | Follow Us: LinkedIn | Twitter

Logo: https://mma.prnewswire.com/media/661327/Grand_View_Research_Logo.jpg

SOURCE Grand View Research, Inc

See the rest here:
Artificial Intelligence (AI) in Cybersecurity Market to be Worth $93.75 Billion by 2030: Grand View Research, Inc. - PR Newswire

Read More..

Artificial intelligence in the workplace – ComputerWeekly.com

Far from being a futuristic concept relegated to the realms of science fiction, the use of artificial intelligence (AI) in the workplace is becoming more common. The benefits of using AI are often cited by reference to time and productivity savings. However, the challenges of implementing AI into HR practice and procedures should not be underestimated.

AI technologies are already being used across a broad range of industries, at every stage in the employment cycle. From recruitment to dismissal, their use has significant implications. In recent months, incidents at Meta, Estee Lauder and payment service company Xsolla have hit the headlines for utilising AI when dismissing employees.

All three companies used algorithms as part of their selection process. For Meta and Xsolla, the algorithms used analysed employee performance against key metrics to identify those who were unengaged and unproductive. These employees were subsequently dismissed.

Similarly, Estee Lauder used an algorithm when making three makeup artists redundant, which assessed employees during a video interview. The software measured the content of the women's answers and expressions during the interview and evaluated the results against other data about their job performance. It led to their dismissal.

Where algorithms are used in place of human decision-making, they risk replicating and reflecting existing biases and inequalities in society.

An AI system is created by a variety of participants, from those writing the code and inputting the instructions to those supplying the dataset on which the AI system is trained and those managing the process. There is significant scope for bias to be introduced at each stage.

If, for example, a bias towards recruiting men is included in the dataset, or women are under-represented, this is likely to be replicated in the AI decision. The result is an AI system making decisions that reproduces inherent bias. If unaddressed, those biases can become exaggerated as the AI learns becoming more adept at differentiating using those biases.

To mitigate this risk, HR teams should test the technology by comparing AI and human decisions and looking for bias. This is only going to be effective in combating unconscious bias if the reviewers comprise a diverse group themselves. If bias is discovered, the algorithm can and should be changed.
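
As a rough illustration of what such a comparison might look like in practice, the sketch below computes selection rates by group on made-up hiring decisions and applies the widely cited four-fifths adverse-impact rule of thumb; a real audit would of course be far more thorough.

```python
# One concrete form such a check can take: compare selection rates between
# groups and apply the "four-fifths" adverse-impact rule of thumb. The group
# labels and decision counts below are made up purely for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["men"] * 100 + ["women"] * 100,
    "selected": [1] * 40 + [0] * 60 + [1] * 24 + [0] * 76,
})

rates = decisions.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()

print(rates.to_dict())                      # {'men': 0.4, 'women': 0.24}
print(f"impact ratio: {impact_ratio:.2f}")  # 0.60 < 0.80 -> investigate further
```

A ratio below 0.8 is a signal to examine the training data and the algorithm's decision criteria, not proof of discrimination on its own.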

AI systems are increasingly being viewed by employers as an efficient way of measuring staff performance. While AI systems may identify top performers based on key business metrics, they lack personal experience, emotional intelligence and the ability to form an opinion to shape decisions. There is a danger that low-performing staff could be disregarded solely on an assessment of metrics. Smart employees are likely to find ways to manipulate AI to their advantage in a way that might not be so easy without technology.

It is tempting to trust AI to limit legal risks by using it for decision-making. Superficially, this may be right, but the potential unintended consequences of any AI system could easily create a lack of transparency and bias equivalent to that of its human creators.

When AI systems are used, there is an obligation to consider how these might impact on fairness, accountability and transparency in the workplace.There is also a risk of employers exposing themselves to costly discrimination claims, particularly where the policy of using AI disadvantages an employee because of a protected characteristic (such as sex or race) and discriminatory decisions are made as a result.

Until AI develops to outperform humans in learning from mistakes or understanding the law, its use is unlikely to materially mitigate risk in the meantime.

Catherine Hawkes is a senior associate in the employment law team at RWK Goodman.

Originally posted here:
Artificial intelligence in the workplace - ComputerWeekly.com

Read More..