
Artificial Intelligence (AI) And The Law: Helping Lawyers While Avoiding Biased Algorithms – Forbes


Artificial intelligence (AI) has the potential to help every sector of the economy. There is a challenge, though, in sectors that involve fuzzier analysis and the potential to train with data that can perpetuate human biases. A couple of years ago, I described the problem with bias in an article about machine learning (ML) applied to criminal recidivism. It's worth revisiting the sector, as times have changed in how bias is addressed. One way is to look at areas of the legal profession where bias is a much smaller factor.

Tax law has far more explicit rules than, for instance, many criminal laws do. As much as there have been issues with ML applied to human resource systems (Amazon's canceled HR system), employment law is another area where states and nations have created explicit rules. The key is in choosing the right legal area. The focus, according to conversations with people at Blue J Legal, is on areas with strong rules as opposed to standards. The former provide the ability to do clear feature engineering, while the latter don't have the specificity to train an accurate model.

Blue J Legal arose from a University of Toronto course started by the founders, combining legal and computer science skills to try to predict cases. The challenge was, as it has always been in software, to understand the features of the data set in the detail needed to properly analyze the problem. As mentioned, tax law was picked as the first focus. Tax law has a significant set of rules around which features can be designed. The data can then be appropriately labeled. After their early work on tax, they moved to employment.

The products are aimed at lawyers who are evaluating their cases. The goal is to provide attorneys with statistical analysis of the strengths and weaknesses of each case.

It is important to note that employment is a category of legal issues. Each issue must be looked at separately, and each issue has its own set of features. For instance, in today's gig economy, "Is the worker a contractor or an employee?" is a single issue. The Blue J Legal team mentioned that they found between twenty and seventy features for each issue they've addressed.

That makes clear that feature engineering is a larger challenge than the training of the ML system. That has been mentioned by many people, but still too many folks have focused on the inference engine because it's cool. Turning data into information is the more critical part of the ML challenge.
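To make that concrete, here is a toy sketch of what turning raw case facts into model features might look like for the worker-classification issue above. The feature names and scaling are hypothetical illustrations, not Blue J Legal's actual feature set.

```python
# Illustrative only: hypothetical features for "contractor vs. employee".
from dataclasses import dataclass

@dataclass
class WorkerClassificationCase:
    """A handful of made-up case facts for one legal issue."""
    sets_own_hours: bool
    supplies_own_tools: bool
    paid_per_task: bool
    works_for_multiple_clients: bool
    tenure_months: int

def to_feature_vector(case: WorkerClassificationCase) -> list[float]:
    # Booleans become 0/1; the numeric fact is scaled to a rough 0-1 range.
    return [
        float(case.sets_own_hours),
        float(case.supplies_own_tools),
        float(case.paid_per_task),
        float(case.works_for_multiple_clients),
        min(case.tenure_months / 120.0, 1.0),
    ]

example = WorkerClassificationCase(True, True, False, True, 18)
print(to_feature_vector(example))  # [1.0, 1.0, 0.0, 1.0, 0.15]
```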

Once the system is trained, the next challenge is to get the lawyers to provide the right information in order to analyze their current cases. They (or their clerks) must enter information about each case that matches the features to be analyzed.

On a slightly technical note, their model uses decision trees. They did try the random forest model, of interest in other fields, but found that their accuracy dropped.
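As a rough illustration of that kind of model comparison (on synthetic data, not Blue J Legal's pipeline), scikit-learn makes it straightforward to cross-validate a single decision tree against a random forest; which one wins depends entirely on the data set.

```python
# Sketch: compare a decision tree and a random forest by cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for labeled case data with ~40 features.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=25,
                           random_state=0)

models = [
    ("decision tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```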

Blue J Legal claims their early version provides 80-90% accuracy.

By removing variables that can drive bias, such as male v. female, they are able to train a more general system. That's good from a pure law point of view, but, unlike with the parole system mentioned above, it could cause problems in a lawyer's analysis of a problem. For instance, if a minority candidate is treated more poorly in the legal system, a lawyer should know about that. The Blue J Legal team says they did look at bias, in both their Canadian and US legal data, but state that the two areas they are addressing don't see bias that would change the results in a significant way.
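A minimal sketch of that de-biasing step, assuming the case data lives in a pandas DataFrame with hypothetical column names; real systems would also need to audit for proxy variables that leak the same information.

```python
# Drop protected attributes before training so the model cannot condition
# on them directly. Column names here are hypothetical.
import pandas as pd

PROTECTED = ["sex", "race", "age"]

def strip_protected(df: pd.DataFrame) -> pd.DataFrame:
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])

cases = pd.DataFrame({
    "sex": ["m", "f"],
    "tenure_months": [18, 60],
    "paid_per_task": [1, 0],
})
print(strip_protected(cases).columns.tolist())  # ['tenure_months', 'paid_per_task']
```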

One area of bias they've also ignored is that of judges, for the same reason as above. I'm sure it's also ignored for marketing reasons. As they move to legal areas with fewer rules and more standards, I could see strong value for lawyers in knowing whether the judge to whom a case has been assigned has strong biases based on features of the case or the plaintiff. Still, if they analyzed the judges, I could see another bias being added, as judges might be biased against lawyers using the system. It's an interesting conundrum that will have to be addressed in the future.

There is a clear ethical challenge in front of lawyers that exists regardless of bias. For instance, if the system comes back and tells the lawyer that 70% of similar cases go against the plaintiff, should the lawyer take the case? Law is a fluid profession, with many cases being similar but not identical. How does the lawyer decide whether the specific client is in the 70% or the 30%? How can a system provide information that helps a lawyer decide to take a case with lower probability or reject one with a higher probability? The hope is, as with any other profession, that the lawyer will carefully evaluate the results. However, as in all industries, busy people take shortcuts, and far too many people have taken the old acronym GIGO to mean no longer "garbage in, garbage out" but rather "garbage in, gospel out."

One way to help is to provide a legal memo. The Blue J Legal system provides a list of lawyer-provided answers and similar cases for each answer. Not being a lawyer, I can't tell how well that has been done, but it is a critical part of the system. Just as too many developers focus on the engine rather than feature engineering, they also focus on the engine while minimizing the need to explain it. In all areas where machine learning is applied, but especially in the professions, black-box systems can't be trusted. Analysis must be supported in order for lawyers to understand and evaluate how the generic decision impacts their specific cases.

Law is an interesting avenue in which to test the integration between AI and people. Automation won't be replacing the lawyer any time soon, but as AI evolves it will increasingly be able to assist the people in the industry, helping them become more educated about their options and use their time more efficiently. It's the balance between the two that will be interesting to watch.

The rest is here:
Artificial Intelligence (AI) And The Law: Helping Lawyers While Avoiding Biased Algorithms - Forbes


Can we realistically create laws on artificial intelligence? – Open Access Government

Regulation is an industry, but effective regulation is an art. There are a number of recognised principles that should be considered when regulating an activity, such as efficiency, stability, regulatory structure, general principles, and the resolution of conflicts between these various competing principles. With the regulation of artificial intelligence (AI) technology, a number of factors make the centralised application of these principles difficult to realise, but AI should be considered as part of any relevant regulatory regime.

Because AI technology is still developing, it is difficult to discuss the regulation of AI without reference to a specific technology, field or application where these principles can be more readily applied. For example, optical character recognition (OCR) was considered to be AI technology when it was first developed, but today, few would call it AI.

Consider some of the technologies regarded as AI today: predictive technology for marketing and for navigation; technology for ridesharing applications; commercial flight routing; and even email spam filters.

These technologies are as different from each other as they are from OCR technology. This demonstrates why the regulation of AI technology (from a centralised regulatory authority or based on a centralised regulatory principle) is unlikely to truly work.

Efficiency-related principles include the promotion of competition between participants by avoiding restrictive practices that impair the provision of new AI-related technologies. This subsequently lowers barriers to entry for such technologies, providing freedom of choice between AI technologies and creating competitive neutrality between existing AI technologies and new ones (i.e. a level playing field). OCR technology was initially unregulated, at least by a central authority, and it was therefore allowed to develop and become faster and more efficient, even though there were many situations where OCR'd documents contained a large number of errors.

In a similar manner, a centralised regulation regime that encompasses all the uses of AI mentioned above, from a central authority or based on a single focus (e.g. avoiding privacy violations), would be inefficient.

The reason for this inefficiency is clear: the function and markets for these technologies are unrelated.

Strict regulations that require all AI applications to evaluate and protect the privacy of users might not only fail to achieve any meaningful privacy goals, but could also render those AI applications commercially unacceptable for reasons that are completely unrelated to privacy. For example, a regulation that requires drivers to be assigned based on privacy concerns could result in substantially longer wait times for riders if the closest drivers are excluded because they have previously picked up the passenger at that location. However, industry-specific regulation to address privacy issues might make sense, depending on the specific technology and specific concern within that industry.

Stability-related principles include providing incentives for the prudent assessment and management of risk, such as minimum standards, the use of regulatory requirements that are based on market values and taking prompt action to accommodate new AI technologies.

Using OCR as an example, if minimum standards for an acceptable number of errors in a document had been implemented, then the result would have been difficult to police, because documents have different levels of quality and some documents would no doubt result in fewer errors than others. In the case of OCR, the market was able to provide sufficient regulation, as companies competed with each other for the best solution, but for other AI technologies there may be a need for industry-specific regulations for ensuring minimum standards or other stability-related principles.
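To see why such a standard is slippery, consider how OCR accuracy is commonly scored: character error rate (CER), the edit distance between the OCR output and a ground-truth transcription, divided by the transcription's length. The sample strings below are invented; the same engine will score very differently on clean and degraded pages.

```python
# Character error rate (CER) via a classic dynamic-programming edit distance.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(ocr: str, truth: str) -> float:
    return edit_distance(ocr, truth) / len(truth)

print(f"{cer('reguiation of Al', 'regulation of AI'):.2%}")  # 12.50%
```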

In regard to regulatory structure, the relevant principles include following a functional/institutional approach to regulation, coordinating regulation by different agencies, and using a small number of regulatory agencies for any regulated activity. In that regard, there is no single regulatory authority that could implement and administer AI regulations across all markets, activities and technologies without simply adding a new regulatory regime to the ones already in place.

For example, in the US many state and federal agencies have OCR requirements that centre on specific technologies/software for document submission, and software application providers can either make their application compatible with those requirements or seek to be included on a list of allowed applications. They do the latter by working with the state or federal agency to ensure that documents submitted using their applications will be compatible with the agency's uses. For other AI technologies there may be similar industry-specific regulations that make sense in the context of the existing regulatory structure for that industry.

General principles of regulation include identifying the specific objectives of a regulation, cost-effectiveness, equitable distribution of the regulation's costs, flexibility of regulation and a stable relationship between the regulators and regulated parties. Some of these principles could have been implemented for OCR, such as a specific objective in terms of a number of errors per page. However, the other factors would have been more difficult to determine, and again would depend on an industry- or market-specific analysis. For many specific applications in specific industries, these factors were able to be addressed even though an omnibus regulatory structure was not implemented.

Preventing conflict between these different objectives requires a regime in which these different objectives can be achieved. For AI, that would require an industry- or market-specific approach, and in the US that approach has generally been followed for all AI-related technologies. As discussed, OCR-related technology is regulated by specific federal, state and local agencies as it pertains to their specific missions. Another AI technology is facial recognition, for which a regime of federal, state and local regulation is in progress. Facial recognition technology has been used by many of these authorities for different applications, with some recent push-back on the use of the technology by privacy advocates.

It is only when conflicts develop between such different regimes that input from a centralised authority may be required.

In the United States, an industry- and market-based approach is generally being adopted. In the 115th Congress, thirty-nine bills were introduced that had the phrase "artificial intelligence" in the text of the bill, and four were enacted into law. A large number of such bills were also introduced in the 116th Congress. As of April 2017, twenty-eight states had introduced some form of regulations for autonomous vehicles, and a large number of states and cities have proposed or implemented regulations for facial recognition technology.

While critics will no doubt assert that nothing much is being done to regulate AI, a simplistic and heavy-handed approach to AI regulation, reacting to a single concern such as privacy, is unlikely to satisfy these principles of regulation and should be avoided. Artificial intelligence requires regulation with real intelligence.

By Chris Rourk, Partner at Jackson Walker, a member of Globalaw.


See the article here:
Can we realistically create laws on artificial intelligence? - Open Access Government


Ohio to Analyze State Regulations with Artificial Intelligence – Governing

(TNS) A new Ohio initiative aims to use artificial intelligence to guide an overhaul of the state's laws and regulations.

Lt. Gov. Jon Husted said his staff will use an AI software tool, developed for the state by an outside company, to analyze the state's regulations, numbered at 240,000 in a recent study by a conservative think tank, and narrow them down for further review.

Husted compared the tool to an advanced search engine that will automatically identify and group together like terms, getting more sophisticated the more it's used.
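The article doesn't say how the tool works internally; as a loose illustration of "grouping like terms," one could cluster regulation phrases by character n-gram similarity, as in this sketch with invented sample terms.

```python
# Toy illustration of grouping similar regulatory phrases -- a guess at the
# general approach, not the actual tool Ohio commissioned.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

terms = ["barber license", "barbering licensure", "building permit",
         "construction permitting", "permit for construction work"]

# Character n-grams tolerate the spelling variation common in legal text.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4)).fit_transform(terms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for term, label in sorted(zip(terms, labels), key=lambda t: t[1]):
    print(label, term)
```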

He said the goal is to use the tool to streamline state regulations, such as eliminating permitting requirements deemed to be redundant, which is a long-standing policy goal of the Republicans who lead the state government.

"This gives us the capability to look at everything that's been done in 200 years in the state of Ohio and make sense of it," Husted said.

The project is part of two Husted-led efforts: the Common Sense Initiative, a state project to review regulations with the goal of cutting government red tape, and InnovateOhio, a Husted-led office that aims to use technology to improve Ohio's government operations.

Husted announced the project on Thursday at a meeting of the Small Business Advisory Council. The panel advises the state on government regulations and tries to identify challenges they can pose for business owners.

State officials sought bids for the project last summer, authorized through the state budget. Starting soon, Husted's staff will load the state's laws and regulations into the software, with the goal of starting to come up with recommendations for proposed law and rule changes before the summer.

Husted's office has authority to spend as much as $1.2 million on the project, although it could cost less, depending on how many user licenses they request.

"I don't know if it will be a small success, a medium success, or a large success," Husted said. "I don't want to over-promise, but we have great hope for it."

©2020 The Plain Dealer, Cleveland. Distributed by Tribune Content Agency, LLC.

View post:
Ohio to Analyze State Regulations with Artificial Intelligence - Governing


Artificial intelligence and digital initiatives to be scrutinised by MEPs | News – EU News

Commissioner Breton will present to and debate with MEPs the initiatives that the Commission will put forward on 19 February:

When: Wednesday, 19 February, 16.00 to 18.00

Where: European Parliament, Spaak building, room 3C050, Brussels

Live streaming: You can also follow the debate on EP Live

A Strategy for Europe Fit for the Digital Age

The Commission announced in its 2020 Work Programme that it will put forward a Strategy for Europe Fit for the Digital Age, setting out its vision of how to address the challenges and opportunities brought about by digitalisation.

Boosting the single market for digital services and introducing rules for the digital economy should be addressed in this strategy. It is expected to build on issues covered by the e-commerce directive and the platform-to-business regulation.

White Paper on Artificial Intelligence

The White Paper on Artificial Intelligence (AI) will aim to support its development and uptake in the EU, as well as to ensure that European values are fully respected. It should identify key opportunities and challenges, analyse regulatory options and put forward proposals and policy actions related to, e.g. ethics, transparency, safety and liability.

European Strategy for Data

The purpose of the Data Strategy would be to explore how to make the most of the enormous value of non-personal data as an ever-expanding and re-usable asset in the digital economy. It will build in part on the free flow of non-personal data regulation.

Original post:
Artificial intelligence and digital initiatives to be scrutinised by MEPs | News - EU News


Artificial intelligence makes a splash in efforts to protect Alaska’s ice seals and beluga whales – Stories – Microsoft

When Erin Moreland set out to become a research zoologist, she envisioned days spent sitting on cliffs, drawing seals and other animals to record their lives for efforts to understand their activities and protect their habitats.

Instead, Moreland found herself stuck in front of a computer screen, clicking through thousands of aerial photographs of sea ice as she scanned for signs of life in Alaskan waters. It took her team so long to sort through each survey (akin to looking for lone grains of rice on vast mounds of sand) that the information was outdated by the time it was published.

"There's got to be a better way to do this," she recalls thinking. "Scientists should be freed up to contribute more to the study of animals and better understand what challenges they might be facing. Having to do something this time-consuming holds them back from what they could be accomplishing."

That better way is now here, an idea that began, unusually enough, with the view from Moreland's Seattle office window and her fortuitous summons to jury duty. She and her fellow National Oceanic and Atmospheric Administration scientists will now use artificial intelligence this spring to help monitor endangered beluga whales, threatened ice seals, polar bears and more, shaving years off the time it takes to get data into the right hands to protect the animals.

The teams are training AI tools to distinguish a seal from a rock and a whale's whistle from a dredging machine's squeak as they seek to understand the marine mammals' behavior and help them survive amid melting ice and increasing human activity.

Moreland's project combines AI technology with improved cameras on a NOAA turboprop airplane that will fly over the Beaufort Sea north of Alaska this April and May, scanning and classifying the imagery to produce a population count of ice seals and polar bears that will be ready in hours instead of months. Her colleague Manuel Castellote, a NOAA affiliate scientist, will apply a similar algorithm to the recordings he'll pick up from equipment scattered across the bottom of Alaska's Cook Inlet, helping him quickly decipher how the shrinking population of endangered belugas spent its winter.

The data will be confirmed by scientists, analyzed by statisticians and then reported to people such as Jon Kurland, NOAA's assistant regional administrator for protected resources in Alaska.

Kurland's office in Juneau is charged with overseeing conservation and recovery programs for marine mammals around the state and its waters, and with helping guide all the federal agencies that issue permits or carry out actions that could affect those that are threatened or endangered.

Of the four types of ice seals in the Bering Sea (bearded, ringed, spotted and ribbon), the first two are classified as threatened, meaning they are likely to become in danger of extinction within the foreseeable future. The Cook Inlet beluga whales are already endangered, having steadily declined to a population of only 279 in last year's survey, from an estimate of about a thousand 30 years ago.

"Individual groups of beluga whales are isolated and don't breed with others or leave their home, so if this population goes extinct, no one else will come in; they're gone forever," says Castellote. "Other belugas wouldn't survive there because they don't know the environment. So you'd lose that biodiversity forever."

Yet recommendations by Kurland's office to help mitigate the impact of human activities such as construction and transportation, in part by avoiding prime breeding and feeding periods and places, are hampered by a lack of timely data.

"There's basic information that we just don't have now, so getting it will give us a much clearer picture of the types of responses that may be needed to protect these populations," Kurland says. "In both cases, for the whales and seals, this kind of data analysis is cutting-edge science, filling in gaps we don't have another way to fill."

The AI project was born years ago, when Moreland would sit at her computer in NOAA's Marine Mammal Laboratory in Seattle and look across Lake Washington toward Microsoft's headquarters in Redmond, Washington. She felt sure there was a technological solution to her frustration, but she didn't know anyone with the right skills to figure it out.

She hit the jackpot one week while serving on a jury in 2018. She overheard two fellow jurors discussing AI during a break in the trial, so she began talking with them about her work. One of them connected her with Dan Morris from Microsoft's AI for Earth program, who suggested they pitch the problem as a challenge that summer at the company's Hackathon, a week-long competition in which software developers, programmers, engineers and others collaborate on projects. Fourteen Microsoft engineers signed up to work on the problem.

"Across the wildlife conservation universe, there are tons of scientists doing boring things, reviewing images and audio," Morris says. "Remote equipment lets us collect all kinds of data, but scientists have to figure out how to use that data. Spending a year annotating images is not only a bad use of their time, but the questions get answered way later than they should."

Moreland's idea wasn't as simple as it may sound, though. While there are plenty of models to recognize people in images, there were none until now that could find seals, especially in real time in aerial photography. But the hundreds of thousands of examples NOAA scientists had classified in previous surveys helped the technologists, who are using them to train the AI models to recognize which photographs and recordings contain mammals and which don't.
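The articles don't detail the model architecture. A common pattern for this kind of problem, where a large labeled archive already exists, is fine-tuning a pretrained image backbone with a small classification head; the sketch below uses random stand-in tensors rather than NOAA imagery.

```python
# Sketch of a "seal vs. background" classifier fine-tuned from a pretrained
# backbone -- an assumption about the general approach, not NOAA's model.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: seal / background

# Freeze the backbone; train only the new head on labeled survey crops.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch; in reality, labeled crops from prior aerial surveys.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss={loss.item():.3f}")
```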

"Part of the challenge was that there were 20 terabytes of data of pictures of ice, and working on your laptop with that much data isn't practical," says Morris. "We had daily handovers of hard drives between Seattle and Redmond to get this done. But the cloud makes it possible to work with all that data and train AI models, so that's how we're able to do this work, with Azure."

Moreland's first ice seal survey was in 2007, flying in a helicopter based on an icebreaker. Scientists collected 90,000 images and spent months scanning them, but only found 200 seals. It was a tedious, imprecise process.

Ice seals live largely solitary lives, making them harder to spot than animals that live in groups. Surveys are also complicated because the aircraft have to fly high enough to keep seals from getting scared and diving, but low enough to get high-resolution photos that enable scientists to differentiate a ringed seal from a spotted seal, for example. The weather in Alaska, often rainy and cloudy, further complicates efforts.

Subsequent surveys improved by pairing thermal and color cameras and using modified planes that had a greater range to study more area and could fly higher up to be quieter. Even so, thermal interference from dirty ice and reflections off jumbled ice made it difficult to determine what was an animal and what wasnt.

And then there was the problem of manpower to go along with all the new data. The 2016 survey produced a million pairs of thermal and color images, which a previous software system narrowed down to 316,000 hot spots that the scientists had to manually sort through and classify. It took three people six months.

Read more from the original source:
Artificial intelligence makes a splash in efforts to protect Alaska's ice seals and beluga whales - Stories - Microsoft


SparkCognition Partners with Informatica to Enable Customers to Operationalize Artificial Intelligence and Solve Problems at Scale – Yahoo Finance

SparkCognition's Data Science Automation Platform to Offer Integration With Informatica's Data Management Solutions

AUSTIN, Texas, Feb. 19, 2020 /PRNewswire/ -- SparkCognition, a leading AI company, announced a partnership with enterprise cloud data management company Informatica to transform the data science process for companies. By combining Informatica's data management capabilities with SparkCognition's AI-powered data science automation platform, Darwin, users will benefit from an integrated end-to-end environment where they can gather and manage their data, create a custom and highly accurate model based on that data, and deploy the model to inform business decisions.


"There has never been a more critical time to leverage the power of data and today's leading businesses recognize that data not only enables them to stay afloat, but provides them with the competitive edge necessary to innovate within their industries," said Ronen Schwartz, EVP, global technical and ecosystem strategy and operations at Informatica. "Together with SparkCognition, we are helping users tackle some of the most labor and time-intensive aspects of data science in a user-friendly fashion that allows users of all skill levels to quickly solve their toughest business problems."

Informatica is the leading data integration and data management company, which offers users the ability to collect their data from even the most fragmented sources across hybrid enterprises, discover data, then clean and prepare datasets to create and expand data model features. SparkCognition is the world's leading industrial artificial intelligence company, and its Darwin data science automation platform accelerates the creation of end-to-end AI solutions to deliver business-wide outcomes. The partnership will allow users to seamlessly discover data, pull their data from virtually anywhere using Informatica's data ingestion capabilities, then input the data into the Darwin platform. Through the new integration, users will streamline workflows and speed up the model building process to provide value to their business faster.

"At SparkCognition, we're strong believers that this new decade will be dominated by model-driven enterprisescompanies who have embraced and operationalized artificial intelligence," said Dana Wright, Global Vice President of Sales at SparkCognition. "We recognize this shared mission with Informatica and are excited to announce our partnership to help companies solve their toughest business problems using artificial intelligence."

To learn more about Darwin, visit sparkcognition.com/product/darwin/

About SparkCognition:

With award-winning machine learning technology, a multinational footprint, and expert teams focused on defense, IIoT, and finance, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin™, DeepArmor, SparkPredict, and DeepNLP™. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50 and recognized three years in a row on CB Insights' AI 100 by visiting http://www.sparkcognition.com.

For Media Inquiries:

Cara Schwartzkopf, SparkCognition, cschwartzkopf@sparkcognition.com, 512-956-5491

View original content to download multimedia:http://www.prnewswire.com/news-releases/sparkcognition-partners-with-informatica-to-enable-customers-to-operationalize-artificial-intelligence-and-solve-problems-at-scale-301007328.html

SOURCE SparkCognition

See the rest here:
SparkCognition Partners with Informatica to Enable Customers to Operationalize Artificial Intelligence and Solve Problems at Scale - Yahoo Finance


Implementing artificial intelligence in the insurance industry: Cass breakfast briefing – City, University of London

Cass event addresses implementation, benefits and challenges of AI in insurance

How is artificial intelligence (AI) affecting the insurance industry, and what should insurance providers consider before implementing this technology? These were just two points of discussion during the "Artificial Intelligence and Insurance: Managing Risks and Igniting Innovation" breakfast event held at Cass Business School.

Professor of Strategy and Founding Director of the Digital Leadership Research Centre, Gianvito Lanzolla was joined by technology and insurance professionals to explore the feasibility of digitisation on the industry, as well as looking at how and when AI should be implemented.

Professor Lanzolla presented his joint research (carried out with Cass research student Lei Fang and Reader in Actuarial Science Dr Andreas Tsanakas) about the impact of digitisation on management attention in the banking and insurance industries, highlighting the ambivalent consequences of digitisation: on the one hand there could be scope for increased coordination, but this potentially comes at the expense of increased groupthink and systemic risk.

Santiago Restrepo, Director at global professional services consultancy BDO UK LLP, then spoke about how businesses should make the case for using AI. This includes assessing market needs, company objectives and potential scalability of the technology.

Founder and CEO of data insights provider Digital Fineprint, Bo-Erik Abrahamsson demonstrated the importance of data, and how raw data mining could be transformed into useful insights for insurance companies.

Paul Willoughby, Head of IT Strategy, Innovation and Architecture at insurance provider Beazley, then stressed the importance of only using AI where it was critically required and would "make the boat move faster", citing the example of anonymous chat bots as a piece of technology that does not necessarily deliver satisfactory insights or levels of customer service.

The session concluded with a Q&A session with audience members.

Reflecting on the discussion, Professor Lanzolla said:

"Artificial Intelligence can have many clear advantages for industries that are heavily reliant on data, such as insurance, but there are also considerations that need to be made before implementing the technology."

"Common considerations should include the implications of AI-related black-boxing, fault lines between new digital skills and legacy skills, loss of emotional engagement with an organisation, and risks to organisational stability when turbocharging some areas of an organisation with AI while leaving others lagging behind."

The event was introduced and chaired by Darren Munday, Partner at Internal Consulting Group (Global) and Visiting Fellow at the Digital Leadership Research Centre.

Find out more about upcoming events at Cass.

Link:
Implementing artificial intelligence in the insurance industry: Cass breakfast briefing - City, University of London


Why Bill Gates thinks gene editing and artificial intelligence could save the world – Yahoo News

Microsoft co-founder Bill Gates has been working to improve the state of global health through his nonprofit foundation for 20 years, and today he told the nation's premier scientific gathering that advances in artificial intelligence and gene editing could accelerate those improvements exponentially in the years ahead.

"We have an opportunity with the advance of tools like artificial intelligence and gene-based editing technologies to build this new generation of health solutions so that they are available to everyone on the planet. And I'm very excited about this," Gates said in Seattle during a keynote address at the annual meeting of the American Association for the Advancement of Science.

Such tools promise to have a dramatic impact on several of the biggest challenges on the agenda for the Bill & Melinda Gates Foundation, created by the tech guru and his wife in 2000.

When it comes to fighting malaria and other mosquito-borne diseases, for example, CRISPR-Cas9 and other gene-editing tools are being used to change the insects' genomes to ensure that they can't pass along the parasites that cause those diseases. The Gates Foundation is investing tens of millions of dollars in technologies to spread those genomic changes rapidly through mosquito populations.

Millions more are being spent to find new ways of fighting sickle-cell disease and HIV in humans. Gates said techniques now in development could leapfrog beyond the current state of the art for immunological treatments, which require the costly extraction of cells for genetic engineering, followed by the re-infusion of those modified cells in hopes that they'll take hold.

"For sickle-cell disease, the vision is to have in-vivo gene editing techniques, where you just do a single injection using vectors that target and edit these blood-forming cells, which are down in the bone marrow, with very high efficiency and very few off-target edits," Gates said. A similar in-vivo therapy could provide a functional cure for HIV patients, he said.

Bill Gates shows how the rise of computational power available for artificial intelligence is outpacing Moore's Law. (GeekWire Photo / Todd Bishop)

The rapid rise of artificial intelligence gives Gates further cause for hope. He noted that the computational power available for AI applications has been doubling every three and a half months on average, dramatically improving on the two-year doubling rate for chip density that's described by Moore's Law.
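To put those two rates side by side, here is a back-of-envelope calculation assuming clean exponential doubling in both cases; the five-year horizon is chosen purely for illustration.

```python
# Compare growth implied by a 3.5-month doubling vs. a 2-year doubling.
def growth(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

years = 5
ai = growth(years, 3.5 / 12)   # AI compute: doubling every 3.5 months
moore = growth(years, 2.0)     # Moore's Law: doubling every 2 years
print(f"After {years} years: AI compute x{ai:,.0f}, Moore's Law x{moore:.1f}")
# After 5 years: AI compute roughly x145,000, Moore's Law x5.7
```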

One project is using AI to look for links between maternal nutrition and infant birth weight. Other projects focus on measuring the balance of different types of microbes in the human gut, using high-throughput gene sequencing. The gut microbiome is thought to play a role in health issues ranging from digestive problems to autoimmune diseases to neurological conditions.

"This is an area that needed these sequencing tools and the high-scale data processing, including AI, to be able to find the patterns," Gates said. "There's just too much going on there if you had to do it, say, with paper and pencil to understand the 100 trillion organisms and the large amount of genetic material there. This is a fantastic application for the latest AI technology."

Similarly, organs on a chip could accelerate the pace of biomedical research without putting human experimental subjects at risk.

"In simple terms, the technology allows in-vitro modeling of human organs in a way that mimics how they work in the human body," Gates said. "There's some degree of simplification. Most of these systems are single-organ systems. They don't reproduce everything, but some of the key elements we do see there, including some of the disease states, for example, with the intestine, the liver, the kidney. It lets us understand drug kinetics and drug activity."

Bill Gates explains how gene-drive technology can cause genetic changes to spread rapidly in mosquito populations. (GeekWire Photo / Todd Bishop)


The Gates Foundation has backed a number of organ-on-a-chip projects over the years, including one experiment that's using lymph-node organoids to evaluate the safety and efficacy of vaccines. At least one organ-on-a-chip venture based in the Seattle area, Nortis, has gone commercial thanks in part to Gates' support.

High-tech health research tends to come at a high cost, but Gates argues that these technologies will eventually drive down the cost of biomedical innovation.

He also argues that funding from governments and nonprofits will have to play a role in the world's poorer countries, where those who need advanced medical technologies essentially have no voice in the marketplace.

"If the solution of the rich country doesn't scale down, then there's this awful thing where it might never happen," Gates said during a Q&A with Margaret Hamburg, who chairs the AAAS board of directors.

But if the acceleration of medical technologies does manage to happen around the world, Gates insists it could have repercussions for the world's other great challenges, including the growing inequality between rich and poor.

"Disease is not only a symptom of inequality," he said, "but it's a huge cause."

Other tidbits from Gates' talk:

Read Gates' prepared remarks in a posting to his Gates Notes blog, or watch the video on AAAS's YouTube channel.

See the original post here:
Why Bill Gates thinks gene editing and artificial intelligence could save the world - Yahoo News


The U.S. Is Very Worried About Bitcoin, And It's Finally Doing Something About It – Forbes

Bitcoin, cryptocurrencies, blockchain, decentralization, China's digital yuan, Facebook's libra: the U.S. is understandably worried about the dominance of the almighty dollar.

Last year, U.S. president Donald Trump slammed bitcoin as based on "thin air," while his Treasury secretary Steven Mnuchin branded bitcoin a "national security threat."

Now, the U.S. has admitted bitcoin and cryptocurrency could undermine the dollar's status as the world's reserve currency, and it wants to find out exactly how bad that could be for the country, its economy, and its security.

The rise of bitcoin and cryptocurrencies has caused some to fear the dominance of the U.S. dollar might be under threat.

"Many cryptocurrency enthusiasts predict that either a global cryptocurrency or a national digital currency could undermine the U.S. dollar," the U.S. Office of the Director of National Intelligence wrote in a job listing earlier this month, calling for two researchers to evaluate the impact of the U.S. dollar losing its status as the world reserve currency.

"If either of these scenarios or others come to pass, the U.S. would lose both its status in the world and its global authorities."

The two roles, one for a postdoc Ph.D. graduate and one for a research assistant employed by a U.S. university or government laboratory, are with the U.S. Intelligence Community Postdoctoral Research Fellowship Program through the Department of Energy's Oak Ridge Institute for Science and Technology.

Back in 2018, the Department of Energy's Oak Ridge Institute for Science and Technology conducted research that found that the creation of new bitcoin, along with the smaller cryptocurrencies ethereum, litecoin and monero, used more energy than mineral mining to produce the same market value.

The Department of Energy's Oak Ridge Institute for Science and Technology did not respond to a request for comment.

"There are many advantages for U.S. national security to have the U.S. dollar as the world reserve currency," the job post, which has a deadline of the February 28, read, pointing to the combat of financial crimes, the prevention of terrorism and the development of weapons of mass destruction, the ability of the U.S. to sanction other countries, cause financial instability in global markets.

"The U.S. maintains international dominance in no small part due to its financial power and authorities."

Meanwhile, calls for the U.S. to begin development of a so-called digital dollar have been growing louder over recent months.

Christopher Giancarlo, former chairman of the Commodity Futures Trading Commission, recently set up the Digital Dollar Foundation to work on the design and potential framework of a digital dollar.

The bitcoin price, which has failed to return to its all-time highs set in late 2017 despite climbing around 50% since the beginning of the year, was given a substantial boost in the first half of last year by Facebook's plans for a bitcoin-like rival.

The bitcoin price has soared in recent years, making bitcoin easily the last decade's best investment.

Many have long expected governments to eventually try to undermine bitcoin's network to halt its adoption, though bitcoin's decentralized nature makes it remarkably resilient.

"We can win a major battle [with governments] in the arms race and gain a new territory of freedom for several years," bitcoin's mysterious creator Satoshi Nakamoto wrote in 2008. "Governments are good at cutting off the heads of a centrally controlled networks like Napster, but pure [peer-to-peer] networks like Gnutella and Tor seem to be holding their own."

Bitcoin now stands with these networks in resistance to government control.

Originally posted here:
The U.S. Is Very Worried About BitcoinAnd Its Finally Doing Something About It - Forbes


Bitcoin Bulls Back in the Driving Seat as Price Crosses $10K – CoinDesk

View

Bitcoin (BTC) returned into the five-figure zone on Wednesday, reviving the bullish case and putting recent highs near $10,500 back on the radar.

At press time, bitcoin is trading at $10,139, representing a 4.48 percent gain on a 24-hour basis, as per CoinDesk's Bitcoin Price Index.

However, the top cryptocurrency by market value was looking weak 24 hours ago, having breached the 2020 rising trendline support at $9,700. The subsequent sell-off, however, ran into bids near $9,600, following which prices charted a near 90-degree rise to $10,290 during the U.S. trading session.

Tuesday's spike marked an end of the pullback from recent highs near $10,500 and validated the positive shift in long-term sentiment highlighted by the golden crossover, the bull cross of the 50- and 200-day moving averages.
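For the mechanics: a golden cross is simply the 50-day moving average closing above the 200-day one. The sketch below is a generic pandas illustration, not CoinDesk's charting code.

```python
# Find the dates on which a 50/200-day golden cross occurs.
import pandas as pd

def golden_cross_dates(prices: pd.Series) -> pd.DatetimeIndex:
    """`prices` is a Series of daily closes indexed by date."""
    above = prices.rolling(50).mean() > prices.rolling(200).mean()
    # A cross is the first day the 50-day MA sits above the 200-day MA.
    crosses = above & ~above.shift(1, fill_value=False)
    return prices.index[crosses]
```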

As a result, bigger gains could be in the offing in the short term, more so as the price of gold, a classic safe haven asset, is again rising.

The yellow metal jumped 1.32 percent on Tuesday, its biggest single-day gain since Jan. 3, on haven demand amid losses in the U.S. stock markets. Investors shunned risk after Apple warned it does not expect to meet its March-quarter revenue guidance due to the coronavirus outbreak's effect on suppliers in China.

Bitcoin has increasingly moved in tandem with gold so far this year. Its one-month correlation with gold strengthened to 0.70 in January from December's -0.12, according to cryptocurrency exchange Kraken's January volatility report.
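Kraken's exact methodology isn't given here; a standard way to produce such a figure is a rolling one-month (roughly 30-day) Pearson correlation of daily returns, along these lines.

```python
# Rolling correlation of daily returns between two price series.
import pandas as pd

def rolling_corr(btc: pd.Series, gold: pd.Series, window: int = 30) -> pd.Series:
    """`btc` and `gold` are daily closes over the same date index."""
    return btc.pct_change().rolling(window).corr(gold.pct_change())
```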

Gold is currently trading above $1,600 per ounce and appears on track to test the six-year high of $1,611 reached on Jan. 8.

Daily chart

Bitcoin jumped 5 percent on Tuesday, keeping the 2020 rising trendline support intact and confirming another bullish higher low at $9,467 (the Feb. 17 low), a sign of continuation of the rally from January lows near $6,850.

Additionally, prices closed well above $10,050, the high of Sunday's doji candle, confirming a bullish breakout from a period of indecisive price action.

With the bulls back in the driver's seat, a re-test of the recent high of $10,500 looks likely.

4-hour chart

Bitcoin is still trading in an expanding descending channel on the four-hour chart. A breakout looks likely as the relative strength index has already violated the descending trendline and is pointing north.

Bearish scenario

If the cryptocurrency again finds acceptance under $10,000, prices may revisit the former hurdle-turned-support of $9,825 (marked by arrow) on the hourly chart (above left).

A violation there would shift the focus to the neckline support of the potential head-and-shoulders pattern on the four-hour line chart. At press time, that key support is located at $9,584. A break lower could discourage buyers, leading to a deeper slide toward $9,000.

Disclosure: The author holds no cryptocurrency at the time of writing.

The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.

Read the original here:
Bitcoin Bulls Back in the Driving Seat as Price Crosses $10K - CoinDesk
