
Epiq expands company-wide initiative to accelerate the deployment of artificial intelligence for clients globally – GlobeNewswire

NEW YORK, Nov. 20, 2019 (GLOBE NEWSWIRE) -- Epiq, a global leader in the legal services industry, today announced the expansion of its artificial intelligence (AI) enabled eDiscovery and document review services through a combination of partnerships, proprietary technology, and increased service readiness globally.

"AI provides our clients with a key to unlock the potential value, and decrease the risks and costs, inherent in the massive and growing volume of business data," said Roger Pilc, president and general manager, Epiq legal solutions. Already the leading industry practitioner of AI-enabled technology-assisted review services, Epiq has expanded the global reach of its AI services and trained staff.

As part of its expanded AI initiative, Epiq announced today it is rolling out the latest versions of NexLP, Brainspace, and Relativity Analytics globally, across its industry-leading global data centers. In addition to working closely with select industry AI technology leaders, Epiq also leverages data scientists and proprietary algorithms to develop new products with transformative capabilities for its clients.

To thoroughly enable the application of AI in its services, Epiq is certifying over 300 of its document review, client service, solution architect, and operations team members through detailed and proprietary training programs.

With this expansion of capability, Epiq can effectively serve its global client base, including the management of multi-jurisdictional projects. Epiq can now assure the consistent ability to manage AI-supported workflows across languages such as Mandarin, Cantonese, Korean and Japanese.

"Epiq brings the deepest human knowledge to the art and science of data review, working closely with the industry's leading analytics practitioners," said Eric Crawley, vice president of review and analytics. "Effectively mining sensitive data, including privileged information, requires a robust combination of human and machine intelligence. We are excited that our AI competency also provides an advantage in areas of information governance and data breach, allowing Epiq to provide broader value to our clients."

Epiq has deep experience applying advanced analytics to eDiscovery matters for the benefit of its clients, using AI in over 1,000 matters in the past year, spanning an array of litigated matters, regulatory reviews, and internal investigations. Recent client engagements include leveraging AI for a leading international bank, a national healthcare provider, and one of America's largest cities. The matter type and project size differed greatly in each case, but those clients sought the cost, time, and quality advantages that only Epiq can offer.

About Epiq

Epiq, a global leader in the legal services industry, takes on large-scale, increasingly complex tasks for corporate counsel, law firms, and business professionals with efficiency, clarity, and confidence. Clients rely on Epiq to streamline the administration of business operations, class action and mass tort, court reporting, eDiscovery, regulatory, compliance, restructuring, and bankruptcy matters. Epiq subject-matter experts and technologies create efficiency through expertise and deliver confidence to high-performing clients around the world. Learn more at https://www.epiqglobal.com.


How To Get Your Résumé Past The Artificial Intelligence Gatekeepers – Forbes


By Jeff Mills, Director, Solution Marketing at SAP SuccessFactors

It's no longer a secret that getting past the robot résumé readers to a human, let alone landing an interview, can seem like trying to get in to see the Wizard of Oz. As the résumés of highly qualified applicants are rejected by the initial automated screening, job seekers suddenly find themselves having to learn résumé submission optimization to please the algorithms and beat the bots for a meeting with the Wizard.

Many enterprise businesses use Artificial Intelligence (AI) and machine learning tools to screen résumés when recruiting and hiring new employees. Even small to midsize companies that use recruiting services are relying on whatever algorithm or search-driven automated résumé screening those services utilize.

Why don't human beings read résumés anymore? Well, they do, but usually later in the process, after the initial shortlist by the bots. Unfortunately, desirable soft skills and unquantifiable experience can go unnoticed by even the best-trained algorithms. So far, the only solution is human interaction.

Despite the view from outside the organization, HR has good reason for using automated processes for screening rsums. To efficiently manage the hundreds or even thousands of applications submitted for one position alone, companies have adopted automated AI screening tools to not only save time and human effort but also to find qualified and desirable candidates before they move on or someone else gets to them first.

Nobody's ever seen the Great Oz!

The wealth of impressive time-saving and turnover-reduction metrics equates to success and big ROI for organizations that automate recruiting and hiring processes. Most tales of headaches and frustration go untold for the many thousands of qualified applicants whose résumés somehow failed to tickle the algorithm just right.

This trend is changing, however, as the bias built into AI and machine learning algorithms, unintentionally or otherwise, becomes more glaringly apparent and undeniable. Sure, any new technology will have its early adopters and zealous promoters and apologists, as well as the naysayers and skeptics. But when that technology shows promise to change industry and increase profit, criticism can be drowned out and ignored.

The problem of bias in AI is not a new concern. For several years, scientists and engineers have warned that because AI is created and developed by humans, the likelihood of bias finding its way into the program code is high if not certain. And the time to think about that and address it as much as possible is during the design, development, and testing process. Blind spots are inevitable. Once buy-in is achieved and business ecosystems integrate that technology, the recursive and reciprocal influences of technology, commerce, and society can make changing course slow and/or costly.

Consider the recent trouble Amazon found itself in for some of its hiring practices when it was determined that its AI recruiting tool was biased against women. AI in itself is not biased; it performs only as it is instructed and adapts to new information. Rather, the bias comes from the way human beings program and develop the way machines learn and execute commands. And if the outputs of the AI are taken at face value and the system is never refined by ongoing human interaction, it can never adapt.

Bias enters in a few ways. One source is rooted in the data sets used to train algorithms for screening candidates. Other sources of bias enter when certain criteria are privileged, such as growing up in a certain area, attending a top university, or certain age preferences. By using the data for existing employees as a model for qualified candidates, the screening process can become a kind of feedback loop of biased criteria.
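To make the feedback-loop risk concrete, here is a minimal illustrative sketch in Python. All data and feature names are synthetic and hypothetical, not any vendor's actual system: a screener trained on past hires never sees group membership, yet it learns to penalize a proxy feature correlated with it.

```python
# Hypothetical demo: training on historically skewed hires reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (never shown to model)
skill = rng.normal(0.0, 1.0, n)      # true qualification signal
# Past decisions favored group A independently of skill:
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

# The model sees skill plus a proxy correlated with group (e.g. a neighborhood code).
proxy = group + rng.normal(0.0, 0.3, n)
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Audit on fresh applicants whose skill is identically distributed across groups:
g_new = rng.integers(0, 2, n)
x_new = np.column_stack([rng.normal(0.0, 1.0, n), g_new + rng.normal(0.0, 0.3, n)])
picked = model.predict(x_new)
for g in (0, 1):
    print(f"group {g}: selection rate {picked[g_new == g].mean():.2f}")
```

Run as-is, the audit prints a noticeably lower selection rate for group B even though skill was drawn from the same distribution for both groups, which is exactly the feedback loop described above.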

A few methods and practices can help correct or avoid this problem. One is to use broad swaths of data, including data from outside your company and even your industry. Also, train algorithms on a continual basis, incorporating new data and monitoring algorithm function and results. Set benchmarks for measuring data quality and have humans screen résumés as well. Active management of automated recruiting and screening solutions can go a long way in minimizing bias, as well as reducing the number of qualified candidates who get their résumés rejected.

Bell out of order, please knock

As mentioned earlier, change takes time once these processes are in place and embedded. Until it is widely accepted that problems exist, and steps are taken to address them, the best that job seekers can do is adapt.

With all of the possible ways that programmers' biases influence the bots screening résumés, what can people applying for jobs do to improve their chances of getting past the AI gatekeepers?

The good news is that these moves will not only help eliminate false negatives and keep your résumé out of the abyss, but they are likely to make things easier for the human beings it reaches.

Well, why didn't you say so? That's a horse of a different color!

So, what are they looking for? How do you beat the bots?

In the big picture, AI is still young, and we are working out the kinks and bugs not only at a basic code and function level, but also on the human level. We are still learning how to navigate and account for our roles and responsibilities in the overall ecosystem of human-computer interaction.

The bottom line is that AI, machine learning, and automation can eliminate bias or reinforce it. That separation may never be pure, but it's an ideal that is not only worth striving for, it is absolutely necessary to work toward. The impact and consequences of our choices today will leave long-lasting effects on every area of human life.

And the bright side is that we're already beginning to see how those theoretical concerns can play out in the real world, and we have an opportunity to improve a life-changing technological development whose reach and impact we can still only dimly imagine. In the meantime, job seekers looking to beat the bots are not entirely powerless, but can do what human beings have done well for ages: adapt.



Artificial intelligence won’t kill journalism or save it, but the sooner newsrooms buy in, the better – Nieman Journalism Lab at Harvard

The robots aren't taking over journalism jobs, but newsrooms should adopt artificial intelligence technologies and accept that the way news is produced and consumed is changing, according to a new report by Polis, the media think-tank at the London School of Economics and Political Science.

In its global survey on journalism and artificial intelligence, New Powers, New Responsibilities, researchers asked 71 news organizations from 32 countries if and how they currently use AI in their newsrooms and how they expect the technology to impact the news media industry. (Since what exactly constitutes AI can be fuzzy, the report defines it as a collection of ideas, technologies, and techniques that relate to a computer system's capacity to perform tasks normally requiring human intelligence.)

Right now, newsrooms mostly use AI in three areas: news gathering, production, and distribution. Of those surveyed, only 37 percent have an active AI strategy. The survey found that while newsrooms were interested in AI for efficiency and competitive purposes, they said they were mostly motivated by the desire to help the public cope with a world of news overload and misinformation and to connect them in a convenient way to credible content that is relevant, useful and stimulating for their lives.

"The hope is that journalists will be algorithmically turbo-charged, capable of using their human skills in new and more effective ways," Polis founding director Charlie Beckett said in the report. "AI could also transform newsrooms from linear production lines into networked information and engagement hubs that give journalists the structures to take the news industry forward into the data-driven age."

While most respondents said that AI would be beneficial as long as newsrooms stuck to their ethical and editorial policies, they noted that budget cuts as a result of implementing AI could lower the quality of news produced. They were also concerned about algorithmic bias and the role that technology companies will play in journalism going forward.

"AI technologies will not save journalism or kill it off," Beckett writes. "Journalism faces a host of other challenges such as public apathy and antipathy, competition for attention, and political persecution... Perhaps the biggest message we should take from this report is that we are at another critical historical moment. If we value journalism as a social good, provided by humans for humans, then we have a window of perhaps 2-5 years, when news organisations must get across this technology."

Here's a video summary of the report:

And here is a brief response to the report from Johannes Klingebiel of Süddeutsche Zeitung.


Highlights: Addressing fairness in the context of artificial intelligence – Brookings Institution

When society uses artificial intelligence (AI) to help build judgments about individuals, fairness and equity are critical considerations. On Nov. 12, Brookings Fellow Nicol Turner-Lee sat down with Solon Barocas of Cornell University, Natasha Duarte of the Center for Democracy & Technology, and Karl Ricanek of the University of North Carolina Wilmington to discuss artificial intelligence in the context of societal bias, technological testing, and the legal system.

Artificial intelligence is an element of many everyday services and applications, including electronic devices, online search engines, and social media platforms. In most cases, AI provides positive utility for consumers, such as when machines automatically detect credit card fraud or help doctors assess health care risks. However, there is a smaller percentage of cases, such as when AI helps inform decisions on credit limits or mortgage lending, where technology has a higher potential to augment historical biases.

Policing is another area where artificial intelligence has seen heightened debate, especially when facial recognition technologies are employed. When it comes to facial recognition and policing, there are two major points of contention: the accuracy of these technologies and the potential for misuse. The first problem is that facial recognition algorithms could reflect biased input data, which means that their accuracy rates may vary across racial and demographic groups. The second challenge is that individuals can use facial recognition products in ways other than their intended use, meaning that even if these products receive high accuracy ratings in lab testing, any misapplication in real-life police work could wrongly incriminate members of historically marginalized groups.
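The accuracy concern is usually checked with a disaggregated evaluation: report error rates per demographic group rather than one overall score. A minimal sketch follows; the labels, predictions, and group tags below are invented placeholders.

```python
# Per-group accuracy: one aggregate number can hide a large gap between groups.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # ground-truth matches
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # algorithm's decisions
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")
for g in np.unique(groups):
    mask = groups == g
    print(f"group {g}: accuracy {(y_true[mask] == y_pred[mask]).mean():.2f}")
```

In this toy data the aggregate score of 0.75 masks perfect accuracy for group a and coin-flip accuracy for group b, which is the pattern the critics of facial recognition point to.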

Technologists have narrowed down this issue by creating a distinction between facial detection and facial analysis. Facial detection describes the act of identifying and matching faces in a database, along the lines of what is traditionally known as facial recognition. Facial analysis goes further to assess physical features such as nose shape (or facial attributes) and emotions (or affective computing). In particular, facial analysis has raised civil rights and equity concerns: an algorithm may correctly determine that somebody is angry or scared but might incorrectly guess why.

When considering algorithmic bias, an important legal question is whether an AI product causes a disproportional disadvantage, or disparate impact, on protected groups of individuals. However, plaintiffs often face broad challenges in bringing anti-discrimination lawsuits in AI cases. First, disparate impact is difficult to detect; second, it is difficult to prove. Plaintiffs often bear the burden of gathering evidence of discrimination, a challenging endeavor for an individual when disparate impact often requires aggregate data from a large pool of people.

Because algorithmic bias is largely untested in court, many legal questions remain about the application of current anti-discrimination laws to AI products. For example, under Title VII of the 1964 Civil Rights Act, private employers can contest disparate impact claims by demonstrating that their practices are a business necessity. However, what constitutes a business necessity in the context of automated software? Should a statistical correlation be enough to assert disparate impact by an automated system? And how, in the context of algorithmic bias, can a plaintiff feasibly identify and prove disparate impact?
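One common first-pass screen in the Title VII setting is the EEOC's "four-fifths" rule of thumb: compare selection rates across groups and flag a ratio below 0.8. A worked sketch with hypothetical counts (not figures from the event):

```python
# Four-fifths rule: protected group's selection rate vs. majority group's rate.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_majority = selection_rate(48, 100)    # hypothetical majority-group outcomes
rate_protected = selection_rate(30, 100)   # hypothetical protected-group outcomes

impact_ratio = rate_protected / rate_majority
print(f"impact ratio: {impact_ratio:.2f}")  # 0.62 here
if impact_ratio < 0.8:
    print("flag: potential disparate impact; business-necessity defense at issue")
```

The ratio is only a heuristic, which is precisely the open question the panel raises: whether such a statistical showing should suffice to assert disparate impact by an automated system.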

Algorithmic bias is a multi-layered problem that requires a multi-layered solution, which may include accountability mechanisms, industry self-regulation, civil rights litigation, or original legislation. Earlier this year, Sen. Ron Wyden (D-OR), Sen. Cory Booker (D-NJ), and Rep. Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act, which would require companies to conduct algorithmic risk assessments but allow them to choose whether or not to publicize the results. In addition, Rep. Mark Takano (D-CA) introduced the Justice in Forensic Algorithms Act, which addresses the transparency of algorithms in criminal court cases.

However, this multi-layered solution may require stakeholders to first address a more fundamental question: what is the goal that we're trying to solve? For example, to some individuals, the possibility of inaccuracy is the biggest challenge when using AI in criminal justice. But to others, there are certain use cases where AI does not belong, such as in the criminal justice or national security contexts, regardless of whether or not it is accurate. Or, as Barocas describes these competing goals: "when the systems work well, they're Orwellian, and when they work poorly, they're Kafkaesque."


Colorado at the forefront of AI and what it means for jobs of the future – The Denver Channel

LITTLETON, Colo. -- A group of MIT researchers visited Lockheed Martin this month for a chance to talk about the future of artificial intelligence and automation.

Liz Reynolds is the executive director of the MIT Task Force on the Work of the Future and says her job is to focus on the relationship between new technologies and how they will affect jobs.

"Colorado is at the forefront of thinking about these things," Reynolds said. "All jobs will be affected by this technology."

Earlier this year, U.S. Sen. Michael Bennet, D-Colo., created an artificial intelligence strategy group to take a closer look at how AI is being used in the state and how that will change in the future.

"We need a national strategy on AI that galvanizes innovation, plans for the changes to our workforce, and is clear-eyed about the challenges ahead. And while we're seeing progress, workers and employers can't wait on Washington," said Sen. Bennet in a statement. "Colorado is well-positioned to shape those efforts, which is why we've made it a priority to bring together Colorado leaders in education, business, nonprofits, labor, and government to think through how we can best support and train workers across Colorado so they are better prepared for a changing economy."

MIT recently released a 60-page report detailing some of the possibilities and challenges with AI and automation.

One of the major challenges the group is considering is how the technology will affect vulnerable workers, particularly people who do not have a four-year degree.

The MIT team is looking for ways to train those workers to better prepare them for the changes.

"We're not trying to replace a human; that's not something you're ever going to do with eldercare, for example. You're going to be looking for ways to use this technology to help," Reynolds said.

Despite recent advances in AI, Reynolds believes the changes to the workforce will happen over a matter of decades, not years.

"We think it's going to be a slower process, and it's going to give us time to make the changes that we need institutionally," she said.

Beyond that, projections suggest that, with an aging workforce, there will be a scarcity of people to employ in the future, and technology can help fill some of those gaps.

The bigger question is how to ensure that workers can get a quality job that results in economic security for their families.

"I think there's really an opportunity for us to see technology not as a threat but, really, as a tool," Reynolds said. "If we can use the right policies and institutions to support workers in this transition, then we could really be working toward something that works for everyone."

Lockheed Martin has been using artificial intelligence and automation in its space program for years. The company's scientists rely on automation to manage and operate spacecraft on missions.

However, the technology is also being applied closer to home. The AI Lockheed Martin has created is already being applied to people's day-to-day lives, from GPS navigation to banking. Now, the company is looking for more ways to make use of it.

"Even though it's been around for some time, we want to think about how we can use it in different, emerging ways and apply it to other parts of our business as well," said Whitley Poyser, the business transformation acting director for Lockheed Martin Space.

One area in particular where Lockheed Martin is looking to apply the technology is manufacturing, not only to streamline processes but also to use the data the machines are already collecting to predict potential issues and better prepare for them.

Poyser understands that there are some fears about this technology taking over jobs, but she doesn't believe that's the case.

"It's not taking the job away; it's just allowing our employees to think differently and think about elevating their skills and their current jobs," Poyser said. "It's actually less of a fear to us and more of an opportunity."

The true potential of artificial intelligence is only beginning to be unleashed for companies like Lockheed Martin. Reynolds hopes that anticipating the possibilities and challenges now will help the country better prepare for the changes in the decades to come.


Scientists used IBM Watson to discover an ancient humanoid stick figure – Business Insider

Artificial intelligence has helped archaeologists uncover an ancient lost work of art.

The Nazca Lines in Peru are ancient geoglyphs, images carved into the landscape. First formally studied in 1926, they depict people, animals, plants, and geometric shapes. The formations vary in size, with some of the biggest running up to 30 miles long. Their exact purpose is unknown, although some archaeologists think they may have had religious or spiritual significance. Local guides believe the lines relate to sources of water.

Some Nazca lines span miles of Peruvian countryside. (Flickr/Christian Haugen)

New geoglyphs are still being discovered and can be hard to spot due to changes in the landscape, with natural erosion and urbanization breaking them up.

A research team from Yamagata University recently announced it had discovered 142 new Nazca formations, including images of birds, monkeys, fish, snakes, and foxes.

The team partnered with IBM to try to train its deep-learning platform Watson to look for hard-to-find geoglyphs.

They fed the AI with aerial images to see if it could spot any more Nazca outlines. Watson threw up a few candidates, from which the researchers picked the most promising. Sure enough, their field work confirmed the AI had found an ancient Nazca artwork.
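The article doesn't detail the model, but the general pattern it describes, training on labeled aerial tiles and ranking unlabeled tiles for human follow-up, looks something like this hedged Python sketch (the dataset pipelines are hypothetical placeholders, not IBM's actual Watson workflow):

```python
# Sketch: binary classifier over aerial tiles; top-scoring tiles go to field survey.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(tile contains a geoglyph)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# `labeled_tiles` and `candidate_tiles` are hypothetical tf.data.Dataset objects.
# model.fit(labeled_tiles, epochs=10)
# scores = model.predict(candidate_tiles)  # highest scores -> promising candidates
```

The key step is the last one: the model only proposes candidates, and as the article notes, it was the researchers' field work that confirmed the find.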

The find was a relatively small depiction of a humanoid figure spanning just 16 feet. The researchers estimate the figure dates from roughly 100 BC to AD 100, making it roughly 2,000 years old.

The project's success has prompted Yamagata University to announce a longer-term partnership with IBM, which will create a full location map of the geoglyphs to help future archaeologists.


The Mark Foundation Funds Eight Projects at the Intersection of Artificial Intelligence and Cancer Research – BioSpace

November 19, 2019

NEW YORK Can 3D renderings of pancreatic cancer tumors provide insights that improve the prognosis for one of the deadliest cancer types? Can machine learning help us better predict which patients will benefit from immunotherapies, which currently have a 30-50 percent success rate? And can a smartphone app be a tool in early cancer detection?

These are just some of the questions researchers are looking to answer with funding provided by The Mark Foundation for Cancer Research (MFCR). Details about each project funded in today's announcement are below.

Most of the research projects in this funding round come out of collaborations formed at a workshop jointly held by MFCR and Carnegie Mellon University in April 2019. Attendees came from many different areas of research including machine learning, computational biology, digital pathology, biomedical engineering, systems biology, and clinical oncology. More on this workshop is here.

"Bringing together scientists from varying disciplines is critical to tackling the toughest challenges in cancer research," said Ryan Schoenfeld, PhD, Vice President for Scientific Research at The Mark Foundation. "We're glad to see promising collaborations emerge from our workshop and to be the first foundation with such a robust portfolio at the intersection of AI and cancer research. We're just at the beginning stage of harnessing the incredible power of machine learning in the fight against cancer, and we're excited to see where these projects take us."

Michael Schatz, PhD of Johns Hopkins University and Eliezer Van Allen, MD of the Dana-Farber Cancer Institute joined forces in one such collaboration. Michael and Eli will use an advanced genetic sequencing technology specifically designed to detect structural variations (SVs), a class of DNA mutations that are not typically detected by standard DNA sequencing technologies. Their approach could uncover harmful SVs in families that have high rates of unexplained cancer.

"We are incredibly grateful to The Mark Foundation for supporting this research and promoting our new collaboration," said Schatz. "The new technologies will let us examine these families with unprecedented resolution, and we expect to find tens of thousands of variants per patient that have never been observed before. Together, we will evaluate this new class of cancer risk variants and aim to improve cancer prevention, improve cancer therapy, and ultimately save lives."

Researchers from the following institutions/organizations have received funding: 4YouandMe; Carnegie Mellon University; Children's Hospital of Philadelphia; Dana-Farber Cancer Institute; Institute for Systems Biology; Johns Hopkins University; SickKids Research Institute/University of Toronto; University of California, Davis; and University of Pennsylvania.

Descriptions of the AI/cancer research project funding announced today are below.

***

About the Mark Foundation for Cancer Research

The Mark Foundation for Cancer Research is dedicated to accelerating cures for cancer by integrating discoveries in biology with innovative technology. Launched in 2017, The Mark Foundation pursues its mission by funding a global portfolio of groundbreaking research carried out by individual investigators, multi-investigator teams, and inter-institutional collaborations. Since its launch in 2017, the Foundation has awarded over $70 million in grant funding to over 50 institutions across 19 U.S. states and 4 countries.

Recognizing the obstacles that can prevent scientific advances from improving patient outcomes, The Mark Foundation maintains a nimble, high-impact approach to funding research that encompasses grants for basic and translational cancer research, as well as venture philanthropy investment in companies that bridge the gap between the bench and the bedside. To learn more about the work of The Mark Foundation for Cancer Research, visit: https://themarkfoundation.org/.

***

Title: Using Smartphones and Wearables for Early Detection of Central Nervous System (CNS) Tumors

Researchers: Stephen Friend, MD, PhD, 4YouandMe (Principal); Anna Goldenberg, PhD, The Hospital for Sick Children (SickKids) Research Institute / University of Toronto; Marzyeh Ghassemi, PhD, Vector Institute/University of Toronto; Luca Foschini, PhD, Evidation; et al

Description: For the 30,000 patients diagnosed with brain and central nervous system cancers each year, most of the provision of care rests on subjective, discontinuous data collected at discrete time intervals. To address this limitation, a coalition of researchers led by Dr. Stephen Friend of 4YouandMe will test the feasibility of using smart devices and health tracking apps to detect symptoms in brain cancer patients. They will conduct an observational study of 100 high-risk patients and use the wealth of data collected to develop a model of disease progression. Using each patient's own smartphone and small wearable devices in conjunction with health apps that collect, compile, and transmit data directly to clinicians, the coalition will be able to semi-continuously monitor changes in everything from gait to mental health to sleep. They expect analysis of the data will reveal patterns of symptoms that are missed by traditional patient monitoring. These insights could enable doctors to detect tumors early, before symptoms become noticeably severe, and to closely monitor and adjust treatments in patients with advanced disease.

***

Title: Using Blood Biomarkers to Aid App-Based Cancer Monitoring

Researcher: James Heath, PhD, Institute for Systems Biology

Description: The data collected by smartphones and wearables will be even more powerful when combined with information about patients' molecular markers of cancer. Toward this goal, researchers in the Heath lab at the Institute for Systems Biology are collecting and analyzing in-depth measurements of blood biomarkers in the patients monitored for symptom detection using smart technologies by Stephen Friend and colleagues. Working together, both teams expect that tracking the trajectory of these physical markers will allow them to calibrate the output from devices and lead to algorithms that optimally connect both types of data for early screening and surveillance of high-risk patients.

***

Title: Using Artificial Intelligence to Predict Interactions between Immune and Tumor Cells

Researcher: Alexander Baras, MD, PhD, Johns Hopkins University

Description: Understanding how immune cells and tumor cells will interact is critically important for forecasting patient outcomes for immunotherapy. The Baras lab at Johns Hopkins University is designing artificial intelligence-based algorithms that can identify how mutations are most likely to be presented by tumor cells given a patient's unique genetic make-up and predict how well immune cells will be able to detect them. These algorithms may lead to improved prognoses of individual patients and a better understanding of how each patient will respond to treatment.

***

Title: Artificial Intelligence Assisted MRI Screening for Pediatric Cancers

Researchers: Anna Goldenberg, PhD, SickKids Research Institute/University of Toronto (Principal); Andrea Doria, MD, Hospital for Sick Children; Casey S. Greene, PhD, University of Pennsylvania; L. J. States, MD, Children's Hospital of Philadelphia

Description: Whole body magnetic resonance imaging (wbMRI) is an important tool for screening children with a genetic predisposition for cancer. However, the images can be difficult to interpret, and early stage cancers are often missed. The Goldenberg lab at the SickKids Research Institute in Toronto, in collaboration with researchers at the University of Pennsylvania and Children's Hospital of Philadelphia, is developing an AI-based tool that improves wbMRI screening for at-risk children and overcomes current limitations to using AI in the pediatric cancer setting. They expect that this tool will enable more sensitive early detection of pediatric cancers at medical institutions worldwide.

***

Title: Detailed, Automated 3D Imaging of Pancreatic Cancers and Precancers

Researchers: Richard Levenson, MD, University of California, Davis (Co-Principal); Farzad Fereidouni, PhD, University of California, Davis (Co-Principal); Ralph Hruban, MD, Johns Hopkins University; Denis Wirtz, PhD, Johns Hopkins University; Laura Wood, MD, PhD, Johns Hopkins University; Pei-Hsun Wu, PhD, Johns Hopkins University

Description: Pancreatic cancer is an aggressive disease that readily spreads to other organs, particularly the liver. Researchers in the Fereidouni and Levenson labs at UC Davis have teamed up with leading experts in pancreatic cancer at Johns Hopkins University to develop a fully automated 3D microscopy technique that can be used to image pancreatic tumors and nearby blood vessels, enabling closer study of the invasion of small veins into tumors. They expect that this will lead to a better understanding of the poor prognosis for pancreatic cancer patients and provide cancer researchers with a precise and affordable tool to study tumor anatomy.

***

Title: Detecting Novel Cancer Mutations That Change the Genome's 3D Structure

Researchers: Jian Ma, PhD, Carnegie Mellon University (Principal); Eliezer Van Allen, MD, and Felix Dietlein, MD, PhD, Dana-Farber Cancer Institute

Description: Mutations in regions of the genome that are important for the 3D structure of chromosomes are likely critical for cancer growth but are nevertheless poorly understood. Researchers in the Ma lab at Carnegie Mellon University and the Van Allen lab at the Dana-Farber Cancer Institute are developing machine learning algorithms trained to detect mutations that are likely to affect the genome's 3D structure. This technology will pave the way to a new understanding of the influence of these mutations on tumor proliferation, clinical outcomes, and patient responses to existing and emerging therapeutics.

***

Title: New Imaging Methods for Identifying Structural Differences in Cancer Cells

Researchers: Robert F. Murphy, PhD, Carnegie Mellon University (Principal); Min Xu, PhD, Carnegie Mellon University; Yi-Wei Chang, PhD, University of Pennsylvania

Description: Tumor heterogeneity presents a major challenge in predicting outcomes and appropriate therapies for cancer patients. Different tumor cells may have distinct physical characteristics. Electron cryotomography (ECT) has unrivaled power to produce 3D images of cells and tissues at unprecedented spatial resolution. Researchers at Carnegie Mellon University led by Robert F. Murphy and Min Xu and at the University of Pennsylvania led by Yi-Wei Chang are developing machine learning methods that can be used in combination with ECT to distinguish differences between cancer cell types and determine whether those differences can be correlated with disease progression and treatment outcomes.

***

Title: Advanced DNA Sequencing for Uncovering Novel Inheritable Carcinogenic Mutations

Researchers: Michael Schatz, PhD, Johns Hopkins University (Principal); Eliezer Van Allen, MD, Dana-Farber Cancer Institute

Description: Many heritable cancers have no known genetic causes. This is in part because standard DNA sequencing technologies do not normally detect an entire class of mutations called structural variations (SVs). Researchers in the Schatz lab at Johns Hopkins University and the Van Allen lab at the Dana-Farber Cancer Institute have developed a genetic sequencing technology specifically designed to detect SVs and are using it to find harmful SVs in a cohort of families that have high rates of unexplained cancer. They expect this will lead to an immediate improvement in screening and diagnostics for heritable cancers, allowing doctors to intervene earlier by identifying those at heightened risk for disease.


Newsrooms have five years to embrace artificial intelligence or they risk becoming irrelevant – Journalism.co.uk

A new report published this week (18 November 2019) looking at the intersection of AI and journalism has issued a warning to global newsrooms: collaborate with your competitors or face extinction.

The study, 'New powers, new responsibilities. A global survey of journalism and artificial intelligence', is a joint project between Polis, the international journalism think-tank at the London School of Economics and Political Science, and the Google News Initiative, which funded the research.

It surveyed 71 international news organisations on their use of artificial intelligence for editorial purposes across a seven-month period, showing that just 37 per cent of them have a dedicated AI strategy.

Charlie Beckett, director, Polis, London School of Economics and Political Science, said that newsrooms have between two and five years to develop a meaningful strategy, or risk fading out of the digital landscape.

"This is a marathon, not a sprint - but theyve got to start running now," he said.

"Youve got two years to start running and at least working out your route and if youre not active within five years, youre going to lose the window of opportunity. If you miss that, youll be too late."

Even on the lowest possible trajectory, the rate at which natural language processing, translation, text generation and deepfakes are developing means that newsrooms cannot afford to drag their heels, as the knowledge gap will only widen.

"Deepfakes have already accelerated in the last six months from a something in lab to something kids in Macedonia can churn out. Im not trying to panic people - the report stresses the positivity of AI - but there is a real sense of urgency here," he explained.

"Its really clear if you look at other industries that AI is shaping customer behaviour. People expect personalisation, be that in retail or housing, for production, supply or content creation. They use AI because of the efficiencies that it generates and how it enhances the services or products it offers.

"So if we, as journalists, are going to be living in that world, journalism is going to look very dumb if it doesnt have those capabilities. If journalism doesnt get its act together, worse than looking antiquated, it won't be looked at at all."

Despite these alarm bells, integrating AI into editorial processes can have a range of benefits, including taking the burden out of long-winded tasks and sifting through large databases to produce local stories.

While many global newsrooms like Reuters News are quite advanced in this field, integrating significant cultural and operational changes is challenging for cash-strapped local and regional newsrooms.

The report details that many newsrooms struggle because of those financial limitations, but also because of lack of expertise, managerial strategy and time to prioritise AI, as well as scepticism around the technology. Some of these concerns touch on established arguments around algorithmic bias, filter bubbles and the influence of machine learning over editorial decisions.

The report also offers an eight-step pathway to integrating AI in newsrooms, even for those starting from scratch. It runs from assessing AI readiness right through to creating task-specific roles with AI resources.

But dialogue and exchanging best practices from those in similar circumstances are also important.

Networking is not just a nice idea though, it can be a commercial arrangement.

"It could be a developing a good machine learning program, saying 'Can we benefit by selling this onto other newsrooms, so others can also benefit?' Or sharing data so you can train data better to develop better newsroom tools.

"When you train natural language processing you need a big dataset of images. One newsroom may not have that - are there opportunities for newsrooms to get together and share these tools and benefit? Its not altruism, its called benign self-interest."

Not co-operating, he argues, will lead to mutual self-destruction. But he is already seeing early signs of co-operation, recognising that competitors face common issues. Only by tackling the problem together can they resume their rivalry.

"To have healthy competition, you need healthy business," said Beckett.

"Get the tech right, then that will allow you to focus much harder on what you do differently and best, and how your editorial product is different to your competitors."

Ultimately, he said that AI will be neither the saviour nor the demise of journalism. But it will at least allow local journalists to leave their newsdesk more often and do more field work.

"If youve ever worked in a local newsroom, you know there are people who cant leave the office because they are churning out stories.

"They are losing the very idea of being local, which is going out into the streets, to meet people and to interact with your community. This is to augment and to power-up journalism, its not about journalism turning into a bland robotic product, its about getting back to distinctive journalism."



AI in contact centres: It’s time to stop talking about artificial intelligence – Verdict

AI has started to feel like an old conversation. But the reality is that it's only just off the starting blocks in many industries. In the last year, contact centres and organisations focused on customer engagement have moved beyond the AI hype into practical implementation. As such, businesses that want to stop talking and start doing should be following in the footsteps of these success stories.

There are tangible examples of AI applications already in full swing in the contact centre industry, ranging from Natural Language Processing (NLP) to image recognition. Research from industry-leading analyst Gartner suggests that in 2020, 80% of customer service interactions will be handled, at least partly, by AI. This is hardly surprising, as around a quarter of customer interactions are already handled through an automated chatbot, and the customer engagement technology sector is constantly expanding the very definition of what AI is and what it can do.

The driving force behind the AI revolution is customer experience. As this becomes the key business differentiator, organisations that stay ahead of the curve are seeing happy, loyal and engaged customers and higher profits, by turning AI hype into tangible business success. Moving beyond the hype and towards result-driven applications of AI will be critical to the success of any company wanting to survive in this competitive landscape.

AI coupled with real-life human intelligence creates an augmented dual interface that is delivering a competitive advantage to companies wanting to offer a frictionless customer journey. An omni-channel contact centre that deploys AI working hand-in-hand with human agents is becoming critical for any organisation that doesn't want to be in the bottom quartile for customer satisfaction (CSAT).

Companies in the top quartile for CSAT experience an impressive 77% less churn from their employees and are 44% more profitable. Research has shown that the number one reason for customer service agent dissatisfaction is workload. Contact centres are eliminating mundane tasks for their human agents through intelligent automation, and are supporting the agents' ability to seamlessly handle calls with AI. This improves job satisfaction, thereby reducing the rate of attrition.

Evolving the role of human agents means less money is spent on training. In the past, data scientists, who possess one of the most sought-after skill-sets, would be required to process, analyse and derive insights from customer data. Now, AI solutions are programmed to do this automatically, reducing organisations' dependency on scarce and pricey skill-sets.

Not only is the contact centre industry demystifying practical uses for AI, it is also debunking the rumours about AI replacing humans. Contrary to popular belief, and as the hype may suggest, AI will not cost businesses their human face. Organisations can still leverage automation while maintaining the human touch, by providing intelligently augmented interactions.

This type of intelligent assistance means employees will no longer have to work like machines, but they will be more efficient and deliver better results. The fact is that no new technology in human history has ever created long-term, mass unemployment. There will be a period of adjustment and a need for a different skill-set. But overall, these developments will open up new opportunities for establishing long-term career prospects in customer engagement.

NLP is a form of AI that analyses natural dialogue to draw contextual meaning and understand language the way humans do. NLP registers, deciphers, understands, and makes sense of spoken language, and turns it into actionable data. This technology is a great example of both a tangible and current use of AI to achieve business success, as well as a strong argument against the idea that AI is replacing humans in the workplace.

Having information on the nature of an incoming customer call readily available means that human agents do not have to sift through huge volumes of data to answer the query. This enables them to provide a much faster and more personalised experience to the customer. NLP can also be used to help fully-automated Machine Agents to parse meaning from spoken language, enabling them to provide more accurate responses.

When a customer reaches a contact centre agent, NLP can work in the background and prompt the agent with automated information on-screen to assist them in resolving the query. Increased automation means companies will need to spend less money on training costs. NLP can also automatically provide a wrap-up summary of the conversation on completion of the call, so the agent does not have to spend time at the end of the interaction completing this task. This reduces the administrative burden and frees up agents to answer more calls.
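As a concrete illustration of the routing side of this, here is a toy Python sketch of intent classification, mapping a caller's utterance to a likely query type so context can be surfaced on the agent's screen. All utterances and intent labels are invented; production systems train on large labelled corpora.

```python
# Toy intent router: TF-IDF features plus a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to check my account balance",
    "what is my balance right now",
    "my card was stolen please block it",
    "I lost my card yesterday",
    "I want to dispute a charge on my statement",
    "there is a payment I don't recognise",
]
intents = ["balance", "balance", "lost_card", "lost_card", "dispute", "dispute"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(utterances, intents)

# Route a new call; the predicted intent drives what the agent sees on-screen.
print(router.predict(["I don't recognise a charge on my statement"])[0])
```

Six examples are only a demo; the point is the shape of the pipeline: vectorise the utterance, predict an intent, and use that intent to pull up the relevant account context before the agent answers.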

Many early adopters of this type of AI are organisations in the public sector. A combination of a tight budget and hefty workloads makes public sector organisations prepared to invest first, but many other industries are now seeing the value in NLP and rolling out the technology en masse.


In the past, one of the biggest roadblocks to AI deployment was the limited computing resources available. But today, with hyperscale cloud platforms and vast computational power, fully scalable AI solutions have become much more attainable. We have moved from scarcity, caused by high-cost technology, to an abundance of cheap processing power.

The combination of extra computing and new AI-driven processes has really come together in the last few years. This means that organisations can easily and cost-effectively draw on extra computing horsepower to scale their contact centre capacity accordingly.

This new approach to the contact centre presents businesses across every industry with a scalable opportunity to future-proof their communications estate and keep up with customer expectations of a flawless, omni-channel customer experience. Using the latest cutting-edge tools to complement a company's existing offering should become second nature to any forward-thinking business. These are exciting times for AI, and, as we move into 2020, it is time to stop talking about AI and start doing.



Quantum Computing: Challenges, Trends and the Road Ahead – CMSWire


Recently, Google announced another major milestone in the development of Quantum Computing. According to Google, and despite skepticism from some of its competitors, the Mountain View, Calif.-based company announced that it had achieved quantum supremacy. While the announcement is still undergoing peer review by others in the industry, what no one is challenging is the progress being made in the development of technologies that will have implications for both public and private companies, and even for governments.

Roger A. Grimes, a data-driven defense evangelist at UK-based KnowBe4, explained that quantum supremacy is the moment in time when a quantum computer finally does something that a traditional, binary, classical computer cannot. It can be achieved in terms of raw computational speed or by performing an otherwise ordinary math problem that a classical computer simply isn't capable of (i.e., it doesn't have to be speed related). Google's report seems to indicate it was a bit of both. According to the report, Google was able to accomplish in three minutes, using a quantum computer, what the world's fastest computer would take 10,000 years to do.

Grimes explained that ever since Richard Feynman talked about using the fantastic properties of quantum mechanics to make a new paradigm of computing in 1959, the world has waited for the day when quantum supremacy would happen. The first quantum computer was made in 1998, and it has taken mankind another 21 years to get them to the point where they will begin to take over tasks, and new tasks, that regular computers can't perform. After this moment, no serious large company, government, or country will want to stay on or focus on the older types of computers. "There was a computer world before and there is now a new computer world, shiny and new, after," he said.

Yes, traditional computers will stay in our lives for decades to come, but a technological wall has been breached, and it means wondrous new things, both good and bad.

The result is that within the next decade, any company or organization without a quantum computer will be old.


So why is this important? Quantum computers will allow us to better understand how the universe works. Quantum mechanics, particles, and properties are how everything in the universe works. Up until this point, it was impossible to model how everything in the universe (or multiple universes, for that matter) truly works. It's all been theory and speculation.

"Now, with enough serious quantum computers, for the first time, we can literally model and figure out how everything that is works. We will get better weather prediction, better chemicals, better medicines with less side effects, better traffic management, better artificial intelligence, and better be able to predict and detect where scarce resources, like gas and oil, are. Everything can be better predicted and focused," he added.

"Keep in mind at this point in time, though, that quantum supremacy is a technical term used by the academic community to mean when a quantum computer can do just one thing faster than a classical computer," said Professor Yehuda Lindell, CEO and co-founder of Unbound Tech.

"However, this is not what we think about when we hear supremacy, nor is it really relevant to cryptography and other application domains. In particular, what businesses and other enterprises are really interested in knowing is when quantum computers will be able to solve hard important problems faster than classical computers, and when quantum computers will be able to break cryptography. I personally believe that this is many years away. I will say at least a decade, but I think it will be more like two decades at least. I also want to stress that this is still an 'if' and not a 'when'," he said.

Keep in mind, though, the fact that small quantum computers have been built does not mean that quantum computers at the scale and accuracy needed to break cryptography will ever be built. The problems that need to be overcome are considerable.


Alex Costas, software engineer at Tampa, Fla.-based Schellman & Company, an independent security and privacy compliance assessor (www.schellmanco.com), points to some of the problems that quantum computers will be able to address in the future.

Quantum computing, he said, promises to solve problems and drive simulations that have been computationally or physically intractable with conventional hardware, such as simulating the interactions of a novel pharmaceutical in vivo, creating secret messages that destroy themselves when read, or covertly monitoring remote systems without the need for an internet connection. "This may all sound well and good, except for the fact that internet security is for the most part predicated on the following assumption: it is hard to factor large numbers into their prime components," he said.
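That factoring assumption is easy to see in miniature. Here is a deliberately insecure toy RSA example in Python (tiny primes chosen for illustration only); the last few lines show that anyone who can factor the public modulus, which is what a sufficiently large quantum computer running Shor's algorithm is expected to do, can rebuild the private key:

```python
# Toy RSA with tiny primes; real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent; requires knowing p and q

msg = 42
cipher = pow(msg, e, n)            # encrypt with the public key
assert pow(cipher, d, n) == msg    # decrypt with the private key

# The "attack": factor n by trial division (trivial here, infeasible for
# real key sizes on classical hardware), then rebuild the private exponent.
factor = next(k for k in range(2, n) if n % k == 0)
d_recovered = pow(e, -1, (factor - 1) * (n // factor - 1))
assert pow(cipher, d_recovered, n) == msg
```

Real RSA security rests entirely on that factoring step being infeasible, which is why progress toward large, error-corrected quantum machines draws so much attention from cryptographers.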

The situation is not as dire as it may seem, however. There are quite a few asymmetric schemes (e.g., NTRU, McEliece) that don't rely on prime factorization for their difficulty and have no efficient quantum (or classical) solution yet. Aside from that, secret key cryptography (e.g., AES) is not affected by these kinds of attacks, and quantum effects can also be leveraged to protect these secret keys as well.

All that said, quantum computing technology has the potential to be a major driver of future breakthrough advances in areas such as artificial intelligence and healthcare, according to Anis Uzzaman, CEO of San Jose, Calif.-based Pegasus Tech Ventures. Many of the opportunities for investments in hardware are now in the later stage, but the broader investment community should look out for the enabling technologies and software that will start to emerge for the hardware platforms.

The United States and China are the two heavyweights competing for leadership in quantum computing. While the United States has a first-mover advantage and maintains a lead, China is making heavy investments in pursuit of a variety of breakthroughs. Quantum computing applications may not reach mainstream consumer applications for a little while longer, but there will definitely be a variety of companies across a range of industries that look to integrate this technology over the next 3-5 years, he said.

Current quantum computers are far from where we need them to be for practical applications due to their high level of "noise" (errors), Leonard Wossnig, CEO of UK-based Rahko, wrote in a blog on Quantneo, an online magazine of a web community focusing on business applications for Quantum Information Science (QIS).

"If we cannot find a way to use these current and near-term quantum computers," he wrote, "we will need to wait for fully-error-corrected 'universal' machines to be developed to see real significant benefit (15-20 years by many estimates). This is where the software becomes much more than a necessary complement to the hardware. Quantum software has the potential to significantly accelerate our pathway to practically useful quantum computers."
