How the Human-Machine Intelligence Partnership Is Evolving

AI has enjoyed a long hype cycle, recently reignited by the introduction and rapid adoption of OpenAI's ChatGPT. Companies are now at varying stages of AI adoption given their business goals, resources, access to expertise, and the fact that AI is being embedded in more applications and services. Irrespective of industry, AI depends on a critical mass of quality data. However, the necessary quality depends on the use case. For example, as consumers, we've all been victims of bad data, as evidenced by marketing promotions we either laugh at or delete.

In scientific fields such as the pharmaceutical and life sciences industries, bad data can be life-threatening, so data quality must be very high.

Also, unlike many other industries, the data required to discover a novel drug molecule tends to be scarce rather than abundant.

The data scarcity prevalent in the pharmaceutical and life sciences industries promotes a stronger alliance between humans and machines. Over time, there has been a significant accumulation of scientific data, the understanding of which demands a high level of education. This data accrual has been quite costly, leading to a general reluctance among owners to share the information they have acquired.

The intricate nature of scientific data implies that only scientists within the same field can comprehend the deeper contexts. Therefore, the volume of data available in an appropriate context is typically limited. This scarcity makes it challenging to develop credible AI algorithms in the healthcare industry.

To counteract this data deficit, human experts play an essential role in providing context and supplementary information. This human intervention helps in the co-development of algorithms and of the workflows in which those algorithms are applied correctly.

AI hype cycles have caused fear, uncertainty, and doubt because vendors are underscoring the need for automation in white-collar settings. In the distant past, AI was firmly focused on production-line jobs impacting blue-collar workers. Back then, no one anticipated AI would impact knowledge work, especially because AI capabilities were limited by the technology of the day and, in most cases, rule-based.

Now, we see pervasive use of AI techniques such as machine learning and deep learning that can analyze massive amounts of data at scale. Instead of following a deterministic set of rules, modern systems are probabilistic, which makes prediction possible rather than just historical data analysis.
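To make the distinction concrete, here is a minimal sketch in Python contrasting a hand-written rule with a probabilistic model. The toxicity data, feature names, and threshold are all invented for illustration; a real pipeline would use far richer features.

```python
from sklearn.linear_model import LogisticRegression

# Invented training data: [molecular_weight, logP] per compound,
# labeled 1 if the compound proved problematic in past assays, else 0.
X = [[320, 2.1], [510, 4.8], [280, 1.5], [470, 3.9], [350, 2.7], [530, 5.2]]
y = [0, 1, 0, 1, 0, 1]

# Deterministic, rule-based approach: a fixed threshold either fires or it doesn't.
def rule_based_flag(mol_weight, log_p):
    return mol_weight > 450 and log_p > 3.5  # yes/no, no shades of gray

# Probabilistic approach: the model outputs a likelihood, which enables
# prediction and ranking rather than a lookup of hand-coded rules.
model = LogisticRegression().fit(X, y)
candidate = [[430, 3.6]]
print(rule_based_flag(430, 3.6))             # False -- the fixed rule misses this one
print(model.predict_proba(candidate)[0][1])  # a graded risk estimate in (0, 1)
```

The probability output is what lets a system rank and prioritize candidates instead of merely flagging them.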

In the case of pharmaceuticals and life sciences, it's possible to import tags from research papers, which is helpful, but the context tends not to be stated explicitly, so scientists need to help AI understand the underlying hypothesis or scientific context. The system then learns as scientists reward good outcomes and reject bad ones. Without a human overseeing what AI does, it can drift in a manner that makes it less accurate.
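That reward-and-reject loop can be sketched roughly as follows. Every name here (the proposal step, the review step, the retraining step) is a hypothetical stand-in for whatever a real platform would use, not a description of any specific system.

```python
import random

# Hypothetical stand-ins: a model that proposes candidate molecules and a
# scientist who accepts or rejects each proposal.
training_set = [("mol-001", 1), ("mol-002", 0)]  # (molecule id, good/bad label)

def model_propose(n):
    """Pretend model: propose n candidate molecule ids."""
    return [f"mol-{random.randint(100, 999)}" for _ in range(n)]

def scientist_review(candidate):
    """Placeholder for expert judgment: accept (1) or reject (0)."""
    return random.choice([0, 1])

def retrain(data):
    """Placeholder: refit the model on the expert-labeled examples."""
    print(f"retraining on {len(data)} labeled examples")

# One human-in-the-loop iteration: the model proposes, the scientist labels,
# and the labeled outcomes feed the next round of training. Skipping the
# review step is what allows an unsupervised loop to drift.
for candidate in model_propose(5):
    training_set.append((candidate, scientist_review(candidate)))
retrain(training_set)
```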

Experimental validation is also slow: it can take several weeks or months to create molecules and test them under experimental conditions. If animals are involved, the process could take years.

Some scientists or professionals don't want to share their work with AI, particularly when they are highly educated and experienced. These people know how long research takes and how expensive it can be, and they've grown comfortable with that over time.

Also, the pharmaceutical and life sciences industries are highly regulated, so irrespective of whether AI is used, there are certain processes and levels of rigor required simply to ensure patient safety.

Interestingly, when well-trained scientists see AI in action, it becomes abundantly clear that it can handle a million or more data points more easily and faster than any human could. It quickly becomes apparent that AI is a valuable tool that can save time and money and enable greater precision.

However, that doesn't mean that scientists trust what they see, especially when it comes to deep learning, which includes large language models such as ChatGPT. The problem with deep learning is that it tends to be opaque: it can take one or more inputs and produce a result, but it cannot explain, in terms understandable to humans, how it arrived at that result.

That's why there's been a loud cry for AI transparency; people want to trust the result. Scientists and auditors demand it.
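The article doesn't name a technique, but post-hoc explanation methods are one common response to that demand. As an illustrative sketch only, the code below uses scikit-learn's permutation importance on invented assay data to ask which inputs actually drive an otherwise opaque model's predictions; the feature names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented assay data: 200 compounds, 4 measured features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by features 0 and 2

# An opaque model: often accurate, but its internals aren't human-readable.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops -- a rough, model-agnostic answer to "what mattered?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["mol_weight", "log_p", "assay_signal", "purity"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Explanations like this don't make the model itself transparent, but they give scientists and auditors something concrete to interrogate.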

One of the biggest genetic databases is 23andMe, the gene-testing service that reveals a person's ancestry. It has also enabled individuals to discover family members they never met, such as a set of twins, one of whom was adopted. The service offers significant entertainment value.

However, from a scientific standpoint, it doesn't offer much.

Without knowing someone's medical history, understanding their genetic composition is only somewhat helpful. For example, a brother and sister may carry the same gene that is expressed in one and dormant in the other.

The more we know about an individual, the better chance there is of choosing the compounds that will work for them and at what dosage level. Today, there's still a lot of trial and error, and doses are standardized. In short, AI will help make personalized medicine possible.

The pharmaceutical and life sciences industries are both highly competitive. About two years ago, I visited Cambridge University and noticed that big pharma companies had sent a researcher or two to learn about Cambridge's experimental automation technology that utilizes AI. Big pharma companies often work with research institutions to learn about scientific discovery and to get the high-quality data they need.

Another example is Recursion Pharmaceuticals, which is automating biology-related processes. It photographs cells after treating them with candidate molecules and then uses AI algorithms to interpret the images. The company produces tens of terabytes of image data every day, and the experimental conditions are decided by prediction models. As new data comes in, the system generates new models, and the cycle repeats automatically and continuously.
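Recursion's actual pipeline is proprietary, but the cycle described above, in which a model proposes conditions, the lab produces data, and the model retrains before the loop repeats, is a form of closed-loop active learning. The sketch below is a generic skeleton with invented function names and toy data, not Recursion's method.

```python
import random

# Every function below is a hypothetical stand-in for a real lab-automation step.
def propose_conditions(model, n):
    """Prediction model decides which experimental conditions to run next."""
    return [{"compound": f"cmpd-{random.randint(0, 999)}", "dose": d}
            for d in range(n)]

def run_experiments(conditions):
    """Automated lab 'runs' the experiments and returns per-condition readouts
    (in reality: cell images, tens of terabytes per day)."""
    return [{"condition": c, "readout": random.random()} for c in conditions]

def retrain(model, dataset):
    """Placeholder retraining step; here, just the mean readout seen so far."""
    return sum(d["readout"] for d in dataset) / len(dataset)

# The closed loop: the model decides the conditions, new data arrives,
# a new model is fit, and the cycle repeats automatically and continuously.
model, dataset = None, []
for cycle in range(3):
    conditions = propose_conditions(model, n=4)
    dataset.extend(run_experiments(conditions))
    model = retrain(model, dataset)
    print(f"cycle {cycle}: model = {model:.3f} on {len(dataset)} datapoints")
```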

AI is transforming the ways organizations and industries operate. However, scientific disciplines require a rigorous approach that yields accurate results and provides the transparency scientists and auditors need. Since governance, privacy, and security are not inherently baked into AI, organizations with strict requirements need to be sure that the technology they utilize is both safe and accurate.
