Expect sales reps’ calls if IT wants to ditch Oracle – The Register

Oracle executives brief clients against plans to move away from Big Red's technology platforms, it is alleged.

A recent webinar by Palisade Compliance heard that it took "guts" for enterprise customers to make the decision to move away from Oracle technology as its senior salespeople would call customer CEOs and board members and brief against IT management proposing such a move.

Craig Guarente, an advisor in Oracle licensing and compliance issues, said: "You need the courage to do something different, because you're going to have 20 Oracle reps telling you why it's a mistake, and they're going to call your CEO. They're going to call your board and do whatever they need to make you [change course]."

Guarente, CEO of Palisade Compliance and former Oracle veep, said customers would often complain that no matter how they try to reduce their reliance on Big Red's technology, "the Oracle calculator only has a plus button."

"Sometimes companies get in distress and they're shrinking and Oracle says, 'Yeah, but you still have to pay me, I know you only have half the users and half the capacity but you still have to pay me and we're going to raise prices because of inflation.' That really frustrates companies," he said.

The webinar was also hosted by Ed Boyajian, CEO of EDB, a software company backing the open-source database PostgreSQL. He said large customers had moved away from Oracle to PostgreSQL, but that it often required top-level support.

"Our biggest customers, very large-scale enterprise-wide Postgres users, report needing a strategic drive to change. That intersected the C-suite: there is a common theme that it takes a strong commitment at that level, because people are always afraid of the risk of the unknown."

We have asked Oracle to comment.

Big Red has argued that its approach to the cloud has offered a way of integrating with the on-prem world. In 2020, it launched an on-premises cloud product, Oracle Dedicated Region Cloud, completely managed by Oracle, using the same architecture, cloud services, APIs, and SLAs as its equivalent regional public and private clouds.

"Customers can think of it as their own private cloud running inside their data centre, or they will also see it as a hybrid cloud, given that this is the exact same thing we offer in a public cloud," said Regis Louis, exec veep of product management for Oracle Cloud Platform in EMEA.

Meanwhile, Oracle also claims to innovate with tight integration between hardware and software supporting the performance of its Exadata products. Big Red claims its beefed-up Exadata X9M, launched last year, provides online transaction processing (OLTP) with more than 70 per cent higher input/output operations per second (IOPS) than its earlier release.

But some customers have trodden the path away from the dominant application and database vendor. EDB claims to offer tools that smooth the migration to PostgreSQL, plus the option of moving applications without rewriting them.

Speaking to The Register in 2020, Ganadeva Bandyopadhay, associate vice president of IT at TransUnion CIBIL, described the migration from Oracle to Postgres EDB.

The company was looking to revamp older applications based on "rapidly outgoing concepts like heavy database servers with a lot of business logic within the database code," Bandyopadhay said.

The credit information company operating in India found its Oracle licences were being underused, but the rigidity in the rules made it difficult to move them onto different virtual instances and convert from the processor-based to the "Named User Plus" licensing.

Starting from 2015, Bandyopadhay and his team wanted to remove the business logic from the main database, improving performance and flexibility in the architecture, something he said would have been difficult to do with Oracle.

"It was nothing against Oracle, but our logic was to address Oracle features which are built within the database," he said. "There is a cost to that which we had accepted for a long time, but with the changing expectations [from the business], we had to really revamp and flatten out the databases and put the business logic somewhere else in the middle tier."
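
The pattern Bandyopadhay describes, pulling business rules out of database code into the middle tier, can be sketched in application code. The names, thresholds, and scoring rule below are purely hypothetical, invented for illustration; nothing here reflects TransUnion CIBIL's actual system:

```python
# Hypothetical sketch of logic moved out of a stored procedure into the
# application's middle tier. Names and rules are invented for illustration;
# the database now just stores and returns plain rows.

def credit_band(score: int) -> str:
    """Classify a credit score (illustrative thresholds only)."""
    if score >= 750:
        return "excellent"
    if score >= 650:
        return "good"
    if score >= 550:
        return "fair"
    return "poor"

def annotate(rows):
    """Apply the banding rule to rows fetched from the database."""
    return [dict(row, band=credit_band(row["score"])) for row in rows]

print(annotate([{"id": 1, "score": 780}, {"id": 2, "score": 600}]))
```

Because the rule lives in application code rather than in database code, it can be changed, tested, and scaled independently of the database, which is the flexibility the team was after.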

After completing the migration in 2017, Bandyopadhay's team found the Postgres EDB-based system achieved higher throughput at lower licensing costs than Oracle, but not before reskilling its internal IT team.

Threat Actors Organize: Welcome to Ransomware Inc. – Virtualization Review

News

"Many people still think of a ransomware actor as the proverbial 400-pound hacker in his mom's basement -- nothing could be further from the truth," says in-the-trenches security expert Allan Liska. "There are a number of cottage industries that have sprung up in support of ransomware."

In fact, the intelligence analyst at Recorded Future outlined a businesslike environment and workflow he has discerned from his more than 20 years in the IT security field, most recently focused on ransomware:

"In fact, the leader of a ransomware group is often nothing more than a 'marketing' person whose sole purpose is to get more affiliates for the group," said Liska, who is known as the "Ransomware Sommelier."

He shared his thoughts with Virtualization & Cloud Review following his presentation in a recent multi-part online event titled "Modern Ransomware Defense & Remediation Summit," now available for on-demand viewing.

It's no surprise Liska started off discussing initial access brokers early on, as he has become somewhat of a specialist in that area. For example, last year he took to Twitter to lead a crowdsourcing effort to create a one-stop-shop for a list of initial access vulnerabilities used by ransomware attackers, as we explained in the article "'Ransomware Sommelier' Crowdsources Initial Access Vulnerability List."

Of course, organized ransomware has been a known thing for a while now, with even nation-state actors getting in on the action, but Liska and other security experts indicate the bad guys are getting more sophisticated.

"Outsourcing the initial access to an external entity lets attackers focus on the execution phase of an attack without having to worry about how to find entry points into the victim's network," said an article last summer in Infosecurity Magazine titled "The Rise of Initial Access Brokers," which noted the flourishing market often sees compromised VPN or RDP accounts as network inroads, along with other exposed remote services like SSH.

Digital Shadows also charted "The Rise of Initial Access Brokers" a year ago, complete with a chart showing popular access types and their average prices (note that prices have likely gone up with the recent inflation spike):

Liska detailed the initial access scene in his opening presentation, titled "The Current Ransomware Threat Landscape & What IT Pros Must Know."

"So one of the things that you have to understand with ransomware is it's generally not the ransomware actor that's gaining that initial access," he explained. "There are other criminals, called initial access brokers, and they're the ones who generally gain that access. And then they turn around and they sell it to the ransomware actors themselves, whether it's to the operator of the ransomware-as-a-service offering, or one of their affiliates, the people who sign up to deploy the ransomware."

"So when you're talking about an attack like this, you're generally talking about two different types of actors: one to get the initial access and one that turns around and sells it. Think of it like flipping houses, except you're flipping networks. You're turning that network over to a ransomware actor who's then going to deploy the ransomware. And they generally sell that initial access from anywhere from a couple thousand to 10, 15, even 100,000, depending on the type of access they're able to get -- so if you have administrator access -- and the size of the network. But you know, the thing is, if you're a ransomware actor, it's still a good investment. Because if you're confident you can deploy the ransomware you're gonna make way more than what you're paying for that initial access."

Liska explained he and other security experts are seeing four primary initial access vectors: credential stuffing/reuse; phishing; third-party; and exploitation, summarized in this graphic:

Phishing was the most popular vector throughout 2019 and 2020, Liska said, but RDP (Remote Desktop Protocol) -- "low hanging fruit" -- is gaining traction. Here are Liska's thoughts on RDP, third-party attacks and exploitation:

RDP: "Ransomware Deployment Protocol"?

"What we're starting to see in 2021 -- and we expect this to continue into 2022 -- is that credential stuffing and credential reuse attacks are becoming much more common," Liska said. "In fact, we kind of have a joke in the industry that RDP actually stands for ransomware deployment protocol, instead of what it actually means, only because RDP is one of the most common entry methods. Because it's so easy for these initial access brokers to just fire up an old laptop and start scanning, looking for open RDP connections, and then trying credential stuffing/credential reuse attacks. You have to keep in mind, there are literally billions of credentials that are being sold on underground markets.

"So while it seems like a credential reuse attack would be a challenge, it really isn't. You connect to the RDP server, you see what network it belongs to, you search on Genesis Market or one of the other markets for usernames and passwords that match it. And then you try those -- you get 100 of them -- you try them and unfortunately, most of the time, they will find a match, and they'll be able to gain access. That's why Multi-Factor Authentication is so important for any system that's exposed to the internet."

Third-Party Attacks

"These are increasingly common," Liska said. "We really saw this take off in 2021. So a ransomware actor, or the initial access broker, gains access to a managed service provider, or a vendor of some kind. And rather than [deploy] the ransomware on that vendor, what they do is they use that access to jump to those partners. They find it's really easy, because you get to start right in the gooey center, and work your way out. So we're seeing a big increase in that. And again, that goes with the increasing sophistication of ransomware access."

Exploitation

"And then exploitation is also growing in popularity," Liska continued. "So, you know, in the last year, we catalogued more than 40 different exploits that were used by ransomware groups or the initial access brokers in order to gain that first access. So it's really, really important that you're patching. Again, anything that's public facing, especially anything that has proof-of-concept code released, has to be patched immediately."

RaaS: Ransomware-as-a-Service

One striking fact that speaks to the businesslike organization of ransomware is the number of RaaS operations that have sprung up around the globe, as Liska's chart below shows:

Cybersecurity specialist Rubrik, in a ransomware compendium, says of RaaS: "Criminals don't have to create their viruses anymore. Developers will create ransomware for a fee or share of the profits, creating a whole new industry that caters to ransomware." Also, the company noted a growing ecosystem of dedicated infrastructure has formed to support ransomware, including "bulletproof" hosts who will refuse to take criminal users offline, along with dedicated networks to help criminals avoid anti-virus software and move and hide virtual currency payments.

Duke University and IonQ Develop New Quantum Computing Gate – HPCwire

DURHAM, N.C. & COLLEGE PARK, Md., Feb. 10, 2022 -- Today, the Duke Quantum Center (DQC) at Duke University and IonQ announced the invention of a new quantum computing operation with the potential to accelerate several key quantum computing techniques and contribute to scaling quantum algorithms. The new quantum gate is a novel way to operate on many connected qubits at once and leverages the multi-qubit communication bus available only on IonQ and DQC quantum computers. Full details of the gate technique can be found on the preprint archive arXiv at arXiv:2202.04230.

The new gate family includes the N-qubit Toffoli gate, which flips a select qubit if and only if all the other qubits are in a particular state. Unlike standard two-qubit quantum computing gates, the N-qubit Toffoli gate acts on many qubits at once, leading to more efficient operations. The gate appears naturally in many common quantum algorithms.
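
To make the gate's action concrete, here is a small sketch using standard textbook linear algebra, not IonQ's implementation: on computational basis states the N-qubit Toffoli is a permutation that touches only the two states where every control qubit is 1, flipping the target between them.

```python
import numpy as np

# N-qubit Toffoli sketch: flip the target (last) qubit iff all n-1
# control qubits are 1. As a matrix, this swaps the last two
# computational-basis indices and fixes everything else.
def n_toffoli(n: int) -> np.ndarray:
    dim = 2 ** n
    U = np.eye(dim)
    U[dim - 2:, dim - 2:] = [[0, 1], [1, 0]]  # swap |1...10> and |1...11>
    return U

U = n_toffoli(3)                  # the familiar 3-qubit Toffoli (CCNOT)
state = np.zeros(8)
state[0b110] = 1.0                # controls = 1,1; target = 0
print(int(np.argmax(U @ state)))  # -> 7, i.e. |111>: target flipped
```

A state with any control at 0 (say |010>) passes through unchanged, which is exactly the "if and only if" behaviour described above.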

IonQ and Duke's discovery may lead to significant efficiency gains in solving fundamental quantum algorithms, such as Grover's search algorithm, variational quantum eigensolvers (VQEs), and arithmetic operations like addition and multiplication. These use cases are ubiquitous across quantum computing applications, and are core to IonQ's work in quantum chemistry, quantum finance, and quantum machine learning. They are also key components of commonly accepted industry benchmarks for quantum computers, which have already shown IonQ's computers to be industry leaders.

"This discovery is an example of us continuing to build on the leading technical architecture we've established. It adds to the unique and powerful capabilities we are developing for quantum computing applications," said Peter Chapman, CEO at IonQ.

This research, conducted at Duke by Dr. Or Katz, Prof. Marko Cetina, and IonQ co-founder and Chief Scientist Prof. Christopher Monroe, will be integrated into IonQ's quantum computing operating system for the general public to use. Monroe notes that no other available quantum computing architectures, not even other ion-based quantum computers, are able to utilize this new family of N-qubit gates. This is because IonQ's quantum computers uniquely feature full connectivity and a wide communication bus that allows all qubits to talk to each other simultaneously.

This discovery follows a series of announcements around IonQ's research efforts and preparations for scale. In December, IonQ announced that it plans to use barium ions as qubits in its systems, bringing about a wave of advantages it believes will enable advanced quantum computing architectures. Last year, the team also debuted the industry's first Reconfigurable Multicore Quantum Architecture and Evaporated Glass Trap technology, both of which are expected to contribute to scaling the number of qubits in IonQ's quantum computers.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's next-generation quantum computer is the world's most powerful trapped-ion quantum computer, and IonQ has defined what it believes is the best path forward to scale.

IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit www.ionq.com.

Source: IonQ

Quantum computing venture backed by Jeff Bezos will leap into public trading with $1.2B valuation – GeekWire

A team member at D-Wave Systems, based in Burnaby, B.C., works on the dilution refrigerator system that cools the processors in the company's quantum computer. (D-Wave Systems Photo / Larry Goldstein)

Burnaby, B.C.-based D-Wave Systems, the quantum computing company that counts Jeff Bezos among its investors and NASA among its customers, has struck a deal to go public with a $1.2 billion valuation.

The deal involves a combination with DPCM Capital, a publicly traded special-purpose acquisition company, or SPAC. It's expected to bring in $300 million in gross proceeds from DPCM's trust account, plus $40 million in gross proceeds from investors participating in a PIPE arrangement. (PIPE stands for private investment in public equity.)

Quantum computing takes advantage of phenomena at the quantum level, processing qubits that can represent multiple values simultaneously as opposed to the one-or-zero paradigm of classical computing. The approach is theoretically capable of solving some types of problems much faster than classical computers.

Founded in 1999, D-Wave has focused on a type of technology called quantum annealing, which uses quantum computing principles and hardware to tackle tasks relating to network optimization and probabilistic sampling.
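
The network-optimization tasks mentioned here are typically posed to an annealer as QUBO (quadratic unconstrained binary optimization) problems. As an illustration of the problem class only, not of D-Wave's hardware or API, here is a toy QUBO solved by brute force:

```python
from itertools import product

# Toy QUBO: minimize sum over (i, j) of Q[i, j] * x[i] * x[j] for binary x.
# An annealer physically relaxes toward low-energy states of such a
# problem; this exhaustive search just shows what is being minimized.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # reward selecting each node
    (0, 1): 2.0, (1, 2): 2.0,                  # penalize adjacent picks
}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))   # -> (1, 0, 1) -2.0
```

This particular instance is a tiny maximum-independent-set problem on a three-node path, the kind of network problem quantum annealing targets.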

Physicists have debated whether D-Wave's Advantage system should be considered an honest-to-goodness quantum computer, but the company says that question has been settled by research that, among other things, turned up signatures of quantum entanglement. D-Wave is included among the quantum resources offered by Amazon and Microsoft, and it also has its own cloud-based platform, known as Leap.

The SPAC deal has already been cleared by the boards of directors of D-Wave and DPCM Capital. If the transaction proceeds as expected, with approval by DPCM's stockholders, it should close by midyear. The result would be a combined company called D-Wave Quantum Inc. that would remain headquartered in Burnaby, a suburb of Vancouver, B.C., and trade on the New York Stock Exchange under the QBTS stock symbol.

"Today marks an inflection point signaling that quantum computing has moved beyond just theory and government-funded research to deliver commercial quantum solutions for business," D-Wave CEO Alan Baratz said in a news release.

Among the investors involved in the PIPE transaction are PSP Investments, NEC Corp., Goldman Sachs, Yorkville Advisors and Aegis Group Partners. Other longtime D-Wave investors include Bezos Expeditions as well as In-Q-Tel, a venture capital fund backed by the CIA and other intelligence agencies.

In what was described as an innovative move, the SPAC deal sets aside a bonus pool of 5 million shares for DPCMs non-redeeming public stockholders.

D-Wave says it will use the fresh funding to accelerate its delivery of in-production quantum applications for its customers, and to build on a foundation of more than 200 U.S. patents. The company is aiming to widen its offerings beyond quantum annealing by developing more versatile gate-model quantum computers.

Emil Michael, DPCM Capital's CEO, said the total addressable market for quantum computing services could amount to more than $1 billion in the near term, and rise to $150 billion as applications mature.

"While quantum computing is complex, its value and benefits are quite simple: finding solutions to problems that couldn't be previously solved, or solving problems faster with more optimal results," Michael said. "D-Wave is at the forefront of developing this market, already delivering the significant benefits of quantum computing to major companies across the globe."

Global $1.6 Billion Quantum Computing Technologies and Markets to 2026 – PRNewswire

DUBLIN, Feb. 10, 2022 /PRNewswire/ -- The "Quantum Computing: Technologies and Global Markets to 2026" report has been added to ResearchAndMarkets.com's offering.

The global quantum computing technologies market should reach $1.6 billion by 2026 from $390.7 million in 2021 at a compound annual growth rate (CAGR) of 33.2% for the forecast period of 2021 to 2026.
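
These figures can be cross-checked with the standard compound-annual-growth-rate formula. The small gap versus the reported 33.2% reflects rounding in the published endpoints:

```python
# CAGR = (ending value / starting value) ** (1 / years) - 1
start = 390.7    # $M, 2021
end = 1600.0     # $M, 2026
years = 5

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # -> 32.6%, close to the reported 33.2%
```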

Report Scope

This report provides an overview of the global market for quantum computing and analyzes market trends. Using 2020 as the base year, the report provides estimated market data for the forecast period 2021 through 2026. Revenue forecasts for this period are segmented based on offering, deployment, technology, application, end-user industry and region.

Quantum computing is the gateway to the future. It can revolutionize computation by making certain types of classically stubborn problems solvable. Currently, no quantum computer is mature enough to perform calculations that traditional computers cannot, but great progress has been made in the last few years. Several large companies and small start-ups are using non-error-corrected quantum computers made up of dozens of qubits, some of which are even publicly accessible via the cloud. Quantum computing helps scientists accelerate their discoveries in related areas, such as machine learning and artificial intelligence.

Early adoption of quantum computers in the banking and financial industries, increased investment in quantum computing technology, and the rise of numerous strategic partnerships and collaborations are the main drivers behind the market growth.

The trend towards strategic approaches such as partnerships and collaborations is expected to continue. As quantum computer vendors move to quantum development, the consumer industries will seek to adopt current and new quantum technologies to gain a competitive advantage. The technological hurdles in the implementation of the quantum systems, as well as the lack of quantum skills, can limit the market growth. However, increasing adoption of quantum technology in healthcare, increasing demand for computing power, and the introduction of cloud-based quantum computing services are expected to open up new market opportunities during the forecast period.

Between 2021 and 2026, many companies with optimization problems may adopt a hybrid approach where some of the problems are handled by classical computing and the rest by quantum computers. The demand for quantum computers is expected to grow from multiple end-user industries, from finance to pharmaceuticals, automobiles to aerospace. Many industries, such as banks, are now using cloud-based quantum services.

There is no doubt that quantum computers will be expensive machines to develop and will be operated by a small number of key players. Companies like Google and IBM plan to double the performance of quantum computers each year. In addition, a small but important cohort of promising start-ups is steadily increasing the number of qubits a computer can process. This creates an immense opportunity for global quantum computing market growth in the coming years.

This report has divided the global quantum computing market based on offering, technology, deployment, application, end-user industry, and region. Based on offering, the market is segmented into systems and services. The services segment held the largest market share, and it is expected to register the highest CAGR during the forecast period. The services segment includes quantum computing as a service (QCaaS) and consulting services.

The report also focuses on the major trends and challenges that affect the market and the competitive landscape. It explains the current market trends and provides detailed profiles of the major players and the strategies they adopt to enhance their market presence. The report estimates the size of the global quantum computing market in 2020 and provides projections of the expected market size through 2026.

Competitive Landscape

Company profiles of the key industry players include

Patent Analysis

For more information about this report visit https://www.researchandmarkets.com/r/o1td8j

Media Contact:

Research and Markets Laura Wood, Senior Manager [emailprotected]

For E.S.T Office Hours Call +1-917-300-0470. For U.S./CAN Toll Free Call +1-800-526-8630. For GMT Office Hours Call +353-1-416-8900.

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

Postdoctoral Research Associate in Quantum Algorithms for Fluid Simulations job with DURHAM UNIVERSITY | 281136 – Times Higher Education (THE)

Department of Physics

Grade 7: £34,304 - £36,382
Fixed Term - Full Time
Contract Duration: 24 months
Contracted Hours per Week: 35
Closing Date: 11-Mar-2022, 7:59:00 AM

The Department

The Department of Physics at Durham University is one of the leading UK Physics departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Department is committed to advancing equality and we aim to ensure that our culture is inclusive, and that our systems support flexible and family-friendly working, as recognised by our Juno Champion and Athena SWAN Silver awards. We recognise and value the benefits of diversity throughout our staff and students.

The Role

Applications are invited for a postdoctoral position to develop quantum algorithms for fluid simulations, to work as part of the EPSRC funded ExCALIBUR project on Quantum Enhanced and Verified Exascale Computing (QEVEC). The QEVEC project addresses the potential of quantum computing as a disruptor in exascale computing. Even at an early stage of development, if quantum computing can be deployed as a co-processor to tackle bottlenecks in existing and future exascale codes, it has the potential to provide a huge boost to the overall computational power.

There will be four PDRAs in the QEVEC team, each with different expertise, working together to develop quantum computing for the main ExCALIBUR use cases (fluids simulations and materials simulations), and methods to validate the hybrid quantum-classical algorithms. This post specifically considers the important application area of quantum algorithms in computational fluid dynamics. The candidate will evaluate new and existing quantum algorithms for their suitability as quantum subroutines for exascale codes, including for lattice Boltzmann, smoothed particle hydrodynamics, and other computational fluid dynamics (CFD) methods.

We are seeking an enthusiastic computational researcher in fluid dynamics who is interested in developing quantum computing skills, or a quantum computing researcher who is keen to investigate potential applications in fluid dynamics simulations. You need to be a good team worker and communicator to work closely with computational scientists across discipline boundaries.

The post is based in Durham, but the candidate is expected to collaborate closely with other members of the QEVEC team based in Strathclyde, UCL, Warwick and London Southbank. Where appropriate, the candidate will also engage with other ExCALIBUR projects, Collaborative Computational Projects (CCPs) and High End Consortiums (HECs), the National Quantum Computing Centre and the Quantum Computing and Simulation Hub.

The post is for 24 months, to commence in June 2022 or as soon as possible thereafter. We also welcome part time applications to this role.

Informal enquiries are welcome and should please be directed to Prof Halim Kusumaatmaja (halim.kusumaatmaja@durham.ac.uk) and/or Dr Alastair Basden (a.g.basden@durham.ac.uk). Further details on the QEVEC project can be found in the EXCALIBUR website (https://excalibur.ac.uk/projects/qevec/).

Responsibilities:

These posts are fixed term for 24 months.

The post-holder is employed to work on a research project which will be led by another colleague. Whilst this means that the post-holder will not be carrying out independent research in their own right, the expectation is that they will contribute to the advancement of the project, through the development of their own research ideas and the adaptation and development of research protocols.

Successful applicants will ideally be in post by 1st June 2022.

The Requirements

Essential:

Desirable:

How to Apply

For informal enquiries please contact Prof Halim Kusumaatmaja at halim.kusumaatmaja@durham.ac.uk and/or Dr Alastair Basden at a.g.basden@durham.ac.uk. All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site. https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University.

What to Submit

All applicants are asked to submit:

A CV and covering letter which details your experience, strengths and potential in the requirements set out above, and clearly describes where you meet the essential and desirable criteria (for example, as a bullet

DBS Requirement:Not Applicable.

Why It’s Time to Think Differently About Honeywell – Motley Fool

As the headline says, it is time to start thinking differently about Honeywell International (NYSE:HON). The company is known as being one of the last great diversified industrial giants, and that definition still applies. However, what many investors might be missing is that Honeywell is an aggressive investor in cutting-edge technologies, and those businesses are going to significantly add to the value of the company in a few years.

Pause for a second and consider investing in a small company backed by substantive investors that's on track to grow its quantum computing-based revenue from $20 million in 2022 to around $2 billion in 2026.

At the same time, the small company has a sustainable technology company (green fuels, feedstocks for recycled plastics, etc.) set to generate $700 million in revenue by 2025. Moreover, this is not any old start-up company with wide-eyed dreams; it's backed by a tried and tested management team with deep pockets.

Such a company would be valued at multiples equivalent to several times its sales. That's the sort of value that investors should start to price into Honeywell stock. The reason is that CEO Darius Adamczyk told investors to expect those revenue figures for two of Honeywell's highest-growth businesses in the coming years.

Of course, transitioning to this kind of thinking won't come easily, and the investments necessary to get there will hold back earnings and free cash flow (FCF) in the near term. However, that's the flip side of the coin, and investors thinking about Honeywell as a diversified industrial might stress over the lost earnings and cash flow.

To put some figures on the matter, management noted that its full-year 2022 earnings before interest, tax, depreciation and amortization (EBITDA) would be negatively affected by $150 million due to investment in its quantum computing business, Quantinuum. Moreover, capital investments made to support growth in Quantinuum will eat into FCF to the tune of $200 million to $300 million in 2022.

To put that figure into context, Honeywell is currently valued at $132 billion, a figure equivalent to 23.2 times its trailing FCF. If investors price the stock based on FCF, then the "lost" $300 million could cost around 23.2 multiplied by $300 million -- roughly $6.96 billion -- in market cap.
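
The back-of-the-envelope arithmetic above is simply the trailing FCF multiple applied to the foregone cash flow. A quick sketch using the article's figures; this is a rough estimate, not a valuation model:

```python
# Back-of-the-envelope: value the "lost" FCF at the stock's trailing
# price-to-FCF multiple (figures from the article; a rough estimate only).
fcf_multiple = 23.2   # Honeywell's trailing price-to-FCF multiple
lost_fcf = 0.3        # $B, upper end of the $200M-$300M FCF hit

impact = fcf_multiple * lost_fcf
print(f"${impact:.2f}B of market cap")   # -> $6.96B of market cap
```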

If you are taking a glass-half-full approach and getting optimistic about the growth investments, then the news about the investments is a good thing. After all, investors put money in stocks because they feel confident that management can generate better returns on the money than they (investors) can.

On the other hand, the glass-half-empty approach laments that Honeywell's earnings and cash flow are being held back and downgrades the stock accordingly. This approach shaves off some of the market cap as outlined above.

Unfortunately, the bull-and-bear debate over the stock won't stop here, not least because Honeywell, in line with many of its industrial peers, expects a 2022 in which the first half is hampered by supply chain pressures and cost increases, followed by a stronger second half.

As such, investors looking at the stock as a diversified industrial will have to tolerate a mix of earnings and margin headwinds from the increased investments and the uncertainty from waiting until the second half for an acceleration in growth at Honeywell.


Moreover, management prepared investors for a challenging first quarter, with organic growth forecast to range from a 2% decline to a 1% increase. Meanwhile, adjusted EPS is forecast at $1.80 to $1.90, implying a decline of 1% to 6%.

In addition, CFO Greg Lewis told investors that "with the supply chain impacts that we have been facing, those will continue to drive higher inventory levels, dampening our cash generation in the short term."

It all adds up to a first quarter that's likely to look a little weak on a headline basis.

If you are looking at the stock purely as a diversified industrial, then the answer to whether it's a buy is probably "no." Despite the fall in the share price, Honeywell is still a highly rated stock, and the slightly disappointing 2022 guidance means it isn't quite at a level enticing enough for investors not taking the long view.

However, for investors looking for a back-door way to play the quantum computing and sustainable technology trends, Honeywell may well represent a great way to do so without taking on the nosebleed valuations and blue-sky assumptions that usually come with such investments.

Whichever way you look at it, Honeywell's growth investments are changing the stock's investment proposition.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

Go here to see the original:
Why It's Time to Think Differently About Honeywell - Motley Fool


Benefits of and Best Practices for Protecting Artificial Intelligence and Machine Learning Inventions as Trade Secrets – JD Supra

We previously discussed which portions of an artificial intelligence/machine-learning (AI/ML) platform can be patented. Under what circumstances, however, is it best to keep at least a portion of the platform a trade secret? And what are some best practices for protecting trade secrets? In this post, we explore important considerations and essential business practices to keep in mind when working to protect the value of trade secrets specific to AI/ML platforms, as well as the pros and cons of trade secret versus patent protection.

Protecting AI/ML Platforms via Trade Secrets

What qualifies as a trade secret can be extraordinarily broad, depending on the relevant jurisdiction: generally speaking, a trade secret is information that is kept confidential and derives value from being kept confidential. This can potentially include anything from customer lists to algorithms. In order to remain a trade secret, however, the owner of the information must follow specific business practices to ensure the information remains secret. If businesses do not follow the prescribed practices, then the ability to protect the trade secret is waived and its associated value is irretrievably lost. The business practices required are not onerous or complex, and we discuss them below, but many businesses are unaware of what is required for their specific type of IP and only discover their error when attempting to monetize their inventions or sell their business. To avoid this devastating outcome, we work to arm our clients with the requisite practices and procedures tailored to their specific inventions and relevant markets.

In the context of AI/ML platforms, trade secrets can include the structure of the AI/ML model, formulas used in the model, proprietary training data, a particular method of using the AI/ML model, any output calculated by the AI/ML model that is subsequently converted into an end product for a customer, and similar aspects of the platform. There are myriad ways in which the value of the trade secret may be compromised.

For example, if an AI/ML model is sold as a platform and the platform provides the raw output of the model and a set of training data to the customer, then the raw output and the set of training data would no longer qualify for trade secret protection. Businesses can easily avoid this pitfall by putting legally binding agreements in place between the parties to protect the confidentiality and ownership interests involved. Another area in which we frequently see companies waive trade secret protection is where the confidential information can be independently discovered (such as through reverse-engineering a product). Again, there are practices that businesses can follow to avoid waiving trade secret protection due to reverse-engineering. Owners must therefore ensure that the information they seek to protect cannot be discovered through use or examination of the product itself; where that cannot be avoided, access should be governed by agreements that prohibit such activities, thereby preserving the right to assert trade secret misappropriation and recover the value of the invention.

To determine if an invention may be protected as a trade secret, courts will typically examine whether the business has followed best practices or reasonable efforts for the type of IP and relevant industries. See, e.g., Intertek Testing Services, N.A., Inc. v. Frank Pennisi et al., 443 F. Supp. 3d 303, 323 n.19 (E.D.N.Y. Mar. 9, 2020). What constitutes best practices for a particular type of IP can vary greatly. For example, a court may examine whether those trade secrets were adequately protected. The court may also look to whether the owner created adequate data policies to prevent employees from mishandling trade secrets. See Yellowfin Yachts, Inc. v. Barker Boatworks, LLC, 898 F.3d 1279 (11th Cir. Aug. 7, 2018) (where the court held that requiring password protection to access trade secrets was insufficient without adequate measures to protect information stored on employee devices). If the court decides that the business has not employed best practices, the owner can lose trade secret protection entirely.

Most often, a failure to ensure all parties who may be exposed to trade secrets are bound by a legally sufficient confidentiality or non-disclosure agreement forces the owner to forfeit their right to trade secret protection for that exposed information. Owners should have experienced legal counsel draft these agreements to ensure that the agreements are sufficient to protect the trade secret and withstand judicial scrutiny; many plaintiffs have learned the hard way that improperly drafted agreements can affect the trade secret protection afforded to their inventions. See, e.g., BladeRoom Group Ltd. v. Emerson Electric Co., 11 F.4th 1010, 1021 (9th Cir. Aug. 30, 2021) (holding that NDAs with expiration dates also created expiration dates for trade secret protection); Foster Cable Servs., Inc. v. Deville, 368 F. Supp. 3d 1265 (W.D. Ark. 2019) (holding that an overbroad confidentiality agreement was unenforceable); Temurian v. Piccolo, No. 18-cv-62737, 2019 WL 1763022 (S.D. Fla. Apr. 22, 2019) (holding that efforts to protect data through password protection and other means were negated by not requiring employees to sign a confidentiality agreement).

There are many precautions owners can take to protect their trade secrets, which we discuss below:

Confidentiality and Non-Disclosure Agreements: One of the most common methods of protecting trade secrets is to execute robust confidentiality agreements and non-disclosure agreements with everyone who may be exposed to trade secrets, to ensure they have a legal obligation to keep those secrets confidential. Experienced legal counsel who can ensure the agreements are enforceable and fully protect the owner and their trade secrets are essential as there are significant pitfalls in these types of agreements and many jurisdictions have contradicting requirements.

Marketing and Product Development: The AI/ML platform itself should also be constructed and marketed in such a way as to prevent customers from easily discovering the trade secrets, whether through viewing marketing materials, through ordinary use of the platform, or through reverse-engineering of the platform. For example, if an AI/ML platform uses a neural network to classify medical images, and the number of layers used and the weights used by the neural network to calculate output are commercially valuable, the owner should be careful to exclude any details about the layers of the AI/ML model in marketing materials. Further, the owner may want to consider developing the platform in such a way that the neural network is housed internally (protected by various security measures) and therefore not directly accessible by a customer seeking to reverse-engineer the product.

Employee Training: Additionally, owners should also ensure that employees or contractors who may be exposed to trade secrets are trained in how to handle those trade secrets, including how to securely work on or discuss trade secrets, how to handle data on their personal devices (or whether trade secret information may be used on personal devices at all), and other such policies.

Data Security: Owners should implement security precautions (including limiting who can access trade secrets, requiring passwords and other security procedures to access trade secrets, restricting where data can be downloaded and stored, implementing mechanisms to protect against hacking attempts, and similar precautions) to reduce the risk of unintended disclosure of trade secrets. Legal counsel can help assess existing measures to determine whether they are sufficient to protect confidential information under various trade secret laws.

Pros and Cons of Trade Secret Protection over Patent Protection

Trade secret protection and patent protection are obtained and maintained in different ways. There are many reasons why trade secret protection may be preferable to patent protection for various aspects of an AI/ML platform, or vice versa. Below we discuss some criteria to consider before deciding how to protect one's platform.

Protection Eligibility: As noted in our previous blog post, patent protection may be sought for many components of an AI/ML platform. There are, however, some aspects of an AI/ML platform that may not be patent-eligible. For example, while the architecture of a ML model may be patentable, specific mathematical components of the model, such as the weight values, mathematical formulas used to calculate weight values in an AI/ML algorithm, or curated training data, may not, on their own, be eligible for patent protection. If the novelty of a particular AI/ML platform is not in how an AI/ML model is structured or utilized, but rather in non-patentable features of the model, trade secret protection can be used to protect this information.

Cost: There are filing fees, prosecution costs, issue fees, and maintenance fees required to obtain and keep patent protection on AI/ML models. Even for an entity that qualifies as a micro-entity under the USPTO's fee schedule, the lifetime cost of a patent could be several thousand dollars in fees, and several thousand dollars in attorneys' fees to draft and prosecute the patent. Conversely, the costs of trade secret protection are the costs to implement any of the above methods of keeping critical portions of the AI/ML model secret from others. In many instances, it may be less expensive to rely on trade secret protection than to obtain patent protection.

Development Timeline: AI/ML models, or software that implements them, may undergo several iterations through the course of developing a product. As it may be difficult to determine which, if any, iterations are worth long-term protection until development is complete, it may be ideal to protect each iteration until the value of each has been determined. However, obtaining patent protection on each iteration may, in some circumstances, be infeasible. For example, once a patent application has been filed, the specification and drawings cannot be amended to cover new, unanticipated iterations of the AI/ML model; a new application that includes the new material would need to be filed, incurring further costs. Additionally, not all iterations will necessarily include changes that can be patented, or it may be unknown until after development how valuable a particular modification is to technology in the industry, making it difficult to obtain patent protection for all iterations of a model or software using the model. In these circumstances, it may be best to use a blend of trade secret and patent protection. For example, iterations of a model or software can be protected via trade secret; the final product, and any critical iterations in between, can subsequently be protected by one or more patents. This allows for a platform to be protected without added costs per iteration, and regardless of the nature of the changes made in each iteration.

Duration of Protection: Patent owners can enjoy protection of their claimed invention for approximately twenty years from the date of filing a patent application. Trade secret protection, on the other hand, lasts as long as an entity keeps the protected features a secret from others. For many entities, the twenty-year lifetime of a patent is sufficient to protect an AI/ML platform, especially if the patent owner anticipates substantially modifying the platform (e.g., to adapt to future needs or technological advances) by the end of the patent term. To the extent any components of the AI/ML platform are unlikely to change within twenty years (for example, if methods used to curate training data are unlikely to change even with future technological advances), it may be more prudent to protect these features as trade secrets.

Risk of Reverse-Engineering: As noted above, trade secrets do not protect inventions that competitors have been able to discover by reverse-engineering an AI/ML product. While an entity may be able to prevent reverse-engineering of some aspects of the invention through agreements between parties with permission to access the AI/ML product or through creative packaging of the product, there are some aspects of the invention (such as the training data that needs to be provided to the platform, end product of the platform, and other features) that may need to remain transparent to a customer, depending on the intended use of the platform. Such features, when patent-eligible, may benefit more from patent protection than from trade secret protection, as a patent will protect the claimed invention even if the invention can be reverse-engineered.

Exclusivity: A patent gives the patent owners the exclusive right to practice or sell their claimed inventions, in exchange for disclosing how their inventions operate. Trade secrets provide no such benefit; to the extent competitors are able to independently construct an AI/ML platform, they are allowed to do so even if an entity has already sold a similar platform protected by trade secret. Thus, to the extent an exclusive right to the AI/ML model or platform is necessary for the commercial viability of the platform or its use, patent protection may be more desirable than trade secret protection.

Conclusion

Trade secret law allows broad protection of information that can be kept secret from others, provided certain criteria are met to ensure the information is adequately protected from disclosure to others. Many aspects of an AI/ML platform can be protected under either trade secret law or patent law, and many aspects of an AI/ML platform may only be protected under trade secret law. It is therefore vital to consider trade secret protection alongside patent protection, to ensure that each component of the platform is being efficiently and effectively protected.


See original here:
Benefits of and Best Practices for Protecting Artificial Intelligence and Machine Learning Inventions as Trade Secrets - JD Supra


Raster plots machine learning to predict the seizure liability of drugs and to identify drugs | Scientific Reports – Nature.com

Human iPSC-derived neural network drug response

Culturing of a human iPSC-derived neural network seeded on an MEA was possible without cell aggregation even on the 12th week of culturing. Network burst firing was observed from the 6th week of culture onward. Figure 1A(a) shows a phase-contrast image at 81 days in vitro (DIV), and Fig. 1A(b) shows a typical network burst signal. Concentration-dependent data were obtained for 13 seizure-causing compounds and two seizure-free compounds after the 14th week of culturing, when the neural networks were considered mature [28]. Whenever the signal obtained passed a threshold, the detected spikes were used to create a raster plot. Figure 1A(c) shows the threshold used to detect spikes in the single-electrode signal (top portion) and a raster plot of the detected spikes (bottom portion). Figure 1B shows raster plots of compounds with different mechanisms of action: (a) 4-aminopyridine (4-AP), (b) pentylenetetrazol (PTZ), (c) carbamazepine, (d) N-methyl-D-aspartic acid (NMDA), (e) acetaminophen, and (f) dimethyl sulfoxide (DMSO). Seizure-causing compounds caused different changes depending on their mechanism of action (Fig. 1B). Figure 1C shows a schematic of five analytic parameters calculated from raster plots: total spikes (TS), number of network bursts (NoB), inter-network-burst interval (IBI), duration of a network burst (DoB), and spikes in a network burst (SiB). Figure 2 shows the drug response of each parameter with the vehicle response set to 100%. The numerical data are listed in Supplementary Tables S1–S5. The maximum increases in the NoB for 4-AP and PTZ were 321.0% ± 15.4% (30 µM) and 147.3% ± 2.7% (10 µM), respectively. The IBI, DoB, and SiB decreased starting at a concentration of 1 µM for 4-AP and PTZ (Fig. 2a,b). The DoB decreased starting at 0.3 µM of picrotoxin (Fig. 2c). For carbamazepine, the TS and NoB decreased at 30 µM, and the DoB decreased and the IBI increased at 100 µM (Fig. 2d).
For pilocarpine, the IBI increased starting at 10 µM, the DoB decreased starting at 30 µM, and the TS decreased at 100 µM (Fig. 2e). For kainic acid, the TS decreased at 0.3 µM and the NoB went to 0 starting at 1 µM (Fig. 2f). For NMDA, the TS increased at 1 µM, whereas the TS, DoB, and SiB decreased and the NoB increased at 10 µM (Fig. 2g). For tramadol, the NoB decreased and the SiB increased starting at 3 µM, the TS, DoB, and SiB decreased at 30 µM, and the IBI increased at 100 µM (Fig. 2h). For theophylline, the IBI increased starting at 10 µM, and the SiB increased whereas the NoB decreased starting at 30 µM (Fig. 2i). For paroxetine, the DoB decreased starting at 0.3 µM, and the TS decreased starting at 1 µM (Fig. 2j). For varenicline, the IBI increased and the DoB decreased at 30 µM (Fig. 2k). For venlafaxine, the DoB decreased at 10 µM, and the TS and SiB decreased at 30 µM (Fig. 2l). For acetaminophen, the DoB decreased starting at 3 µM (Fig. 2m). For DMSO and amoxapine, no changes in any parameters were observed (Fig. 2n,o).
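To make the five parameters concrete, here is a minimal sketch (not the authors' code; the function name, the pooled-spike representation, and the use of pre-detected burst intervals are simplifying assumptions) of computing TS, NoB, IBI, DoB, and SiB from spike timestamps:

```python
import numpy as np

def raster_parameters(spike_times, bursts):
    """Compute the five raster-plot parameters.
    spike_times: 1-D array of spike times (s), pooled over electrodes.
    bursts: list of (start, end) times of detected network bursts."""
    TS = len(spike_times)                                  # total spikes
    NoB = len(bursts)                                      # number of network bursts
    starts = np.array([b[0] for b in bursts])
    IBI = np.diff(starts).mean() if NoB > 1 else np.nan    # inter-network-burst interval
    DoB = np.mean([e - s for s, e in bursts]) if NoB else np.nan   # burst duration
    SiB = np.mean([np.sum((spike_times >= s) & (spike_times <= e))
                   for s, e in bursts]) if NoB else np.nan # spikes per burst
    return TS, NoB, IBI, DoB, SiB

# Toy example: two dense bursts of 50 and 60 spikes
spikes = np.concatenate([np.linspace(1.0, 1.2, 50), np.linspace(3.0, 3.3, 60)])
print(raster_parameters(spikes, [(1.0, 1.2), (3.0, 3.3)]))
```

Each parameter is then expressed as a percentage of its vehicle-control value, as in Fig. 2.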

MEA data from a cultured human iPSC-derived neural network. (A) (a) Phase-contrast image of neurons on an MEA chip at 81 days in vitro (DIV). (b) Typical action potential waveform in a spontaneous recording. (c) The upper graph shows the action potential waveform acquired with a single electrode and the voltage threshold for spike detection (red line); raster plots of detected spikes (black circles) are shown under the graph. (B) Concentration-dependent raster plot images for typical mechanisms of action: (a) 4-AP, (b) carbamazepine, (c) NMDA, (d) PTZ, (e) acetaminophen, (f) DMSO. (C) Schematic diagram of the analysis parameters.

Concentration-dependent changes of 15 compounds in five parameters: TS (pink), NoB (black), IBI (green), DoB (blue), SiB (cyan). Parameters are depicted as the average % change from control (vehicle control set to 100%) ± SEM from n = 3–4 wells. Data were analyzed using one-way ANOVA followed by a post hoc Dunnett's test (*p < 0.05, **p < 0.01 vs. vehicle).

Based on the preceding results, we found that the changes in the parameters studied were not similar among all seizure-causing compounds; changes differed based on the mechanism of action of the drug. At the same time, a significant difference in the DoB was detected for acetaminophen, which is a seizure-free compound. Changes in DoB may be observed for certain seizure-free compounds. Consequently, we found that there are difficulties in using a single parameter to distinguish between seizure-causing compounds with different mechanisms of action and seizure-free compounds.

We created an artificial intelligence (AI) trained on raster plots so that it could classify the responses of seizure-causing compounds with different mechanisms of action as well as the responses of seizure-free compounds. Raster plots were created from the time-series data of the detected spikes, and images were then created by segmenting the data into time windows four times the inter-maximum-frequency network burst interval (IMFI) measured before drug administration. Because the network burst frequency differed between wells, the number of segmented raster plot images also differed between wells. Four times the IMFI was chosen because it is suitable for capturing both the regularity of network burst activity and fine firing patterns, and it reduces variability between wells. Next, the segmented raster plot images were input into AlexNet [36], an object recognition model, and the 4096-dimensional outputs of the fully connected layer (the 21st layer) were extracted as image feature quantities. Lastly, we corrected for differences between wells due to differing initial states by normalizing the feature quantities for each drug around the mean feature quantities obtained when the vehicle was administered to each well. Datasets for the 13 seizure-causing compounds and two seizure-free compounds (sized by the number of split raster plot images per concentration) were created as shown in Table 1. We used a pattern recognition neural network composed of a 4096-neuron input layer, a hidden layer of nine sigmoid neurons, and a two-class output layer, which made up a toxicity prediction model to predict whether a compound was seizure-causing or seizure-free (Fig. 3A).
We used four seizure-causing compounds with different mechanisms and burst frequency responses (4-AP [30 and 60 µM, n = 3 wells each], carbamazepine [100 µM, n = 3 wells], NMDA [3 and 10 µM, n = 3 wells], and PTZ [1000 µM, n = 3 wells]) and two seizure-free compounds (all concentrations of acetaminophen [n = 3 wells] and all concentrations of DMSO [n = 3 wells]) to train and validate the model; 75% of the dataset was used for training, and the remaining 25% was used for validation after training (Table 1). These four seizure-causing compounds were selected to cover the range of firing patterns produced by seizure-causing compounds: compounds with different mechanisms of action were chosen as training data, including both compounds that increase firing and compounds that decrease it. Accuracy was evaluated after training using the raster plots of unlearned wells, i.e., using a holdout scheme. The training data contained 330 4-AP plots, 822 carbamazepine plots, 1323 NMDA plots, 198 PTZ plots, 3546 acetaminophen plots, and 2286 DMSO plots. The test data contained 111 4-AP plots, 294 carbamazepine plots, 441 NMDA plots, 54 PTZ plots, 1182 acetaminophen plots, and 702 DMSO plots. We created a confusion matrix of the seizure-causing and seizure-free classification results from the training data and test data (Fig. 3B). Next, a receiver operating characteristic (ROC) curve and the area under the curve (AUC) were calculated for all training data and all test data, and the optimal operating point was determined (Fig. 3C(a)). The accuracy, positive predictive value, sensitivity, specificity, and F-measure of the model's predictions at the optimal operating point were calculated (Table 2). The model trained on raster plot feature quantities had an AUC of 0.9998 on the training data and 0.9967 on the unlearned data; the optimal operating point was 0.158.
The classification precision on the training data for each drug at the optimal operating point was as follows: 100% for 4-AP, 97.8% for carbamazepine, 99.6% for NMDA, and 96.0% for PTZ. The classification precision on the unlearned data was 100% for 4-AP, 91.5% for carbamazepine, 100% for NMDA, and 94.4% for PTZ. The prediction accuracy across all compounds was 98.4%.
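The ROC/AUC evaluation and the choice of an operating point can be sketched as follows. This is an illustrative implementation, not the authors' code: here the "optimal operating point" is taken to be the threshold maximizing Youden's J statistic, one common convention; the paper does not state which criterion was used.

```python
import numpy as np

def roc_auc_and_operating_point(y_true, scores):
    """Compute ROC AUC (trapezoidal rule) and the score threshold
    maximizing Youden's J = sensitivity + specificity - 1."""
    order = np.argsort(-np.asarray(scores))       # sort scores descending
    y = np.asarray(y_true)[order]
    s = np.asarray(scores)[order]
    tpr = np.cumsum(y) / y.sum()                  # sensitivity at each cut
    fpr = np.cumsum(1 - y) / (len(y) - y.sum())   # 1 - specificity at each cut
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoid area
    best = np.argmax(tpr - fpr)                   # Youden's J maximizer
    return auc, s[best]

# Toy example: 4 positives and 4 negatives with mostly well-ordered scores
y = [1, 1, 1, 0, 0, 1, 0, 0]
p = [0.95, 0.9, 0.8, 0.7, 0.4, 0.35, 0.2, 0.1]
auc, threshold = roc_auc_and_operating_point(y, p)
print(auc, threshold)  # 0.875 at operating point 0.8
```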

Creation of seizure risk prediction AI using raster plot images and evaluation of the prediction model. (A) Data flow and architecture of the seizure risk prediction model. w1 is the weight between the input layer and the hidden layer; w2 is the weight between the hidden layer and the output layer. (B) (a) Confusion matrix for each compound used for training, (b) confusion matrix for the entire training dataset, (c) confusion matrix for each compound used for the test, (d) confusion matrix for the entire test dataset. The test dataset used data from wells that were not included in the training dataset. Vehicle in the confusion matrix indicates vehicle data for the four seizure-causing compounds. (C) (a) Receiver operating characteristic (ROC) curve after classification of training and testing data in the neural network model (black line: training data; red line: testing data; red dot: optimum operating point). (b) Comparison of ROC curves after classification of the same testing data in the NN and SVM models (black line: SVM model; red line: NN model).

Figure 3C(b) shows the ROC curve using a support vector machine (SVM) model trained with the same 4096-dimensional feature dataset as the neural network (NN) model. Comparing the test-data AUCs of the SVM and NN revealed that the NN model had an AUC of 0.9967 and the SVM model had an AUC of 0.9841; thus, the NN model was superior to the SVM model (Fig. 3C(b)). Therefore, in this study, we used the NN model.
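The NN-versus-SVM comparison can be sketched with scikit-learn on synthetic features (an illustrative setup, not the paper's data or hyperparameters; the feature matrix, labels, and toy signal are all assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Train a small NN with a sigmoid hidden layer and an SVM on the same
# feature vectors, then compare test AUC, mirroring Fig. 3C(b).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # toy signal in the first 5 features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nn = MLPClassifier(hidden_layer_sizes=(9,), activation="logistic",
                   max_iter=2000, random_state=0).fit(X_tr, y_tr)
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)

auc_nn = roc_auc_score(y_te, nn.predict_proba(X_te)[:, 1])
auc_svm = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
print(f"NN AUC = {auc_nn:.3f}, SVM AUC = {auc_svm:.3f}")
```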

The seizure-causing/seizure-free classification AI trained on the raster plots that we created accurately classified the responses of seizure-causing compounds with differing mechanisms and seizure-free compounds.

If we are able to establish a ranked development priority for compounds based on their seizure liability, it will lead to more efficient drug discovery and development. Determining concentration dependence is necessary in order to assign priority to drugs. Thus, using the AI we created, we investigated the concentration dependence of seizure-causing/seizure-free judgments. The toxicity probabilities predicted by the AI for each concentration are shown in Fig. 4. The proportions of images classified as seizure-causing and as seizure-free were computed from the time-series data of each well, and the mean probability for each well was then calculated and used to represent the toxicity risk at each concentration. For unlearned samples, which include data from wells that were not used for the training dataset, the following concentrations were determined to have a seizure liability probability of 50% or higher: 4-AP, 1 µM (62.2%), 10 µM (94.6%), 30 µM (100%), and 60 µM (100%); carbamazepine, 30 µM (76.9%) and 100 µM (85.0%); NMDA, 1 µM (63.3%), 3 µM (100%), and 10 µM (100%); and PTZ, 1 µM (51.9%), 10 µM (81.5%), 100 µM (88.9%), and 1000 µM (88.9%) (Fig. 4a,b,d,e). Seizure liability was indicated even at concentrations lower than those the AI was trained on, and the concentration dependence was thereby established. Acetaminophen, a seizure-free compound, was determined to be seizure-free with a probability of 97.9% or higher regardless of the concentration. DMSO was also determined to be seizure-free, with a probability of 99.1% or higher regardless of the concentration (Fig. 4c,f). The seizure liability prediction AI we created thus determined the concentration dependence of seizure-causing compounds and identified seizure-free compounds as seizure-free regardless of concentration.
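The per-concentration risk score described above (average the classifier's per-image probabilities within each well, then summarize across wells) can be sketched as follows; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def well_risk(window_probs_by_well):
    """Summarize seizure probability at one concentration.
    window_probs_by_well: list of arrays, one per well, each holding the
    per-window (segmented raster plot image) seizure probabilities."""
    per_well_mean = [float(np.mean(p)) for p in window_probs_by_well]
    return per_well_mean, float(np.mean(per_well_mean))

# Toy example: two wells with different numbers of segmented images
wells = [np.array([0.9, 0.8, 0.95]), np.array([0.7, 0.75])]
print(well_risk(wells))
```

Averaging within wells first keeps wells with more segmented images from dominating the concentration-level score.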

Concentration-dependent prediction of seizure risk in learning drugs by AI. AI predicted the negative probabilities (blue bar) and seizure risk (red bar) at each concentration of training data (left) and test data (right). (a) 4-AP, (b) NMDA, (c) acetaminophen, (d) carbamazepine, (e) PTZ, (f) DMSO.

It is important for the AI that we created to detect the toxicity of drugs on which it has not been trained. Thus, we used the AI to determine the toxicity of nine unlearned seizure-causing compounds based on data collected for them. To verify the AI, the nine unlearned seizure-causing compounds were treated as unknown compounds and were not included in training. Figure 5 shows the seizure toxicity determination results for each concentration of the unlearned drugs. The concentrations that showed a 50% or higher probability of seizure liability were as follows: kainic acid, 1 µM (81.8%), 3 µM (100%), and 10 µM (100%); paroxetine, 3 µM (73.7%), 10 µM (100%), and 30 µM (100%); picrotoxin, 0.1 µM (91.4%), 0.3 µM (93.7%), 1 µM (91.8%), 3 µM (97.8%), and 10 µM (91.5%); varenicline, 10 µM (52.6%) and 30 µM (77.1%); pilocarpine, 1 µM (62.3%), 3 µM (75.8%), 10 µM (86.8%), 30 µM (89.4%), and 100 µM (97.0%); tramadol, 3 µM (61.9%), 10 µM (88.6%), 30 µM (98.9%), and 100 µM (100%); and venlafaxine, 10 µM (90.5%), 30 µM (100%), and 100 µM (100%). Seven of the unlearned drugs were determined to have concentration-dependent seizure liability (Fig. 5a–d,f–h). On the other hand, amoxapine and theophylline were determined to be seizure-free at all concentrations (Fig. 5e,i). This showed that the AI was able to detect seizure toxicity in a concentration-dependent manner, even for unlearned drugs.

Concentration-dependent prediction of seizure risk in non-training drugs by AI. AI predicted the negative probabilities (blue bar) and seizure risk (red bar) at each concentration. (a) Kainic acid, (b) paroxetine, (c) picrotoxin, (d) varenicline, (e) amoxapine, (f) pilocarpine, (g) tramadol, (h) venlafaxine, (i) theophylline.

To verify whether the AI can determine the safety of unlearned negative compounds, data for the negative compounds aspirin (1, 3, 10, 30, and 100 µM), amoxicillin (1, 3, 10, 30, and 100 µM), and felbinac (1, 3, 10, 30, and 100 µM) were judged (Fig. 6). The negative probabilities for aspirin were 76.3% (1 µM), 82.0% (3 µM), 79.0% (10 µM), 80.8% (30 µM), and 81.7% (100 µM); for amoxicillin, 91.3% (1 µM), 86.3% (3 µM), 86.4% (10 µM), 81.1% (30 µM), and 77.6% (100 µM); and for felbinac, 83.8% (1 µM), 80.9% (3 µM), 76.1% (10 µM), 71.8% (30 µM), and 77.7% (100 µM) (Fig. 6b). Although there were some significant differences in the conventional analysis parameters (Fig. 6a), the AI judged all three compounds negative at every concentration. These results confirmed that the AI judges even unlearned negative compounds to be negative.

Prediction of seizure risk in non-training negative compounds by AI. (A) Concentration-dependent changes of three negative compounds in five parameters: TS (pink), NoB (black), IBI (green), DoB (blue), SiB (cyan). (a) Aspirin, (b) amoxicillin, (c) felbinac. (B) AI-predicted negative probabilities (blue bar) and seizure risk (red bar) at each concentration.

Because seizure-causing compounds with differing mechanisms elicit different responses, an AI that can classify the compounds themselves could also predict the mechanism of seizure liability of unlearned drugs. We therefore trained the AI on drug names and raster plots to classify responses as belonging to seizure-causing compounds with differing mechanisms or to seizure-free compounds.

We used a pattern-recognition neural network consisting of an input layer of 4096 neurons, a hidden layer of 120 sigmoid neurons, and an output layer with 14 classes (Fig. 7) as a drug-identification model to predict the names of seizure-causing and seizure-free compounds. The model was trained on a dataset comprising 4-AP (30 and 60 µM), amoxapine (100 µM), carbamazepine (30 and 100 µM), kainic acid (1, 3, and 10 µM), NMDA (3 and 10 µM), PTZ (1000 µM), paroxetine (3, 10, and 30 µM), picrotoxin (1, 3, and 10 µM), pilocarpine (10, 30, and 100 µM), theophylline (100 µM), tramadol (30 and 100 µM), varenicline (30 µM), and venlafaxine (10, 30, and 100 µM), as well as all concentrations of acetaminophen and of DMSO as seizure-free compounds (Table 3). The full dataset comprised 56 wells. Training was conducted by excluding one of the 56 wells and training the AI on the drug names in the other 55 wells; 75% of the 55-well dataset was used for training and 25% for validation after training. The excluded well provided the test data, and prediction accuracy was calculated under this leave-one-sample (well)-out scheme. We created five AIs for each excluded well, i.e., 56 × 5 = 280 AIs in total. For the data from the single held-out well (which was not used to train the AI), the drug name was identified with each of the five models and the mean value was calculated. The deviation of the five models' prediction accuracy was 0.11% at the trained concentrations of all drugs, and 1.6% across all concentrations of all drugs. The predictive probabilities at different drug concentrations are shown in Table 4. DMSO and acetaminophen, the seizure-free compounds, were judged seizure-free at all concentrations for every drug vehicle, with a mean probability of 99.9% ± 0.3%.
4-AP (1 µM), amoxapine (3 µM), NMDA (1 µM), picrotoxin (0.1 µM), pilocarpine (1 µM), PTZ (10 µM), theophylline (3 µM), varenicline (10 µM), venlafaxine (3 µM), and tramadol (10 µM) were correctly identified at concentrations lower than those in the training data. Carbamazepine (30 µM), kainic acid (1 µM), and paroxetine (3 µM) were correctly identified at the concentrations used to train the AI. The drugs that could not be identified at certain concentrations were all judged as seizure-free compounds, and no drug was misidentified as a different drug. The mean predictive accuracy for all drugs at the concentrations used to train the AI was 99.9% ± 0.1%. The drug-identification AI we created thus correctly identified the responses of 13 seizure-causing compounds and two seizure-free compounds.
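The leave-one-well-out evaluation described above can be sketched as follows. This is a minimal, hypothetical skeleton of the control flow only (not the authors' code): the wells, the `train_fn` factory, and the dummy model are made up for illustration, and each held-out well is predicted by five independently trained models whose outputs are averaged.

```python
# Hedged sketch of the leave-one-sample (well)-out scheme: for each of the
# wells, hold it out, train five models on a 75/25 split of the rest, and
# average the five predictions for the held-out well.
import random

def evaluate_loo(wells, train_fn, n_models=5):
    """wells: list of (features, label); train_fn(train, val, seed) -> predict(x)."""
    results = []
    for i, (x_test, y_test) in enumerate(wells):
        train = wells[:i] + wells[i + 1:]      # exclude one well
        preds = []
        for seed in range(n_models):           # five independently trained models
            random.shuffle(train)              # 75% training / 25% validation split
            split = int(0.75 * len(train))
            model = train_fn(train[:split], train[split:], seed)
            preds.append(model(x_test))
        results.append((y_test, sum(preds) / n_models))  # mean of the 5 predictions
    return results

# Toy usage with 4 wells and a constant "model", just to exercise the flow
# (with 56 wells and 5 models per fold this would train 56 × 5 = 280 models):
wells = [([0], 0), ([1], 1), ([2], 0), ([3], 1)]
dummy_train = lambda tr, val, seed: (lambda x: 0.5)
out = evaluate_loo(wells, dummy_train)
print(len(out))  # one averaged prediction per held-out well -> 4
```

Averaging several models per fold, as reported, reduces the variance of the per-well prediction, which is consistent with the small deviations (0.11% and 1.6%) quoted above.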

Figure 7. Creation of the drug name prediction AI using raster plot images. Data flow and architecture of the drug name prediction model. w1 is the weight between the input layer and the hidden layer; w2 is the weight between the hidden layer and the output layer.

View original post here:
Raster plots machine learning to predict the seizure liability of drugs and to identify drugs | Scientific Reports - Nature.com


Stanford to offer Free Machine Learning with Graphs course online from fall – Analytics India Magazine

Stanford University's Machine Learning with Graphs course will be available online for free from the fall of 2022.

Complex data can be represented as a graph of relationships between objects. Such networks are a fundamental tool for modelling social, technological, and biological systems. The course focuses on the computational, algorithmic, and modelling challenges specific to the analysis of massive graphs. By studying the underlying graph structure and its features, students are introduced to machine learning techniques and data mining tools that can reveal insights into a variety of networks.

The topics covered in the course include representation learning and Graph Neural Networks; algorithms for the World Wide Web; reasoning over Knowledge Graphs; influence maximisation; disease outbreak detection and social network analysis.
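Among the topics listed, "algorithms for the World Wide Web" classically includes PageRank. As a hedged illustration of that family of algorithms (the four-node link graph below is invented for the example, not taken from the course), a tiny power-iteration PageRank looks like this:

```python
# Minimal power-iteration PageRank sketch over an adjacency-list graph.
# Each node's rank is split evenly among its out-links; a damping factor
# models the random surfer occasionally jumping to a uniform node.

def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}  # teleportation mass
        for u, outs in links.items():
            share = damping * rank[u] / len(outs)    # split rank over out-links
            for v in outs:
                new[v] += share
        rank = new
    return rank

# Hypothetical 4-page web: "c" is linked to by three pages.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c" accumulates the most link mass
```

The same structure-first viewpoint (ranks determined purely by the link graph, not page content) carries over to the course's other topics such as influence maximisation and reasoning over knowledge graphs.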

The pre-requisites for the course include:

1. Knowledge of basic computer science principles, sufficient to write a reasonably non-trivial computer program (e.g., CS107 or CS145 or equivalent are recommended)

2. Familiarity with basic probability theory (CS109 or Stat116 are sufficient but not necessary)

3. Familiarity with basic linear algebra

The recitation sessions in the first weeks of the class will give an overview of the expected background. Stanford University recommends Graph Representation Learning; Networks, Crowds, and Markets: Reasoning About a Highly Connected World; and Network Science as optional reading.


Read the rest here:
Stanford to offer Free Machine Learning with Graphs course online from fall - Analytics India Magazine
