
Inverted AI Secures Seed Round for Generative AI in AV/ADAS Development – AiThority

Inverted AI, a world-class generative AI company for AV/ADAS development, is pleased to announce it has secured over $4 million USD in institutional early-stage financing, led by Yaletown Partners and including strategic investors such as Blue Titan Ventures, Dasein Capital, Inovia Capital, Defined, and WUTIF.

The funding enables Inverted AI to further accelerate and expand its product offerings, including those based on its leading ITRA (Imagining the Road Ahead) technology, as the leader in providing behavioral models that accelerate the safe commercial deployment of autonomous solutions, including advanced ADAS, autonomous vehicles and robots, and a full range of realistic, simulation-based systems. ITRA enables longer and more complex simulation scenarios that can be used to test the ADAS/AV system against the full range of behaviors displayed by the human road users around it. Our technology helps ensure the safety of ADAS and AV systems while significantly reducing development costs and time by enabling a full range of diverse and realistic testing scenarios.


"Inverted AI's predictive human behavioral models are designed to bring the most realistic behavior to simulation based on massive quantities of video data, resulting in simulations with reactive, diverse, and realistic NPCs across vehicle classes and pedestrians," said Frank Wood, CEO of Inverted AI. "We're grateful for Yaletown Partners' leadership, technology expertise, and ability to help us scale." Yaletown has invested in 75+ technology-driven companies across North America, developing a strong roster of portfolio companies, and has a total enterprise value exposure that exceeds $10Bn. "The opportunity of generative AI to advance adaptive systems and accelerate the convergence of simulation and real-world applications, in our view, is one of the great thematic changes underway. Inverted AI is an extraordinary team that has created the world's leading foundational models in generative AI for human behavior, with the potential to unlock significant value in all areas of autonomy," observed Eric Bukovinsky, Partner at Yaletown Partners.


Inverted AI currently offers API access to DRIVE, with its leading human-like non-playable character (NPC) driving behaviors that are diverse, realistic, and reactive, and INITIALIZE, with realistic and diverse agent placements. The company has begun beta testing new products including BLAME, which can automatically determine from logs which agent(s) caused a collision and why, to efficiently validate simulations, and SCENARIO, enabling whole-scene scenario generation with realistic, reactive, and diverse agents including pedestrians, bikes, cars, buses, and lights. The company recently launched a new site with an interactive demo of the API and the ability for researchers and developers to sign up for a trial API key.


[To share your insights with us, please write to sghosh@martechseries.com]


NEC Launches New AI Business Strategy with the Enhancement and Expansion of Generative AI – AiThority

Building a foundation model to enable scale and function expansion

NEC Corporation has enhanced and expanded the performance of its lightweight large language model (LLM) and is scheduled to launch it in the spring of 2024. With this development, NEC is aiming to provide an optimal environment for the use of generative artificial intelligence (AI) that is customized for each customer's business and centered on a specialized model based on NEC's industry and business know-how.


These services are expected to dramatically expand the environment for transforming operations across a wide range of industries, including healthcare, finance, local governments and manufacturing. Moreover, NEC will focus on developing specialized models for driving the transformation of business and promoting the use of generative AI from individual companies to entire industries through managed application programming interface (API) services.

NEC has enhanced its LLM by doubling the amount of high-quality training data and has confirmed that it outperformed a group of top-class LLMs in Japan and abroad in a comparative evaluation of Japanese dialogue skills (Rakuda*). Furthermore, the LLM can handle up to 300,000 Japanese characters, which is up to 150 times longer than third-party LLMs, enabling it to be used for a wide range of operations involving huge volumes of documents, such as internal and external business manuals.

NEC is also developing a new architecture that will create new AI models by flexibly combining models according to input data and tasks. Using this architecture, NEC aims to establish a scalable foundation model that can expand the number of parameters and extend functionality. Specifically, the model size can be scaled from small to large without performance degradation, and it is possible to flexibly link with a variety of AI models, including specialized AI for legal or medical purposes, and models from other companies and partners. Additionally, its small size and low power consumption enable it to be installed in edge devices. Furthermore, by combining NEC's world-class image recognition, audio processing, and sensing technologies, the LLMs can process a variety of real-world events with high accuracy and autonomy.


In parallel, NEC has also started developing a large-scale model with 100 billion parameters, much larger than the conventional 13 billion parameters. Through these efforts, NEC aims to achieve sales of approximately 50 billion yen over the next three years from its generative AI-related business.

The development and use of generative AI has accelerated rapidly in recent years. Companies and public institutions are examining and verifying business reforms using various LLMs, and the demand for such reforms is expected to increase in the future. On the other hand, many challenges remain in its utilization, such as the need for prompt engineering to accurately instruct AI, security aspects such as information leakage and vulnerability, and business data coordination during implementation and operation.

Since the launch of the NEC Generative AI Service in July 2023, NEC has been leveraging the NEC Inzai Data Center, which provides a low-latency and secure LLM environment, and has been accumulating know-how by building and providing customer-specific individual company models and business-specific models ahead of the industry by using NEC-developed LLM.

NEC leverages this know-how to provide the best solutions for customers in a variety of industries by offering an LLM consisting of a scalable foundation model and an optimal environment for using generative AI tailored to each customer's business.




ModMed Boosts Its Communication Efficiency With Grammarly’s AI Writing Assistance – AiThority

Grammarly's comprehensive writing assistance with generative AI features drives 28x ROI and saves over 19 working days per year per employee for ModMed

Grammarly, the company helping over 30 million people and 70,000 teams work smarter and faster wherever they write, announced that ModMed is increasing its efficiency in communication with Grammarly's AI writing assistance. As a fast-growing provider of intelligent, specialized healthcare software, ModMed is using Grammarly Business to save time while focusing on strategic initiatives.



Spending less time on day-to-day writing tasks and more time on high-value work helps ModMed push the pace of technological advancement in an industry that's historically slow to adapt. Grammarly's AI writing partner has helped ModMed drive a 28x return on investment and save over 19 working days per employee per year.

"AI will separate businesses that get ahead from those that fall behind, full stop," said Matt Rosenberg, Grammarly's Chief Revenue Officer and Head of Grammarly Business. "We're proud to support an innovator like ModMed that's using AI strategically to enhance agility and adaptability, and in an industry that's slow to evolve, no less. Grammarly makes it easier for ModMed's teams to get everyday work done, so they can move more quickly and focus on doing what they do best."

ModMed experienced year-over-year growth in 2022 as it rapidly delivered new solutions across several medical specialties, and investing in Grammarly Business helped the company improve productivity. ModMed started using Grammarly's in-line assistance to interact more quickly and consistently by refining writing correctness, clarity, style, and tone. Grammarly enforces ModMed's style guide, so team members get real-time suggestions on how to write in the company's tone and style, and they also use preset text snippets for fast, consistent responses.


ModMed CEO Daniel Cane discovered that using an AI writing partner like Grammarly had a positive impact on efficiency in communications. As a result, he expanded the use of Grammarly's generative AI features for a productivity boost. Over 600 team members use Grammarly for tasks like facilitating brainstorming, quickly polishing chats and emails, and creating first drafts of content to speed up production time. And because Grammarly applies company context, tone, and style preferences to its output and suggestions, ModMed's team members can work faster and achieve more.

"ModMed uses technology to transform healthcare, and it's wonderful to have a partner in Grammarly who's as invested as us in using AI to reshape efficiency," Cane said. "I couldn't imagine working without Grammarly. We use its generative AI to help with first drafts, ideation, and designing the framework of our conversations. Then, once we're done creating, Grammarly's in-line features ensure everything is clean, concise, and on tone."

Grammarly works across more than 500,000 apps and websites, more than other AI writing assistants, so ModMed team members get support right where they're working without having to switch between tools. Grammarly Business also provides advanced analytics and enterprise-grade security, with the most comprehensive security certifications of any AI writing assistance company. The company never sells customer data or lets third parties use it to train their models.

"Grammarly is always there assisting you, whether you're writing an email, doing research, or completing a deck," added Adam Scott Riff, Chief Marketing Officer of ModMed. "It helps us be more efficient and increase productivity. It helps free up our team members to focus on the things they really enjoy and to innovate."




Why AI struggles to predict the future : Short Wave – NPR

Muharrem Huner/Getty Images


Artificial intelligence is increasingly being used to predict the future. Banks use it to predict whether customers will pay back a loan, hospitals use it to predict which patients are at greatest risk of disease and auto insurance companies use it to determine insurance rates by predicting how likely a customer is to get in an accident.

"Algorithms have been claimed to be these silver bullets, which can solve a lot of societal problems," says Sayash Kapoor, a researcher and PhD candidate at Princeton University's Center for Information Technology Policy. "And so it might not even seem like it's possible that algorithms can go so horribly awry when they're deployed in the real world."

But they do.

Issues like data leakage and sampling bias can cause AI to give faulty predictions, sometimes with disastrous effects.

Kapoor points to high stakes examples: One algorithm falsely accused tens of thousands of Dutch parents of fraud; another purportedly predicted which hospital patients were at high risk of sepsis, but was prone to raising false alarms and missing cases.

After digging through tens of thousands of lines of machine learning code in journal articles, he's found examples abound in scientific research as well.

"We've seen this happen across fields in hundreds of papers," he says. "Often, machine learning is enough to publish a paper, but that paper does not often translate to better real world advances in scientific fields."

Kapoor is co-writing a blog and book project called AI Snake Oil.

Want to hear more of the latest research on AI? Email us at shortwave@npr.org; we might answer your question on a future episode!

Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.

This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Brit Hanson checked the facts. Maggie Luthar was the audio engineer.


Prediction of cell migration potential on human breast cancer cells treated with Albizia lebbeck ethanolic extract using … – Nature.com

Plant material

Fresh stem barks of A. lebbeck were collected during the rainy season (April to October) from northern Nigeria, a town called Tabuli, part of Gaya Local Government, Kano State, during their flowering stage and dried at room temperature. The A. lebbeck stem bark collection follows all the applicable international standards, guidelines, and laws. The plant specimen was authenticated by Dr. Bala Sidi Aliyu, and deposited with voucher specimen number BUKHAN187 at the herbarium Plant Biology Department, Faculty of Science, Bayero University Kano.

Dried Albizia lebbeck stem barks were pulverised to a fine powder and subjected to flask extraction using 99.9% methanol as the extraction solvent. Powdered A. lebbeck stem bark (50 g) was soaked in an Erlenmeyer flask containing methanol (500 mL) and placed under continual shaking for 48 h at room temperature27. Whatman filter paper No. 1 was used to filter the extract, which was then concentrated under reduced pressure using a rotary evaporator. The concentrated extract was dried completely at 40 °C in an oven and stored at 4 °C before the analysis.

The ALEE extracts were analysed for their total flavonoid content (TFC) and total phenolic content (TPC) using standard spectrophotometric methods28,29. For TFC determination, ALEE (1 mg/mL) was mixed with NaNO2 solution (5%), 10% AlCl3, and 1 M NaOH, and absorbance was measured at 510 nm. For TPC, Folin-Ciocalteu reagent was added to ALEE (10:1), followed by incubation with Na2CO3 (7.5%) and absorbance measurement at 760 nm. Results are presented as mg quercetin equivalent (QE)/g dry extract and gallic acid equivalents (g GAEs/g dry extract).

We utilised gas chromatography-mass spectrometry (GC-MS) to analyse the organic composition of ALEE. We first created a crude extract in ethanol (1 mg/mL) and filtered it via a 0.22 µm syringe filter. Then, we injected it into a Shimadzu GCMS-QP2010 Plus analyser with helium as the carrier gas at a steady flow rate of 1 mL/min. The oven temperature was set at 50 °C for 2 min and gradually increased by 7 °C/min. We assessed the mass spectra at a scanning interval of 0.5 s, with a complete scan range from 25 to 1000 m/z, employing a quadrupole mass detector. Ultimately, we identified the compounds present by scrutinising the spectra via the WILLEY7 MS library.

MDA-MB 231 (strongly metastatic) and MCF-7 (weakly metastatic) BCa cell lines were obtained as a gift from Imperial College London (UK) and stored at the Biotechnology Research Centre (BCR) of Cyprus International University. The BCR ethical committee (BRCEC2011-01) approved the use of these cell lines in our study. We cultured the cells in Dulbecco's Modified Eagle's Medium (DMEM) (Gibco by Life Technology, USA), supplemented with 2 mM L-glutamine, penicillin, and 10% fetal bovine serum (FBS), and maintained them in a sterile incubator at 37 °C and 5% CO2.

We conducted a trypan blue dye exclusion assay, following the guidelines provided by Fraser et al.31, to measure the level of cytotoxicity in BCa cells. We administered various doses, 0, 2.5, 10, 25, 50, 100 and 200 µg/mL, to the cells and observed them for 24, 48, and 72 h. After this period, we replaced the medium with a diluted trypan blue solution, formulated by mixing 0.25 mL of the dye with 0.8 mL of medium. This assay accurately determined the extent of cytotoxicity present in the cells. Data are presented as averages of 330 measurements.

The proliferation of MDA-MB 231 (strongly metastatic) and MCF-7 (weakly metastatic) BCa cells treated with ALEE extracts was assessed using MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) reagent (Sigma-Aldrich), as described by Fraser et al. (1990) with some adjustments. BCa cells (3×10⁴ cells/mL) cultured in tissue plates (12-well) were treated with 10, 5, 2.5, and 0 µg/mL of ALEE extracts and incubated for 24, 48, and 72 h. Treatments and culture medium (DMEM) were replaced every 24 h. A microplate reader (ELX 800) was used to measure the absorbance of the treated cells and controls at 570 nm. All experiments were performed at least thrice in triplicate (n ≥ 3).

A wound healing assay was carried out to evaluate the anti-metastatic potential of ALEE extracts against highly metastatic (MDA-MB 231) and weakly metastatic (MCF-7) cells using the method of Fraser et al. with some modifications. Cells were plated in 35 mm culture dishes, and parallel and intersecting lines were drawn on the culture dishes31. Briefly, 1×10⁶/mL and 5×10⁵/mL cells per dish of MCF-7 and MDA-MB 231, respectively, were plated on 35 mm culture dishes, and three scratch lines were made using pipette tips (200 µL) after the cells settled. The initial and subsequent wounds were captured using a camera (Leica, Germany) attached to an inverted microscope at 100× magnification, and image processing software (ImageJ) was used to analyse the recovered wound area (cell migration) by migrating cells using Eq. (1).

$$\mathrm{MoI}=1-\frac{W_{t}}{W_{0}}$$

(1)

MoI, motility index; $W_{t}$, the wound width at 24 or 48 h; $W_{0}$, initial wound width at 0 h.
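Equation (1) can be sketched directly in code (an illustrative Python sketch, not the authors' MATLAB implementation; the wound widths below are made-up values):

```python
def motility_index(wt: float, w0: float) -> float:
    """Motility index MoI = 1 - (Wt / W0), Eq. (1).

    wt: wound width at 24 or 48 h; w0: initial wound width at 0 h.
    MoI approaches 1 as the wound closes (high migration) and is
    0 when the wound width is unchanged (no migration).
    """
    if w0 <= 0:
        raise ValueError("initial wound width must be positive")
    return 1 - wt / w0

# Illustrative widths (arbitrary units, not measured data):
print(motility_index(wt=0.4, w0=1.0))  # 0.6: 60% of the wound recovered
```

Measuring both widths in the same units (e.g. ImageJ pixels) makes the ratio, and hence MoI, unit-free.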

The science of data is critical in any data-driven model. The accuracy of the data was tested using XGB, ELM, and MLP algorithms in MATLAB (R2021a). In this work, various models were proposed for the in vitro cancer metastasis prediction in MDA-MB 231 and MCF-7 cells, respectively. The data were collected from our experimental data set (n = 80) to reveal the accuracy of the algorithms. Two parameters were used as input variables: the motility index of the cells and the concentration of the extract; other variables could also be used in simulating in vitro cancer metastasis prediction in both cell lines. The models used have a single-layer learning algorithm with a fast learning rate, and both the hidden biases and the input layer, which process and distribute data respectively in the network, are chosen randomly. In addition, the models provide details on the effectiveness of the treatment; choosing a single model that performs best in most circumstances is difficult for the predictors, but applying various ensemble models can reveal the models that best fit the data. Determination of cell migration potentials in breast cancer cells treated with ALEE extract, using the motility index of the cells and the extract concentration as input parameters, was the main objective of our proposed method. The proposed flowchart of the models is shown in Fig. 1.

Proposed flowchart of experimental data-driven methods.

The XGB algorithm is a commonly used model that is highly efficient, with high reproducibility in analysing and modelling data using various inputs and outputs. The method was first introduced and improved by Friedman et al.32, and it plays an essential role in the classification and regression of data. Its application in extreme learning techniques is well known33. The technique uses a carefully tuned ensemble of decision trees to achieve better performance and speed than the standard gradient boosting algorithm34. XGB is a machine learning ensemble technique that works similarly to Random Forest and is recognised by its set of classification and regression trees (CART). The model utilizes parallel processing to enhance learning speed, balances variance against bias, and minimizes the risk of overfitting. Furthermore, unlike a decision tree (DT), every leaf carries an actual score, which enriches interpretations that cannot be obtained from a DT. The algorithm has been used in modelling and predicting data and has shown promising results. Owing to this ensemble technique's wide application and excellent features, we use it to model and predict the anti-migratory potential of the cells. Given the CART training data set $\{(x_{i}, y_{i})\}$ of the treated cells' motility index, with inputs $x_{i}$ used to predict outcomes $y_{i}$ over $K$ trees, the prediction is given by Eq. (2)35:

$$\widehat{y}= \sum_{k=1}^{K}f_{k}\left(x_{i}\right), \quad f_{k}\in F$$

(2)

where $f_{k}$ represents an independent tree structure with cell motility index scores, and $F$ denotes the space of all CART. Optimisation of the objective is given by Eq. (3)35:

$$obj\left(\theta\right)= \sum_{i=1}^{n}l(y_{i}, \widehat{y}_{i})+\sum_{i=1}^{t}\Omega(f_{i})$$

(3)

The loss function is denoted $l$, which estimates the difference between the target $y_{i}$ and the predicted $\widehat{y}_{i}$. The regularisation function that penalises the model to avoid over-fitting is denoted $\Omega$, and $f_{i}$ represents the simultaneous training loss function. Furthermore, the prediction $\widehat{y}_{i}^{t}$ at step $t$ can be expressed as35:

$$\widehat{y}_{i}^{t}=\sum_{k=1}^{t}f_{k}\left(x_{i}\right)=\widehat{y}_{i}^{t-1}+f_{t}\left(x_{i}\right)$$

(4)

Substituting the predicted value from Eq. (4), Eq. (3) can be expressed as36:

$$obj^{t}= \sum_{i=1}^{n}\left(y_{i}-\left(\widehat{y}_{i}^{t-1}+f_{t}\left(x_{i}\right)\right)\right)^{2}+\sum_{i=1}^{t}\Omega(f_{i})$$

(5)

It can also be expressed as

$$obj^{t}= \sum_{i=1}^{n}\left[2\left(\widehat{y}_{i}^{t-1}-y_{i}\right)f_{t}\left(x_{i}\right)+f_{t}\left(x_{i}\right)^{2}\right]+\Omega(f_{t})+\mathrm{constant}$$

(6)

Applying a second-order Taylor expansion to the loss function, the objective can be expressed as in Eq. (7)36:

$$obj^{t}= \sum_{i=1}^{n}\left[l\left(y_{i},\widehat{y}_{i}^{t-1}\right)+g_{i}f_{t}\left(x_{i}\right)+\frac{1}{2}h_{i}f_{t}\left(x_{i}\right)^{2}\right]+\Omega(f_{t})+\mathrm{constant}$$

(7)

where $g_{i}= \partial_{\widehat{y}_{i}^{t-1}}\, l(y_{i}, \widehat{y}_{i}^{t-1})$ and $h_{i}= \partial_{\widehat{y}_{i}^{t-1}}^{2}\, l(y_{i}, \widehat{y}_{i}^{t-1})$. With the tree described by $f_{t}(x)= w_{q(x)}$, the regularisation function is expressed as

$$\Omega\left(f\right)= \gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}w_{j}^{2}$$

(8)

where $T$ represents the total number of leaves, and the objective function can be rewritten as

$$obj^{t}\approx \sum_{i=1}^{n}\left[g_{i}w_{q(x_{i})}+\frac{1}{2}h_{i}w_{q(x_{i})}^{2}\right]+\gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}w_{j}^{2}=\sum_{j=1}^{T}\left[\left(\sum_{i\in I_{j}}g_{i}\right)w_{j}+\frac{1}{2}\left(\sum_{i\in I_{j}}h_{i}+\lambda\right)w_{j}^{2}\right]+\gamma T$$

(9)

where $I_{j}=\{i \mid q(x_{i})=j\}$ refers to the data index set of the $j$th leaf. With $G_{j}=\sum_{i\in I_{j}}g_{i}$ and $H_{j}=\sum_{i\in I_{j}}h_{i}$, the objective function can be written as

$$obj^{t}= \sum_{j=1}^{T}\left[G_{j}w_{j}+\frac{1}{2}\left(H_{j}+\lambda\right)w_{j}^{2}\right]+\gamma T$$

(10)

For a fixed tree structure $q(x)$, the optimal leaf weights $w_{j}^{*}$ and the corresponding objective value are given in Eqs. (11) and (12).

$$w_{j}^{*}= -\frac{G_{j}}{H_{j}+\lambda}$$

(11)

$$obj^{*}= -\frac{1}{2}\sum_{j=1}^{T}\frac{G_{j}^{2}}{H_{j}+\lambda}+\gamma T$$

(12)

In addition, Eq. (13) gives the leaf node score gain during splitting, where $L$ and $R$ denote the left and right branches, and the regularisation of the additional leaf is denoted $\gamma$.

$$Gain= \frac{1}{2}\left[\frac{G_{L}^{2}}{H_{L}+\lambda}+\frac{G_{R}^{2}}{H_{R}+\lambda}-\frac{\left(G_{L}+G_{R}\right)^{2}}{H_{L}+H_{R}+\lambda}\right]-\gamma$$

(13)
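To make the split mechanics concrete, Eqs. (11) and (13) can be sketched in a few lines (an illustrative Python sketch, not the study's MATLAB code; the gradient sums below are made-up values):

```python
def leaf_weight(G: float, H: float, lam: float) -> float:
    """Optimal leaf score w* = -G / (H + lambda), Eq. (11)."""
    return -G / (H + lam)

def split_gain(GL: float, HL: float, GR: float, HR: float,
               lam: float, gamma: float) -> float:
    """Gain of a candidate split, Eq. (13): the improvement in the
    regularised objective from splitting a leaf into L and R children,
    minus the complexity cost gamma of the extra leaf."""
    def score(G: float, H: float) -> float:
        return G * G / (H + lam)
    return 0.5 * (score(GL, HL) + score(GR, HR)
                  - score(GL + GR, HL + HR)) - gamma

# Illustrative gradient statistics (e.g. squared loss, h_i = const):
print(leaf_weight(G=-4.0, H=8.0, lam=1.0))   # positive: leaf raises predictions
print(split_gain(GL=-4.0, HL=4.0, GR=4.0, HR=4.0, lam=1.0, gamma=0.1))
```

A split is kept only when the gain is positive; otherwise the $\gamma$ penalty prunes it.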

The ELM model is a novel learning algorithm with a single hidden layer that works similarly to a feed-forward neural network (FNN) owing to its approximation potential. It was first introduced by Huang et al.37. Issues such as slower training speed and over-fitting in FNNs have been addressed analytically by ELM through matrix inversion and multiplication38. The structure of this model contains only one hidden layer of nodes, so the model does not require an iterative learning process to calculate its hidden parameters, which remain constant during both the training and predicting phases. In addition, the ELM hidden biases and input weights are chosen randomly, and the Moore-Penrose generalised inverse determines the output layer. The ELM has revealed precision due to its robustness when applied to hydrological modelling39.

The ELM is expressed by a training dataset $\{(x_{1}, y_{1}), \dots, (x_{t}, y_{t})\}$. Overall, the inputs are represented as $x_{1}, x_{2}, \dots, x_{t}$ and the outputs as $y_{1}, y_{2}, \dots, y_{t}$.

For the training dataset of size $N$ ($t = 1, 2, \dots, N$), where $x_{t} \in \mathbb{R}^{d}$ and $y_{t}\in \mathbb{R}$, the network with $H$ hidden nodes is given by37 Eq. (14):

$$\sum_{i=1}^{H}B_{i}g_{i}\left(\alpha_{i}\cdot x_{t}+\beta_{i}\right)= z_{t},$$

(14)

In Eq. (14), $i$ represents the index of the hidden layer node, $\beta_{i}$ and $\alpha_{i}$ denote the bias and weight of the random layers, and $d$ is the number of inputs. Furthermore, the predicted weights of the output layer, the model output, and the hidden layer neuron activation function are $B \in \mathbb{R}^{H}$, $Z$ ($z_{t}\in \mathbb{R}$) and $G\left(\alpha, \beta, x\right)$, respectively. The best activation function is found to be the sigmoid function40, as follows:

$$G\left(x\right)= \frac{1}{1+\exp(-x)},$$

(15)

In addition, the output layer utilizes a linear activation function, and the network can approximate the training samples with zero error, as shown in the following equation:

$$\sum_{t=1}^{N}\left\Vert z_{t}-y_{t}\right\Vert =0,$$

(16)

The value of $B$ is calculated from the system of linear equations $GB = Y$ (Eq. 17), with $G$ given in Eq. (18):

$$G\left(\alpha,\beta,x\right)= \left[\begin{array}{c}g(x_{1})\\ \vdots \\ g(x_{N})\end{array}\right]=\left[\begin{array}{ccc}g_{1}(\alpha_{1}\cdot x_{1}+\beta_{1})& \cdots & g_{H}(\alpha_{H}\cdot x_{1}+\beta_{H})\\ \vdots & \ddots & \vdots \\ g_{1}(\alpha_{1}\cdot x_{N}+\beta_{1})& \cdots & g_{H}(\alpha_{H}\cdot x_{N}+\beta_{H})\end{array}\right]_{N\times H}$$

(18)

B is calculated in Eq.(19), and Y in Eq.(20).

$$B=\left[\begin{array}{c}B_{1}^{T}\\ \vdots \\ B_{H}^{T}\end{array}\right]_{H\times 1}$$

(19)

$$Y=\left[\begin{array}{c}y_{1}^{T}\\ \vdots \\ y_{N}^{T}\end{array}\right]_{N\times 1}$$

(20)

$G$ is the hidden-layer output matrix. $\widehat{B}$ is calculated using the Moore-Penrose generalised inverse $G^{+}$ of the hidden-layer matrix (see Eq. 21).

$$\widehat{B}=G^{+}Y$$

(21)

Overall, the estimate $\widehat{y}$, which denotes the predicted MoI of the cells, can be achieved using Eq. (22).

$$\widehat{y}= \sum_{i=1}^{H}\widehat{B}_{i}g_{i}\left(\alpha_{i}\cdot x_{t}+\beta_{i}\right)$$

(22)
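The ELM pipeline of Eqs. (14)-(22) can be sketched end-to-end (an illustrative pure-Python sketch, not the study's MATLAB implementation; for simplicity a small ridge term plus Gaussian elimination stands in for the Moore-Penrose pseudo-inverse of Eq. (21), and the 1-D toy data are made up):

```python
import math
import random

def elm_fit(xs, ys, hidden=8, ridge=1e-6, seed=0):
    """Minimal single-hidden-layer ELM sketch.

    Random input weights alpha_i and biases beta_i stay fixed (Eq. 14);
    only the output weights B are solved for, here via the ridge-
    regularised normal equations (G^T G + ridge*I) B = G^T y instead of
    an explicit pseudo-inverse.
    """
    rng = random.Random(seed)
    alpha = [rng.uniform(-1, 1) for _ in range(hidden)]
    beta = [rng.uniform(-1, 1) for _ in range(hidden)]

    def g_row(x):  # hidden-layer response with sigmoid activation (Eq. 15)
        return [1.0 / (1.0 + math.exp(-(a * x + b)))
                for a, b in zip(alpha, beta)]

    G = [g_row(x) for x in xs]
    n, H = len(xs), hidden
    A = [[sum(G[t][i] * G[t][j] for t in range(n)) + (ridge if i == j else 0.0)
          for j in range(H)] for i in range(H)]
    rhs = [sum(G[t][i] * ys[t] for t in range(n)) for i in range(H)]
    # Gaussian elimination with partial pivoting
    for col in range(H):
        piv = max(range(col, H), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, H):
            f = A[r][col] / A[col][col]
            for c in range(col, H):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    B = [0.0] * H
    for r in range(H - 1, -1, -1):
        B[r] = (rhs[r] - sum(A[r][c] * B[c] for c in range(r + 1, H))) / A[r][r]

    def predict(x):  # linear output layer, Eq. (22)
        return sum(b * g for b, g in zip(B, g_row(x)))
    return predict

# Illustrative 1-D regression: recover the made-up curve y = 1 - 0.5*x
xs = [i / 10 for i in range(11)]
ys = [1 - 0.5 * x for x in xs]
predict = elm_fit(xs, ys)
print(round(predict(0.35), 3))  # approximately 0.825 (= 1 - 0.5*0.35)
```

Because the hidden layer is never retrained, fitting reduces to one linear solve, which is what makes ELM fast compared with back-propagation.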

The MLP is one of the most commonly applied artificial neural networks (ANNs); composed of information-processing units, it is an advanced simulation tool motivated by and mimicking biological neurons. In this way, an ANN, just like the human central nervous system (CNS), can solve complex problems with non-linear and linear behaviour by combining features such as parallel processing, generalisation, learning power, and decision making41. The general architecture of an ANN consists of three layers with distinct tasks: the input layer, which distributes the data into the network; the hidden layers, which process the information; and the output layer, which processes each input vector and presents the result. The neurons are regarded as the smallest processing units of the network. The basic characteristics of the MLP include using interactive connections between the neurons, without advanced mathematical design, to complete the information processing. Furthermore, the MLP comprises input, one or more hidden, and output layers in its architecture, similar to the ANN (Fig. 2)40.

Schematic diagram of MLP network structure.
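The layer-by-layer flow described above can be sketched as a single forward pass (an illustrative Python sketch, not the study's trained MATLAB network; the 2-3-1 architecture, weights, and the use of sigmoid activations on every layer are simplifying assumptions):

```python
import math

def mlp_forward(x, weights, biases):
    """One forward pass through an MLP: each layer computes
    sigmoid(W·a + b) on the activations a of the previous layer."""
    a = x
    for W, b in zip(weights, biases):
        a = [1.0 / (1.0 + math.exp(-(sum(w * ai for w, ai in zip(row, a)) + bi)))
             for row, bi in zip(W, b)]
    return a

# Tiny 2-3-1 network with fixed, made-up weights:
weights = [
    [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]],  # input -> hidden (3 neurons)
    [[0.7, -0.5, 0.2]],                      # hidden -> output (1 neuron)
]
biases = [[0.0, 0.1, -0.1], [0.05]]
out = mlp_forward([1.0, 0.5], weights, biases)
print(out)  # single sigmoid output in (0, 1)
```

Training (not shown) would adjust `weights` and `biases` by back-propagation, unlike the ELM, whose hidden layer stays random.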

To evaluate the performance efficiency of the artificial intelligence-based models used in the current study, two different metrics were employed: the Nash-Sutcliffe coefficient (NS) was used to assess the fitness between the experimental and predicted values, while the root mean square error (RMSE) was used to determine the errors produced by each model.

Hence, the Root mean square error (RMSE) was expressed as:

$$RMSE=\sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(Y_{obs,j}-Y_{com,j}\right)^{2}}$$

(23)

The Nash-Sutcliffe coefficient (NS) is expressed as:

$$\mathrm{NS}=1-\left[\frac{\sum_{i=1}^{N}\left(Q_{obs,i}-Q_{sim,i}\right)^{2}}{\sum_{i=1}^{N}\left(Q_{obs,i}-\overline{Q}_{obs}\right)^{2}}\right], \quad -\infty \le NS\le 1$$

(24)
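Both metrics translate directly into code (an illustrative Python sketch of Eqs. (23) and (24); the observed/simulated values below are made up):

```python
import math

def rmse(obs, sim):
    """Root mean square error, Eq. (23): typical size of the
    prediction error, in the units of the observed quantity."""
    n = len(obs)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / n)

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency, Eq. (24): 1 is a perfect fit;
    0 means no better than predicting the observed mean;
    negative values mean worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

# Made-up motility-index observations vs. model output:
obs = [0.2, 0.4, 0.6, 0.8]
sim = [0.25, 0.35, 0.65, 0.75]
print(rmse(obs, sim))            # approximately 0.05
print(nash_sutcliffe(obs, sim))  # approximately 0.95
```

NS is scale-free, so it complements RMSE, which depends on the units of the target variable.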


The Future of AI and Machine Learning in Fintech – Medium

Photo by Arif Riyanto on Unsplash

Welcome to the forefront of financial evolution, where artificial intelligence (AI) and machine learning (ML) converge to reshape the landscape of fintech. In this exploration of the future, well unravel the exciting possibilities, key trends, and transformative impacts that AI and ML are poised to bring to the dynamic world of financial technology.

The future of AI and machine learning in fintech is a thrilling journey marked by innovation, efficiency, and inclusivity. As these technologies continue to advance, responsible integration, ethical practices, and regulatory compliance will be paramount. Embracing the transformative power of AI and ML ensures a dynamic, secure, and user-centric future for the fintech industry. Welcome to the era where intelligence meets finance, charting new frontiers for a digital financial revolution.


See the article here:
The Future of AI and Machine Learning in Fintech - Medium

Read More..

Machine Learning in CT Imaging: Predicting COPD Progression in High-Risk Individuals – Physician’s Weekly

The following is a summary of CT Imaging With Machine Learning for Predicting Progression to COPD in Individuals at Risk, published in the November 2023 issue of Pulmonology by Kirby et al.

People with long-term lung diseases can get high-resolution pictures of their lungs with a CT scan. Over the last few decades, considerable work has gone into creating new quantitative CT scan airway measures that show when the shape of the airways isn't normal. Many observational studies have shown links between CT scan airway measurements and clinically important outcomes such as illness, death, and loss of lung function.

However, only a few quantitative CT scan measurements are used in clinical practice. The study provided an overview of the important methodological issues in quantitative CT scan airway analysis. The researchers also examined the scientific literature on quantitative CT scan airway measurements used in human clinical trials, randomized trials, and observational studies.

They also discussed new evidence that quantitative CT scan imaging of the airways can be useful in the clinic, and what needs to happen to translate the research into clinical use. CT scan measures of the airways continue to deepen understanding of how diseases develop, how they are diagnosed, and how they progress. The literature review, however, showed that more research is needed to determine whether using quantitative CT scans in therapeutic settings is beneficial. High-quality evidence of clinical benefit from treatment guided by quantitative CT scan imaging of the airways, along with technical guidelines for such imaging, is still needed.

Source: sciencedirect.com/science/article/abs/pii/S0012369223003148

See original here:
Machine Learning in CT Imaging: Predicting COPD Progression in High-Risk Individuals - Physician's Weekly

Read More..

Google unveils MedLM generative AI models for healthcare with HCA, Augmedix and BenchSci as early testers – FierceHealthcare

Google continues to advance its generative AI models designed specifically for healthcare use cases. This week, the tech giant unveiled MedLM, a family of foundation models designed for healthcare industry use cases and available through Google Cloud.

Google's work on generative AI models in healthcare has advanced rapidly since it rolled out Med-PaLM, a large language model designed to provide answers to medical questions, just a year ago.

The company developed two models under MedLM, built on Med-PaLM 2. The first MedLM model is larger and designed for complex tasks. The second is a medium-sized model that can be fine-tuned and is best for scaling across tasks, according to a company blog post. The first two models are now available to U.S. Google Cloud customers via the company's Vertex AI platform.

"In the coming months, we're planning to bring Gemini-based models into the MedLM suite to offer even more capabilities," wrote Yossi Matias, vice president of engineering and research at Google, and Aashima Gupta, global director of healthcare strategy and solutions at Google Cloud, in the blog post.

Gemini is Google's newest large language model, positioned as a competitor to OpenAI's GPT-4.

Google says it has been working with companies to test MedLM, and those companies are now moving it into production in their solutions or broadening their testing.

For the past several months, HCA Healthcare has been piloting a solution to help physicians with their medical notes in four emergency department hospital sites. Physicians use an app developed by tech company Augmedix on a hands-free device to create accurate medical notes from clinician-patient conversations.

Augmedix, which developed technology for ambient medical documentation, was piloting Google Cloud's Med-PaLM 2 and will now integrate MedLM into its technology stack.

"Generative AI solutions for use in healthcare delivery require a more tailored and precise approach than general-purpose LLMs, which is why we value our strategic partnership with Google Cloud," said Ian Shakil, Augmedix founder, director, and chief strategy officer. "Google Cloud has established its leadership as an AI innovator with solutions specifically designed to address the needs of healthcare providers."

Augmedix uses Google Cloud's Vertex AI platform to fine-tune some models using training data created by the company's existing technology, which generates 70,000 notes per week and spans more than 30 specialties.

The company anticipates that integrating MedLM into its ambient medical documentation products will improve the quality of medical note output and provide faster turnaround time. Augmedix also plans to rapidly expand into more sub-specialties through 2024.

BenchSci, a company that uses AI to hasten drug discovery, is integrating MedLM into its ASCEND platform to further improve the speed and quality of pre-clinical research and development.

Google is also working with Deloitte to use generative AI to improve provider search, and with Accenture to leverage the technology to improve patient access, experience, and outcomes.

Go here to see the original:
Google unveils MedLM generative AI models for healthcare with HCA, Augmedix and BenchSci as early testers - FierceHealthcare

Read More..

What Investors Need to Know About Oracle’s Cloud Slowdown – The Motley Fool

The market didn't respond well to Oracle's (ORCL 3.25%) report for the fiscal second quarter, which ended on Nov. 30. The database and software giant missed analyst expectations for revenue, reporting growth of just 5%.

Software license revenue tumbled 18%, hardware revenue dropped 11%, and services revenue slipped 2%. The cloud services and license support segment, the largest by far, grew sales by 12%.

Overall cloud revenue, which includes infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS), expanded by 25%. That compares to 30% growth in the first fiscal quarter. IaaS and SaaS revenue rose 52% and 15%, respectively, down from 66% and 17% growth in the previous quarter.

It's clear from Oracle's results that demand for its software has weakened. Given the current economic environment, marked by companies pulling back on spending in some areas, that's not too surprising.

It's also clear that demand for Oracle's cloud infrastructure is booming. Oracle is still a small player in the IaaS market, which is dominated by Amazon Web Services and Microsoft Azure, but it's growing much faster than its larger competitors. Oracle's IaaS business produced $1.6 billion in revenue in the second quarter.

CEO Safra Catz and Chairman Larry Ellison made it clear during the earnings call that the slowdown in IaaS growth wasn't a demand problem. "So, again, the demand is extraordinary, we can build the data centers relatively fast, and I expect the OCI [Oracle Cloud Infrastructure] growth rate to be over 50% for a few years," said Ellison. "Yes, we're not demand-limited in any way right now," Catz added.

Oracle is having success winning artificial intelligence (AI) workloads, which require powerful GPUs that are in short supply. The bottleneck for Oracle is acquiring those GPUs. "[A]s more GPUs become available and we can put those in, we have just really unlimited amount of demand," said Ellison.

Oracle is seeing strong demand for its cloud infrastructure for other types of workloads, as well -- so much so that the company is embarking on a massive expansion of its cloud computing capacity. Oracle is in the process of expanding its 66 existing cloud data centers and is planning to build 100 new cloud data centers. No timeline was given, but the company emphasized that it's capable of building data centers quickly.

This cloud data center expansion won't be cheap. Oracle expects to spend about $8 billion in fiscal 2024 on capital expenditures, a bit below fiscal 2023 levels but still nearly double what it spent in fiscal 2022. Oracle's capital spending was just $2.4 billion through the first half of the fiscal year, so the company will more than double its rate of spending in the second half as it brings more cloud computing capacity online.
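The spending figures quoted above can be checked directly; a quick sketch of the implied second-half arithmetic (variable names are illustrative):

```python
# Figures cited in the article (fiscal 2024, in billions of USD)
full_year_capex = 8.0   # planned FY2024 capital expenditures
first_half_capex = 2.4  # actual spend through the first half

# What Oracle must spend in the second half to hit the full-year plan
second_half_capex = full_year_capex - first_half_capex

print(f"Implied H2 capex: ${second_half_capex:.1f}B")
print(f"H2 vs. H1 spending rate: {second_half_capex / first_half_capex:.2f}x")
```

The implied second-half spend is $5.6 billion, about 2.3 times the first-half rate, consistent with the article's "more than double" characterization.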

Oracle is not traditionally a capital-intensive company, but its cloud infrastructure business has changed that.

ORCL Capital Expenditures (TTM) data by YCharts.

The risk for Oracle is that it overbuilds and is stuck with excess capacity and low utilization rates. Demand for AI workloads appears unlimited today, but that may not remain the case once the AI frenzy dies down a bit. And in the general-purpose cloud infrastructure market, a downturn in the economy could slow demand as companies rethink their cloud spending.

It's taken years for Oracle's cloud infrastructure business to gain real traction. The company is looking to take advantage of soaring demand, but it's important to remember that IaaS represented just 12% of total revenue in the second quarter. The rest comes from software, support, services, and hardware, and those businesses aren't exactly booming.

Oracle has the potential to grow into a major player in the cloud infrastructure market, but it won't be smooth sailing for investors. For now, IaaS growth can only move the needle so much.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Timothy Green has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Amazon, Microsoft, and Oracle. The Motley Fool has a disclosure policy.

Read the rest here:
What Investors Need to Know About Oracle's Cloud Slowdown - The Motley Fool

Read More..

Your Guide to the Exciting Benefits of Cloud Computing in 2024 – Medium

Welcome to the world of cloud computing in 2024, where the possibilities are endless, and organizations have the power to transform the way they operate. In this article, we will explore the benefits of cloud computing in a way that is not only informative but also humanized. We will take a closer look at how cloud computing can enhance agility, scalability, cost-efficiency, and security, allowing businesses to thrive in the digital age.

Embracing cloud computing in 2024 means unlocking a new level of agility for your organization. No longer bound by physical infrastructure limitations, businesses can scale resources up and down at a moment's notice. This flexibility allows you to respond swiftly to market demands and optimize costs by paying only for the resources you need. With the cloud's ability to quickly adapt to changing circumstances, your organization can seize opportunities and stay one step ahead of the competition.

Gone are the days of worrying about server capacity and infrastructure limitations. In 2024, cloud computing provides unprecedented scalability, allowing your organization to effortlessly handle peak workloads. Whether you need to accommodate sudden surges in website traffic or process massive amounts of data, the cloud has got you covered. Scale resources up or down with ease, ensuring smooth operations and seamless user experiences.

Imagine harnessing the power of your organization's data to gain valuable insights and make informed decisions. With cloud computing in 2024, this dream becomes a reality. Cloud-based analytics platforms provide advanced capabilities for processing, analyzing, and visualizing vast datasets. Unlock the potential of your data, identify patterns, and discover hidden opportunities. Leverage machine learning and artificial intelligence services in the cloud to democratize advanced analytics, regardless of your organization's size or budget.

In an increasingly connected world, security and compliance are of utmost importance. Cloud computing in 2024 offers robust security features to safeguard your data and protect your organization's reputation. Cloud service providers invest heavily in advanced security technologies such as encryption, threat detection, and identity and access management. You can trust that your sensitive data is safe and secure, all while ensuring compliance with evolving regulatory requirements.

Optimizing costs and resource efficiency is crucial for any organization's success. Cloud computing in 2024 empowers you to do just that. With the maturation of cloud technologies, you gain access to sophisticated tools for monitoring and managing resources. Take advantage of dynamic resource allocation, paying only for what you use and reducing expenses. Cloud-based automation and optimization tools streamline workflows, maximize productivity, and optimize resource utilization, resulting in cost savings and increased efficiency.

As we look ahead to 2024, the benefits of cloud computing continue to redefine the way organizations operate and thrive. We have explored these advantages, highlighting how the cloud enhances agility, scalability, data analytics, security, and cost optimization.

No matter the size of your organization, cloud computing in 2024 empowers you to unleash your full potential. Seamlessly scale resources, discover meaningful insights from data, rest easy with robust security measures, and optimize costs. Embracing cloud computing is not just about technology; it's about unlocking the future with ease, agility, and confidence.

In conclusion, cloud computing in 2024 offers a world of opportunity for organizations to revolutionize the way they conduct business. Embrace the future with open arms, and let the exciting benefits of cloud computing guide your organization towards success in the digital age.

Excerpt from:
Your Guide to the Exciting Benefits of Cloud Computing in 2024 - Medium

Read More..