
Latest research on Novelty Detection part2(Machine Learning 2023) – Medium

Author : Stefan Smeu, Elena Burceanu, Emanuela Haller, Andrei Liviu Nicolicioiu

Abstract : Novelty detection aims at finding samples that differ in some form from the distribution of seen samples. But not all changes are created equal. Data can suffer a multitude of distribution shifts, and we might want to detect only some types of relevant changes. Similar to works in out-of-distribution generalization, we propose to use the formalization of separating into semantic or content changes, that are relevant to our task, and style changes, that are irrelevant. Within this formalization, we define robust novelty detection as the task of finding semantic changes while being robust to style distributional shifts. Leveraging pretrained, large-scale model representations, we introduce Stylist, a novel method that focuses on dropping environment-biased features. First, we compute a per-feature score based on the feature distribution distances between environments. Next, we show that our selection manages to remove features responsible for spurious correlations and improve novelty detection performance. For evaluation, we adapt domain generalization datasets to our task and analyze the methods' behaviors. We additionally build a large synthetic dataset where we have control over the degree of spurious correlations. We prove that our selection mechanism improves novelty detection algorithms across multiple datasets, containing both stylistic and content shifts.
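The abstract describes the feature-ranking idea only at a high level. A minimal sketch of that pipeline, assuming the 1-D Wasserstein distance as the per-feature distribution distance and a hypothetical keep_ratio parameter (neither is confirmed by the abstract), could look like this:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def environment_bias_scores(features_by_env):
    """Score each feature by how much its distribution shifts across environments.

    features_by_env: list of (n_samples_e, n_features) arrays, one per environment.
    Higher score = more environment-biased (style-like) feature.
    """
    n_features = features_by_env[0].shape[1]
    scores = np.zeros(n_features)
    for j in range(n_features):
        dists = []
        for a in range(len(features_by_env)):
            for b in range(a + 1, len(features_by_env)):
                dists.append(wasserstein_distance(features_by_env[a][:, j],
                                                  features_by_env[b][:, j]))
        scores[j] = np.mean(dists)
    return scores

def drop_biased_features(X, scores, keep_ratio=0.5):
    """Keep only the features with the lowest environment-bias scores."""
    keep = np.argsort(scores)[: int(keep_ratio * len(scores))]
    return X[:, keep], keep
```

This is only an illustration of the ranking-then-dropping idea; the authors' actual scoring function, thresholds, and downstream novelty detector may differ.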

2. Environment-biased Feature Ranking for Novelty Detection Robustness (arXiv)

Author :

Abstract :

See original here:
Latest research on Novelty Detection part2(Machine Learning 2023) - Medium

Read More..

Research on Cache Optimization part3(Machine Learning) | by … – Medium

Author : Xiangyu Gao, Yaping Sun, Hao Chen, Xiaodong Xu, Shuguang Cui

Abstract : Mobile edge computing (MEC) networks bring computing and storage capabilities closer to edge devices, which reduces latency and improves network performance. However, to further reduce transmission and computation costs while satisfying user-perceived quality of experience, a joint optimization in computing, pushing, and caching is needed. In this paper, we formulate the joint-design problem in MEC networks as an infinite-horizon discounted-cost Markov decision process and solve it using a deep reinforcement learning (DRL)-based framework that enables the dynamic orchestration of computing, pushing, and caching. Through the deep networks embedded in the DRL structure, our framework can implicitly predict user future requests and push or cache the appropriate content to effectively enhance system performance. One issue we encountered when considering three functions collectively is the curse of dimensionality for the action space. To address it, we relaxed the discrete action space into a continuous space and then adopted soft actor-critic learning to solve the optimization problem, followed by utilizing a vector quantization method to obtain the desired discrete action. Additionally, an action correction method was proposed to compress the action space further and accelerate the convergence. Our simulations under the setting of a general single-user, single-server MEC network with dynamic transmission link quality demonstrate that the proposed framework effectively decreases transmission bandwidth and computing cost by proactively pushing data on future demand to users and jointly optimizing the three functions. We also conduct extensive parameter tuning analysis, which shows that our approach outperforms the baselines under various parameter settings.
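The relaxation-and-quantization step described in the abstract can be illustrated with a small sketch. The snippet below is not the authors' implementation; the push/cache action encoding is hypothetical, and only the nearest-neighbor mapping from a continuous actor output back to a valid discrete action is shown:

```python
import numpy as np

def quantize_action(continuous_action, discrete_actions):
    """Map a continuous actor output to the nearest valid discrete action.

    continuous_action: (d,) vector produced by the relaxed soft actor-critic policy.
    discrete_actions:  (n, d) matrix of all valid discrete action vectors.
    """
    dists = np.linalg.norm(discrete_actions - continuous_action, axis=1)
    return discrete_actions[np.argmin(dists)]

# Hypothetical example: joint push/cache decisions for 3 content items, each 0 (skip) or 1 (act).
discrete_actions = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
raw = np.array([0.8, 0.2, 0.6])                  # relaxed, continuous output of the actor
print(quantize_action(raw, discrete_actions))    # -> [1 0 1]
```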

2. Matrix Factorization for Cache Optimization in Content Delivery Networks (CDN) (arXiv)

Author : Adolf Kamuzora, Wadie Skaf, Ermiyas Birihanu, Jiyan Mahmud, Péter Kiss, Tamás Jursonovics, Peter Pogrzeba, Imre Lendák, Tomáš Horváth

Abstract : Content delivery networks (CDNs) are key components of high throughput, low latency services on the internet. CDN cache servers have limited storage and bandwidth and implement state-of-the-art cache admission and eviction algorithms to select the most popular and relevant content for the customers served. The aim of this study was to utilize state-of-the-art recommender system techniques for predicting ratings for cache content in CDN. Matrix factorization was used in predicting content popularity, which is valuable information in content eviction and content admission algorithms run on CDN edge servers. A custom implemented matrix factorization class and MyMediaLite were utilized. The input CDN logs were received from a European telecommunication service provider. We built a matrix factorization model with that data and utilized grid search to tune its hyper-parameters. Experimental results indicate that the proposed approaches show promise, and we showed that a low root mean square error value can be achieved on the real-life CDN log data.
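As a rough, self-contained sketch of the popularity-prediction step described above (the study itself used a custom matrix factorization class and MyMediaLite, with hyper-parameters tuned by grid search; the values and SGD routine below are illustrative assumptions):

```python
import numpy as np

def matrix_factorization(R, mask, k=16, lr=0.01, reg=0.05, epochs=50):
    """Factorize an observed popularity/rating matrix R (e.g., built from CDN logs)
    into latent factors P @ Q.T, using plain stochastic gradient descent.

    mask: boolean matrix, True where an observation exists.
    k:    number of latent factors (hypothetical default).
    """
    n, m = R.shape
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n, k))
    Q = rng.normal(scale=0.1, size=(m, k))
    rows, cols = np.where(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - P[i] @ Q[j]
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * P[i] - reg * Q[j])
    pred = P @ Q.T
    rmse = np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))   # error on observed entries
    return pred, rmse
```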

Read more:
Research on Cache Optimization part3(Machine Learning) | by ... - Medium

Read More..

Grant backs research on teaching networks to make better decisions – Rice News

Picture a swarm of drones capturing photos and video as they survey an area: What would enable them to process the data collected in the most rapid and effective manner possible?

Rice University's Santiago Segarra and Ashutosh Sabharwal have won a grant from the Army Research Office, a directorate of the U.S. Army Combat Capabilities Development Command Army Research Laboratory, to develop a machine learning framework that improves military communication networks' decision-making processes. The research could also help inform applications such as self-driving vehicles and cyber intrusion detection.

"Distributed decision-making is crucial in military networks," said Sabharwal, who is a co-investigator on the grant. "In high-stakes, fast-paced environments, relying solely on a centralized decision-making process can result in delays, bottlenecks and vulnerabilities. Spreading decision and execution responsibilities across the network enables a rapid response to changing situations and adaptability to unforeseen circumstances."

The main challenge for effective distributed network control is that the individual units that make up a network, its nodes, have to find the best way to aggregate local information and distill it into actionable knowledge. In the drone example, to perform a machine learning task like object recognition on visual data collected in real time, the individual nodes (in our example, the drones) have to follow designated protocols that specify where the information is to be processed.

"This can be done either in the drone with its limited battery and computational capacity, or it can be offloaded to headquarters through wireless connections with the associated communication latency," Segarra said.

The optimal decision depends on multiple factors, such as the size and sensitivity of the data, the complexity of the task and the congestion level of the communication network. Rigid decision-making protocols that pre-specify how information is to be aggregated can delay or impede the network's ability to react. Sabharwal and Segarra aim to develop a novel distributed machine learning architecture that would enable nodes to combine local data in the most effective manner.

"Our goal is for the swarm of drones to make jointly optimal offloading decisions in a distributed manner, that is, in the absence of a central agent that tells every drone what to do," Segarra said.

To achieve this, the researchers will develop a deep learning framework where two graph neural networks interact in an actor-critic setting: The actor neural network makes offloading decisions while the critic assesses their quality. By training both neural networks in an iterative fashion, the goal is to obtain a versatile actor whose decisions translate into rapid, adaptive action across a broad range of scenarios.
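As a toy illustration of that actor-critic pairing (not the Rice team's architecture), the sketch below assumes hypothetical per-drone features such as data size, battery level and link congestion, and uses a single message-passing layer for each network:

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(H, A, W):
    """One message-passing step: every node aggregates its neighbours' features
    (A is the row-normalized adjacency of the drone network) and applies a shared
    weight matrix W."""
    return np.tanh(A @ H @ W)

# Toy network: 4 drones, node features = [data size, battery level, link congestion]
H = rng.random((4, 3))
A = np.full((4, 4), 0.25)                          # fully connected, row-normalized

# Actor GNN: per-drone probability of offloading its task to headquarters
W_actor, w_actor_out = rng.normal(size=(3, 8)), rng.normal(size=(8,))
offload_prob = 1 / (1 + np.exp(-(gnn_layer(H, A, W_actor) @ w_actor_out)))

# Critic GNN: scalar score for the joint offloading decision of the whole swarm
actions = (offload_prob > 0.5).astype(float)
W_critic, w_critic_out = rng.normal(size=(4, 8)), rng.normal(size=(8,))
value = float(np.mean(gnn_layer(np.hstack([H, actions[:, None]]), A, W_critic) @ w_critic_out))

print(offload_prob, value)
```

In an actual system, both sets of weights would be trained iteratively, with the critic's value estimate guiding updates to the actor, as the researchers describe.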

Segarra is an assistant professor of electrical and computer engineering and statistics. Sabharwal is Rice's Ernest Dell Butcher Professor of Engineering and chair of the Department of Electrical and Computer Engineering.

Project title: Distributed Machine Learning for Tactical Networks

Award number: W911NF-24-2-0008

Photo caption: Ashutosh Sabharwal (left) and Santiago Segarra. (Credit: Photo courtesy of Rice University)

George R. Brown School of Engineering: https://engineering.rice.edu/
Department of Electrical and Computer Engineering: https://eceweb.rice.edu/
Ashutosh Sabharwal website: http://ashu.rice.edu/
Santiago Segarra website: http://segarra.rice.edu/
National Security Research Accelerator: https://runsra.rice.edu/
Wireless Open-Access Research Platform: http://warpproject.org/trac
Reconfigurable Eco-system for Next-generation End-to-end Wireless: https://renew-wireless.org/
Scalable Health Labs: http://sh.rice.edu/
See Below the Skin: http://www.seebelowtheskin.org/
Saving Lives Through Transformative Health Technologies: https://pathsup.org/

Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of architecture, business, continuing studies, engineering, humanities, music, natural sciences and social sciences and is home to the Baker Institute for Public Policy. With 4,574 undergraduates and 3,982 graduate students, Rice's undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for "lots of race/class interaction," No. 2 for "best-run colleges" and No. 12 for "quality of life" by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance.

Continue reading here:
Grant backs research on teaching networks to make better decisions - Rice News

Read More..

Predicting water quality through daily concentration of dissolved … – Nature.com

Once again, this paper offers four novel models for DO prediction. The models are composed of an MLP neural network as the core and the TLBO, SCA, WCA, and EFO as the training algorithms. All models are developed and implemented in the MATLAB 2017 environment.

Proper training of the MLP is dependent on the strategy employed by the algorithm appointed for this task (as described in previous sections for the TLBO, SCA, WCA, and EFO). In this section, this characteristic is discussed in the format of the hybridization results of the MLP.

An MLPNN is considered the basis of the hybrid models. As per Section "The MLPNN", this model has three layers. The input layer receives the data and has 3 neurons, one for each of WT, pH, and SC. The output layer has one neuron for releasing the final prediction (i.e., DO). However, the hidden layer can have various numbers of neurons. In this study, a trial-and-error effort was carried out to determine the most proper number. Ten models were tested with 1, 2, ..., and 10 neurons in the hidden layer, and it was observed that 6 gives the best performance. Hence, the final model is structured as 3-6-1. With the same logic, the activation functions of the output and hidden neurons are respectively selected as Purelin (y = x) and Tansig (described in Section "Formula presentation")83.

Next, the training dataset was exposed to the selected MLPNN network. The relationship between the DO and water conditions is established by means of weights and biases within the MLPNN (Fig. 4). In this study, the role of tuning these weights and biases is assigned to the named metaheuristic algorithms. For this purpose, the MLPNN configuration is first transformed into mathematical equations with adjustable weights and biases (the equations will be shown in Section "Formula presentation"). Training the MLPNN using metaheuristic algorithms is an iterative effort. Hereupon, the RMSE between the modeled and measured DOs is introduced as the objective function of the TLBO, SCA, WCA, and EFO. This function is used to monitor the optimization behavior of the algorithms. Since RMSE is an error indicator, the algorithms aim to minimize it over time to improve the quality of the weights and biases. Designating the appropriate number of iterations is another important step. By analyzing the convergence behavior of the algorithms, as well as referring to previous similar studies, 1000 iterations were determined for the TLBO, SCA, and WCA, while the EFO was implemented with 30,000 iterations. The final solution is used to construct the optimized MLPNN. Figure 5 illustrates the optimization flowchart.

Optimization flowchart of the models.
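A minimal sketch of what each metaheuristic is asked to minimize, assuming the 3-6-1 topology, the Tansig/Purelin activations described above, and the 24-weight/7-bias layout noted later in this section (this is an illustration, not the authors' MATLAB code):

```python
import numpy as np

def tansig(x):
    """Hidden-layer activation: Tansig(x) = 2 / (1 + e^(-2x)) - 1."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def mlp_3_6_1(X, W_h, b_h, w_o, b_o):
    """Forward pass of the 3-6-1 network: rows of X are [WT, pH, SC]; output is DO.
    Hidden neurons use Tansig, the single output neuron is linear (Purelin)."""
    return tansig(X @ W_h.T + b_h) @ w_o + b_o

def unpack(theta):
    """Split one candidate solution (31 values) into 24 weights and 7 biases."""
    return theta[:18].reshape(6, 3), theta[18:24], theta[24:30], theta[30]

def rmse_objective(theta, X_train, y_train):
    """Objective handed to the TLBO/SCA/WCA/EFO: RMSE between modeled and measured DO."""
    y_hat = mlp_3_6_1(X_train, *unpack(theta))
    return float(np.sqrt(np.mean((y_train - y_hat) ** 2)))

# Each metaheuristic iteratively proposes vectors theta and keeps those with lower RMSE.
theta0 = np.random.default_rng(0).normal(size=31)   # one random candidate solution
```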

Furthermore, each algorithm was implemented with nine swarm sizes (NSWs) to achieve the best model configuration. These tested NSWs were 10, 25, 50, 75, 100, 200, 300, 400, and 500 for the TLBO, SCA, and WCA, and 25, 30, 50, 75, 100, 200, 300, 400, and 500 for the EFO84. Collecting the obtained objective functions (i.e., the RMSEs) led to a convergence curve for each tested NSW. Figure 6 depicts the convergence curves of the TLBO-MLPNN, SCA-MLPNN, WCA-MLPNN, and EFO-MLPNN.

Optimization curves of the (a) TLBO-MLPNN, (b) SCA-MLPNN, (c) WCA-MLPNN, and (d) EFO-MLPNN.

As is seen, each algorithm has a different method for training the MLPNN. According to the above charts, the TLBO-MLPNN, SCA-MLPNN, WCA-MLPNN, and EFO-MLPNN, with respective NSWs of 500, 400, 400, and 50, attained the lowest RMSEs. It means that, for each model, the MLPNNs trained by these configurations acquired more promising weights and biases compared to the eight other NSWs. Table 2 collects the final parameters of each model.

The RMSE of the recognized elite models (i.e., the TLBO-MLPNN, SCA-MLPNN, WCA-MLPNN, and EFO-MLPNN with the NSWs of 500, 400, 400, and 50) was 1.3231, 1.4269, 1.3043, and 1.3210, respectively. These values, plus the MAEs of 0.9800, 1.1113, 0.9624, and 0.9783, and the NSEs of 0.7730, 0.7359, 0.7794, and 0.7737, indicate that the MLP has been suitably trained by the proposed algorithms. In order to graphically assess the quality of the results, Fig. 7a,c,e, and g are generated to show the agreement between the modeled and measured DOs. The calculated RPs (i.e., 0.8792, 0.8637, 0.8828, and 0.8796) demonstrate a large degree of agreement for all used models. Moreover, the outcome of $DO_{i_{expected}} - DO_{i_{predicted}}$ is referred to as the error for every sample, and the frequency of these values is illustrated in Fig. 7b,d,f, and h. These charts show larger frequencies for error values close to 0, meaning that accurately predicted DOs outnumber those with considerable errors.

The scatterplot and histogram of the errors plotted for the training data of (a and b) TLBO-MLPNN, (c and d) SCA-MLPNN, (e and f) WCA-MLPNN, and (g and h) EFO-MLPNN.
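For reference, the accuracy criteria reported above can be computed with their standard definitions (assuming the paper uses the usual formulas for RMSE, MAE, NSE, and the Pearson correlation RP):

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Standard definitions of the criteria reported in this section."""
    err = y_true - y_pred
    rmse = float(np.sqrt(np.mean(err ** 2)))                               # root mean square error
    mae = float(np.mean(np.abs(err)))                                      # mean absolute error
    nse = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)   # Nash-Sutcliffe efficiency
    rp = float(np.corrcoef(y_true, y_pred)[0, 1])                          # Pearson correlation (RP)
    return rmse, mae, nse, rp
```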

Evaluating the testing accuracies revealed the high competency of all used models in predicting the DO for new values of WT, pH, and SC. In other words, the models could successfully generalize the DO pattern, captured by exploring the data belonging to 2014-2018, to the data of the fifth year. For example, Fig. 8 shows the modeled and measured DOs for two different periods: (a) October 01, 2018 to December 01, 2018 and (b) January 01, 2019 to March 01, 2019. It can be seen that, for the first period, the upward DO patterns have been well followed by all four models. Also, the models have shown high sensitivity to the fluctuations in the DO pattern for the second period.

The real and predicted DO patterns for (a) October 01, 2018 to December 01, 2018 and (b) January 01, 2019 to March 01, 2019.

Figure 9a,c,e, and g show the errors obtained for the testing data. The RMSE and MAE of the TLBO-MLPNN, SCA-MLPNN, WCA-MLPNN, and EFO-MLPNN were 1.2980 and 0.9728, 1.4493 and 1.2078, 1.3096 and 0.9915, and 1.2903 and 1.0002, respectively. These values, along with the NSEs of 0.7668, 0.7092, 0.7626, and 0.7695, imply that the models have predicted unseen DOs with a tolerable level of error. Moreover, Fig. 9b,d,f, and h present the corresponding scatterplots illustrating the correlation between the modeled and measured DOs in the testing phase. Based on the RP values of 0.8785, 0.8587, 0.8762, and 0.8815, a very satisfying correlation can be seen for all used models.

The error line and scatterplot plotted for the testing data of (a and b) TLBO-MLPNN, (c and d) SCA-MLPNN, (e and f) WCA-MLPNN, and (g and h) EFO-MLPNN.

To compare the efficiency of the employed models, the most accurate model is first determined by comparing the obtained accuracy indicators; then, a comparison between the optimization times is carried out. Table 3 collects all accuracy criteria calculated in this study.

In terms of all accuracy criteria (i.e., RMSE, MAE, RP, and NSE), the WCA-MLPNN emerged as the most reliable model in the training phase. In other words, the WCA presented the highest-quality training of the MLP, followed by the EFO, TLBO, and SCA. However, the results of the testing data need more discussion. In this phase, while the EFO-MLPNN achieved the smallest RMSE (1.2903), the largest RP (0.8815), and the largest NSE (0.7695) at the same time, the smallest MAE (0.9728) was obtained for the TLBO-MLPNN. As for the SCA-based ensemble, it was shown that this model yields the poorest predictions in both phases.

Additionally, Figs. 10 and 11 are produced to compare the accuracy of the models in the form of boxplots and a Taylor diagram, respectively. The results of these two figures are consistent with the above comparison. They indicate the high accordance between the models' outputs and target DOs, and they also reflect the higher accuracy of the WCA-MLPNN, EFO-MLPNN, and TLBO-MLPNN compared to the SCA-MLPNN.

Boxplots of the models for comparison.

Taylor diagram of the models for comparison.

In comparison with some previous literature, it can be said that our models have attained a higher accuracy of DO prediction. For instance, in the study by Yang et al.85, three metaheuristic algorithms, namely the multi-verse optimizer (MVO), shuffled complex evolution (SCE), and black hole algorithm (BHA), were combined with an MLPNN, and the models were applied to the same case study (Klamath River Station). The best training performance was achieved by the MLP-MVO (with respective RMSE, MAE, and RP of 1.3148, 0.9687, and 0.8808), while the best testing performance was achieved by the MLP-SCE (with respective RMSE, MAE, and RP of 1.3085, 1.0122, and 0.8775). As per Table 3, it can be inferred that the WCA-MLPNN suggested in this study provides better training results. Also, as far as the testing results are concerned, both the WCA-MLPNN and TLBO-MLPNN outperformed all models tested by Yang et al.85. In another study by Kisi et al.42, an ensemble model called BMA was suggested for the same case study, and it achieved training and testing RMSEs of 1.334 and 1.321, respectively (see Table 5 of the cited paper). These error values are higher than the RMSEs of the TLBO-MLPNN, WCA-MLPNN, and EFO-MLPNN in this study. Consequently, these models outperform the conventional benchmark models that were tested by Kisi et al.42 (i.e., ELM, CART, ANN, MLR, and ANFIS). With the same logic, the superiority of the suggested hybrid models over some conventional models employed in the previous studies49,65 for different stations on the Klamath River can be inferred. Altogether, these comparisons indicate that this study has achieved considerable improvements in the field of DO prediction.

Table 4 reports the times elapsed for optimizing the MLP by each algorithm. According to this table, the EFO-MLPNN, despite requiring a greater number of iterations (i.e., 30,000 for the EFO vs. 1000 for the TLBO, SCA, and WCA), accomplishes the optimization in a considerably shorter time. In this relation, the times for the TLBO, SCA, and WCA range within [181.3, 12,649.6] s, [88.7, 6095.2] s, and [83.2, 4804.0] s, while those of the EFO were bounded between 277.2 and 296.0 s. Another difference between the EFO and the other proposed algorithms is related to the two initial NSWs. Since an NSW of 10 was not a viable value for implementing the EFO, the two values of 25 and 30 were considered instead.

Based on the above discussion, the TLBO, WCA, and EFO showed higher capability compared to the SCA. Examining the times of the selected configurations of the TLBO-MLPNN, SCA-MLPNN, WCA-MLPNN, and EFO-MLPNN (i.e., 12,649.6, 5295.7, 4733.0, and 292.6 s for the NSWs of 500, 400, 400, and 50, respectively) shows that the WCA needs around 37% of the TLBO's time to train the MLP. The EFO, however, provides the fastest training.

Apart from comparisons, the successful prediction carried out by all four hybrid models represents the compatibility of the MLPNN model with metaheuristic science for creating predictive ensembles. The used optimizer algorithms could nicely optimize the relationship between the DO and water conditions (i.e., WT, pH, and SC) in the Klamath River Station. The basic model was a 3-6-1 MLPNN containing 24 weights and 7 biases (Fig. 4). Therefore, each algorithm provided a solution composed of 31 variables in each iteration. Considering the number of tested NSWs and iterations for each algorithm (i.e., 30,000 iterations of the EFO and 1000 iterations of the WCA, SCA, and TLBO, all with nine NSWs), it can be said that the outstanding solution (belonging to the EFO algorithm) has been selected from among a large number of candidates (= 1 × 30,000 × 9 + 3 × 1000 × 9 = 297,000).

However, concerning the limitations of this work in terms of data and methodology, potential ideas can be raised for future studies. First, it is suggested to update the applied models with the most recent hydrological data, as well as the records of other water quality stations, in order to enhance the generalizability of the models. Moreover, further metaheuristic algorithms can be tested in combination with different basic models such as ANFIS and SVM to conduct comparative studies.

The higher efficiency of the WCA and EFO (in terms of both time and accuracy) was established in the previous section. Hereupon, the MLPNNs constructed by the optimal responses of these two algorithms are mathematically presented in this section to give two formulas for predicting the DO. Referring to Fig. 4, the calculations of the output neuron in the WCA-MLPNN and EFO-MLPNN are expressed by Eqs. (5) and (6), respectively.

$$ \begin{aligned} DO_{WCA-MLPNN} & = 0.395328 \times O_{HN1} + 0.193182 \times O_{HN2} - 0.419852 \times O_{HN3} + 0.108298 \times O_{HN4} \\ & \quad + 0.686191 \times O_{HN5} + 0.801148 \times O_{HN6} + 0.340617 \end{aligned} $$

(5)

$$ \begin{aligned} DO_{EFO-MLPNN} & = 0.033882 \times O'_{HN1} - 0.737699 \times O'_{HN2} - 0.028107 \times O'_{HN3} - 0.700302 \times O'_{HN4} \\ & \quad + 0.955481 \times O'_{HN5} - 0.757153 \times O'_{HN6} + 0.935491 \end{aligned} $$

(6)

In the above relationships, $O_{HNi}$ and $O'_{HNi}$ represent the outcome of the ith hidden neuron in the WCA-MLPNN and EFO-MLPNN, respectively. Given $Tansig(x) = \frac{2}{1 + e^{-2x}} - 1$ as the activation function of the hidden neurons, $O_{HNi}$ and $O'_{HNi}$ are calculated by the equations below. As is seen, these two parameters are calculated from the inputs of the study, i.e., WT, pH, and SC.

$$ \begin{bmatrix} O_{HN1} \\ O_{HN2} \\ O_{HN3} \\ O_{HN4} \\ O_{HN5} \\ O_{HN6} \end{bmatrix} = \mathrm{Tansig}\!\left( \begin{bmatrix} -1.818573 & 1.750088 & -0.319002 \\ 0.974577 & 0.397608 & -2.316006 \\ -1.722125 & -1.012571 & 1.575044 \\ 0.000789 & -2.532009 & -0.246384 \\ -1.288887 & -1.724770 & 1.354887 \\ 0.735724 & -2.250890 & 0.929506 \end{bmatrix} \begin{bmatrix} WT \\ pH \\ SC \end{bmatrix} + \begin{bmatrix} 2.543969 \\ -1.526381 \\ 0.508794 \\ 0.508794 \\ -1.526381 \\ 2.543969 \end{bmatrix} \right) $$

(7)

$$ \begin{bmatrix} O'_{HN1} \\ O'_{HN2} \\ O'_{HN3} \\ O'_{HN4} \\ O'_{HN5} \\ O'_{HN6} \end{bmatrix} = \mathrm{Tansig}\!\left( \begin{bmatrix} 1.323143 & -2.172674 & -0.023590 \\ 1.002364 & 0.785601 & 2.202243 \\ 1.705369 & -1.245099 & -1.418881 \\ -0.033210 & -1.681758 & 1.908498 \\ 1.023548 & -0.887137 & -2.153396 \\ 0.325776 & -1.818692 & -1.748715 \end{bmatrix} \begin{bmatrix} WT \\ pH \\ SC \end{bmatrix} + \begin{bmatrix} -2.543969 \\ -1.526381 \\ -0.508794 \\ -0.508794 \\ 1.526381 \\ 2.543969 \end{bmatrix} \right) $$

(8)

More clearly, the integration of Eqs. (5) and (7) results in the WCA-MLPNN formula, while the integration of Eqs. (6) and (8) results in the EFO-MLPNN formula. Given the excellent accuracy of these two models and their superiority over some previous models in the literature, either of these two formulas can be used for practical estimations of the DO, especially for addressing the water quality issue within the Klamath River.
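For readers who prefer code to equations, Eqs. (5) and (7) translate directly into the following sketch of the WCA-MLPNN predictor. Note that WT, pH, and SC would presumably need the same preprocessing or scaling used in the study before being fed to the formula:

```python
import numpy as np

# Hidden-layer weights and biases from Eq. (7)
W_h = np.array([[-1.818573,  1.750088, -0.319002],
                [ 0.974577,  0.397608, -2.316006],
                [-1.722125, -1.012571,  1.575044],
                [ 0.000789, -2.532009, -0.246384],
                [-1.288887, -1.724770,  1.354887],
                [ 0.735724, -2.250890,  0.929506]])
b_h = np.array([2.543969, -1.526381, 0.508794, 0.508794, -1.526381, 2.543969])

# Output-layer weights and bias from Eq. (5)
w_o = np.array([0.395328, 0.193182, -0.419852, 0.108298, 0.686191, 0.801148])
b_o = 0.340617

def tansig(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def do_wca_mlpnn(wt, ph, sc):
    """Predicted dissolved oxygen from Eqs. (5) and (7)."""
    hidden = tansig(W_h @ np.array([wt, ph, sc]) + b_h)
    return float(w_o @ hidden + b_o)
```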

Continue reading here:
Predicting water quality through daily concentration of dissolved ... - Nature.com

Read More..

Biologit: Machine learning and AI to monitor medical literature – SiliconRepublic.com

Founded in 2021 by Nicole Baker and Bruno Ohana, Dublin-based Biologit has been helping companies automate the monitoring of scientific literature.

"Whether it is human medicines or medical devices, monitoring health products for safety is a very important part of keeping patients safe," says Nicole Baker, an immunologist by background who started her own company to help life sciences firms automate safety monitoring.

Adverse events are one of the top causes of hospitalisation and can lead to serious health issues. Hence, regulators around the world pay very close attention to the surveillance of health products.

Baker, who co-founded Biologit with tech expert Bruno Ohana, specialises in the field of pharmacovigilance, which involves reviewing the vast number of medical extracts published each year and forums on social media to identify any red flags regarding adverse effects of drugs on the market.

This, combined with her experience working in the biotech and pharma industries, helped Baker create Biologit two years ago, leveraging AI to help keep patients safe by simplifying the detection of adverse events from drug development to post-market.

"At Biologit, we specialise in cutting-edge active safety surveillance solutions across the life sciences spectrum. We do that by combining expert domain knowledge with the latest technology to build solutions that help keep patients safe," she explains.

The idea is to help companies of all sizes and at any stage of clinical development automate the comprehensive and time-consuming task of monitoring scientific literature.

With its roots in Trinity College Dublin, where the technology was first incubated, Biologit was part of Enterprise Ireland's New Frontiers Entrepreneur Development Programme, in which Baker participated in 2019. She then went on to participate in Big Ideas 2020.

The company's first product, Biologit MLM-AI, was developed by iterating with early adopters to build a solution that is fit for the needs of the industry.

"From the user experience to the AI, everything was built from the ground up with domain experts. This has given us insights on how to apply AI to solve the problems that mattered most to our users while maintaining high levels of compliance in a regulated industry," Baker explains.

"Because we tackled the challenges of the entire safety surveillance workflow, our platform has the unique ability to deliver high levels of automation and productivity gains."

According to her, MLM-AI includes a rich and comprehensive scientific literature database which contains more than 45m citations and is growing every day.

"Our users can benefit from the Biologit database out of the box to run their searches, reducing friction and costs."

After launching MLM-AI last year, Baker and Ohana worked on onboarding its first customers. This year, the focus has been more on customer acquisition and growth, Baker says, as the team has increased its presence on industry forums and other online channels.

"We continue to onboard new customers and we're building a user base that is quite global," Baker says, adding that she has no interest in raising investment at the moment. "We've put a lot of energy into hiring too and have been very fortunate to build a stellar team to help us deliver for our customers and grow Biologit into the future."

Earlier this year, the Dublin-headquartered start-up announced plans to at least double its team in 2023 following a successful €2m funding round led by Enterprise Ireland. At the time, it had 14 employees based across Ireland, India, Poland, France, Spain and the Philippines.

While for Baker the main challenge in running Biologit is finding the time to do the most important tasks, Ohana said there have been a few exciting challenges for the team as a whole.

"There were lots of fun challenges since we started, and they have evolved with the different stages of the company: can we build the technology, can we find good market fit and the right business model, will the platform scale as we grow," said Ohana, an expert in machine learning.

"It is really nice to get to think about all that, and a great learning experience. Our customers expect a very high standard of security, compliance, and that we continue to innovate for them; we're now putting the structure in place so that we can do those things as we scale."


Continue reading here:
Biologit: Machine learning and AI to monitor medical literature - SiliconRepublic.com

Read More..

What has the advent of cloud computing done to the speed of Tech … – Medium

The big three cloud providers, otherwise called hyperscalers (AWS, Microsoft Azure and GCP), offer scalable computing for everyone. This means startups can take massive advantage of it, so a shortage of compute power is no longer a hindrance to technological innovation.

Let me explain what I mean: ultra-powerful, ultra-reliable, ultra-scalable and ultra-secure cloud computing resources are available worldwide wherever there is internet access. This means businesses small and large can gain the following benefits with a few clicks:

1. Rapid Elasticity: Usage and traffic surges are not an issue for well load-balanced, cloud-hosted applications. Rapid elasticity allows a new app service to climb from 0 users to a million with a few clicks and in seconds.

2. Remote Connectivity, Global Access and Collaboration: Cloud computing facilitates global collaboration by providing a centralized platform accessible from anywhere. This has led to increased collaboration on projects with teams distributed across the globe, fostering diverse perspectives and expertise. VPN tunnels and connections can be provisioned and can scale to thousands of simultaneous connections in seconds, making company resources available to work-from-anywhere employees.

3. No server infrastructure planning required: Cloud computing allows businesses to scale resources up or down based on demand. This flexibility enables rapid development and deployment of applications without the need for extensive infrastructure planning.

4. No large upfront investments required: Cloud services usually follow a pay-as-you-use model, eliminating the need for large upfront investments in hardware; these have instead been replaced by cloud spend budgets. This cost efficiency empowers startups and small businesses to access advanced computing resources, fostering innovation across a broader range of organizations.

5. Test and Destroy enables rapid deployment: Cloud services provide developers with the tools to quickly prototype and deploy applications. This rapid development cycle allows for faster innovation and shorter time-to-market for new products and features.

Read the original post:
What has the advent of cloud computing done to the speed of Tech ... - Medium

Read More..

Microsoft unveils mysterious "Azure Boost" feature that could give … – TechRadar

Microsoft has introduced a new hardware upgrade option called Azure Boost designed to improve the performance of virtual machines on its cloud computing platform.

In an explainer, the company outlined how Azure Boost looks to "[offload] server virtualization processes traditionally performed by the hypervisor and host OS onto purpose-built software and hardware."

Microsoft says that customers can expect a range of benefits including improved networking, revised storage, performance, and security.

Microsoft said that Azure Boost-compatible VM hosts contain the new Microsoft Azure Network Adapter (MANA), which enables up to 200 Gbps of network bandwidth.

Offloading storage operations also sees improvements, with up to 17.3 GBps and 3.8 million IOPS for local storage and up to 12.5 GBps throughput and 650 K IOPS for remote storage.

The cloud provider has confirmed that 17 instance types are already supported, and future Azure virtual machines will also be compatible with Azure Boost.

The explainer confirms that Azure Boost applies to both Linux VMs and Windows VMs, and already, some versions of Ubuntu (20.04 and 22.04 LTS) and Windows Server (2016, 2019, and 2022), among others, have support for the MANA driver (via The Register).

AWS has a similar setup, called Nitro, which underpins its EC2 instances. Like Azure Boost, it promises to offload functions to dedicated hardware and software, in turn improving performance and optimizing cost efficiency.

Whether you're an AWS customer or an Azure customer preparing to take advantage of Boost, the performance enhancements could allow you to use fewer resources, which is better for both the environment and your business's bank account.

See the article here:
Microsoft unveils mysterious "Azure Boost" feature that could give ... - TechRadar

Read More..

eWEEK TweetChat, December 12: Tech in 2024, Predictions and … – eWeek


On Tuesday, December 12 at 11 AM PT, @eWEEKNews will host its monthly #eWEEKChat. The topic will be the future of technology in 2024 and beyond, and it will be moderated by James Maguire, eWEEK's Editor-in-Chief.

We'll discuss, using X (formerly known as Twitter), current and evolving trends shaping the future of enterprise technology, from AI to cloud to cybersecurity. Our ultimate goal: to offer guidance to companies that enables them to better keep pace with evolving tech trends.

See below for:

The list of experts for this month's TweetChat currently includes the following; please check back for additional expert guests:

The questions we'll tweet about will include the following; check back for more/revised questions:

The chat begins promptly at 11 AM PT on December 12. To participate:

2. Open Twitter in a second browser. On the menu to the left, click on Explore. In the search box at the top, type in #eweekchat. This will open a column that displays all the questions and all the panelists replies.

Remember: you must manually include the hashtag #eweekchat for your replies to be seen by that day's tweetchat panel of experts.

That's it, you're ready to go. Be ready at 11 AM PT on December 12 to participate in the tweetchat.

NOTE: There is sometimes a delay of a few seconds between when you tweet and when your tweet shows up in the #eWeekchat column.

July 25: Optimizing Generative AI: Guide for Companies
August 15: Next Generation Data Analytics
September 12: AI in the Enterprise
October 17: Future of Cloud Computing
November 14: The Future of Generative AI
December 12: Tech in 2024: Predictions and Wild Guesses

*all topics subject to change

View post:
eWEEK TweetChat, December 12: Tech in 2024, Predictions and ... - eWeek

Read More..

China’s tech giants dip their toes into web3, but prospects are limited so far – Yahoo Finance

During the Staking Summit in Istanbul, a conference attended by hundreds of individuals involved in the staking practice of the crypto ecosystem, two exhibition booths stood out. They belonged to Tencent and Huawei. Amidst a backdrop dominated by twenty-somethings clad in trendy company hoodies and giving out well-designed merchandise, the two Chinese tech giants appeared somewhat incongruous with their more formal corporate banners.

They were next to engineers, marketers and business developers deeply entrenched in staking, where individuals pledge their crypto assets, such as Ethereum, to protocols in exchange for returns. The borrowed assets are subsequently used to validate transactions in blockchains implementing the "proof-of-stake" method.
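For readers unfamiliar with the mechanism, here is a highly simplified illustration of stake-weighted validator selection in a proof-of-stake network; the stake values are hypothetical, and real protocols layer randomness beacons, slashing and committee logic on top of this basic idea:

```python
import random

# Hypothetical stakes: the more a participant has pledged, the more likely
# it is to be chosen to validate (propose) the next block.
stakes = {"validator_a": 32.0, "validator_b": 64.0, "validator_c": 4.0}

def pick_validator(stakes, rng=random.random):
    total = sum(stakes.values())
    r = rng() * total
    cumulative = 0.0
    for validator, stake in stakes.items():
        cumulative += stake
        if r <= cumulative:
            return validator
    return validator  # fallback for floating-point edge cases

print(pick_validator(stakes))
```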

In the past year, several Chinese tech giants, including Alibaba, Tencent and Huawei, have been popping up across crypto events in different corners of the world. In the hope of carving out a market share in the nascent web3 space, they show up for these events either as official sponsors or assume a more discreet presence simply as attendees.

Chinese tech giants' participation in crypto sits somewhere at the crossroads of web2 and web3 thanks to their home country's widespread ban on cryptocurrency trading and initial coin offerings. In the most common case, these tech firms are touting their computing resources to web3 startups in a way not so different from how they have been selling cloud services to companies in more established tech verticals.

Cloud expenses by companies building or leveraging decentralized networks are understood to be still quite insignificant. It's not uncommon for a "mid-sized" enterprise in web2 to spend over $1 million on cloud computing, but a company considered to be mid-sized in web3 might only be spending in the low hundreds of thousands of dollars, several attendees at the event said.

Yet the limited ticket size hasn't impeded Chinese cloud providers from venturing into crypto. As underdogs in the global cloud market, Chinese firms are far more proactive and accommodating with customers because they lack brand recognition, especially in the West. As such, they have to compete by offering cheaper -- or better services.


Beyond providing cloud infrastructure, Chinese firms have also been involved in areas that are more removed from their core products and put them in competition with crypto-native firms. That includes building blockchains for enterprise use -- most tech firms in China have steered clear of the public blockchain sphere in which tokens play a critical role due to the country's crackdown on crypto.

Some players also offer node-as-a-service business. Blockchains, which are decentralized databases that store and encrypt transaction data, are run on distributed nodes. These nodes, however, can be expensive and complex to maintain, so companies like Huawei offer a node hosting service for developers, an appealing solution to enterprises that want to build decentralized applications but lack the technical sophistication to do so themselves.

Tencent and Alibaba, being the first movers amongst Chinese tech giants to the web3 space, have also acquainted themselves with respected projects to ramp up their reputation in the industry.

Tencent, for example, has formed partnerships with public blockchains like Sui and Avalanche as well as the Ethereum-scaling solution Scroll.

Alibaba, on the other hand, has teamed up with Aptos, a blockchain developed by former Meta employees, to amplify its name in the web3 world. In a joint announcement today, Alibaba Cloud and Aptos Foundation said they will be co-hosting hackathons that utilize the Move programming language in the Asia Pacific region.

For now, web3 is barely making a dent in Chinese tech giants' top line, but these firms recognize the potential of the burgeoning industry and understand that they cannot afford to overlook the opportunity, even in the face of significant market volatility and the collapse of major players like FTX.

This article originally appeared on TechCrunch at https://techcrunch.com/2023/11/20/china-tech-giants-crypto-web3/

Read more from the original source:
China's tech giants dip their toes into web3, but prospects are limited so far - Yahoo Finance

Read More..

6 most underhyped technologies in IT, plus one that's not dead yet – CIO

Generative AI and, more specifically, ChatGPT captivated the corporate world in 2023, with board directors, CEOs, and other executives fawning (and sometimes fearing) the technology.

Their enthusiasm is justified, with multiple studies finding that AI is delivering strong value and returns on investment. IBM, for one, found that the average ROI on enterprise-wide AI initiatives is 5.9%, with best-in-class companies reaping an enviable 13% ROI.

No wonder that they're all talking about it.

But with all due respect, AI is hardly the only critical tech in town. Yes, recent advancements in AI have been groundbreaking, and those advancements have revolutionary potential, but artificial intelligence, like all hyped-up tech, is built on the shoulders of numerous other technologies that don't seem to get any glory at all.

Isn't it time some of those overlooked and underappreciated technologies get their due?

We think so.

With that in mind, we asked a group of IT leaders and tech analysts to list what they think are some underhyped technologies, why they get overlooked, and why they shouldnt. Here are the technologies they find to be among the most underappreciated in IT today.

CIOs and their teams can hardly do their jobs, nor build and manage the extensive tech stack required to support AI and any other newfangled technology coming to market today, if they don't have a handle on their IT environment.

IT management software helps them accomplish that task and accomplish it to a practically perfect degree of stability and reliability.

"Anything that falls into the category of IT management tools is often cast aside, but these are the workhorses of IT," says John Buccola, CTO of E78 Partners, which provides consulting and managed services in finance technology and other professional areas.

Tools that Buccola puts into this class of unsung heroes include Active Directory and access and identity management solutions. ("They really simplify environments that are heterogeneous," notes Buccola, who is also an officer with the Southern California chapter of the Society for Information Management.)

"You don't think about them. They all just work, and that's what people want from IT," he adds.

Other tools worth calling out are IT service management (ITSM) and IT Infrastructure Library (ITIL) solutions, which Buccola says are particularly critical for helping to keep IT expenses in check.

Just think how the cost of cloud computing services could explode if no one had an eye on it. As Buccola says: "Something has to sit on top of that to make sure the costs associated with those assets aren't spinning out of control."

Indeed, it would be nearly impossible to find a CIO who doesn't have to be diligent about managing IT costs, something they would be hard-pressed to do without the management tools to help them.

"This stuff doesn't get a lot of press, but they're such essentials for IT teams," Buccola adds.

Go back 15 years when cloud was the tech generating all the buzz, and analysts were trying to separate reality from the hype.

Today the model doesn't seem like such a marvel, but when you think about it, cloud still deserves a lot of praise.

"It has been one of the most enabling technology shifts we've ever had, and because of the move to cloud, it enables us to do everything else we're doing now. But it has gone completely to the background, because AI has sucked up all the air," says Mark Taylor, CEO of the Society for Information Management (SIM).

Even though many still recognize cloud's formidable transformational power, research suggests why cloud's significance gets downplayed. Some clues are found in the 2023 Cloud Business Survey from professional services firm PwC. The survey showed that 78% of responding executives had adopted cloud in most or all parts of the business, yet more than half said they had not realized expected outcomes such as cost reductions, improved resiliency, and new revenue channels.

PwC suggests, however, that fault doesn't lie with cloud computing but with how organizations use it: "Moving to the cloud or running parts of your business in the cloud is not the same as being cloud-powered. What does that really mean and what does it take?"

About 10% of those surveyed apparently know the answer: They have reinvented their businesses through cloud, they report fewer barriers to realizing value and they're doing so at a rate twice that of other companies. And even in the current business environment, they expect to see continued revenue growth of 15% or greater.

Cloud-based enterprise resource planning (ERP) is another behind-the-scenes technology that often gets overlooked in favor of newer, glossier tech, says Jeff Stovall, CIO of Abt Associates, who adds that cloud-based ERPs are rarely credited for how critical they are for digital transformation.

"We've done ERPs for so many years, we've been doing these ERP projects for decades, but with cloud ERPs, there's a shift in how business can innovate," says Stovall, who is also a former City of Charlotte CIO and a SIM board member.

By moving from on-prem to the cloud, organizations can reimagine their business processes and transform how core facets of their work get done, Stovall says. "It's a catalyst for transformation, but it's an overlooked catalyst, because we've become so comfortable with the concept of ERP that we don't think about its transformational capabilities," he adds.

In fact, Stovall sees some organizations stick with on-premises ERP even as they seek to transform other pieces of their IT environment and business processes, not realizing how much more they could accomplish if they would modernize this fundamental enterprise core and the processes it supports.

Yugal Joshi, a partner at research and advisory firm Everest Group, lists cloud assessment tools as another tech that's underhyped and underused.

Cloud assessment and cloud migration tools, or cloud-enablement platforms, all help IT teams analyze and understand applications and cloud infrastructure so they have the information required for a solid cloud-deployment roadmap.

Sure, other technologies, such as IT audit software, can help here, as can manual assessments, but Joshi says cloud assessment tools have proven to boost the chances of successful cloud initiatives.

CIOs sometimes think they don't need this tool because moving to cloud has become so pervasive. "They think migration is easy, but it's complex and the choices of cloud vendors and offerings have increased, [adding to that complexity]," Joshi explains.

Similarly, Kumud Kokal, CIO of Farmers Business Network, lists as undervalued the fundamental technologies within the IT environment that were once marvels but that no one ever pauses to value anymore. Specific technologies he names include payroll systems (that seamlessly deliver to workers the money they're owed) and WiFi networks (that deliver everywhere connectivity).

There's a downside to that underappreciation, he says: CIOs often face challenges when asking for enough money to maintain those out-of-sight, out-of-mind technologies.

"Nobody thinks about the plumbing behind the scenes anymore, but it's all critical," he adds.

Although AI gets all the attention, the key components that make it work often do not, including data. Yet as organizations eagerly embrace AI in all its forms, many have neglected parts of their data management needs, says Laura Hemenway, president, founder, and principal of Paradigm Solutions, which supports large enterprise-wide transformations.

Even those who are on top of data management often downplay the powerful work their data management tools do. As such, Hemenway thinks data management software deserves more recognition for the important job it does, even as the work involved is often considered a tedious task that doesn't have the pizzazz of making the most of ChatGPT.

Still, sound data management is a linchpin for AI and other analytics work, which underpins a whole host of processes deemed critical in modern business, from automated processes to personalized customer support. So it's essential to get it right.

There was plenty of buzz around the coming metaverse several years ago, excitement over which peaked in 2021 when Facebook announced it was changing its name to Meta, a nod to what the social media giant sees for the future of computing, many concluded.

But with no big breakthroughs, interest fizzled and the metaverse found itself on some overhyped tech lists. But don't be so quick to write it off, warns Taylor, who thinks this category of tech has been unfairly downgraded, which lands it on his list of underhyped technologies.

Taylor, who prefers the terms "spatial computing" and "virtual presence" to "metaverse," notes that all the technologies in this category enable immersive virtual world experiences regardless of their differences. Inflated expectations from vendors that have not been able to fully deliver the seamless virtual experiences they have promised are a key reason why the hype quickly cooled off, Taylor says.

"But when they figure it out, like AI, it will change everything. It will just take the limits off so many things. And because it's underhyped, people may be unprepared for when it has its breakthrough moment," he says.

Read this article:
6 most underhyped technologies in IT plus one thats not dead yet - CIO

Read More..