
Investigation of the effectiveness of a classification method based on improved DAE feature extraction for hepatitis C … – Nature.com

In this subsection, we evaluate the feature extraction effect of the IDAE by conducting experiments on the Hepatitis C dataset with different configurations to test its generalization ability. We would like to investigate the following two questions:

How effective is the IDAE at classifying the characteristics of hepatitis C?

If the depth of the neural network is increased, can the IDAE mitigate the gradient explosion or gradient vanishing problem while improving the classification of hepatitis C disease?

Does an IDAE of the same depth tend to converge more easily than other encoders on the hepatitis C dataset?

Firstly, from a public health standpoint, hepatitis C (HCV) is a global problem: chronic infection may lead to serious consequences such as cirrhosis and liver cancer, and the disease is highly insidious, leaving a large number of cases undiagnosed. It is worth noting that although traditional machine learning and deep learning algorithms are widely applied in healthcare, especially in research on acute conditions such as cancer, chronic infectious diseases such as hepatitis C remain significantly under-explored. In addition, the complex biological attributes of the hepatitis C virus and the marked individual differences among patients together give rise to multilevel nonlinear correlations among features. Applying deep learning methods to the hepatitis C dataset is therefore both an important way to validate the efficacy of such algorithms and an urgent research direction that fills an existing gap.

The Helmholtz Center for Infection Research, the Institute of Clinical Chemistry at the Medical University of Hannover, and other research organizations provided the hepatitis C data used in this article. The collection includes demographic data, such as age, as well as laboratory test results for blood donors and hepatitis C patients. Examining the dataset shows that the primary features are the quantities of various blood components and liver function indicators, and that the only categorical feature is gender. Table 1 gives the precise definition of these fields.

This paper investigates the classification problem. Table 2 lists the description and sample size of the five main classification labels. To address the effect of class imbalance on classification performance, the data are first resampled with SMOTE32 and the model is then trained on the resampled data, with 400 samples per class. A minimal sketch of this balancing step is shown below.
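
The paper states only that SMOTE is applied before training and that each class is brought to 400 samples; the snippet below is a minimal sketch of that balancing step, assuming a feature matrix `X` and label vector `y` and using the imbalanced-learn implementation of SMOTE. The helper name and random seed are illustrative assumptions, not the authors' code.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE

def balance_hcv_dataset(X, y, target_per_class=400, seed=42):
    """Oversample each class up to `target_per_class` samples with SMOTE."""
    counts = Counter(y)
    # SMOTE only oversamples, so never request fewer samples than a class already has.
    strategy = {label: max(n, target_per_class) for label, n in counts.items()}
    # Note: SMOTE's default k_neighbors=5 requires at least 6 samples per class;
    # very small classes may need a smaller k_neighbors value.
    sampler = SMOTE(sampling_strategy=strategy, random_state=seed)
    return sampler.fit_resample(X, y)
```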

The aim of this paper is to investigate whether the IDAE can extract more representative and robust features. We have therefore chosen baselines that include both traditional machine learning algorithms and various types of autoencoders, described in more detail below:

SVM: support vector machines classify data by constructing maximum-margin separating hyperplanes and use kernel functions to handle nonlinear problems, seeking decision boundaries that maximize the margin on the training data.

KNN: the K nearest neighbors algorithm determines the class or predicted value of a new sample from its K nearest neighbors, found by computing its distance to every sample in the training set.

RF: random forests utilize random feature selection and Bootstrap sampling techniques to construct and combine the prediction results of multiple decision trees to effectively handle classification and regression problems.

AE: an autoencoder is a neural network consisting of an encoder and a decoder that learns a compact, low-dimensional feature representation by reconstructing its training data, and is mainly used for dimensionality reduction, feature extraction, and generative learning tasks.

DAE: a denoising autoencoder is an autoencoder variant that excels at extracting features from noisy inputs; by reconstructing the clean input from a noise-corrupted version it reveals the underlying structure of the data, learns higher-level features, and improves network robustness, which in turn benefits downstream tasks and the model's generalization ability.

SDAE: a stacked denoising autoencoder is a multilayer neural network consisting of multiple denoising autoencoder layers connected in series; each layer applies noise to its input during training and learns to reconstruct the undisturbed original features from the noisy data, thereby extracting a more abstract and robust feature representation layer by layer.

DIUDA: the main feature of the dual-input unsupervised denoising autoencoder is that it receives two different types of input data simultaneously; by fusing the two inputs for joint feature learning and extraction it further improves the model's generalization ability and its grasp of the data's intrinsic structure.

In this paper, 80% of the Hepatitis C dataset is used for model training and the remaining 20% for testing. Since the samples are unbalanced, the negative samples are resampled to ensure class balance. For all autoencoder methods, the learning rate is initialized to 0.001; the encoder and decoder each have 3 layers, with 10, 8, and 5 neurons in the encoder and 5, 8, and 10 neurons in the decoder; the MLP is initialized with 3 layers of 10, 8, and 5 neurons, respectively. All models are trained until convergence, with a maximum of 200 training epochs; a minimal sketch of this configuration follows. The machine learning methods all use the sklearn library with the default hyperparameters of the corresponding algorithms.
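
The following is a hedged PyTorch sketch of the stated autoencoder configuration (three encoder layers of 10, 8 and 5 neurons, a mirrored decoder, learning rate 0.001). The input width of 12 features, the ReLU activations, the MSE reconstruction loss and the use of Adam are illustrative assumptions; the paper does not specify them.

```python
import torch
from torch import nn

class BaselineAutoencoder(nn.Module):
    """Encoder 10-8-5 and mirrored decoder 5-8-10, as described in the setup."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 10), nn.ReLU(),
            nn.Linear(10, 8), nn.ReLU(),
            nn.Linear(8, 5),
        )
        self.decoder = nn.Sequential(
            nn.Linear(5, 8), nn.ReLU(),
            nn.Linear(8, 10), nn.ReLU(),
            nn.Linear(10, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # 5-dimensional feature representation
        return self.decoder(z), z  # reconstruction and extracted features

model = BaselineAutoencoder(in_dim=12)  # 12 is a placeholder feature count
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
reconstruction_loss = nn.MSELoss()
```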

To answer the first question, we classified the hepatitis C data after feature extraction with the improved denoising autoencoder and compared it against traditional machine learning algorithms (SVM, KNN, and random forest) and against AE, DAE, SDAE, and DIUDA as baseline models. Each experiment was conducted 3 times to mitigate randomness. The average results for each metric are shown in Table 3. From the table, we can make the following observations.

The left figure shows the 3D visualisation of t-SNE with features extracted by DAE, and the right figure shows the 3D visualisation of t-SNE with features extracted by IDAE.

Firstly, the IDAE shows a significant improvement on the hepatitis C classification task compared with the machine learning algorithms and outperforms almost all machine learning baselines on every evaluation metric. These results validate the effectiveness of the proposed improved denoising autoencoder on the hepatitis C dataset. Secondly, compared with the traditional autoencoders AE, DAE, SDAE and DIUDA, the IDAE achieves higher accuracy on the hepatitis C dataset, with improvements of 0.011, 0.013, 0.010 and 0.007, respectively; for the other metrics, AUC-ROC improves by 0.11, 0.10, 0.06 and 0.04, and F1 by 0.13, 0.11, 0.042 and 0.032. As Fig. 5 shows, the IDAE features exhibit better clustering and clearer class boundaries in 3D space. Both the experimental results and the visual analysis verify the advantages of the improved model in classification performance.

Finally, SVM and RF outperform KNN on the hepatitis C dataset because SVM can handle complex nonlinear relationships through the radial basis function (RBF) kernel, and the ensemble algorithm combines multiple weak learners to achieve nonlinear classification indirectly. KNN, on the other hand, builds decision boundaries from distance measures such as Euclidean distance, which cannot effectively capture the structure of complex nonlinear data distributions, leading to poorer classification results. A sketch of these baselines with sklearn defaults is shown below.
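
As stated in the experimental setup, the machine learning baselines use scikit-learn with default hyperparameters; the sketch below shows what that looks like in practice (SVC defaults to the RBF kernel, KNeighborsClassifier to Euclidean distance). The train/test variable names and the accuracy-only scoring are illustrative assumptions.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

baselines = {
    "SVM": SVC(),                   # RBF kernel by default
    "KNN": KNeighborsClassifier(),  # Euclidean distance by default
    "RF": RandomForestClassifier(),
}

def evaluate_baselines(X_train, y_train, X_test, y_test):
    """Fit each baseline with default hyperparameters and report test accuracy."""
    scores = {}
    for name, clf in baselines.items():
        clf.fit(X_train, y_train)
        scores[name] = clf.score(X_test, y_test)
    return scores
```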

In summary, these results demonstrate the superiority of the improved denoising autoencoder in feature extraction from hepatitis C data. The machine learning results also indirectly confirm that hepatitis C features may indeed exhibit complex nonlinear relationships.

To answer the second question, we analyze in this subsection the performance variation of different autoencoder algorithms at different depths. To perform the experiments in a constrained setting, we used a fixed learning rate of 0.001. The number of neurons in the encoder and decoder was kept constant, and the number of layers in the encoder and decoder was set to {1, 2, 3, 4, 5, 6}. Each experiment was performed 3 times and the average results are shown in Fig. 6. We make the following observations:

Effects of various types of autoencoders at different depths.

Under different layer configurations, the IDAE proposed in this study shows significant advantages over the traditional AE, DAE, SDAE and DIUDA in terms of both feature extraction and classification performance. The experimental data show that the deeper the network, the greater the performance gain: when the encoder reaches 6 layers, the IDAE improves accuracy by 0.112, 0.103, 0.041 and 0.021, AUC-ROC by 0.062, 0.042, 0.034 and 0.034, and F1 by 0.054, 0.051, 0.034 and 0.028 over AE, DAE, SDAE and DIUDA, respectively.

It is worth noting that conventional autoencoders often encounter overfitting and vanishing gradients as the network deepens, so their performance on the hepatitis C classification task gradually plateaus or even declines slightly; this is largely attributable to the excessive complexity and gradient vanishing caused by an overly deep network structure, which prevent the model from finding the optimal solution. The improved DAE introduces a residual neural network: by adding directly connected paths it optimizes the information flow between layers and addresses the vanishing gradient problem in deep learning, and it balances model complexity and generalization ability by flexibly expanding the depth and width of the network. The experimental results show that the improved DAE further improves classification performance as network depth increases appropriately, alleviates overfitting at the same depth, and outperforms the other autoencoders on all metrics. A minimal sketch of such a residual encoder layer follows.
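
The sketch below illustrates the residual idea described above, assuming equal input and output widths per layer and Gaussian input corruption; it shows the general technique rather than the authors' exact IDAE architecture.

```python
import torch
from torch import nn

class ResidualDenoisingLayer(nn.Module):
    """One encoder layer with a skip connection around its nonlinear transform."""

    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):
        # The identity path lets gradients bypass the nonlinearity, which is
        # what mitigates vanishing gradients as more layers are stacked.
        return x + self.transform(x)

def corrupt(x, std=0.1):
    """Gaussian corruption applied to the inputs of a denoising autoencoder (std assumed)."""
    return x + std * torch.randn_like(x)
```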

To answer the third question, we analyse in this subsection the convergence speed of the different autoencoder algorithms. The experiments set the number of encoder and decoder layers to {3, 6}, with the same number of neurons in each layer; each experiment was performed three times, and the average results are shown in Fig. 7. We observe the following: the convergence speed of the IDAE is again better than that of the other autoencoders at every depth, and the contrast is more pronounced at deeper layers. For the conventional autoencoders, the chain rule leads to vanishing gradients and overfitting, so their convergence slows as depth grows; the IDAE, by incorporating residual connections, adds direct paths between layers that let the signal bypass the nonlinear transforms of some layers and propagate directly to later layers. This design effectively mitigates gradient vanishing as network depth increases, allows the network to maintain a strong gradient flow during training, and preserves fast convergence even at greater depth. In summary, when dealing with complex, high-dimensional data such as hepatitis C-related data, the IDAE can learn and extract better features by increasing depth, improving training efficiency and overall performance.

Comparison of model convergence speed for different layers of autoencoders.


AI has a lot of terms. We’ve got a glossary for what you need to know – Quartz

Nvidia CEO Jensen Huang. Photo: Justin Sullivan (Getty Images)

Let's start with the basics for a refresher. Generative artificial intelligence is a category of AI that uses data to create original content. In contrast, classic AI could only offer predictions based on data inputs, not brand new and unique answers using machine learning. But generative AI uses deep learning, a form of machine learning that uses artificial neural networks (software programs) resembling the human brain, so computers can perform human-like analysis.

Generative AI isn't grabbing answers out of thin air, though. It's generating answers based on data it's trained on, which can include text, video, audio, and lines of code. Imagine, say, waking up from a coma, blindfolded, and all you can remember is 10 Wikipedia articles. All of your conversations with another person about what you know are based on those 10 Wikipedia articles. It's kind of like that, except generative AI uses millions of such articles and a whole lot more.


Machine Learning Helps Scientists Locate the Neurological Origin of Psychosis – ExtremeTech

Researchers in the United States, Chile, and the United Kingdom have leveraged machine learning to hone in on the parts of the brain responsible for psychosis. Their findings help to illuminate a common yet elusive experience and could contribute to the development of novel treatments for psychosis and the conditions that cause it.

Around 3 in every 100 people will experience at least one psychotic episode in their lifetimes. Commonly misunderstood, these episodes are characterized by hallucinations (a false perception involving the senses) or delusions (false beliefs not rooted in reality). Many people who experience psychosis have a condition like schizophrenia or bipolar disorder; others have a history of substance abuse, and still others have no particular condition at all.

Regardless of its cause, psychosis can be debilitating for those who experience it, leading some people to seek out antipsychotic medication aimed at staving off future episodes. Though antipsychotic medications are often a godsend for the people who take them, they've historically disrupted neurological psychosis research. During brain scans, it's difficult to know whether specific brain activity can be attributed to the person's condition or to the drugs they're taking. This means medical professionals and pharmaceutical companies work with a fairly limited understanding of psychosis as they help patients manage their episodes.

Researchers at Stanford University, the University of California Los Angeles, Universidad del Desarrollo, and the University of Oxford relied on two strategies to circumvent this issue. To start, they gathered study participants from a wide range of ages and conditions in the hope of uncovering an overarching theme. The group of nearly 900 participants included people ages 6 to 39, some of whom had a history of psychosis or schizophrenia and some of whom had never experienced either. Just over 100 participants had 22q11.2 deletion syndrome, meaning they're missing part of one of their copies of chromosome 22, a condition known to carry a 30% risk of experiencing psychosis, schizophrenia, or both. Another 120 participants experienced psychosis but had not been diagnosed with any particular hallucination- or delusion-causing condition.

Credit: Supekar et al, Molecular Psychiatry/DOI 10.1038/s41380-024-02495-8

The team also used machine learning to spot the minute distinctions between the brain activity of those who experience psychosis and the brain activity of those who don't. To map out the participants' neurological activity, the team used functional magnetic resonance imaging (fMRI). This technique allows medical professionals and researchers to track the tiny fluctuations in blood flow triggered by brain changes.

With a custom spatiotemporal deep neural network (stDNN), the researchers compared the functional brain signatures of all participants and found a consistent pattern among those with 22q11.2 deletion syndrome: regardless of demographic, these participants experienced what appeared to be "malfunctions" in the anterior insula and the ventral striatum. These two parts of the brain are involved in humans' cognitive filters and reward predictors, respectively. The stDNN continued to find clear discrepancies between the anterior insulae and ventral striata of those who experienced psychosis and those who did not, further indicating that these two regions of the brain played a vital role in hallucinations and delusions.

These findings, shared Friday in a paper for Molecular Psychiatry, support a standing theory regarding the reliance of psychosis on malfunctioning cognitive filters. Scientists have long wondered whether, during a psychotic episode, the brain struggles to distinguish what's true from what isn't. This is a key function of the brain's salience network, which detects and assigns importance to incoming stimuli. When the salience network cannot work correctly, the brain might incorrectly assign importance and attention to the wrong stimuli, resulting in a hallucination or delusion.

"Our discoveries underscore the importance of approaching people with psychosis with compassion," said Stanford neuroscientist and senior study author Dr. Vinod Menon in a statement. Menon and his colleague, psychiatrist Kaustubh Supekar, hope their findings will assist in the development of antipsychotic treatments, especially for those with schizophrenia.


DApp developers can benefit from decentralization without compromise: Here’s how – Cointelegraph

Supporting an inclusive Web3 era with innovative data indexing tools, SubQuery Network is building a more accessible and robust digital future powered by decentralized middleware.

Middleware plays a crucial role in decentralized applications (DApps) by bridging the gap between blockchains and user interfaces. It encompasses various components such as indexers, which organize blockchain data, and remote procedure calls (RPCs), enabling network interactions to function as if they were local. Additionally, middleware includes oracles and decentralized data storage services, which are software solutions that transfer real-world data to the blockchain.
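
For illustration, the snippet below shows the kind of JSON-RPC request that RPC middleware relays between a DApp front end and a blockchain node; `eth_blockNumber` is a standard Ethereum JSON-RPC method, while the endpoint URL is a placeholder rather than a real provider.

```python
import requests

RPC_ENDPOINT = "https://example-rpc-provider.invalid"  # placeholder, not a real provider

payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",  # standard Ethereum JSON-RPC method
    "params": [],
    "id": 1,
}
response = requests.post(RPC_ENDPOINT, json=payload, timeout=10)
latest_block = int(response.json()["result"], 16)  # the result is a hex string
print(f"Latest block seen by the node: {latest_block}")
```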

While middleware enhances the scalability and usability of DApps, its inherently centralized structure poses significant challenges. This centralization can lead to instability and risks a single entity gaining control, which threatens the core principles of decentralization within the Web3 ecosystem.

To address these concerns, the Web3 community is actively developing alternatives to reinforce the decentralization of the middleware layer. These efforts ensure the infrastructure remains robust and true to the decentralized ethos.

In line with these developments, SubQuery Network offers a scalable and unified data infrastructure that supports the vision of a decentralized future. The platform offers a suite of tools designed to empower developers to bring DApps to life without compromising speed, flexibility and efficiency. SubQuery is assisting developers globally, with full support for over 160 networks, including many outside the Ethereum Virtual Machine (EVM) family.

With its native token, SQT, SubQuery integrates two core services, indexing and RPC, into a single decentralized network. Thus, users can access fully decentralized services (for example, purchasing a plan for an RPC endpoint) with the simplicity of Web2 alternatives.

The platform tests the network's load capacity through its free offerings. Additionally, the network is transforming with the beta launch of the SubQuery Data Node. The redesign of the structure and query language of RPCs leads to significant performance increases. Preliminary results show that the SubQuery Data Node enables the data indexer to run 3.9 times faster than The Graph.

When discussing the evolution of Web3 decentralization, James Bayly, chief operating officer of SubQuery Network, emphasized middleware's critical role:

The next era of Web3 decentralization will be in the middleware, the final frontier against centralized services and industry titans.

Bayly pointed out the challenges faced by developers, stating, "Developers have long had to sacrifice performance and reliability when building DApps; we are working to make these sacrifices a thing of the past." Bayly also highlighted that many leading DApps are utterly reliant on centralized middleware components that could be turned off at any moment.

The Web3 community focuses on making middleware more decentralized, paving the way for a secure, user-friendly and public digital environment. As we progress, the number of choices available to developers in Web3 will only increase.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim at providing you with all important information that we could obtain in this sponsored article, readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions, nor can this article be considered as investment advice.


Sand cat swarm optimization algorithm and its application integrating elite decentralization and crossbar strategy … – Nature.com

Six engineering challenges have been selected in this section to assess how well CWXSCSO performs on engineering optimization problems. The sine cosine algorithm (SCA)40, the rime optimization algorithm (RIME)41, the butterfly optimization algorithm (BOA)42, the Harris hawks optimization algorithm (HHO)8, and the osprey optimization algorithm (OOA)43 were chosen for comparison on the first three engineering applications. The whale optimization algorithm (WOA)7, the grasshopper optimization algorithm (GOA)44, the grey wolf optimizer (GWO)45, the marine predators algorithm (MPA)46, and RIME were used for comparison on the final three applications. Every algorithm in the experiments uses a population of 30 and an upper limit of 1000 iterations.

The performance of the modified algorithm is first assessed on the pressure vessel design problem. The main objective of this challenge is to reduce the production cost of the pressure vessel. The problem involves four optimization variables, namely shell thickness ($T_S$), head thickness ($T_h$), inner radius ($R$), and length of the cylindrical section without the head ($L$). The mathematical description of the pressure vessel design problem is as follows (a Python sketch of this formulation follows the variable intervals below):

Variable:

$$\overrightarrow{x}=\left[{x}_{1}\ {x}_{2}\ {x}_{3}\ {x}_{4}\right]=\left[{T}_{S}\ {T}_{h}\ R\ L\right]$$

Function:

$$f\left(\overrightarrow{x}\right)=0.6224{x}_{1}{x}_{3}{x}_{4}+1.7781{x}_{2}{x}_{3}^{2}+3.1661{x}_{1}^{2}{x}_{4}+19.84{x}_{1}^{2}{x}_{3}$$

Constraint condition:

$${g}_{1}\left(\overrightarrow{x}\right)=-{x}_{1}+0.0193{x}_{3}\le 0$$

$${g}_{2}\left(\overrightarrow{x}\right)=-{x}_{3}+0.00954{x}_{3}\le 0$$

$${g}_{3}\left(\overrightarrow{x}\right)=-\pi {x}_{3}^{2}-\frac{4}{3}\pi {x}_{3}^{3}+1296000\le 0$$

$${g}_{4}\left(\overrightarrow{x}\right)={x}_{4}-240\le 0$$

Variable interval:

$$0\le {x}_{1},{x}_{2}\le 99,\ 10\le {x}_{3},{x}_{4}\le 200$$
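
Below is a hedged Python sketch of this formulation, with the constraints transcribed exactly as printed above and folded into a simple static-penalty objective of the kind such metaheuristics typically minimize; the penalty weight is an assumption, not a value from the paper.

```python
import math

def pressure_vessel_cost(x):
    """Objective f(x) for the design vector x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Constraint values g(x), each required to satisfy g(x) <= 0 (as printed above)."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x3 + 0.00954 * x3,
        -math.pi * x3**2 - (4.0 / 3.0) * math.pi * x3**3 + 1296000,
        x4 - 240,
    ]

def penalized_cost(x, penalty=1e6):
    """Static-penalty objective; the penalty weight is an assumed value."""
    violation = sum(max(0.0, g) for g in pressure_vessel_constraints(x))
    return pressure_vessel_cost(x) + penalty * violation
```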

The experimental findings of CWXSCSO and the comparison algorithms are presented in Table 7. CWXSCSO yields a value of 5886.05. Compared with the alternative algorithms, it has a clear competitive advantage in keeping the pressure vessel functional while minimizing cost. As depicted in Fig. 5, the updated method converges rapidly to the optimum with the best convergence accuracy, demonstrating excellent engineering optimization capability.

Optimization convergence diagram of pressure vessel design problem.

The next problem is welded beam design (WBD), which uses an optimization method to minimize the production cost of the design. The optimization problem reduces to identifying four design variables that satisfy the constraints on shear stress ($\tau$), bending stress ($\theta$), beam buckling load ($P_c$), end deflection ($\delta$), and the boundary conditions, namely weld thickness ($h$), beam length ($l$), height ($t$), and thickness ($b$). The objective is to minimize the manufacturing cost of the welded beam; the welded beam problem is a classic example of a nonlinear programming problem. The mathematical description of the welded beam design problem is as follows:

Variable:

$$\overrightarrow{x}=\left[{x}_{1}\ {x}_{2}\ {x}_{3}\ {x}_{4}\right]=\left[h\ l\ t\ b\right]$$

Function:

$$f\left(\overrightarrow{x}\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}\left(14.0+{x}_{2}\right)$$

Constraint condition:

$${g}_{1}\left(\overrightarrow{x}\right)=\tau \left(\overrightarrow{x}\right)-{\tau }_{max}\le 0$$

$${g}_{2}\left(\overrightarrow{x}\right)=\sigma \left(\overrightarrow{x}\right)-{\sigma }_{max}\le 0$$

$${g}_{3}\left(\overrightarrow{x}\right)=\delta \left(\overrightarrow{x}\right)-{\delta }_{max}\le 0$$

$${g}_{4}\left(\overrightarrow{x}\right)={x}_{1}-{x}_{4}\le 0$$

$${g}_{5}\left(\overrightarrow{x}\right)=P-{P}_{c}\left(\overrightarrow{x}\right)\le 0$$

$${g}_{6}\left(\overrightarrow{x}\right)=0.125-{x}_{1}\le 0$$

$${g}_{7}\left(\overrightarrow{x}\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}\left(14.0+{x}_{2}\right)-5.0\le 0$$

Variable interval:

$$0.1\le {x}_{1}\le 2,\ 0.1\le {x}_{2}\le 10,\ 0.1\le {x}_{3}\le 10,\ 0.1\le {x}_{4}\le 2$$

As can be seen from Table 8, the final result of CWXSCSO is 1.6935. As shown in Fig. 6, the initial fitness value of the improved algorithm is already very good, and several subtle turns later in the curve indicate its ability to escape local optima. The improved algorithm achieves the goal of reducing manufacturing cost, and the cost of manufacturing the welded beam is the lowest among the compared algorithms.

Optimization convergence diagram of welding beam design problem.

The reducer occupies an important place within mechanical systems as a crucial component of the gearbox, serving a diverse range of applications. The primary aim of this challenge is to reduce the overall weight of the reducer by optimizing seven variables: the tooth face width ($b=x_1$), the gear module ($m=x_2$), the number of pinion teeth ($z=x_3$), the length of the first shaft between bearings ($l_1=x_4$), the length of the second shaft between bearings ($l_2=x_5$), the diameter of the first shaft ($d_1=x_6$), and the diameter of the second shaft ($d_2=x_7$). The mathematical description of the speed reducer design problem is as follows:

Variable:

$$\overrightarrow{x}=\left[{x}_{1}\ {x}_{2}\ {x}_{3}\ {x}_{4}\ {x}_{5}\ {x}_{6}\ {x}_{7}\right]=\left[b\ m\ z\ {l}_{1}\ {l}_{2}\ {d}_{1}\ {d}_{2}\right]$$

Function:

$$f\left(\overrightarrow{x}\right)=0.7854{x}_{1}{x}_{2}^{2}\left(3.3333{x}_{3}^{2}+14.9334{x}_{3}-43.0934\right)-1.508{x}_{1}\left({x}_{6}^{2}+{x}_{7}^{2}\right)+7.4777\left({x}_{6}^{3}+{x}_{7}^{3}\right)+0.7854\left({x}_{4}{x}_{6}^{2}+{x}_{5}{x}_{7}^{2}\right)$$

Constraint condition:

$${g}_{1}\left(\overrightarrow{x}\right)=\frac{27}{{x}_{1}{x}_{2}^{2}{x}_{3}}-1\le 0$$

$${g}_{2}\left(\overrightarrow{x}\right)=\frac{397.5}{{x}_{1}{x}_{2}^{2}{x}_{3}^{2}}-1\le 0$$

$${g}_{3}\left(\overrightarrow{x}\right)=\frac{1.93{x}_{4}^{3}}{{x}_{2}{x}_{3}{x}_{6}^{4}}-1\le 0$$

$${g}_{4}\left(\overrightarrow{x}\right)=\frac{1.93{x}_{5}^{3}}{{x}_{2}{x}_{3}{x}_{7}^{4}}-1\le 0$$

$${g}_{5}\left(\overrightarrow{x}\right)=\frac{\sqrt{{\left(\frac{745{x}_{4}}{{x}_{2}{x}_{3}}\right)}^{2}+16.9\times {10}^{6}}}{110.0{x}_{6}^{3}}-1\le 0$$

$${g}_{6}\left(\overrightarrow{x}\right)=\frac{\sqrt{{\left(\frac{745{x}_{4}}{{x}_{2}{x}_{3}}\right)}^{2}+157.5\times {10}^{6}}}{85.0{x}_{6}^{3}}-1\le 0$$

$${g}_{7}\left(\overrightarrow{x}\right)=\frac{{x}_{2}{x}_{3}}{40}-1\le 0$$

$${g}_{8}\left(\overrightarrow{x}\right)=\frac{5{x}_{2}}{{x}_{1}}-1\le 0$$

$${g}_{9}\left(\overrightarrow{x}\right)=\frac{{x}_{1}}{12{x}_{2}}-1\le 0$$

$${g}_{10}\left(\overrightarrow{x}\right)=\frac{1.5{x}_{6}+1.9}{{x}_{4}}-1\le 0$$

$${g}_{11}\left(\overrightarrow{x}\right)=\frac{1.1{x}_{7}+1.9}{{x}_{5}}-1\le 0$$

Variable interval:

$$2.6\le {x}_{1}\le 3.6,\ 0.7\le {x}_{2}\le 0.8,\ 17\le {x}_{3}\le 28,\ 7.3\le {x}_{4}\le 8.3,\ 7.8\le {x}_{5}\le 8.3,$$

$$2.9\le {x}_{6}\le 3.9,\ 5.0\le {x}_{7}\le 5.5$$

Table 9 and Fig. 7 demonstrate that the modified method minimizes the weight of the reducer under the 11 constraints, suggesting that the enhancement is effective and can be usefully applied in mechanical systems.

Reducer design optimization convergence curve.

This engineering problem aims to design a 4-step cone pulley of minimal weight using 5 design variables. Four variables represent the diameters of the individual pulley steps, denoted $d_i\ (i=1,2,3,4)$, while the final variable represents the pulley width, denoted $w$. The problem has 8 nonlinear constraints and 3 linear constraints. The restrictions require the belt length $C_i$, tension ratio $R_i$, and transmitted belt power $P_i$ to be consistent across all steps. The mathematical description of the step cone pulley problem is as follows:

Function:

$$f\left(x\right)=\rho \omega \left[{d}_{1}^{2}\left\{1+{\left(\frac{{N}_{1}}{N}\right)}^{2}\right\}+{d}_{2}^{2}\left\{1+{\left(\frac{{N}_{2}}{N}\right)}^{2}\right\}+{d}_{3}^{2}\left\{1+{\left(\frac{{N}_{3}}{N}\right)}^{2}\right\}+{d}_{4}^{2}\left\{1+{\left(\frac{{N}_{4}}{N}\right)}^{2}\right\}\right]$$

Constraint condition:

$${h}_{1}\left(x\right)={C}_{1}-{C}_{2}=0,\ {h}_{2}\left(x\right)={C}_{1}-{C}_{3}=0,\ {h}_{3}\left(x\right)={C}_{1}-{C}_{4}=0$$

$${g}_{1,2,3,4}\left(x\right)={R}_{i}\ge 2,\ {g}_{5,6,7,8}\left(x\right)={P}_{i}\ge \left(0.75\times 745.6998\right)$$

where:

$${C}_{i}=\frac{\pi {d}_{i}}{2}\left(1+\frac{{N}_{i}}{N}\right)+\frac{{\left(\frac{{N}_{i}}{N}-1\right)}^{2}}{4a}+2a\quad i=\left(1,2,3,4\right)$$

$${R}_{i}=\exp\left[\mu \left\{\pi -2{\sin}^{-1}\left\{\left(\frac{{N}_{i}}{N}-1\right)\frac{{d}_{i}}{2a}\right\}\right\}\right]\quad i=\left(1,2,3,4\right)$$

$${P}_{i}=stw\left[1-\exp\left[-\mu \left\{\pi -2{\sin}^{-1}\left\{\left(\frac{{N}_{i}}{N}-1\right)\frac{{d}_{i}}{2a}\right\}\right\}\right]\right]\frac{\pi {d}_{i}{N}_{i}}{60}\quad i=\left(1,2,3,4\right)$$

$$\rho =7200\,\mathrm{kg}/{\mathrm{m}}^{3},\ a=3\,\mathrm{m},\ \mu =0.35,\ s=1.75\,\mathrm{MPa},\ t=8\,\mathrm{mm}$$

Variable interval:

$$0\le {d}_{1},{d}_{2}\le 60,\ 0\le {d}_{3},\omega \le 90$$

Table 10 shows that the MPA method outperforms the CWXSCSO algorithm on this problem, although CWXSCSO still retains advantages over the other algorithms. Figure 8 illustrates that while CWXSCSO converges to a slightly lower precision than MPA, its convergence speed exceeds that of MPA. Despite trailing MPA on the step cone pulley problem, CWXSCSO keeps the benefit of rapid convergence.

Optimization convergence diagram of step cone pulley problem.

In power machinery, the design of a planetary gear train is a constrained optimization problem. The problem involves three groups of optimization variables: the numbers of gear teeth ($N_1,N_2,N_3,N_4,N_5,N_6$), the gear modules ($m_1,m_2$), and the number of planets ($p$). The primary aim is to minimize the maximum error of the transmission ratios used in automobile manufacture. The problem has six integer variables, three discrete variables, and eleven distinct geometric and assembly constraints. The mathematical description of the planetary gear train design optimization problem is as follows:

Variable:

$$x=\left({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6},{x}_{7},{x}_{8},{x}_{9}\right)=\left({N}_{1},{N}_{2},{N}_{3},{N}_{4},{N}_{5},{N}_{6},{m}_{1},{m}_{2},p\right)$$

Function:

$$f\left(x\right)=\max\left|{i}_{k}-{i}_{ok}\right|,\ k=\left\{1,2,\dots ,R\right\}$$

where:

$${i}_{1}=\frac{{N}_{6}}{{N}_{4}},\ {i}_{o1}=3.11,\ {i}_{2}=\frac{{N}_{6}\left({N}_{1}{N}_{3}+{N}_{2}{N}_{4}\right)}{{N}_{1}{N}_{3}\left({N}_{6}+{N}_{4}\right)},\ {i}_{OR}=-3.11,\ {I}_{R}=-\frac{{N}_{2}{N}_{6}}{{N}_{1}{N}_{3}},\ {i}_{O2}=1.84$$

Constraint condition:

$${g}_{1}\left(x\right)={m}_{2}\left({N}_{6}+2.5\right)-{D}_{max}\le 0$$

$${g}_{2}\left(x\right)={m}_{1}\left({N}_{1}+{N}_{2}\right)+{m}_{1}\left({N}_{2}+2\right)-{D}_{max}\le 0$$

$${g}_{3}\left(x\right)={m}_{2}\left({N}_{4}+{N}_{5}\right)+{m}_{2}\left({N}_{5}+2\right)-{D}_{max}\le 0$$

$${g}_{4}\left(x\right)=\left|{m}_{1}\left({N}_{1}+{N}_{2}\right)-{m}_{1}\left({N}_{6}+{N}_{3}\right)\right|-{m}_{1}-{m}_{2}\le 0$$

$${g}_{5}\left(x\right)=-\left({N}_{1}+{N}_{2}\right)\sin\left(\frac{\pi }{p}\right)+{N}_{2}+2+{\delta }_{22}\le 0$$

$${g}_{6}\left(x\right)=-\left({N}_{6}-{N}_{3}\right)\sin\left(\frac{\pi }{p}\right)+{N}_{3}+2+{\delta }_{33}\le 0$$

$${g}_{7}\left(x\right)=-\left({N}_{4}+{N}_{5}\right)\sin\left(\frac{\pi }{p}\right)+{N}_{5}+2+{\delta }_{55}\le 0$$

$${g}_{8}\left(x\right)={\left({N}_{3}+{N}_{5}+2+{\delta }_{35}\right)}^{2}-{\left({N}_{6}-{N}_{3}\right)}^{2}-{\left({N}_{4}+{N}_{5}\right)}^{2}+2\left({N}_{6}-{N}_{3}\right)\left({N}_{4}+{N}_{5}\right)\cos\left(\frac{2\pi }{p}-\beta \right)\le 0$$

$${g}_{9}\left(x\right)={N}_{4}-{N}_{6}+2{N}_{5}+2{\delta }_{56}+4\le 0$$

$${g}_{10}\left(x\right)=2{N}_{3}-{N}_{6}+{N}_{4}+2{\delta }_{34}+4\le 0$$

$${h}_{1}\left(x\right)=\frac{{N}_{6}-{N}_{4}}{p}=\mathrm{integer}$$

where:

$${\delta }_{22}={\delta }_{33}={\delta }_{55}={\delta }_{35}={\delta }_{56}=0.5$$

$$\beta ={\cos}^{-1}\left(\frac{{\left({N}_{4}+{N}_{5}\right)}^{2}+{\left({N}_{6}-{N}_{3}\right)}^{2}-{\left({N}_{3}+{N}_{5}\right)}^{2}}{2\left({N}_{6}-{N}_{3}\right)\left({N}_{4}+{N}_{5}\right)}\right)$$

Variable interval:

$$P=\left(3,4,5\right),\ {m}_{1},{m}_{2}=\left(1.75,2.0,2.25,2.5,2.75,3.0\right),\ 17\le {N}_{1}\le 96,$$

$$14\le {N}_{2}\le 54,\ 14\le {N}_{3}\le 51,\ 17\le {N}_{4}\le 46,\ 14\le {N}_{5}\le 51,\ 48\le {N}_{6}\le 124$$

Based on the data shown in Fig. 9 and Table 11, CWXSCSO continues to outperform the other methods in both convergence accuracy and convergence speed, illustrating the potential for widespread application of the upgraded algorithm in power machinery.

Convergence curve of planetary gear train design optimization problem.

The robot gripper problem is a challenging problem in mechanical structure engineering. The goal of the robot gripper optimization is to minimize the difference between the maximum and minimum gripping forces. The problem encompasses seven continuous design variables: the three link lengths ($a, b, c$), the vertical displacement of the linkage ($d$), the vertical distance between the first node of the robotic arm and the actuator end ($e$), the horizontal displacement between the actuator end and the linkage node ($f$), and the geometric angle between the second and third links ($\rho$). There are a total of seven distinct constraints. The mathematical description of the robot gripper optimization problem is as follows:

Variable:

$$x=\left({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6},{x}_{7}\right)=\left(a,b,c,d,e,f,p\right)$$

Function:


Cronyism Gets in the Way of Decentralizing the Department of Education – Centro de Periodismo Investigativo

Two weeks before Governor Pedro Pierluisi signed an executive order to create the Department of Education's (DE) Initiative for Educational Decentralization and Regional Autonomy (IDEAR, in Spanish), the Office of Management and Budget (OGP, in Spanish) had already awarded a legal services consulting contract and evaluated a proposal of more than $5 million for professional services to work on the plan that would grant greater autonomy to the Department of Education Regional Offices in the decision-making process.

As early as May 2, 2023, the OGP hired Consultora Legal PSC to identify critical legal considerations for the decentralization of the DE for $70,000. Three weeks later, it contracted IOTA Impact Company Inc. for $5.2 million, which established work guidelines for the creation of Local Educational Agencies (LEAs) to replace the DE's Regional Educational Offices.

While millions were pouring in to contract external advisory services, prestigious local academics and experts from different disciplines had voluntarily joined the working groups convened last year with the goal of contributing to the depoliticization of the agency, a problem that has plagued the DE, which has a budget of more than $31 billion, for years.

But this week at least nine of them resigned en masse after denouncing the politicization of the work committees and the process's lack of transparency and participation, especially after the arrival of Education Secretary Yanira Raíces.

A U.S. Department of Education spokesperson told the Center for Investigative Journalism (CPI, in Spanish) that: Since we started our work, community engagement has always been a key and necessary pillar for the decentralization of Puerto Rico's education system. We are disappointed about the departures of several members of the IDEAR committee and appreciate the concerns they have raised. We urge the Puerto Rico Department of Education to ensure the decentralization process is truly placing the needs of the community at the forefront of this important effort and that the right voices are at the table.

Governor Pierluisi assured that contracting IOTA was a recommendation from the federal government and insisted that they [the company] have experience in other jurisdictions.

IDEAR is funded with federal funds, and the Government of Puerto Rico has no interference in the contracts with IOTA or Consultora Legal PSC, OGP press spokeswoman Wilmelis Márquez assured. The U.S. Department of Education did not clarify who ordered the hiring of the companies.

Members of academia and the nonprofit sector who left the initiative this week requested that IDEARs current decentralization process be halted and that the first LEA to be implemented in August be abandoned. However, the Governor rejected that recommendation.

The work continues and there will be no delay in the established plan. The goal is for at least one local educational region (LEA) to begin operating in August, Pierluisi insisted.

The Governor downplayed the resignations in written statements. The reality is that we're on the right track and what we want is to have educational entities at the regional level with direct access to federal funds, with human resources, purchasing, and legal affairs divisions to meet their needs, said Pierluisi.

The pilot projects have already begun in three areas that include the schools of the municipalities of Utuado, Orocovis, Yauco, Guayanilla, Guánica, Mayagüez, Añasco and Hormigueros. Superintendents were also chosen for three pilot zones and Local Advisory Councils (CAL) were formed.

Among the experts who resigned are José Caraballo Cueto, economist and researcher, who has been part of the University of Puerto Rico's (UPR) Public Education Observatory; UPR Law School Dean Vivian Neptune; Eileen Segarra, professor at the UPR and director of the UPR Public Education Observatory; Ángeles Acosta, clinical psychologist and associate professor at the UPR Medical Sciences Campus; Janice Petrovich, a consultant to foundations and non-governmental organizations and former director of global education work at the Ford Foundation; Yolanda Cordero, professor at the UPR Graduate School of Public Administration and part of the UPR Public Education Observatory; Attorney Cecille Blondet, director of Espacios Abiertos, an organization that promotes citizen participation; Attorney Enrique R. Colón Bacó, expert in education issues from the Espacios Abiertos team; and Enery López, from Liga de Ciudades, an entity that seeks to unite local governments in a non-partisan effort.

More than being related to the external hiring, the experts' resignation from IDEAR is linked to their conviction that the promised depoliticization of the Department of Education and the proposed decentralization of the agency are no longer part of the discussions in the working groups formed for the process that began in 2023, they said.

In August 2023, Chris Soto, senior advisor to U.S. Secretary of Education Miguel Cardona, had explained to the CPI the goals of the plan that they promoted in the Puerto Rico Department of Education. An important part of the [decentralization] plan is how superintendents are chosen. That they aren't elected by their political party. Let's be clear, that's what people say. And this doesn't happen in other jurisdictions. So, what we discussed in the plan is establishing a process in which superintendents are chosen on merit. There are requirements. Same thing applies to directors. We're going to determine what the process is, so that it isn't a political decision, but rather that the director has the experience to run their school, Soto said.

But the reality was that while there was talk of depoliticization, Secretary Raíces named more officials in positions of trust to the working groups, led by the director of IDEAR, Roger Iglesias, son of the namesake former New Progressive Party (PNP, in Spanish) senator. This caused a constant struggle between those who believe in the democratization of education within the committees and the DE establishment, the CPI learned in conversations with people familiar with the process.

One of the DE's actions that most bothered members of IDEAR was that the guidelines sent by the Secretary of Education to the Legislature reversed much of what was established in the working groups. The guidelines seek to establish the duties, processes, and procedures that are currently carried out at the central level but would be delegated to the Regional Educational Offices.

The document sent to the Legislature perpetuated the centralization of power in the figure of the Secretary of Education and did not incorporate essential suggestions presented in the IDEAR working groups to include the participation of superintendents and CALs in DE regulations, transparency and allowing participatory processes in the definition of school budgets, among other proposals.

The IDEAR working groups became one-way information-management spaces in which the DE consults but doesn't respond to members' requests for information. This was the claim of nine members of the implementation committees, who on April 15 submitted a collective resignation letter to disassociate themselves from the process, dissatisfied with the fact that decisions continue to be made at the central level.

The determinations, they said, are being made hastily and without clear metrics on how the success of pilot projects in three school regions to establish the first LEAs will be evaluated.

Another issue that created the greatest controversy was agreeing on a formula to establish the budget per student (known as the per pupil formula) because, among other reasons, the economists that the DE used to establish it were not part of IDEAR nor were they related to the school community.

Last year, Iglesias was appointed as director of the initiative and eight teams were formed to address the areas of special education, human resources, reconstruction, operation of Local Educational Agencies, governance, purchasing processes, academic management, and finances. Some 162 people made up the eight groups.

The work in the groups was affected by constant last-minute changes of dates and times and the constant inclusion in each meeting of new DE representatives without disclosing positions or roles, effectively limiting participation, according to the letter from the group that resigned.

Of the Secretary of Education's 93 positions of trust, at least 14 sit on one of the eight IDEAR committees. Five of these employees are on two or even three work tables. For example, there are four trusted agency employees, along with three other DE administration officials, on the LEA Committee.

Raúl Colón Torres, special assistant to Secretary Raíces, who earns a monthly salary of $9,599, leads the LEA team. He was Interim Superintendent in the Bayamón region, as well as Director of School Management. That group also includes the DE's Secretary of Planning and Performance, Lydiana López, who earns a monthly salary of $6,821.

Another trusted member of this committee is the director of the Ponce regional educational office, Roberto J. Rodríguez Santiago, with a monthly salary of $8,050. Rodríguez Santiago is the Electoral Commissioner for the PNP in Ponce. Ángel Tardy Montalvo, special assistant to the Secretary and employee of the Special Education Student, Parent, and Community Services Unit, who earns $6,250 per month, is also part of the team. The Dean of the UPR Law School, Neptune, and the Doctor of Education and philanthropist Petrovich walked away from the committee this week.

The average annual salary of teachers in Puerto Rico is $33,000.

Meanwhile, López, of the Liga de Ciudades, resigned from the Governance Committee, a team led by Luis A. Orengo Morales, special assistant to the Secretary and donor to the Governor and the PNP. He also served as Interim Director for the San Juan region in 2017, from which he was removed after Hurricane Maria over his performance and the prolonged closure of schools that had not suffered serious damage. He was also Assistant Superintendent of Schools in Guaynabo.

Economist and Director of the UPR Census Information Center in Cayey, Caraballo Cueto, resigned from the team that seeks to decentralize the purchasing processes. This group is led by the Director of the DE Purchasing Office, Norma J. Rolón Barada, an official who has remained in charge of the Purchasing Office during PNP and Popular Democratic Party (PDP, in Spanish) administrations. At this worktable there are three more of Raíces' trusted employees: Duhamel Adames Rodríguez, Jullymar Octavianni and Wanda E. Muñoz Valle. Adames Rodríguez is Regional Superintendent, Octavianni is Undersecretary of Administration earning $10,884 per month, and Muñoz Valle is assistant to Undersecretary Luis R. González Rosario, who also directs one of the IDEAR working groups.

Jimmy Cabán, a donor to Pierluisi and other PNP figures, who earns a monthly salary of $9,142 as Assistant Secretary and oversees the Teaching Career program, heads the Human Resources Committee. The UPR denied him a professional certificate after plagiarism was detected in a project. In addition, he worked prominently in 2000 with early voting ballots in the State Election Commission. UPR professor Cordero resigned from this group, in which Adames Rodríguez and Octavianni are also members.

Caraballo Cueto and the professor and Director of the Education Observatory, Segarra, withdrew from participating in the Finance Committee. María Lizardi, a Pierluisi donor and former Assistant Secretary of Human Resources under the administration of convicted former Secretary Julia Keleher, leads this group. Octavianni is also at this worktable, along with another trusted employee, the superintendent of the San Juan region, Jorge A. Santiago Ramos.

Assistant Undersecretary for Academic and Programmatic Affairs Beverly Morro, who was mentioned in 2023 as a possible Secretary of Education, also heads one of IDEAR's working groups. She has been a donor to the PNP and Governor Pierluisi's committee. Associate Undersecretary Luis González Rosario, who is an engineer, heads the table related to the physical plant of the schools. His arrival at the DE in the summer of 2022 from the Department of Transportation and Public Works was said to have been imposed by La Fortaleza, although the then Secretary of Education Eliezer Ramos Parés denied it. He has been a frequent donor to the PNP, the Pierluisi committee and PNP Representative Gabriel Rodríguez Aguiló.

Points like those made by the resigning experts were brought up to Iglesias as early as December 11 in a letter signed by three members of the Local Advisory Councils (CAL). In that letter they stated, for example, that they only had five days to read the essays of the superintendent candidates in their area before sending a recommendation to the Secretary of Education. This is even though, as Soto, the assistant to the U.S. Secretary of Education, explained, the participatory, decentralized, and depoliticized selection of regional superintendents is supposed to be an essential part of this project promoted by the U.S. Department of Education.

The criteria or evaluation methodologies of the pilot projects were not explained to the School Councils either, they warned.

The pilot [projects] seem like standardized processes without the regional flexibility to respond to the needs of each region, said Eduardo Lugo Hernández, from the academic component of the CAL of the West region, and Helga Maldonado Domínguez and Gerardo Medina Rivera, both from the southern CAL family component.

As an example of the one-way work that the DE wants to impose, Medina Rivera, whose daughter goes to a school in Yauco, said he requested at a council meeting in February to visit schools throughout the southern region because by being with other parents I realize that I'm a little alienated from what's happening with other schools in the region and that he only knew what was happening at the school where my daughter goes. However, his request was denied after it was brought to the central level by IDEAR's representative on the council, Nidia Estrada. The explanation he was given was that the superintendent of the pilot, Anita Orengo, is the only one that can visit and let us know what's happening in the schools at the monthly meetings.

Maldonado Domínguez, meanwhile, resigned from continuing with the project. Her vacancy was briefly filled by Mayra L. Acosta Muñiz. I joined the CAL at the request of Mr. [former DE Secretary] Eliezer Ramos Parés, said Acosta Muñiz. I have a 30-year-old Special Education son who is still active in the DE. When the vacancy came up, I joined, but I was only there for a week because from the first day I said that I didn't want to be just a rubber stamp. On March 23 she submitted her resignation to Iglesias.

I was skeptical when I entered the process, but with the intention of overseeing it and in some way letting them know that there are people watching. But as for the idea that it was going to depoliticize, we're fighting against a very big monster; now in meetings saying that word is like saying a bad word, said Medina Rivera.

It has become clear to us that our participation at this time is symbolic, and we would be doing education a disservice if we became rubber stamps to the extent that the process has been distorted from its initial objective, stated the group of educators and advisors who withdrew from the initiative.

Consultora Legal, chaired by Alberto C. Rodríguez Pérez, was hired to identify the need for new regulations, standards and guides, or amendments, to achieve a decentralization approach. The vice president of this firm, María Vázquez Graziani, is a frequent donor to Governor Pedro Pierluisi's campaign committee, to which she has already donated $6,200 this year. That attorney and Rodríguez Pérez have also made donations to the PNP and several of its candidates.

IOTA Impact, incorporated in 2017 in Delaware and with offices in New York, was registered in Puerto Rico in March 2023, and on April 8 of that year it submitted its proposal to the government of Puerto Rico for the transformation of the education system. Its experience in the educational sector, as observed in the service proposal submitted to the OGP, is in Colombia. The company stated that it developed a strategic plan there for the decentralization of seven vocational institutes of the Ministry of National Education and designed the organizational restructuring of Colciencias, a public organization to promote scientific development. Other IOTA work focuses on Colombian private schools.

IOTA is a marketing research and public opinion survey company, according to several company directories that the CPI consulted, and as the company described itself on its website and in its service proposals. On its Facebook page, it promotes its work in Puerto Rico as a success story in which we implement support in one of the 10 largest education systems in the United States with 45,000 employees and 250,000 students.

In the presentation of its work in Puerto Rico on the social network, it urges people to: Contact us now and discover how we can work together to boost the success of your business with new solutions and tangible results!

On September 27, the corporations contract was amended to extend its term and increase it to $9.7 million. The amendment came after IOTA submitted a proposal on September 19 detailing a timeline for implementing the six pillars of transformation identified in the first IDEAR Executive Committee report. A second amendment was signed on January 26, 2024 to extend its validity until December 31. Services are billed between $400 and $115 per hour.


Vitalik Buterin Reminds Everyone About Main Goal of Crypto – TradingView

Ethereum co-founder Vitalik Buterin reminded us that crypto is not about trading digital assets; it is about liberty and decentralization. His statement raises crucial questions about the role of cryptocurrencies in fostering freedom and privacy in the face of global surveillance concerns.

Buterin's assertion underlines a disconcerting trend where individual rights can potentially be compromised by expansive surveillance measures. The fear that governmental powers could misuse such capabilities to monitor adversaries or the public is not unfounded. The ethos of crypto was birthed as a countermeasure to such centralizations of power, aiming to distribute control back to individuals.

However, the cryptocurrency landscape, including Ethereum, faces its own paradoxes. Despite the decentralized ideals, a significant portion of Ethereum's transactions have encountered censorship, most notably through compliance with Office of Foreign Assets Control (OFAC) sanctions. This contradiction stirred considerable debate in the cryptocurrency community and even became a topic of existential discussion within the Ethereum community.

Moreover, Ethereum's shift from proof of work (PoW) to proof of stake (PoS) in its consensus mechanism has been touted as a step toward greater efficiency and environmental sustainability. Nonetheless, PoS does not necessarily lead to more decentralization. In PoS, those with larger stakes or more tokens have more influence, potentially leading to concentration of power, which is at odds with the fundamental crypto principle of equalizing power distribution, despite the same issue existing in the PoW environment.
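
A toy sketch of the concentration effect described above: in a stake-weighted lottery, the probability of being chosen to propose a block is proportional to stake, so a validator holding most of the stake is selected most of the time. The stake figures are invented for illustration and do not model any real protocol's full validator selection logic.

```python
import random

# Invented stake values (in tokens) for three hypothetical validators.
stakes = {"validator_a": 1_000_000, "validator_b": 50_000, "validator_c": 10_000}

def pick_proposer(stakes, rng=random):
    """Select a block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

picks = [pick_proposer(stakes) for _ in range(10_000)]
share_a = picks.count("validator_a") / len(picks)
print(f"validator_a proposed {share_a:.1%} of blocks")  # roughly 94%
```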

Ethereum's value has seen considerable volatility after the most recent market-wide correction. Recent trends show resilience after the price returned above $3,000, but the second-biggest cryptocurrency has yet to show its true potential, as the post-halving rally is expected to push ETH at least toward its previous all-time high of approximately $5,000.

Read the original post:

Vitalik Buterin Reminds Everyone About Main Goal of Crypto - TradingView

Read More..

The ethics of advanced AI assistants – Google DeepMind

Responsibility & Safety

Iason Gabriel and Arianna Manzini

Exploring the promise and risks of a future with more capable AI

Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

General-purpose foundation models are paving the way for increasingly advanced AI assistants. Capable of planning and performing a wide range of actions in line with a person's aims, they could add immense value to people's lives and to society, serving as creative partners, research analysts, educational tutors, life planners and more.

They could also bring about a new phase of human interaction with AI. This is why it's so important to think proactively about what this world could look like, and to help steer responsible decision-making and beneficial outcomes ahead of time.

Our new paper is the first systematic treatment of the ethical and societal questions that advanced AI assistants raise for users, developers and the societies they're integrated into, and provides significant new insights into the potential impact of this technology.

We cover topics such as value alignment, safety and misuse, the impact on the economy, the environment, the information sphere, access and opportunity and more.

This is the result of one of our largest ethics foresight projects to date. Bringing together a wide range of experts, we examined and mapped the new technical and moral landscape of a future populated by AI assistants, and characterized the opportunities and risks society might face. Here we outline some of our key takeaways.

Illustration of the potential for AI assistants to impact research, education, creative tasks and planning.

Advanced AI assistants could have a profound impact on users and society, and be integrated into most aspects of people's lives. For example, people may ask them to book holidays, manage social time or perform other life tasks. If deployed at scale, AI assistants could impact the way people approach work, education, creative projects, hobbies and social interaction.

Over time, AI assistants could also influence the goals people pursue and their path of personal development through the information and advice assistants give and the actions they take. Ultimately, this raises important questions about how people interact with this technology and how it can best support their goals and aspirations.

Illustration showing that AI assistants should be able to understand human preferences and values.

AI assistants will likely have a significant level of autonomy for planning and performing sequences of tasks across a range of domains. Because of this, AI assistants present novel challenges around safety, alignment and misuse.

With more autonomy comes greater risk of accidents caused by unclear or misinterpreted instructions, and greater risk of assistants taking actions that are misaligned with the user's values and interests.

More autonomous AI assistants may also enable high-impact forms of misuse, like spreading misinformation or engaging in cyber attacks. To address these potential risks, we argue that limits must be set on this technology, and that the values of advanced AI assistants must better align to human values and be compatible with wider societal ideals and standards.

Illustration of an AI assistant and a person communicating in a human-like way.

Because advanced AI assistants can communicate fluidly in natural language, their written output and voices may become hard to distinguish from those of humans.

This development opens up a complex set of questions around trust, privacy, anthropomorphism and appropriate human relationships with AI: How can we make sure users can reliably identify AI assistants and stay in control of their interactions with them? What can be done to ensure users aren't unduly influenced or misled over time?

Safeguards, such as those around privacy, need to be put in place to address these risks. Importantly, people's relationships with AI assistants must preserve the user's autonomy, support their ability to flourish and not rely on emotional or material dependence.

Illustration of how interactions between AI assistants and people will create different network effects.

If this technology becomes widely available and deployed at scale, advanced AI assistants will need to interact with each other, with users and non-users alike. To help avoid collective action problems, these assistants must be able to cooperate successfully.

For example, thousands of assistants might try to book the same service for their users at the same time, potentially crashing the system. In an ideal scenario, these AI assistants would instead coordinate on behalf of human users and the service providers involved to discover common ground that better meets different people's preferences and needs.
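
As a rough, hypothetical illustration of that collective action problem (the slot counts, capacity and scheduler below are invented for this sketch and are not taken from the paper), compare what happens when every assistant greedily requests the same popular slot versus when a simple coordinator spreads requests across slots with spare capacity:

```python
from collections import Counter

# Hypothetical toy model of the booking collision described above.
N_ASSISTANTS, N_SLOTS, CAPACITY = 1_000, 50, 20  # 50 slots x 20 seats = 1,000 seats

# Uncoordinated: every assistant requests the single most popular slot.
uncoordinated_load = Counter(0 for _ in range(N_ASSISTANTS))
rejected_uncoordinated = sum(max(0, c - CAPACITY) for c in uncoordinated_load.values())

# Coordinated: a scheduler assigns each request to the least-loaded slot
# that still has capacity, spreading demand evenly.
load = [0] * N_SLOTS
rejected_coordinated = 0
for _ in range(N_ASSISTANTS):
    slot = min(range(N_SLOTS), key=lambda s: load[s])
    if load[slot] < CAPACITY:
        load[slot] += 1
    else:
        rejected_coordinated += 1

print("rejected without coordination:", rejected_uncoordinated)  # 980
print("rejected with coordination:   ", rejected_coordinated)    # 0
```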

Given how useful this technology may become, it's also important that no one is excluded. AI assistants should be broadly accessible and designed with the needs of different users and non-users in mind.

Illustration of how evaluations on many levels are important for understanding AI assistants.

AI assistants could display novel capabilities and use tools in new ways that are challenging to foresee, making it hard to anticipate the risks associated with their deployment. To help manage such risks, we need to engage in foresight practices that are based on comprehensive tests and evaluations.

Our previous research on evaluating social and ethical risks from generative AI identified some of the gaps in traditional model evaluation methods and we encourage much more research in this space.

For instance, comprehensive evaluations that address the effects of both human-computer interactions and the wider effects on society could help researchers understand how AI assistants interact with users, non-users and society as part of a broader network. In turn, these insights could inform better mitigations and responsible decision-making.

We may be facing a new era of technological and societal transformation inspired by the development of advanced AI assistants. The choices we make today, as researchers, developers, policymakers and members of the public, will guide how this technology develops and is deployed across society.

We hope that our paper will function as a springboard for further coordination and cooperation to collectively shape the kind of beneficial AI assistants we'd all like to see in the world.

Paper authors: Iason Gabriel, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks, Verena Rieser, Hasan Iqbal, Nenad Tomašev, Ira Ktena, Zachary Kenton, Mikel Rodriguez, Seliem El-Sayed, Sasha Brown, Canfer Akbulut, Andrew Trask, Edward Hughes, A. Stevie Bergman, Renee Shelby, Nahema Marchal, Conor Griffin, Juan Mateos-Garcia, Laura Weidinger, Winnie Street, Benjamin Lange, Alex Ingerman, Alison Lentz, Reed Enger, Andrew Barakat, Victoria Krakovna, John Oliver Siy, Zeb Kurth-Nelson, Amanda McCroskery, Vijay Bolina, Harry Law, Murray Shanahan, Lize Alberts, Borja Balle, Sarah de Haas, Yetunde Ibitoye, Allan Dafoe, Beth Goldberg, Sébastien Krier, Alexander Reese, Sims Witherspoon, Will Hawkins, Maribeth Rauh, Don Wallace, Matija Franklin, Josh A. Goldstein, Joel Lehman, Michael Klenk, Shannon Vallor, Courtney Biles, Meredith Ringel Morris, Helen King, Blaise Agüera y Arcas, William Isaac and James Manyika.

Visit link:
The ethics of advanced AI assistants - Google DeepMind

Read More..

Google Consolidates AI-Building Teams Into DeepMind – PYMNTS.com

Google is consolidating the teams that focus on building artificial intelligence (AI) models across Google Research and Google DeepMind.

"All this work will now be done within Google DeepMind," Sundar Pichai, CEO of Google and Alphabet, said in a note to employees posted on the company's website Thursday (April 18).

Pichai said in the note that this move will "scale our capacity to deliver capable AI for our users, partners and customers."

It will simplify development by concentrating compute-intensive model building in one place, establish single access points for those looking to take these models and build generative AI applications, and give Google Research a clear and distinct mandate to invest in three key areas: computing systems, foundational machine learning and algorithms, and applied science and society, according to the note.

This move comes a year after the company created Google DeepMind by bringing together Google Brain, DeepMind and other researchers focused on AI systems, per the note. This group developed the company's Gemini models.

The letter also announced changes to the way Google's Responsible AI teams work.

Those in Research have been moved to DeepMind, other responsibility teams have been moved to the central Trust and Safety team, and the company is increasing investment in testing AI-powered features for vulnerabilities, the note said.

"These changes continue the work we've done over the past year to simplify our structure and improve velocity and execution such as bringing together the Brain team in Google Research with teams in DeepMind, which helped accelerate our Gemini models; unifying our ML infrastructure and ML developer teams to enable faster decisions, smarter compute allocation and a better customer experience; and bringing our Search teams under one leader," Pichai said in the note.

It was reported Tuesday (April 16) that Google's spending on AI will surpass $100 billion.

When asked at a conference about reports that Google rivals Microsoft and OpenAI plan to spend $100 billion on an AI supercomputer known as Stargate, DeepMind CEO Demis Hassabis said: "We don't talk about our specific numbers, but I think we're investing more than that over time."

Originally posted here:
Google Consolidates AI-Building Teams Into DeepMind - PYMNTS.com

Read More..

Google will outpace Microsoft in AI investment, DeepMind CEO says – TNW

We have all been guilty of falling under the foundation model spell of the past year-and-a-half, initiated by OpenAI's unveiling of ChatGPT to the public.

But it is not only where large language models (LLMs) such as GPT-4 are concerned that incredible progress has been made in the field of artificial intelligence. And one company has been behind more impressive milestones than most: DeepMind, acquired by Google in 2014 for a reported $400mn to $650mn.

Speaking at the TED 40th anniversary conference in Vancouver, Canada, on Monday, DeepMind's CEO and head of Google's entire AI R&D efforts, Demis Hassabis, confirmed that Google has no intention of slowing down investment in the technology. Quite the opposite.

While Hassabis said Google does not talk about specific numbers, he indicated that the company will surpass the $100 billion that Microsoft and OpenAI plan to invest in their Stargate AI supercomputer over the coming years.

"We are investing more than that over time, and that is one of the reasons we teamed up with Google," Hassabis said. "We knew that in order to get to AGI, we would need a lot of compute, and Google had, and still has, the most computers."

While this sounds like the perfect scenario for an artificial intelligence arms race that could lead to rolling the dice on things like reinforcement learning and AI safety, Hassabis reiterated that this must be avoided.

According to the DeepMind CEO, this is especially important as we come nearer to achieving artificial general intelligence: AI that can match or surpass human cognitive abilities such as reasoning, planning, and remembering.

"This technology is still relatively nascent, and so it's probably OK what is happening at the moment," Hassabis said. "But as we get closer to AGI, we need to start thinking as a society about the types of architectures that get built. The good news is that most of these scientists who are working on this, we know each other quite well, we talk to each other a lot at conferences," Hassabis stated. (Raise your hand if you are only mildly reassured by this particular piece of information.)

Hassabis further added that learning to build safe AGI architectures is a kind of bottleneck that humanity needs to get through in order to emerge on the other side to a flourishing of many different types of systems that build on the initial ones and come with mathematical or practical guarantees around what they do.

The responsibility for preventing a runaway race dynamic, Hassabis believes, rests not only with AI industry labs but also with many other parts of society: governments, civil society, and academia. "If we get this right, we could be in this incredible new era of radical abundance, curing all diseases, spreading consciousness to the stars, and maximum human flourishing."

Read the original here:
Google will outpace Microsoft in AI investment, DeepMind CEO says - TNW

Read More..