
UB AI expert Doermann testifies before Congress on threat of … – University at Buffalo

The last time UB artificial intelligence expert David Doermann testified before Congress, in 2019, he warned lawmakers about the dangers of deepfakes and other synthetic media.

Since then, the threat has only grown, Doermann said during his latest appearance on Capitol Hill.

Speaking Nov. 8 to members of the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation, Doermann again urged lawmakers to invest more resources into ensuring the technology is not misused.

"As these technologies advance at an unprecedented rate, it is crucial to recognize the potential for both positive and negative implications. Every week, we hear of its use at both ends of the spectrum," he said.

"This week, we heard about AI being used to finish a new Beatles song on one hand and to generate nude images of classmates by a high schooler in New Jersey on the other. Despite our president's executive orders and the testimony of our thought and business leaders, we are not moving fast enough to curtail the continued damage this technology is doing and will continue to do as it evolves," he said.

A SUNY Empire Innovation Professor and interim chair of the Department of Computer Science and Engineering, Doermann elaborated on the many ways in which synthetic or manipulated digital content can cause harm.

"Not only has it been used in non-consensual pornography, cyberbullying and harassment, causing severe harm to individuals; the potential national security implications are grave. Deepfakes can be exploited to impersonate government officials, military personnel or law enforcement, leading to misinformation and potentially dangerous situations," said Doermann, who before arriving at UB worked for the Defense Advanced Research Projects Agency (DARPA), where he oversaw the agency's media forensics program and other programs related to the use of human language technologies.

As deepfake technology becomes more sophisticated, Doermann advocated for federal policies that govern its use.

"I urge you to consider legislation and regulations to address the misuse of deepfake technology. Striking the right balance between free speech and safeguards to protect against malicious uses of deepfakes is essential," he told subcommittee members. "First and foremost, public awareness and digital literacy programs are vital in helping individuals recognize deepfakes and false information."

At UB, researchers are tackling this problem with federal support. Examples include the Center for Information Integrity, whose researchers are developing tools to help older adults and children spot online deceptions, as well as work by researchers on the DARPA Semantic Media Forensic Program.

The challenges facing society, Doermann said, are complex and pervasive enough that federal resources alone will not alleviate them.

"Collaboration between Congress and technology companies is essential to address the challenges posed by deepfakes. Tech companies are responsible for developing and implementing policies to detect and mitigate deepfake content on their platforms," he said. "More robust privacy and consent laws are needed to protect individuals from the use of their likeness and voice in deepfake content without their permission. Continued research and development in AI and deepfake technology are necessary, as is funding for initiatives to counter deepfake misuse."

To see Doermann's entire testimony, visit the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation's website.


What is Google Gemini? CEO Sundar Pichai says ‘excited’ about the innovation – Business Today

Google's Gemini, hailed by CEO Sundar Pichai as an exciting innovation, has been making waves since its announcement. This development, following the seismic impact of ChatGPT's launch last November, prompted Google to take decisive action, investing substantially in catching up with the generative AI trend. This concerted effort led not only to the introduction of Google Bard but also to the unveiling of Google Gemini.

"We are building our next generation of models with Gemini and I am extraordinarily excited at the innovation coming ahead. I expect it to be a golden age of innovation ahead and can't wait to bring all the innovations to more people," Pichai recently said at the APEC CEO Conference.

What exactly is Google Gemini?

Gemini represents a suite of large language models (LLMs) employing training methodologies akin to those used in AlphaGo, integrating reinforcement learning and tree search techniques. It holds the potential to challenge ChatGPT's dominance as the premier generative AI solution globally.

It emerged mere months after Google amalgamated its Brain and DeepMind AI labs to establish a new research entity known as Google DeepMind. It also follows swiftly on the heels of Bard's launch and the introduction of its advanced PaLM 2 LLM.

While expectations suggest a potential release of Google Gemini in the autumn of 2023, comprehensive details regarding its capabilities remain elusive.

In May, Sundar Pichai, CEO of Google and Alphabet, shared a blog post offering a broad overview of the LLM, stating: "Gemini was purpose-built from the ground up to be multimodal, boasting highly efficient tool and API integrations, and designed to facilitate future innovations such as memory and planning."

Pichai also highlighted, "Despite being in its early stages, we are already witnessing remarkable multimodal capabilities not previously seen in earlier models. Once fine-tuned and rigorously assessed for safety, Gemini will be available in various sizes and functionalities, akin to PaLM 2."

Since then, official disclosures about its release have been scarce. Google DeepMind CEO Demis Hassabis, in an interview with Wired, hinted at Gemini's capabilities, mentioning its amalgamation of AlphaGo's strengths with the impressive language capabilities of large models.

According to Android Police, an anonymous source associated with the product suggested that Gemini will generate text alongside contextual images, drawing on sources such as YouTube video transcripts.

Challenges on the horizon

Google's extensive endeavour to catch up with OpenAI, the creators of ChatGPT, appears to be more challenging than initially anticipated, as reported by The Information.

Earlier this year, Google informed select cloud clients and business partners that they would gain access to the company's new conversational AI, the substantial language model Gemini, by November.

However, the company recently notified them to expect it in the first quarter of the following year, as revealed by two individuals with direct insight. This delay poses a significant challenge for Google, particularly amidst the slowdown in its cloud sales growth, contrasting with the accelerated growth of its larger rival, Microsoft. A portion of Microsoft's success can be attributed to selling OpenAI's technology to its customer base.



AI Unleashed: Transforming Humanity – Medium

Introduction: Artificial Intelligence (AI) has not only emerged from the annals of science fiction but has firmly planted itself as a cornerstone in multiple sectors. Its various forms, from machine learning to deep learning, are driving unprecedented change. While these advances are groundbreaking, they also necessitate a critical examination of AI's potential risks to humanity.

1. Machine Learning in Financial Tech: Machine learning, a critical facet of AI, is upending traditional finance. JPMorgan's COIN platform exemplifies this, using ML to deconstruct commercial loan agreements, a task once demanding hundreds of thousands of man-hours. Beyond efficiency, ML in finance also extends to fraud detection and algorithmic trading, creating systems that are not only faster but more secure and intelligent.

2. Deep Learning's Impact on Healthcare: Deep learning, celebrated for its pattern recognition capabilities, is revolutionizing healthcare. Google's DeepMind, for instance, uses deep learning algorithms to accurately diagnose diseases such as cancer, dramatically improving early detection rates. This advancement transcends traditional diagnostics, offering a glimpse into a future where AI partners with medical professionals to save lives.

3. Supervised Learning in E-Commerce: E-commerce giants like Amazon and Netflix harness supervised learning to power recommendation engines, offering personalized experiences to users. This approach leverages massive datasets to predict customer preferences, transforming browsing into a curated experience that drives both satisfaction and revenue.

4. Unsupervised Learning in Marketing: Unsupervised learning is reshaping marketing by uncovering hidden patterns in consumer data. This AI form enables businesses to segment their markets more effectively, crafting targeted strategies that resonate with distinct customer groups.

5. Neural Networks in the Automotive Industry: The automotive industry's leap into the future is powered by neural networks, particularly in developing autonomous vehicles. Tesla's self-driving cars, which use Convolutional Neural Networks (CNNs) for image recognition and decision-making, exemplify AI's role in enhancing road safety and redefining transportation.

6. NLP Revolutionizing Customer Service: Natural Language Processing (NLP) has transformed customer service. AI-driven chatbots and virtual assistants, used by companies like Apple and Amazon, offer instant, intelligent customer interactions. This innovation not only enhances customer experience but also streamlines operations.

7. Reinforcement Learning in Gaming and Robotics: In gaming and robotics, reinforcement learning is making significant strides. DeepMind's AlphaGo, which outplayed human Go champions, illustrates AI's potential in strategic decision-making. Robotics, too, benefits from this AI form, creating machines that learn and adapt like never before.

Theoretical Risks of AI: AI's rapid advancement, however, brings potential risks. Automation could lead to significant job displacement. In cybersecurity, AI-enhanced attacks present sophisticated new challenges. Philosophically, the concept of an AI singularity, where AI outstrips human intelligence, raises concerns about uncontrollable outcomes that may not align with human ethics.

Conclusion: AI's integration across sectors demands a nuanced approach, balancing its transformative potential with ethical considerations. By comprehending AI's capabilities and fostering robust ethical frameworks, we can harness AI's power responsibly, ensuring it serves humanity's best interests.


Researchers seek consensus on what constitutes Artificial General Intelligence – Tech Xplore


by Peter Grad, Tech Xplore


A team of researchers at DeepMind focusing on the next frontier of artificial intelligence, Artificial General Intelligence (AGI), realized they needed to resolve one key issue first. What exactly, they asked, is AGI?

It is often viewed in general as a type of artificial intelligence that possesses the ability to understand, learn and apply knowledge across a broad range of tasks, operating like the human brain. Wikipedia broadens the scope by suggesting AGI is "a hypothetical type of intelligent agent [that] could learn to accomplish any intellectual task that human beings or animals can perform."

OpenAI's charter describes AGI as a set of "highly autonomous systems that outperform humans at most economically valuable work."

AI expert and founder of Geometric Intelligence Gary Marcus defined it as "any intelligence that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence."

With so many variations in definitions, the DeepMind team embraced a simple notion voiced centuries ago by Voltaire: "If you wish to converse with me, define your terms."

In a paper published on the preprint server arXiv, the researchers outlined what they termed "a framework for classifying the capabilities and behavior of AGI models."

In doing so, they hope to establish a common language for researchers as they measure progress, compare approaches and assess risks.

"Achieving human-level 'intelligence' is an implicit or explicit north-star goal for many in our field," said Shane Legg, who introduced the term AGI 20 years ago.

In an interview with MIT Review, Legg explained, "I see so many discussions where people seem to be using the term to mean different things, and that leads to all sorts of confusion. Now that AGI is becoming such an important topic we need to sharpen up what we mean."

In the arXiv paper, titled "Levels of AGI: Operationalizing Progress on the Path to AGI," the team summarized several principles required of an AGI model. They include a focus on the capabilities of a system, not the process.

"Achieving AGI does not imply that systems 'think' or 'understand' [or] possess qualities such as consciousness or sentience," the team emphasized.

An AGI system must also have the ability to learn new tasks, and know when to seek clarification or assistance from humans for a task.

Another parameter is a focus on potential, and not necessarily actual deployment of a program. "Requiring deployment as a condition of measuring AGI introduces non-technical hurdles such as legal and social considerations, as well as potential ethical and safety concerns," the researchers explained.

The team then compiled a list of intelligence thresholds ranging from "Level 0, No AGI," to "Level 5, Superhuman." Levels 1–4 included "Emerging," "Competent," "Expert" and "Virtuoso" levels of achievement.

Three programs met the threshold of the label AGI. But those three, generative text models (ChatGPT, Bard and Llama 2), reached only "Level 1, Emerging." No other current AI programs met the criteria for AGI.

Other programs listed as AI included SHRDLU, an early natural language understanding computer developed at MIT, listed at "Level 1, Emerging AI."

At "Level 2, Competent" are Siri, Alexa and Google Assistant. The grammar checker Grammarly ranks at "Level 3, Expert AI."

Higher up this list, at "Level 4, Virtuoso," are Deep Blue and AlphaGo. Topping the list, "Level 5, Superhuman," are DeepMind's AlphaFold, which predicts a protein's 3D structure from its amino acid sequence; and StockFish, a powerful open-source chess program.

However, there is no single proposed definition for AGI, and there is constant change.

"As we gain more insights into these underlying processes, it may be important to revisit our definition of AGI," says Meredith Ringel Morris, Google DeepMind's principal scientist for human and AI interaction.

"It is impossible to enumerate the full set of tasks achievable by a sufficiently general intelligence," the researchers said. "As such, an AGI benchmark should be a living benchmark. Such a benchmark should therefore include a framework for generating and agreeing upon new tasks."

More information: Meredith Ringel Morris et al, Levels of AGI: Operationalizing Progress on the Path to AGI, arXiv (2023). DOI: 10.48550/arxiv.2311.02462



why Smart Contracts are living up-to their name & how we Benefit … – Medium

Why Smart Contracts are living up to their name & how we Benefit from them, greatly.

I have allowed myself to indulge in a sexy micro-blogging platform more commonly known as X (previously called Twitter). My username is @baller_777; you can check it out if you'd like.

Wanna build a successful business/venture? Do this:

IN THE 2000s: build a website

IN THE 2010s: build an application

IN THE 2020s: build smart-contracts

That tweet got me thinking about a potential technological revolution we may or may not witness, just like the Internet.

Let me establish some context: during the early 2020s, the COVID pandemic was still ongoing, and in that period terms like "cryptocurrency" and "web 3.0", "bitcoin" and "blockchain", and of course our favorite "doge", became the talk of the town. All of these terms trace back to a single technological domain: blockchain. Smart contracts are one piece of it, but evidently a very crucial one.

As for the technicalities, it is quite simple.

Nevertheless, we all know the impact the internet has on our lives now. Hence, as a revolutionizer once put it, "differentiation is the only way of survival and the universe wants you to be typical", I believe it is fair to say that we might just be heading towards another "passing fad" (which is code for revolution) in our lives.

For The Benefit Of Humans.


Deep learning-based solution for smart contract vulnerabilities … – Nature.com

The CodeBERT model has state-of-the-art performance in tasks related to programming language processing [23]. It captures semantic connections between natural language and programming language. According to Yuan et al. [34], CodeBERT can achieve 61% accuracy in software vulnerability discovery, which is generally higher than the mainstream models Word2Vec [35], FastText [36] and GloVe [37] (46%, 41% and 29%, respectively). In our research, smart contracts are written in the programming language Solidity. Therefore, we optimize the CodeBERT model and employ it in our study. CNN is a commonly used and typical deep learning model with excellent generality in processing images and texts. LSTM is also a deep learning model; it excels at processing long texts and can effectively learn temporal sequences in texts, which CNN is not well suited to do. Both CNN and LSTM have achieved significantly high accuracy (0.958 and 0.959, respectively) in source code vulnerability detection, according to Xu et al. [38]. We employ the CNN and LSTM models as comparisons with the CodeBERT model and further analyze their performance on our tasks.

Figure 1 illustrates the complete process of developing a vulnerability detection model called Lightning Cat for smart contracts, which consists of three stages. The first stage involves building and preprocessing the labeled dataset of vulnerable Solidity code. In the second stage, three models (Optimized-CodeBERT, Optimized-LSTM, and Optimized-CNN) are trained and their performance is compared to determine the best one. Finally, in the third stage, the selected model is evaluated using the SolidiFI-benchmark dataset to assess its effectiveness in detecting vulnerabilities.

Lightning Cat Model Development Process.

During the data preprocessing phase, we collect three datasets and subsequently perform data cleaning. Finally, we employ the CodeBERT model to encode the data.

Our primary training dataset comprises three main sources: 10,000 contracts from the Slither Audited Smart Contracts Dataset [39], 20,000 contracts from smartbugs-wild [40], and 1,000 typical smart contracts with vulnerabilities identified through expert audits, for a total of 31,000 contracts. To effectively compare results with other auditing tools, we use the SolidiFI-benchmark dataset [41] as our test set, a dataset of contracts containing 9,369 bugs.

The SolidiFI-benchmark test set covers three static detection tools (Slither, Mythril, and SmartCheck) as well as four common vulnerability types identified by all of them: Re-entrancy, Timestamp-Dependency, Unhandled-Exception, and tx.origin. To ensure the completeness and fairness of the results, our proposed Lightning Cat model focuses on these four types of vulnerabilities for comparison. Table 1 displays the mapping between the four vulnerability types and the three auditing tools.

Considering that a complete contract might consist of multiple Solidity files and a single Solidity file might contain several vulnerable code snippets, we utilized the Slither tool to extract 30,000 functions containing these four types of vulnerabilities from the data sources [39,40]. Additionally, we manually annotated the problematic code snippets within the contracts audited by experts, for a total of 1,909 snippets. The training set comprises 31,909 code snippets. For the test set, we extracted 5,434 code snippets related to these four vulnerabilities from the SolidiFI-benchmark dataset. The processing procedures for the training and test sets can be seen in Fig. 2.

The length of a smart contract typically depends on its functionality and complexity. Some complex contracts can exceed several thousand tokens. However, handling long text has been a long-standing challenge in deep learning [42]. Transformer-based models can only handle a maximum of 512 tokens, two of which are reserved for special tokens. Therefore, we attempted two methods to address the issue of text length exceeding 510 tokens.

In the first method, the data is split into chunks of 510 tokens each, and all the chunks are assigned the same label. For example, if we have a group of Re-entrancy vulnerability code with a length of 2,000 tokens, it would be split into four chunks, each containing 510 tokens. If there are chunks with fewer than 510 tokens, we pad them with zeros. However, the training results show that the model's loss does not converge. We speculate that this is due to the introduction of noise from unrelated chunks, which negatively affects the model's generalization capability.

In the second method, audit experts extracted the function code of vulnerabilities from smart contracts and assigned corresponding vulnerability labels. If the extracted code exceeds 510 tokens, it is truncated; if the code falls short of 510 tokens, it is padded with zeros. This approach ensures a consistent input data length, addresses the length limitation of Transformer models, and preserves the characteristics of the vulnerabilities.

After comparing the two methods, we observed that training on vulnerability-based function code helped the model's loss function converge better. Therefore, we chose to use this data processing method in subsequent experiments. Additionally, we removed unrelated characters such as comments and newline characters from the functions to enhance the model's performance. As shown in Fig. 3, we only extracted the function parts containing the vulnerability code, reducing the length of the training dataset while maintaining the vulnerability characteristics. This approach not only improves the model's accuracy, but also enhances its generalization ability.

Extraction of Vulnerable Function Code (We partition the smart contract as a whole and extract only the functions where the vulnerabilities are present. In the provided image, we focus on the withdrawALL function, which serves as our training dataset. If a contract contains multiple vulnerabilities, we extract multiple corresponding functions).
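To make the chosen function-level preprocessing concrete, here is a minimal Python sketch of the fixed-length step described above; the function name and pad value are illustrative, not taken from the paper's code.

```python
def pad_or_truncate(token_ids, max_len=510, pad_id=0):
    """Fix a token-ID sequence to exactly max_len entries:
    truncate long functions, zero-pad short ones (as described above)."""
    if len(token_ids) >= max_len:
        return token_ids[:max_len]
    return token_ids + [pad_id] * (max_len - len(token_ids))

# Example: a 7-token function body padded out to length 10.
print(pad_or_truncate([5, 8, 13, 21, 34, 55, 89], max_len=10))
# -> [5, 8, 13, 21, 34, 55, 89, 0, 0, 0]
```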

CodeBERT is a pretraining model based on the Transformer architecture, specifically designed for learning and processing source code. By undergoing pretraining on extensive code corpora, CodeBERT acquires knowledge of the syntax and semantic relationships inherent in source code, as well as the interactive dynamics between different code segments.

During the data preprocessing stage, CodeBERT is employed due to its strong representation ability. The source code undergoes tokenization, where it is segmented into tokens that represent semantic units. Subsequently, the tokenized sequence is encoded into numerical representations, with each token mapped to a unique integer ID, forming the input token ID sequence. To meet the models input requirements, padding and truncation operations are applied, ensuring a fixed sequence length. Additionally, an attention mask is generated to distinguish relevant positions from padded positions containing invalid information. Thus, the processed data includes input IDs and attention masks, transforming the source code text into a numericalized format compatible with the model while indicating the relevant information through the attention mask.
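As an illustration of this encoding step, the sketch below uses the Hugging Face transformers library with the public microsoft/codebert-base checkpoint; the Solidity snippet and parameter choices are ours, not the paper's.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

code = 'function withdrawAll() public { msg.sender.call{value: bal}(""); }'
encoded = tokenizer(
    code,
    padding="max_length",  # zero-pad short sequences
    truncation=True,       # cut sequences over the model limit
    max_length=512,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)        # torch.Size([1, 512])
print(encoded["attention_mask"][0, :8])  # 1 marks real tokens, 0 padding
```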

For Optimized-LSTM and Optimized-CNN models, direct processing of input IDs and masks is not feasible. Therefore, CodeBERT is utilized to further process the data and convert it into tensor representations of embedding vectors. The input IDs and attention masks obtained from the preprocessing steps are passed to the CodeBERT model to obtain meaningful representations of the source code data. These embedding vectors can be used as inputs for Optimized-LSTM and Optimized-CNN models, facilitating their integration for subsequent vulnerability detection.

In the current stage, our approach involves the utilization of three machine learning models: Optimized-CodeBERT, Optimized-LSTM, and Optimized-CNN. The CodeBERT model is specifically fine-tuned to enhance its compatibility with the target task by accepting preprocessed input IDs and attention masks as input. However, in the case of Optimized-LSTM and Optimized-CNN models, we do not conduct any fine-tuning on the CodeBERT model for data preprocessing.

CodeBERT is a specialized application that utilizes the Transformer model for learning code representations in code-related tasks. In this paper, we focus on fine-tuning the CodeBERT model to specifically address the needs of smart contract vulnerability detection. The CodeBERT model is built upon the Transformer architecture, which comprises multiple encoder layers. Prior to entering the encoder layers of CodeBERT, the input data undergoes an embedding process. Following the encoding stage of CodeBERT, fully connected layers are added for classification purposes. The model architecture of our CodeBERT implementation is depicted in Fig. 4.

Our Optimized-CodeBERT Model Architecture.

Word Embedding and Position Encoding In the data preprocessing stage, we utilized a specialized CodeBERT tokenizer to process each word of the input. In the model training stage, the tokenizer employs embedding methods, which convert text or symbol data into vector representations. This processing transforms each word into a 512-dimensional word embedding. In addition, we introduce position embedding, a technique that assists the model in understanding positional information within the sequence. It associates each position with a specific vector representation to express the relative positions of tokens in the sequence. For a given position i and dimension k, the position encoding \(\text{PE}(i, k)\) is computed as follows:

$$\text{PE}(i, k) = \begin{cases} \sin\left(\dfrac{i}{10000^{2k/d}}\right) & \text{if } k \text{ is even} \\ \cos\left(\dfrac{i}{10000^{2k/d}}\right) & \text{if } k \text{ is odd} \end{cases}$$

Here, d represents the dimension of the input sequence. The formula utilizes sine and cosine functions to generate position vectors, injecting positional information into the embeddings. The exponential term \(\frac{i}{10000^{2k/d}}\) controls the rate of change of the position encoding, ensuring differentiation among positions. By adding the position encoding to the word embedding, positional information is integrated into the embedded representation of the input sequence. This enables CodeBERT to better comprehend the semantics and contextual relationships of different positions in the code. The processing steps are illustrated in Fig. 5.

Word and Position Embedding Process.
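For readers who prefer code to notation, the NumPy sketch below is one standard implementation of sinusoidal position encodings consistent with the formula above; it follows the usual Transformer convention of pairing each sine dimension with a cosine dimension.

```python
import numpy as np

def position_encoding(seq_len, d_model):
    """Sinusoidal position encodings: sin on even dimensions k,
    cos on odd dimensions k, with a 10000^(2k/d)-style denominator."""
    positions = np.arange(seq_len)[:, None]   # position index i
    dims = np.arange(d_model)[None, :]        # dimension index k
    angles = positions / np.power(10000.0, 2 * (dims // 2) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])     # even k
    pe[:, 1::2] = np.cos(angles[:, 1::2])     # odd k
    return pe

print(position_encoding(512, 512).shape)  # (512, 512)
```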

Encoder layers The CodeBERT model performs deep representation learning by stacking multiple encoder layers. Each encoder layer comprises two sub-layers: multi-head self-attention and feed-forward neural network. The self-attention mechanism helps encode the relationships and dependencies between different positions in the input sequence. The feed-forward neural network is responsible for independently transforming and mapping the features at each position.

The multi-head self-attention mechanism calculates attention weights, denoted as \(w_{ij}\), for each position i in the input code sequence. The attention weights are computed using the following equation:

$$w_{ij} = \text{Softmax}\left(\frac{q_i \cdot k_j}{\sqrt{d}}\right)$$

Here, \(q_i\) represents the query at position i, \(k_j\) denotes the key at position j, and d is the dimension of the queries and keys. The output of the self-attention mechanism at position i, denoted as \(o_i\), is obtained by multiplying the attention weights \(w_{ij}\) with their corresponding values \(v_j\) and summing them up:

$$o_i = \sum_{j=1}^{n} w_{ij} \cdot v_j$$

where \(n\) is the length of the input sequence.
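A minimal single-head NumPy sketch of these two equations, with toy dimensions of our choosing, might look like this:

```python
import numpy as np

def attention(Q, K, V):
    """w_ij = softmax(q_i . k_j / sqrt(d)); o_i = sum_j w_ij * v_j."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n) logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ V                                     # (n, d) outputs o_i

n, d = 6, 8                                          # toy sizes
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (6, 8)
```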

Each encoder layer also contains a feed-forward neural network sub-layer, which processes the output of the self-attention sub-layer using the following equation:

$$\text{FFN}(x) = \text{ReLU}(x \cdot W_1 + b_1) \cdot W_2 + b_2$$

Here, x represents the output of the self-attention sub-layer, and \(W_1, b_1\) and \(W_2, b_2\) are the parameters of the feed-forward neural network.

Fully connected layers To output the classification labels, we added fully connected layers. Firstly, we added a new linear layer with 100 features on top of the existing linear layer. To avoid the limited capacity of a single linear layer, we utilized the ReLU activation function. Additionally, to prevent overfitting, we introduced a dropout layer with a dropout rate of 0.1 after the activation layer. Lastly, we used a linear layer with four features for the output. During the fine-tuning process, the parameters of these new layers were updated.
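In PyTorch, the head described above could be sketched as follows; the input width of 768 (the hidden size of the base CodeBERT encoder) is our assumption, and the four outputs correspond to the four vulnerability classes.

```python
import torch.nn as nn

# Hidden linear layer of 100 features, ReLU activation, dropout of 0.1,
# then a 4-way output layer, as described above.
classifier_head = nn.Sequential(
    nn.Linear(768, 100),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(100, 4),
)
```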

The Optimized-LSTM model is specifically designed for processing sequential data, capable of capturing temporal dependencies and syntactic-semantic information [43]. For the task of smart contract vulnerability detection, our constructed Optimized-LSTM model provides a serialization-based representation of Solidity source code, taking into account the order of statements and function calls. The Optimized-LSTM model captures the syntax, semantics, and dependencies within the code, enabling an understanding of the logical structure and execution flow. Compared to traditional RNNs, the Optimized-LSTM model we constructed addresses the issue of vanishing or exploding gradients when handling long sequences [44]. This is accomplished through the key mechanism of gated cells, which enable selective retention or forgetting of previous states. The model consists of shared components across time steps, including the cell, input gate, output gate, and forget gate. In the Optimized-LSTM model, we have defined an LSTM layer and a fully connected layer, with the LSTM layer being the core component. Within the LSTM layer, the input \(x^{(t)}\), the output from the previous time step \(h^{(t-1)}\), and the cell state from the previous time step \(c^{(t-1)}\) are fed into an LSTM unit. This unit contains a forget gate \(f^{(t)}\), an input gate \(i^{(t)}\), and an output gate \(o^{(t)}\), as shown in Fig. 6.

The Architecture of Optimized-LSTM.

In the model, we utilize a bidirectional Optimized-LSTM, where the forward Optimized-LSTM and backward Optimized-LSTM are independent and concatenated at the final step. This allows for better capture of long-term dependencies and local correlations within the sequence. During the forward propagation of the model, the input x is first passed through the Optimized-LSTM layer to obtain the output h and the final cell state c. Since the lengths of the data instances may vary, we calculate the average output by averaging the outputs at each time step in h. Then, the average output is fed into a fully connected layer to obtain the final prediction output y. We used the cross-entropy loss function L for training, which is defined as:

$$L_i = -\sum_{j=1}^{N} y_{i,j} \log \hat{y}_{i,j}$$

Here, N represents the number of classes, \(y_{i,j}\) denotes the probability of the jth class in the true label of sample i, and \(\hat{y}_{i,j}\) represents the probability of sample i being predicted as the jth class by the model.
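A compact PyTorch sketch of this bidirectional design, with illustrative hyperparameters rather than the paper's, is:

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM; per-step outputs are averaged over time,
    then classified by a fully connected layer, as described above."""
    def __init__(self, embed_dim=768, hidden_dim=256, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):          # x: (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)        # (batch, seq_len, 2 * hidden_dim)
        return self.fc(h.mean(dim=1))

model = BiLSTMClassifier()
logits = model(torch.randn(2, 510, 768))          # two toy sequences
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 3]))
```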

The Convolutional Neural Network (CNN) is a feedforward neural network that exhibits remarkable advantages when processing two-dimensional data, such as the two-dimensional structures represented by code [45]. In our model design, we transform the code token sequence into a matrix, and the CNN efficiently extracts local features of the code and captures the spatial structure, effectively capturing the syntax structure, relationships between code blocks, and important patterns within the code.

The Optimized-CNN primarily consists of convolutional layers, pooling layers, fully connected layers, and activation functions. Its core idea is to extract features from input data through convolution operations, reduce the dimensionality of feature maps through pooling layers, and ultimately perform classification or regression tasks through fully connected layers [46]. The key module of the Optimized-CNN is the convolutional layer, which is computed as follows:

$$y_{i,j} = \sigma\left(\sum_{k=1}^{K}\sum_{l=1}^{L}\sum_{m=1}^{M} w_{k,l,m}\, x_{i+l-1,\,j+m-1,\,k} + b\right)$$

Here, \(x_{i,j,k}\) represents the element value of the input data at the i-th row, j-th column, and k-th channel; \(w_{k,l,m}\) represents the weight value at the l-th row and m-th column of the k-th channel of the convolutional kernel; and b represents the bias term. \(\sigma\) denotes the activation function, and in this case, we use the Rectified Linear Unit (ReLU).

The output of the convolutional layer is passed to the pooling layer for further processing. The commonly used pooling methods are Max Pooling and Average Pooling. In this case, we employ Max Pooling, and the calculation formula is as follows:

$$y_{i,j} = \max_{m=1}^{M}\, \max_{n=1}^{N}\, x_{i+m-1,\,j+n-1}$$

Pooling operations reduce the dimensionality of feature maps and the number of model parameters, and to some extent alleviate overfitting. Finally, a fully connected layer is used to compute the model output, which is expressed as:

$$y = \sigma(Wx + b)$$

Here, x represents the output of the previous layer, W and b denote the weights and bias terms, and \(\sigma\) is the activation function. By stacking multiple convolutional layers, pooling layers, and fully connected layers, we construct an Optimized-CNN model, as shown in Fig. 7, with powerful feature extraction and classification capabilities for smart contract classification.

The Architecture of Optimized-CNN.
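The PyTorch sketch below illustrates such a convolution/pooling/fully-connected stack applied to a 510 x 768 token-embedding matrix treated as a one-channel image; all layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ConvClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # y = ReLU(conv + b)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve each dimension
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 510x768 -> 127x192
        )
        self.classifier = nn.Linear(32 * 127 * 192, num_classes)

    def forward(self, x):              # x: (batch, 1, 510, 768)
        return self.classifier(self.features(x).flatten(1))

model = ConvClassifier()
print(model(torch.randn(2, 1, 510, 768)).shape)  # torch.Size([2, 4])
```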


Utility at a cost: Assessing the risks of blockchain oracles – S&P Global

Conducting transactions on the blockchain requires not only on-chain technology such as smart contracts to execute them, but also a way to access key inputs such as real-time prices that are observed outside the blockchain. An oracle provides a means of obtaining off-chain data, connecting the real world and decentralized finance (DeFi) systems. Interoperability issues remain a key inhibitor to wider blockchain adoption due to the proliferation of public and private blockchains that do not have a native ability to transfer information back and forth. Hence, oracles unlock the power of linking traditional finance (TradFi) to DeFi.

While oracles meaningfully enhance the utility of smart contracts, they introduce a number of technical, data quality and concentration risks. These risks qualify as new operational risks, in our view, and in the worst-case scenario can affect the credit quality of issuers with links to DeFi operators (see "How DeFi's Operational Risks Could Influence Credit Quality," published June 7, 2023).

Oracles perform a critical function in the blockchain ecosystem by providing the ability to both import and export data between two dimensions, the real world and the blockchain, that do not otherwise connect. A key feature of blockchain technology is smart contracts, which are programs stored on the blockchain that are executed once a predetermined set of conditions is met. Absent oracles, smart contracts would be limited to on-chain data and hence would have much more limited functionality. Oracles act as a bridge between on-chain and off-chain infrastructure as well as other blockchains, enabling smart contracts to make use of real-world data. Further, oracles provide the ability to export data off-chain.

The ability of oracles to bridge smart contracts to the real world enhances the power of blockchains significantly and has accelerated their adoption for financial transactions. The success of DeFi protocols that enable peer-to-peer financial transactions is largely due to the ability of oracles to import necessary data into smart contracts. The increased use of on-chain swap and lending transactions has relied on oracles to import real-world pricing and user data to provide the necessary conditionalities to effect transactions governed by smart contracts. Given the breadth of data accessible through oracles and the efficiency gains of peer-to-peer transactions, there are many efforts underway to extend this technology to other fields such as real-world asset tokenization, insurance, healthcare and real estate.

Characteristics of blockchain oracles

Inbound versus outbound

Inbound oracles transfer data from the real-world (off-chain) into the blockchain network and are the most common type of oracle.

Outbound oracles allow smart contracts to export data and commands to off-chain systems.

Centralized versus decentralized

Centralized oracles are managed by one entity that serves as the oracles singular data source.

Decentralized oracles rely on consensus from multiple entities to validate the accuracy and availability of data.

Software versus hardware oracles

Software oracles obtain data from digital sources such as websites, servers and APIs.

Hardware oracles relay information from physical-world devices such as sensors and scanners.

Since Chainlink launched in 2017 there has been a proliferation of blockchain oracles. While they all provide connectivity between on- and off-chain domains, there are sizable differences in terms of supported blockchains, consensus mechanisms and available data sets. That said, the landscape is characterized by Chainlink as the largest participant (as measured by total value secured) by far, with several smaller, less established protocols.

| Blockchain oracle | Total value secured | Token ticker | Summary |
| --- | --- | --- | --- |
| API3 | $14 million | API3 | API3 aggregates data directly from source-level data providers on 16 different blockchains. Capabilities include direct API-provider-to-blockchain connectivity, decentralized data feeds (dAPIs) and random number generation. |
| Band Protocol | $40 million | BAND | Band Protocol aggregates and connects real-world data and provides application programming interfaces (APIs) to smart contracts across more than 20 blockchains. Band Protocol supports pricing data queries, cross-chain bridges and proof of identity. |
| Chainlink | $14.6 billion | LINK | Chainlink focuses on decentralized oracle networks and is by far the largest among peers in terms of market capitalization. Its networks use decentralization, trusted nodes and cryptographic proofs to connect data/APIs to smart contracts. |
| Chronicle | $6.4 billion | N/A | Chronicle relies on a community-powered consensus network of 22 feed node operators to provide verifiable and trackable data across both public and enterprise blockchains. |
| DIA | $73 million | DIA | DIA is a cross-chain data and oracle platform focused on the sourcing and delivery of customizable data feeds both on- and off-chain. The platform collects data ticks directly from over 80 sources for web3 or web2 use cases. |
| Pyth | $1.6 billion | N/A | Pyth Network is an oracle that publishes financial market data to multiple blockchains, with data contributed by over 80 first-party publishers using a unique pull price update model. |
| UMA | $95 million | UMA | UMA enables blockchain protocols to securely import arbitrary data types on-chain. It provides data for cross-chain bridges, insurance protocols, custom derivatives and prediction markets. |
| WINkLink | $7.7 billion | WIN | WINkLink is an oracle built on the Tron blockchain that provides data feeds from real-world sources like banks, weather and the internet to execute smart contracts. |

Note: Total value secured represents amount locked in all protocols associated with the referenced oracle.

Source: defilama.com, as of Nov. 15, 2023.

While not directly visible as a risk to users of DeFi protocols, we believe oracle risks are material and it is critical to understand how they are mitigated within different protocols. Oracles introduce external dependencies, and with them, vulnerabilities that could challenge the accuracy and timeliness of critical real-time, real-world data. They increase the attack surface of a protocol if bad actors find ways to exploit oracle-delivered data points for their own advantage or if there are outages of critical service providers. We have identified the following nonexhaustive risk factors that drive oracle risks.

Concentration risk in the context of blockchain oracles is multifaceted, with concentration not merely a market challenge, with Chainlink dwarfing alternative projects, but a challenge faced within each oracle network. There are three main points of concentration risk: one at a market level, in activity centering on a single project, and two in the oracle process, in concentration of data providers and decision-making.

Why it matters: Chainlink is the most widely used oracle project in DeFi: its total value secured exceeds that of the next two largest, WINkLink and Chronicle, combined. It has also recently partnered with TradFi market participants, including SWIFT and ANZ bank, in pilot projects experimenting with cross-chain communication to support financial transactions. Although Chainlink's dominance represents a risk dependency, it is not a single point of failure. Chainlink is not a single network; its oracles used in DeFi consist of multiple decentralized oracle networks that run independently. This reduces the risk that a vulnerability could impact DeFi at a systemic level, and that network speed and latency issues could result from a spike in usage in a different network. SWIFT and ANZ's pilot schemes used Chainlink's new Cross-Chain Interoperability Protocol (CCIP), which aims to further enhance security with multiple networks used to support each bridge, and two separate implementations of the protocol with different code bases.

At a DeFi protocol level, data concentration is a notable risk for third-party oracle networks. To avoid creating single points of failure in providing data, third-party oracles are usually designed to aggregate data from multiple nodes. However, in some instances this does not secure against poor data quality as data can come from a single or small number of sources, even if those sources are supported by a wide array of validators. The process of decentralizing data from a small number of parties means that sometimes users are unnecessarily paying for inefficient third-party oracle networks, while remaining subject to trusting data providers.

Another concentration concern emerges around governance and decision-making. In their role as aggregators, oracles make calls as to which nodes to reach out to for information. These decisions, as with others related to the technology's roadmap, are not always transparent and may overly rely on team members and developers. The technical and specialist nature of oracles further challenges how far governance can be decentralized. Consequently, oracle providers can represent entities requiring trust in processes often positioned as trustless.

Potential risk mitigation: Diversification across an array of oracle projects may help reduce concentration risk. Protocols investigating oracle projects may need to assess how transparent the governance and source code is. For example, any decentralized autonomous organization (DAO) promising more democratized oracles needs to make sure that DAO participation is sufficiently high to have an economically reasonable outcome. Such an assessment should be ongoing, as governance concentration risks will increase as voting participation declines or if a group of participants gain an oversized proportion of a networks tokens, for example. This is particularly challenging in technically complex projects such as oracles where few users are knowledgeable enough to meaningfully shape decision-making.

An oracle's main role (in the context of DeFi) is to provide off-chain data for smart contracts to help their execution. As such, one of the key risks oracle users face is getting low-quality or even manipulated data that could lead to wrong outcomes or losses. This could arise either because of misreporting or manipulation of the data by the centralized oracle or by the nodes of the decentralized oracle.

Why it matters: Data quality risk can result in significant financial losses for oracle users. For example, suppose a user programs a smart contract to sell an asset if its price drops below $500. If, due to a lack of coverage for instance, the oracle it uses reports that the asset price is $400 instead of $600, the smart contract will automatically sell the position, resulting in a $200 loss for the asset owner. There could also be fraud or intentional misreporting of the data by a centralized oracle or by some nodes in a decentralized oracle, coupled with poor verification mechanisms. In this example, the oracle owner could intentionally send the price of $400 to buy the asset at a discount compared with its real market value. Verification of ownership records is another example where incorrect data could result in significant losses for oracle users. In this case, the smart contract for buying the asset is executed on the basis that the seller has effective ownership of the asset. If this information is incorrect, the buyer will have paid without receiving the asset. The loss for the end user is permanent and cannot be reversed, given that blockchain transactions are automated and immutable.

Potential risk mitigation: Risk mitigation depends on the type of oracle. For centralized oracles, track record is important but cannot eliminate risk, as the data used can be compromised. The oracle owner is responsible for the data quality, but if it uses unreliable data sources, the risk persists. To resolve this problem, decentralized oracles were created to aggregate data from various sources and use verification mechanisms that check its accuracy before transmitting it to the smart contracts. For example, a decentralized oracle will look at the different data sources and eliminate abnormal values, use the median, or calculate an average. As such, even if one node in the network provides wrong or manipulated data, other nodes will provide a different set of data and the incorrect data will be eliminated. The data aggregation mechanism is important in this case. If we go back to our previous example and assume that two nodes reported prices of $400 and $550 for the asset, using the average price of $475 would still result in executing the smart contract and selling the asset at a $75 loss for the asset owner. While this is lower than in the previous situation, it is still a loss for the end user. Therefore, it is important for oracles to diversify their sources of information and use reputable nodes. If the oracle had ten nodes reporting a similar price of $550 and one node reporting $400, the latter would have been eliminated.
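The aggregation logic in this example can be illustrated with a toy Python sketch; the 10% deviation threshold and the node prices are hypothetical, and real oracle networks implement far more elaborate schemes.

```python
from statistics import median

def aggregate_price(reports, max_deviation=0.10):
    """Median-anchored aggregation: discard reports deviating from the
    median by more than max_deviation, then average the rest."""
    mid = median(reports)
    kept = [p for p in reports if abs(p - mid) / mid <= max_deviation]
    return sum(kept) / len(kept)

# Ten honest nodes near $550 and one bad node at $400: the outlier is
# discarded instead of dragging the average below the $500 sell trigger.
reports = [550, 548, 551, 552, 549, 550, 553, 547, 551, 550, 400]
print(round(aggregate_price(reports), 2))  # 550.1
```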

Bringing off-chain data reliably to the on-chain world comes along with technical risks related to outages of the oracle operators and more blockchain-specific risks like network congestion and latency. These issues could lead to outdated data transmissions or no transmissions at all to the receiver, which are usually smart contracts that execute functions as part of a protocol.

Why it matters: Outdated data transmissions or failures to transmit data because of technical problems could lead to flawed function executions in smart contracts and to unintended outcomes with significant financial losses for the end users of DeFi protocols. For example, DeFi lending protocol Maker experienced oracle problems due to the congestion of the Ethereum network at the outset of the COVID-19 pandemic in March 2020, which translated to financial losses for its users. Latency in data transmissions could also lead to failures in transmitting accurate data. Limits to scalability on the Ethereum blockchain, for example, are well known and tackled with smart solutions like layer 2 blockchains (a blockchain network built on top of a base-layer blockchain to add functionality and speed).

Potential risk mitigation: Technical risks resulting from specific oracles can be partly mitigated either at the protocol level, by using multiple oracles, or at the oracle provider level, if they operate as a decentralized network. The root causes of network congestion and latency may be addressed as blockchain technology develops, particularly with features enhancing scalability and interoperability (see the "How blockchains scale" section in "What Can You Trust in a Trustless System," published Oct. 11, 2023).

The ability of oracles to bring off-chain data onto a blockchain (and vice versa) greatly enhances DeFi use cases, and can support further growth in applications connected with the financing of the real economy. Going forward, the ability to secure communications across different blockchains could be transformative in supporting institutional adoption for financial market applications, by helping connect the walled gardens of private permissioned blockchains used by different institutions to each other and to public blockchains. However, the utility of oracles can come at the cost of adding a number of key risks such as concentration, data quality and technical risks. Understanding and addressing these risks will be critical to developing robust market infrastructure for financial applications.

What Can You Trust in a Trustless System, Oct. 11, 2023

How DeFi's Operational Risks Could Influence Credit Quality, June 7, 2023

Smart Contracts Could Improve Efficiency And Transparency In Financial Transactions, Oct. 4, 2022

Cyber Brief: Reviewing The Credit Aspects Of Blockchain, May 5, 2022


Staying safe in web3: your guide to dapps security – crypto.news

As web3 grows, so do the risks associated with decentralized applications (dapps). Here, we share practical advice to mitigate these risks.

At the forefront of emerging web3 technologies are decentralized applications, often called dapps. They use interlinked smart contracts, which run on a blockchain as code snippets, to perform specific tasks within the app. They act as a bridge between the current Internet (Web 2.0) and the developing web3.

Dapps leverage blockchain technology's inherent security, transparency, and indelibility to empower users with enhanced privacy and greater control over their data and digital assets. They function as the blockchain counterpart of traditional apps, covering social media, finance, gaming, and more.

Though the way you use a dapp might look similar to a regular app, what's happening behind the scenes is different. Instead of being stored on one big server, dapps are spread across many computers, called nodes, on a blockchain network.

The swift expansion of web3 has transformed the technological terrain. Yet it's also brought new security challenges.

Amongst the most prominent security risks associated with web3 and decentralized applications are phishing attacks. These occur when malicious actors create fraudulent websites or social media accounts to trick users into disclosing their private keys or other confidential information.

Another closely related threat is social engineering, a deceptive method cybercriminals use to trick users into sharing their login credentials.

Some security shortcomings stem from the interaction between web3 and Web 2.0 infrastructures, while others are inherent to protocols like blockchain and IPFS (InterPlanetary File System).

Web3 relies on network consensus, which can slow down fixing these and other vulnerabilities.

Some main security risks include:

On Nov. 17, 2023, blockchain security platform Immunefi unveiled its report on the root causes of the most damaging vulnerabilities in web3.

The report, announced at Web Summit 2023 (which crypto.news attended), introduces a new vulnerability classification standard for web3. The research indicates that the root causes of hacks fall into three discernible categories:

While smart contract protocols often receive ample attention, Immunefi pointed out that the danger might lie in the overlooked infrastructure level.

According to the report, almost half of all monetary losses from hacks in 2022 were caused by infrastructure issues such as poor private key handling. Moreover, it found that nearly 37.5% of all incidents were due to developer mistakes in smart contracts concerning access control, input validation, and arithmetic operations.

The platform's CEO, Mitchell Amador, emphasized that even a well-designed smart contract could be compromised if the underlying infrastructure is vulnerable, leading to substantial losses.

"Blockchains are open and permissionless environments. That means you are not just protecting against someone who has managed to sneak into your infrastructure like you were in the traditional web; you're protecting against anybody who can see your contracts, anybody who can mess with your product."

Sharing his thoughts with crypto.news, Alex Dulub, founder of Web3 Antivirus, a blockchain security firm, pointed out that the real threat for web3 and decentralized apps lies in vulnerabilities arising from incomplete smart contract logic. According to him, while developers may use specific requirements to define how smart contracts work, there's always a risk of them being used in unintended ways.

Dulub noted that hackers are being more creative, experimenting with smart contracts and projects, searching for inconsistencies to exploit.

Unfortunately, detecting such complex issues with automatic tools or analyzers is nearly impossible. The best approach? Consider rigorous testing, careful logic development, analysis of all potential usage scenarios, thorough auditing, and implementing a bug bounty program.

His concern was echoed by Sipan Vardanyan, co-founder and CEO of cybersecurity firm Hexens, who said that a hacker's job is to find what is not intended and to create new and more sophisticated vectors of attack.

"Just knowing what's happening out there is absolutely crucial because it's a small field and news travels fast, so all you have to do is keep your hand on the pulse."

Immunefi's report shows that from January to October 2023, the web3 sector saw financial setbacks of more than $1.4 billion caused by 292 separate instances of fraud and hacking.

The report also indicated that hacks outweighed fraud as the cause of financial losses.

In October 2023, analysts attributed about $16 million in losses to hacking incidents, with defi platforms being the primary target for hackers and fraudsters.

Overall, in the third quarter of 2023, Immunefis analysis identified 74 hacks and scams, leading to a total loss across the web3 ecosystem of $685 million.

The total comprised $662 million lost in 47 hacking incidents and $22 million lost in 27 incidents of fraud. Two projects, the Mixin Network and Multichain, suffered most of the losses in Q3 2023, amounting to $200 million and $126 million, respectively.

Per Immunefi, the figures reflect an almost 60% surge compared to Q3 2022, when bad actors made off with about $428 million.

The Mixin and Multichain heists comprised more than 47% of all losses in the third quarter of 2023. In that period, hacking was the primary cause of losses, accounting for 96.7% in comparison to scams, frauds, and rug pulls, which made up only 3.3% of stolen funds.

Additionally, attackers targeted Ethereum (ETH) and BNB Chain (BNB) the most, with Ethereum suffering 33 incidents, while BNB Chain faced 25.

There was also a significant spike in the number of web3 attacks, with individual incidents up 147% year-on-year, from 30 in Q3 2022 to 74 in Q3 2023.

Overall, the quarter saw the highest losses of 2023, much of them stemming from attacks by the Lazarus Group, which reports allege is behind high-profile attacks on CoinEx, Alphapo, Stake, and CoinsPaid.

In those attacks, the North Korea-linked group stole $208.6 million, representing about 30% of total losses in Q3 2023.

From a year-to-date perspective, the crypto ecosystem reported losses of $1,410,669,002 across 292 incidents. The third quarter of 2023 was particularly severe, with losses exceeding $340 million in September and $320 million in July.
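
The report's headline ratios can be reproduced from the dollar figures quoted above. A quick check in Python (the small deviation from the quoted 96.7%/3.3% split comes down to rounding in the published dollar figures):

    # Reproducing the report's headline ratios from the figures quoted above.
    q3_total = 685_000_000                 # total Q3 2023 losses
    q3_hacks = 662_000_000                 # lost to hacking (47 incidents)
    q3_fraud = 22_000_000                  # lost to fraud (27 incidents)
    mixin_multichain = 200_000_000 + 126_000_000

    print(f"hacking share:          {q3_hacks / q3_total:.1%}")   # ~96.6%
    print(f"fraud share:            {q3_fraud / q3_total:.1%}")   # ~3.2%
    print(f"Mixin+Multichain share: {mixin_multichain / q3_total:.1%}")  # ~47.6%
    print(f"YoY incident growth:    {(74 - 30) / 30:.0%}")        # 147%
    print(f"YoY loss growth:        {(q3_total - 428_000_000) / 428_000_000:.0%}")  # ~60%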

What measures can web3 users take to protect themselves and their assets from bad actors?

Ensuring web3 security is not a one-time task but a continuous process that involves proactive risk identification, strategic choice of blockchain design, regular audits, and constant learning.

Follow this link:

Staying safe in web3: your guide to dapps security - crypto.news


The Risks of DeFi: Navigating the Complex Terrain of Decentralized … – Baltic Times

Introduction

In the fast-paced world of cryptocurrency, the rise of Decentralized Finance (DeFi) has captured the imagination of many investors and enthusiasts. DeFi platforms offer a range of financial services without the need for traditional intermediaries, promising greater accessibility and control. However, beneath the surface lies a landscape fraught with potential hazards and uncertainties that demand a cautious approach. This article delves into the risks associated with DeFi and sheds light on the challenges individuals face when navigating this evolving ecosystem. So, if you are interested in investing in cryptocurrencies like Bitcoin, you may consider using a reliable trading platform such as immediaterevolution.com.

The Allure of DeFi: A Glimpse into a Borderless Financial Landscape

Crypto enthusiasts have witnessed a wave of innovation that has transformed the financial sector through decentralized technologies. One such innovation is the emergence of DeFi, which aims to create an open and accessible financial system for everyone. Amidst the allure of borderless transactions and decentralized applications, it's essential to recognize that this landscape isn't without its pitfalls.

Unveiling the Risks: A Closer Look at DeFi Vulnerabilities

While DeFi platforms promise autonomy and flexibility, they also harbor significant risks. Security vulnerabilities are among the most pressing concerns, with the potential for hacking attacks and smart contract vulnerabilities leading to substantial financial losses.

Smart Contracts: Bridging Opportunities and Risks

At the heart of DeFi lies the concept of smart contracts: self-executing code that automatically enforces the terms of an agreement. While these contracts offer efficiency and transparency, they are not immune to human error or exploitation. Inexperienced users might inadvertently expose themselves to risk by misinterpreting the terms or trusting unverified smart contracts. The platform warns against hastily engaging with smart contracts without understanding the underlying code.
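
To make the idea of self-execution concrete, here is a toy, off-chain Python model of an escrow agreement. It is a conceptual sketch only, not an actual on-chain contract; the names and logic are invented for illustration:

    # A toy model of what "self-executing" means: once deployed, the
    # agreement's terms are enforced by code, not by an intermediary.
    import time

    class EscrowAgreement:
        def __init__(self, buyer, seller, amount, deadline_ts):
            self.buyer, self.seller = buyer, seller
            self.amount = amount
            self.deadline_ts = deadline_ts
            self.delivered = False

        def confirm_delivery(self, caller):
            # Only the buyer may confirm; a mis-specified check here is
            # exactly the kind of incomplete logic attackers look for.
            if caller != self.buyer:
                raise PermissionError("only the buyer can confirm delivery")
            self.delivered = True

        def settle(self):
            # Terms execute mechanically: funds go to the seller on
            # delivery, or back to the buyer once the deadline passes.
            if self.delivered:
                return (self.seller, self.amount)
            if time.time() > self.deadline_ts:
                return (self.buyer, self.amount)
            raise RuntimeError("agreement not yet settleable")

The point of the sketch is that no party can renegotiate after deployment: whatever settle() computes is what happens, which is precisely why a misunderstanding of the underlying code is so costly.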

Liquidity Dangers: The Perils of Impermanent Loss

Liquidity provision, a central component of DeFi platforms, involves users depositing funds into liquidity pools to facilitate trading. However, the concept of impermanent loss can catch many users off guard: when the prices of the assets in a pool diverge significantly, liquidity providers may end up with less value than if they had simply held the assets. This phenomenon can have substantial financial implications, underscoring the importance of fully understanding the mechanism before participating in liquidity provision.
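
For a 50/50 constant-product pool of the kind popularized by Uniswap v2, impermanent loss relative to simply holding can be computed directly from the price ratio r between withdrawal and deposit, via IL(r) = 2*sqrt(r)/(1 + r) - 1. A short illustrative sketch in Python:

    # Impermanent loss for a 50/50 constant-product pool, relative to
    # holding the two assets. r is the ratio of one asset's price at
    # withdrawal to its price at deposit.
    import math

    def impermanent_loss(r: float) -> float:
        return 2 * math.sqrt(r) / (1 + r) - 1

    for r in (1.0, 1.25, 2.0, 4.0):
        print(f"price ratio {r:>4}: IL = {impermanent_loss(r):.2%}")
    # price ratio  1.0: IL = 0.00%
    # price ratio 1.25: IL = -0.62%
    # price ratio  2.0: IL = -5.72%
    # price ratio  4.0: IL = -20.00%

Note that the loss is zero only when prices return to their starting ratio, which is why it is called "impermanent"; withdraw while prices have diverged and the loss becomes permanent.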

Regulatory Uncertainties: Navigating a Shifting Landscape

As governments and regulatory bodies grapple with the disruptive nature of DeFi, the lack of clear regulations poses a significant risk. While proponents of DeFi celebrate its decentralized nature, this can also result in potential legal ambiguities. Users may unknowingly engage in activities that contravene existing financial regulations, exposing themselves to legal consequences. Staying informed about evolving regulatory frameworks is paramount to avoid any inadvertent violations.

Scams and Ponzi Schemes: Safeguarding Against Deceptive Practices

The allure of quick profits within the DeFi ecosystem has attracted not only legitimate actors but also malicious individuals seeking to exploit unsuspecting users. Scams and Ponzi schemes have been prevalent, leveraging the decentralized and pseudonymous nature of cryptocurrencies to carry out fraudulent activities. The platform advises users to exercise skepticism, conduct thorough research, and remain vigilant to avoid falling victim to such schemes.

Centralization in Disguise: The Paradox of DeFi Platforms

While the term "decentralization" dominates discussions about DeFi, the reality is more nuanced. Some DeFi platforms, despite claiming to be decentralized, still exhibit elements of central control. The governance mechanisms and decision-making processes of these platforms can be influenced by a few powerful stakeholders, undermining the egalitarian principles that DeFi aims to uphold. Recognizing the true extent of decentralization in each platform is crucial to making informed decisions.

The Human Factor: User Errors and Mistakes

In an ecosystem dominated by technological complexities, human errors can amplify risks within DeFi. From sending assets to the wrong addresses to mistakenly granting excessive permissions to third-party applications, these errors can lead to irreversible financial losses. Thoroughly understanding the processes and double-checking actions can mitigate these risks and provide a layer of protection against accidental mishaps.
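
One such permission mistake, the open-ended ERC-20 token approval, can at least be audited programmatically. Below is a minimal sketch using the web3.py library; the RPC endpoint and all addresses are hypothetical placeholders introduced for this example, not values from the article:

    # Checking whether an "unlimited" ERC-20 approval has been granted.
    # Substitute real values before running; these are placeholders.
    from web3 import Web3

    RPC_URL = "https://example-rpc.invalid"
    TOKEN   = "0x0000000000000000000000000000000000000000"
    OWNER   = "0x0000000000000000000000000000000000000001"
    SPENDER = "0x0000000000000000000000000000000000000002"

    # Only the allowance() fragment of the standard ERC-20 ABI is needed.
    ERC20_ABI = [{"name": "allowance", "type": "function",
                  "stateMutability": "view",
                  "inputs": [{"name": "owner", "type": "address"},
                             {"name": "spender", "type": "address"}],
                  "outputs": [{"name": "", "type": "uint256"}]}]

    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN),
                            abi=ERC20_ABI)
    granted = token.functions.allowance(
        Web3.to_checksum_address(OWNER),
        Web3.to_checksum_address(SPENDER)).call()

    if granted >= 2**255:
        print("Effectively unlimited approval granted - consider revoking it.")
    else:
        print(f"Approved amount: {granted}")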

Mitigating the Risks: A Balanced Approach to DeFi Participation

In light of the risks associated with DeFi, individuals can take several steps to navigate the complex landscape safely. Comprehensive research, skepticism, and diligence are essential when evaluating DeFi platforms and projects. Engaging with DeFi platforms that have undergone thorough audits and have transparent governance mechanisms can reduce the likelihood of falling victim to vulnerabilities.

Conclusion

The world of DeFi offers a tantalizing glimpse into the future of finance, promising autonomy, accessibility, and innovation. However, beneath its shimmering surface lie intricate risks that individuals must be aware of and navigate with care. By approaching DeFi with a blend of curiosity, caution, and understanding, individuals can harness its potential while safeguarding their financial interests in this rapidly evolving landscape.

Read more here:

The Risks of DeFi: Navigating the Complex Terrain of Decentralized ... - Baltic Times


Sam Altman’s OpenAI exit won’t stop the use of ChatGPT among … – Blockworks

Former OpenAI CEO Sam Altman is associated with crypto by virtue of his founding role in Worldcoin, but his departure from AI powerhouse OpenAI has implications of its own.

Altman was fired Friday in a boardroom coup prompted by an alleged lack of candor with the board. He has since taken a job at Microsoft, and more than 500 OpenAI employees have threatened to resign in a letter to the board.

OpenAI's most advanced product, the ChatGPT large language model (LLM), is used by crypto developers and in crypto applications. But faith in the AI startup has been shaken by the past few turbulent days.

Elsewhere, crypto teams are seeking to create wholly decentralized AI products though development of such products lags far behind the work of OpenAI.

Read more: AI needs to be decentralized for the same reasons that money needs to be

ChatGPT is capable of writing and troubleshooting code written in Solidity, the programming language for Ethereum smart contracts, making it a potentially helpful tool for developers. A pseudonymous developer named CroissantETH made a ChatGPT-enabled app whereby ERC-20 tokens can be created with a one-sentence command.

Antonio Viggiano, founder of smart contract testing firm Fuzzy, released an extension that uses ChatGPT to audit code for potential vulnerabilities.

"I believe devs use ChatGPT a lot, probably every day or many times a week," Viggiano said.
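
The pattern behind tools like Viggiano's extension is straightforward: submit source code to a chat model and ask for a structured vulnerability review. A rough sketch using the openai Python client follows; the model name, prompt, and the deliberately buggy example contract are illustrative assumptions for this sketch, not details of his extension:

    # Asking a chat model to review Solidity code for vulnerabilities.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    solidity_source = """
    pragma solidity ^0.8.0;
    contract Vault {
        mapping(address => uint256) public balances;
        function withdraw(uint256 amount) external {
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok);
            balances[msg.sender] -= amount;  // state updated after the call
        }
    }
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a smart contract security reviewer. "
                        "List potential vulnerabilities with severity."},
            {"role": "user", "content": solidity_source},
        ],
    )
    print(response.choices[0].message.content)

The planted bug here is a classic reentrancy pattern (external call before the balance update), the sort of issue such a review might flag; as the article goes on to note, though, chatbot output is unreliable enough that it cannot substitute for an audit.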

However, problems persist. Viggiano noted that ChatGPT will often provide incorrect responses, forcing developers to edit code manually. Fernando Rodriguez Hervias, CEO of Web3 developer studio PragmaLayer, said chatbots ultimately cannot be relied upon to keep smart contracts secure.

"What if [the] contract gets hacked? Will you blame AI?" Hervias said in a Telegram message.

Ritual founder Niraj Pant said that it's partially because of those shortcomings that ChatGPT isn't commonly used in mission-critical capacities. However, AI chatbot use cases are growing, such as reading a protocol's documentation and troubleshooting problems in a chat window with builders on the platform.

Pant said OpenAI's boardroom drama underscores a need for diversification in the field of LLMs, currently dominated by OpenAI.

"Developers I know that are building on OpenAI stuff are not even sure if their apps will work on Friday after employees leave, at least allegedly from the reports," Pant said.

Pant's startup Ritual announced a $25 million raise earlier this month to help Web3 companies interact with AI.

If Altman's dismissal from OpenAI spells stagnation for the AI company, which was reportedly seeking to raise funds at an $80 billion valuation in late October, crypto's growing crop of AI projects could theoretically stand to gain.

However, Pant said attempts at open-source answers to ChatGPT are still playing catch-up with the likes of OpenAI's work.

A pseudonymous developer named Moon Dev, who uses ChatGPT to build quantitative trading tools for crypto, said in a direct message that they remain unimpressed with crypto AI products as they exist today.

"[I] haven't seen anything even close," they told Blockworks.


Go here to see the original:

Sam Altman's OpenAI exit won't stop the use of ChatGPT among ... - Blockworks
