
Engineering a synthetic gene circuit for high-performance inducible expression in mammalian systems – Nature.com

In silico design and analysis of synthetic circuits for high-performance inducible gene expression

Among the potential gene network motifs, we focussed on those that may yield reduced leakiness levels14. We thus mathematically modelled and compared three alternative circuit topologies for inducible gene expression, as shown in Fig. 1b, against the naïve configuration (NC): (i) the coherent feedforward loop type 4 (CFFL-4)15; (ii) the mutual inhibition (MI) topology14; and (iii) a combination of these two topologies that we named Coherent Inhibitory Loop (CIL). All these circuits make use of an additional species Y to inhibit the reporter gene Z in the absence of the inducer molecule, thereby suppressing leaky expression. We used ordinary differential equations and dynamical systems theory to analyse the performance of these three networks, assuming realistic biological parts (Supplementary Note 1).

Analytical results and numerical simulations of the circuits, using the very same parameters for the common biological parts, confirmed that all three exhibit improved performance over the naïve configuration in terms of lower leakiness, high maximum expression, and increased fold induction, as reported in Fig. 1c-e and Supplementary Note 1, albeit with notable differences. In the CFFL-4, the leakiness is smaller than that of the NC thanks to the inhibitory action of Y over Z in the absence of the inducer molecule (Fig. 1c); however, as X does not fully repress Y upon inducer treatment, the maximal expression of Z is also smaller (Fig. 1d), leading to only a modest increase in fold induction (Fig. 1e). The MI improves on the CFFL-4 in terms of maximum expression (Fig. 1d), as Y is now repressed by Z in addition to X. The CIL combines the advantages of both circuits and exhibits the best performance relative to the NC configuration across all three features, as shown in Fig. 1c-e. To further explore the robustness of these findings, we conducted additional numerical simulations by varying the model's parameters, whose results are shown in Fig. 1f and Supplementary Note 1. For all the parameter values tested, the CIL circuit exhibited the best performance, whereas the CFFL-4 was the worst. Based on these analyses, we decided not to biologically implement a CFFL-4 system and instead focused on the biological implementation of the MI and CIL circuits.
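The kind of comparison described above can be illustrated with a toy ODE model. The Python sketch below is a minimal illustration only, not the model of Supplementary Note 1: the equations, parameter values, and rate constants are assumptions chosen to show how leakiness, maximal expression, and fold induction can be read off steady-state simulations of a naïve configuration versus a mutual-inhibition-like topology.

```python
# Toy comparison of a naive configuration (NC) and a mutual-inhibition-like
# topology (MI). All functional forms and parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def x_activity(dox, K=1.0):
    """Fraction of active transactivator X as a function of inducer (doxycycline)."""
    return dox / (dox + K)

def nc_rhs(t, state, dox, basal=0.05, vmax=1.0, deg=1.0):
    """NC: Z is produced leakily (basal) and inducibly (via X), and degraded."""
    (Z,) = state
    return [basal + vmax * x_activity(dox) - deg * Z]

def mi_rhs(t, state, dox, basal=0.05, vmax=1.0, deg=1.0,
           y_prod=1.0, k_cleave=20.0, k_seq=20.0):
    """MI: Y (CasRx-like) degrades Z; Z in turn sequesters Y."""
    Y, Z = state
    dY = y_prod - deg * Y - k_seq * Y * Z              # Y lost when bound by Z
    dZ = basal + vmax * x_activity(dox) - deg * Z - k_cleave * Y * Z
    return [dY, dZ]

def z_steady_state(rhs, y0, dox):
    sol = solve_ivp(rhs, (0, 500), y0, args=(dox,), rtol=1e-8)
    return sol.y[-1, -1]  # Z (last species) at the final time point

for name, rhs, y0 in [("NC", nc_rhs, [0.0]), ("MI", mi_rhs, [0.0, 0.0])]:
    leak = z_steady_state(rhs, y0, dox=0.0)    # no inducer: leakiness
    top = z_steady_state(rhs, y0, dox=100.0)   # saturating inducer: max expression
    print(f"{name}: leakiness={leak:.4f}  max={top:.3f}  fold={top / leak:.0f}")
```

With these made-up parameters, the mutual-inhibition topology suppresses leakiness far more strongly than it lowers maximal expression, so its fold induction exceeds that of the naïve configuration, qualitatively mirroring the behaviour described in the text.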

To experimentally implement mutual inhibition (Fig. 2a), we looked for a biological implementation that was compact and could be applied to any gene of interest Z. We thus turned to CRISPR-Cas endoribonucleases, which have recently been repurposed to act as post-transcriptional regulators by exploiting their pre-gRNA processing mechanisms16. Indeed, CRISPR endoribonucleases can cleave specific short sequences known as direct repeats (DRs) on their cognate pre-gRNAs, generating shorter guide RNA (gRNA) sequences; hence, these DRs have been repurposed as cleavage motifs to stabilize or degrade user-defined mRNA transcripts by placing them in the mRNA untranslated regions (UTRs)16. Specifically, in our implementation shown in Fig. 2a, we employed the CasRx endoribonuclease to implement species Y, while species Z is the Gaussia Luciferase (gLuc) reporter gene bearing the DR sequence in its 3′UTR. Because of CasRx's distinctive feature of irreversibly binding its processed gRNA17, we reasoned that this configuration could implement a mutual inhibition between species Y and Z. Here, Y is able to negatively regulate Z, as CasRx cleaves the DR in the 3′UTR of the gLuc mRNA, leading to the loss of its polyA tail and subsequent degradation; at the same time, we assumed that Z could inhibit Y by sponging out CasRx, which irreversibly binds to the DR and is thus unable to cleave additional Z mRNAs.

a Experimental implementation of the mutual inhibition. CasRx acts as species Y. The Gaussia Luciferase (gLuc) with a Direct Repeat (DR) in the 3′ Untranslated Region (UTR) acts as species Z. CasRx binds to the DR and cleaves the polyA tail (AAA) of the gLuc mRNA leading to its degradation, thus achieving Y-mediated repression of Z. Following cleavage, CasRx irreversibly binds to the DR forming the gRNA-Cas binary complex, which cannot cleave additional mRNAs, thus possibly implementing the Z-mediated repression of Y. b Experimental validation of CasRx-mediated mRNA degradation. Cells were transfected with CasRx and gLuc plasmids at the indicated relative concentrations. The bar-plot reports the mean Relative Luciferase in arbitrary units (A.U.) obtained by dividing the average Luciferase A.U. value at each molar ratio by the average Luciferase A.U. value in the absence of CasRx. Error bars correspond to the standard deviation. n=4 biological replicates (white dots). c CASwitch v.1: rtTA3G and CasRx are constitutively expressed from the pCMV promoter, while gLuc with the DR is placed downstream of the pTRE3G promoter. d, e Experimental validation of CASwitch v.1 (red) and comparison with the Tet-On3G expression system (black) at the indicated concentrations of doxycycline. n=4 biological replicates. Relative Luciferase A.U. is computed as the Luciferase A.U. value of each data point divided by the average value of the Tet-On3G system at 1000 ng/mL, shown in both log-scale and linear-scale, and in (e) as fold-induction, computed as the Luciferase A.U. of each data point divided by the average value in the absence of doxycycline. f The CASwitch v.2: rtTA3G is constitutively expressed from a pCMV promoter, CasRx is driven by the pCMV/TO promoter that can be repressed by the rtTA3G, while the gLuc with the DR is placed downstream of the pTRE3G promoter. g, h Experimental validation of CASwitch v.2 (green) and comparison with the state-of-the-art Tet-On3G gene expression system (black) at the indicated concentrations of doxycycline. n=4 biological replicates. MI: Mutual Inhibition circuit topology; CIL: Coherent Inhibitory Loop circuit topology. Source data are provided as a Source Data file.

To experimentally test this hypothesis, we co-transfected HEK293T cells with CasRx along with one of three different gLuc transcript variants, as reported in Fig. 2b. These variants bear different numbers of DR motifs in their 3′ UTR: either no DR motif, one DR motif, or four DR motifs (4xDR). Our rationale was that by introducing more than one DR motif, we could sponge CasRx more effectively and thus alleviate repression of the target gLuc mRNA. Indeed, in this scenario, one gLuc-4xDR mRNA should be able to bind four CasRx molecules, rather than only one, as in the case of the gLuc-DR. Results are shown in Fig. 2b: in the absence of CasRx, all three gLuc transcripts yield the same luciferase expression level, independently of the number of DRs in their 3′UTR, thus excluding perturbations of mRNA stability caused by the DR itself. In the case of the gLuc-DR transcript (with one DR), the relative increase in the amount of co-transfected CasRx resulted in an exponential decrease in luciferase expression, with up to a 100-fold reduction in luminescence. On the contrary, for the gLuc-4xDR, the CasRx repression efficiency was strongly reduced, thus supporting the hypothesis of a DR-mediated sponging of CasRx, although we cannot exclude alternative mechanisms. Encouraged by these results, we sought to implement the MI circuit using the CasRx endoribonuclease, developing the CASwitch v.1 system, as shown in Fig. 2c.

We chose as species X the tetracycline transactivator (rtTA3G) transcription factor, a fusion protein that combines a tetracycline-responsive DNA-binding domain with a strong transcriptional activation domain13. In the presence of doxycycline, rtTA3G binds to multiple copies of the tetracycline operator (TO) sequence present in its cognate pTRE3G synthetic promoter, thereby inducing the expression of the downstream gene of interest18. In the CASwitch v.1 system, both CasRx and rtTA3G are constitutively expressed from the CMV promoter, while the gLuc harbours one DR in its 3′UTR and is placed downstream of the pTRE3G promoter, as schematically shown in Fig. 2c.

We experimentally compared the performance of the CASwitch v.1 and the Tet-On3G by transiently transfecting HEK293T cells with three plasmids: (1) the pCMV-rtTA3G, (2) the pTRE3G-gLuc-DR for the CASwitch v.1, or the pTRE3G-gLuc for the Tet-On3G, and (3) the pCMV-CasRx, at a relative molar ratio of 1:5:1. Note that for the Tet-On3G system, the gLuc has no DR in its 3′UTR, but we co-transfected the CasRx anyway to exclude potential biases caused by cellular burden. We then quantified gLuc expression by luminescence measurements at varying concentrations of doxycycline. Results are reported in Fig. 2d-e and demonstrate that the CASwitch v.1, in the absence of doxycycline, strongly reduces leaky gene expression by >1-log when compared to the Tet-On3G system (Fig. 2d); at the same time, the maximal expression upon doxycycline treatment was only slightly reduced (Fig. 2d). Notably, the reduced leakiness and the retention of high maximal expression resulted in a substantial gain in fold-induction of more than 1-log (Fig. 2e).
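For clarity, the normalisations used throughout Fig. 2 (Relative Luciferase A.U. and fold induction) amount to a few lines of arithmetic. The Python snippet below uses invented placeholder readings purely to show the calculation; the numbers are not the study's data.

```python
# Reproduce the Fig. 2 normalisations on hypothetical luminescence readings (A.U.).
import numpy as np

# Raw luciferase readings per doxycycline dose (ng/mL) for hypothetical replicates.
raw = {
    "Tet-On3G":     {0: [120, 135, 128, 118], 1000: [9800, 10150, 9900, 10020]},
    "CASwitch v.1": {0: [9, 11, 10, 12],      1000: [7600, 7800, 7450, 7700]},
}

# Relative Luciferase A.U.: divide by the mean Tet-On3G signal at 1000 ng/mL.
ref = np.mean(raw["Tet-On3G"][1000])
for system, doses in raw.items():
    rel_leak = np.mean(doses[0]) / ref
    rel_max = np.mean(doses[1000]) / ref
    # Fold induction: signal divided by the mean signal without doxycycline.
    fold = np.mean(doses[1000]) / np.mean(doses[0])
    print(f"{system}: relative leak={rel_leak:.4f}  relative max={rel_max:.2f}  "
          f"fold induction={fold:.0f}x")
```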

To further evaluate the robustness of the CASwitch v.1 system, we repeated the same experiments at higher relative concentrations of CasRx, as reported in Supplementary Fig.1. This resulted in a further suppression of leakiness, but also in a reduction of the maximal achievable expression, suggesting that controlling CasRx expression is an important design parameter to achieve the desired inducible system properties.

Overall, our results demonstrate that the constitutively expressed CasRx, combined with its cognate direct repeat (DR) in the 3′UTR of a target mRNA, can serve as a plug-and-play strategy to significantly enhance the performance of transcriptional inducible gene expression systems.

We set out to further enhance the performance of the CASwitch v.1 by specifically focusing on increasing the maximal achievable expression upon doxycycline treatment. To this end, guided by the modelling results in Fig. 1b, we sought to biologically implement the CIL circuit by replacing the constitutive pCMV promoter driving the CasRx with a modified version, named pCMV/TO, as shown in Fig. 2f. The pCMV/TO promoter has two TO sequences downstream of the TATA box of the pCMV19; hence, upon doxycycline administration, rtTA3G binds to these elements and causes steric hindrance to Pol II, resulting in a partial repression of CasRx transcription. We first confirmed the effective doxycycline-dependent inhibition of the pCMV/TO promoter (Supplementary Fig. 2). Subsequently, we verified that switching the pCMV promoter with the pCMV/TO promoter did not affect CasRx expression and its effect on its downstream target (Supplementary Fig. 3). Finally, we proved that the pCMV/TO enables doxycycline-mediated repression of CasRx expression and relief of CasRx-mediated degradation of the target mRNA (Supplementary Fig. 4).

We thus leveraged the pCMV/TO-mediated transcriptional control of the CasRx to implement the CASwitch v.2, as shown in Fig. 2f, and we experimentally compared its performance to that of the Tet-On3G system. Results are reported in Fig. 2g,h in terms of luciferase expression at varying concentrations of doxycycline. The CASwitch v.2 exhibited more than a 1-log reduction in leakiness compared to the state-of-the-art Tet-On3G system, yielding results similar to those obtained with the CASwitch v.1 (Fig. 2g); this time, however, in agreement with the in silico analysis, it was able to fully recover the maximal achievable expression to the level of the original Tet-On3G (Fig. 2g), thus leading to a very large amplification of fold induction, up to 3000-fold (Fig. 2h).

To assess the robustness of CASwitch v.2, we tested its performance against that of the state-of-the-art Tet-On3G system by: (i) changing the plasmid molar ratio among the circuit components; (ii) testing it in a different mammalian cell line; and (iii) changing the promoter that drives the rtTA3G.

Results on the performance against changes in plasmid molar ratios are presented in Supplementary Fig.5. Different amounts of plasmids can affect basal and induced levels of gene expression; hence one may presume that the Tet-On3G system performance could be improved by simply changing the plasmid ratios. Interestingly, the CASwitch v.2 (red and blue lines in Supplementary Fig.5b,c) maintains its enhanced performance over the Tet-On3G system (yellow and green lines) independently of the plasmid ratio used.

Results on the performance of the CASwitch v.2 in HeLa cells are shown in Supplementary Fig. 6a-b, where it is evident that it retains its improved performance over the Tet-On3G system, consistent with the results observed in HEK293T cells.

Results on the impact of replacing the pCMV promoter driving rtTA3G in the CASwitch v.2 system with two alternative promoters of lower expression strength (pEF1a and pPGK) are shown in Supplementary Fig. 7. In all cases tested, the CASwitch v.2 outperformed the Tet-On3G system, showing lower leakiness while maintaining maximal expression (Supplementary Fig. 7b) and thus a higher fold induction (Supplementary Fig. 7c), with a slight decrease in performance for the weakest promoter, pPGK. Notably, the use of the pEF1a yielded the highest fold induction; hence we chose to express the rtTA3G from this promoter in subsequent experimental applications of the CASwitch v.2.

Overall, these results confirm that the CASwitch v.2 represents a general strategy to endow transcriptional inducible gene expression systems with very low leakiness and unaltered maximal expression, resulting in a very large gain in fold induction.

As the CASwitch v.2 greatly enhances the fold induction levels of the Tet-On3G inducible gene expression system, we decided to deploy it to increase the performance of established transcription-based biosensors20. As a case in point, we deployed the CASwitch v.2 to improve the performance of a previously published copper biosensor20 in mammalian cells, as shown in Fig. 3a. In this biosensor, a luciferase reporter gene is placed downstream of a synthetic metal-responsive promoter (pMRE). This promoter is bound by the endogenous metal response element binding transcription factor 1 (MTF-1)21 in the presence of zinc (Zn), copper (Cu), or cadmium (Cd), driving expression of the downstream reporter gene. Like most biosensors, this configuration has several limitations, including low expression of the reporter gene and a narrow dynamic range, defined as the ratio between the maximum achievable biosensor response and its leakiness (Fig. 3b, c, blue line). To address these limitations, we modified the CASwitch v.2 system by replacing the pCMV promoter driving the expression of rtTA3G with the metal-responsive promoter pMRE, as shown in Fig. 3a, with the goal of simultaneously enhancing the copper biosensor's absolute expression and amplifying its dynamic range.

a Schematics of three alternative experimental implementations of a copper biosensor. Upon copper administration, the endogenous MTF-1 transcription factor binds its cognate synthetic promoter pMRE, which either directly drives expression of Firefly Luciferase (fLuc) (pMRE Biosensor), or drives expression of the rtTA3G transactivator, which in turn induces the expression of the fLuc through the pTRE3G in the presence of doxycycline (Tet-On3G Biosensor). In the CASwitch v.2 Biosensor, the pMRE promoter drives expression of the rtTA3G, which in turn induces expression of the fLuc harbouring a DR and inhibits expression of the CasRx through the pCMV/TO promoter. b, c Experimental validation of the three biosensors at the indicated concentrations of copper chloride in HEK293T cells. Firefly luciferase (fLuc) expression was evaluated by luminescence measurements and normalised to Renilla luciferase (rLuc) luminescence. Fold-induction in (c) is obtained by dividing each data point by the average luciferase expression in the absence of copper. n=4 biological replicates, except for CuCl2 at 25 µM, which has 3 replicates. MTF-1: metal-responsive transcription factor 1; pMRE: synthetic metal responsive promoter; DR: direct repeat sequence; rtTA3G: reverse tetracycline TransActivator 3G; pTRE3G: Tetracycline Responsive Element promoter 3G; pCMV/TO: modified CMV promoter with two Tetracycline Operator (TO) sequences. Source data are provided as a Source Data file.

To evaluate the effectiveness of the CASwitch v.2 plug-in strategy, we compared it to an additional biosensor configuration as shown in Fig.3a, where the pMRE promoter drives the expression of the rtTA3G transcription factor, which in turn drives expression of fLuc from the pTRE3G promoter. This configuration, in the presence of doxycycline, effectively implements a transcriptional amplification of the reporter gene expression, which however should not improve the dynamic range as both leaky and maximal gene expression should increase.

We evaluated the expression of fLuc from the three configurations at increasing concentrations of copper and a fixed concentration of doxycycline. Results are reported in Fig. 3b,c: the standard copper biosensor exhibited considerable leakiness and low levels of reporter gene expression even at high copper concentrations, resulting in a low signal-to-noise ratio with a maximum induction of only 10-fold. The second configuration with the rtTA3G resulted in a significant increase in luciferase expression levels at all copper concentrations; however, as expected, it did not lead to dynamic range amplification, as it also increased the leaky reporter expression in the absence of copper. Conversely, the CASwitch v.2 configuration effectively reduced leakiness in the absence of copper, while achieving higher luciferase expression than the standard copper biosensor (Fig. 3b). This resulted in a large increase in the biosensor's signal-to-noise ratio, with a maximum induction of up to 100-fold, hence 1-log more than the other two configurations (Fig. 3c). Of note, the CASwitch v.2 yielded higher fold-induction levels at a four times lower copper concentration, thus also enhancing its sensitivity. Taken together, these findings support the application of the CASwitch v.2 system to improve the efficacy of existing transcription-based biosensors limited by a narrow dynamic range. The expansion of the biosensor's dynamic range through the integration of CASwitch v.2 will yield a more sensitive and reliable biosensor, capable of detecting lower concentrations of the analyte with increased confidence.

We then investigated the application of the CASwitch v.2 system to the tight control of toxic gene expression. This feature is very useful for industrial applications such as recombinant protein production, where the unintended accumulation of the protein of interest due to leakiness impairs host cell viability and lowers production yields (e.g., viral proteins). As a proof-of-principle, we used the CASwitch v.2 system to express the Herpes Simplex Virus Thymidine Kinase-1 (HSV-TK), which exerts cytotoxic effects in the presence of nucleotide analogues such as ganciclovir (GCV)22. To this end, as shown in Fig. 4a, we added a Direct Repeat in the 3′UTR of the HSV-TK gene and placed it downstream of the pTRE3G promoter in the CASwitch v.2 circuit. We then evaluated cell viability in the presence of ganciclovir, either with or without doxycycline, and compared it to that obtained with the state-of-the-art Tet-On3G gene expression system. To account for cytotoxic effects associated with transfection, we co-transfected cells with a non-coding plasmid in the Mock condition, to which all other cell viability measurements were normalized. Furthermore, constitutive expression of HSV-TK provided a reference for the maximum achievable toxicity. Results are reported in Fig. 4b, c and show no cytotoxic effects for the CASwitch v.2 system in the absence of doxycycline. In contrast, the Tet-On3G system exhibited high cell toxicity, resulting in ~50% cell death in the absence of doxycycline. These findings confirm that the CASwitch v.2 system has very low leakiness, highlighting its efficacy in controlling toxic gene expression.

a Three alternative constructs to express the cytotoxic HSV-TK gene. pCMV-HSV-TK: positive control, with constitutive expression of HSV-TK. Tet-On3G: the constitutively expressed rtTA3G binds to pTRE3G in the presence of doxycycline and induces the cytotoxic HSV-TK gene harbouring a DR in its 3′UTR. CASwitch v.2: the same as the Tet-On3G but for the presence of the CasRx downstream of the pCMV/TO. b Viability of HEK293T cells transfected with the indicated constructs and grown in the presence of ganciclovir. Mock transfected cells represent the negative control. Cell viability is reported as a percentage of the viability of mock transfected cells in the absence of doxycycline. The error bars represent the mean and standard deviation of biological replicates across two independent experiments (n=9). Statistical analysis with ANOVA (one-tailed) after determining equal or unequal variances by D'Agostino & Pearson test (****P-value < 0.0001). c Crystal violet staining of transfected HEK293T cells to highlight viable cells. d Plasmids required for AAV production. Two alternative experimental implementations for inducible expression of the Helper genes, using either the Tet-On3G system or the CASwitch v.2, are also shown. e Assay for testing AAV vector inducible production yield by means of viral transduction. Created with Biorender. f, g Flow cytometry of cells transduced with cell lysates of HEK293T cells transfected with the indicated configurations. At least 10,000 cells were analysed for each point. The bar-plot in (g) reports, for each experimental condition, the mean value of the percentage of transduced cells across biological replicates from two independent experiments (n=6), with error bars corresponding to the standard deviation. Statistical analysis by ANOVA (one-tailed), after determining equal or unequal variances by D'Agostino & Pearson test (****P-value < 0.0001). HSV-TK: Herpes Simplex Virus Thymidine Kinase; AAV: Adeno-Associated Virus; E2A(DBP): Early 2A DNA Binding Protein gene; E4(Orf6): Early 4 Open reading frame 6 gene; VaRNA-I: Viral associated RNA-I; Rep: AAV-2 Replication genes; Cap: AAV-2 Capsid genes. Source data are provided as a Source Data file.

Adeno-Associated Virus (AAV) vectors have emerged as highly promising tools for in-vivo gene therapy in clinical applications23. However, current large-scale industrial bioproduction faces challenges in terms of efficiency and scalability, as it mainly relies on transient transfection of HEK293 cell lines24,25. Attempts to develop more scalable systems, such as AAV producer cell lines with stable integration of inducible gene systems to control the expression of viral genes, have been hampered by the toxicity associated with leaky expression of viral genes26,27,28,29,30. In this context, the CASwitch v.2 expression system may offer a reliable solution, as it can significantly reduce leakiness while maintaining high levels of maximal achievable expression.

As shown in Fig. 4d, transient triple transfection manufacturing of AAV vectors requires three plasmids: (i) a Transgene plasmid encoding the desired transcriptional unit to be packaged, (ii) a Packaging plasmid, and (iii) a Helper plasmid. The Packaging plasmid in our implementation carries the wild-type AAV2 Rep and Cap genes, while the Helper plasmid contains the E2A, E4, and VaRNA-I genes derived from Human Adenovirus 5 (HAdV-5)31. As the HAdV-5 genes are polycistronic and expressed from distinct promoters, we first determined the minimal set of viral genes necessary for AAV vector production. Previous studies have shown that the E2A(DBP) and E4(Orf6) coding sequences, along with the VaRNA-I ncRNA, are essential for AAV vector production32. Therefore, we designed constructs expressing E2A(DBP) and E4(Orf6) as a single transcript by means of two alternative strategies: the EMCV-IRES33 or the P2A ribosome-skipping sequence34. By interchanging the positions of E2A(DBP) and E4(Orf6) in the bicistronic transcriptional units, we generated four different Adenovirus Helper plasmids (pAH1-4), about half the size of the original plasmid, as reported in Supplementary Fig. 8a. We compared these constructs by quantifying AAV production yield through quantitative PCR (qPCR). All Helper plasmids led to AAV production, albeit to a lesser extent than the full-length Helper plasmid. Among these, the pAH-3 plasmid (pCMV-E2A[DBP]-IRES-E4[Orf6]) exhibited the highest yields, as shown in Supplementary Fig. 8b. We attributed the lower production yield to the absence of the VaRNA-I ncRNA. Indeed, co-transfection of VaRNA-I along with pAH-3 restored production efficacy (Supplementary Fig. 9).

To achieve inducible expression of the Helper genes using the CASwitch v.2 system, we introduced the direct repeat (DR) element into the 3′ untranslated region (UTR) of the E2A(DBP)-IRES-E4(Orf6) cassette and placed it downstream of pTRE3G (p3G-AH3-DR), as depicted in Fig. 4d. We then qualitatively assessed the capability of the CASwitch v.2 system to control expression of the Helper genes for inducible AAV vector production in the context of transient triple transfection manufacturing, and compared it to that of the state-of-the-art Tet-On3G system. Specifically, we employed EGFP as the transgene for generating AAV vectors, with fluorescence quantification in transduced cells serving as a qualitative, indirect measure of production yields (Fig. 4e). We assessed production yields both in the presence and absence of doxycycline, providing a qualitative evaluation of the Tet-On3G and the CASwitch v.2 systems' performance in AAV vector production, as reported in Fig. 4e-g. Infection results confirmed that when Helper gene expression was controlled by the Tet-On3G system, viral production occurred even in the absence of doxycycline, because of leaky expression of the viral Helper genes. Conversely, when the Helper genes were controlled by the CASwitch v.2 system, there was a significant reduction in AAV production in the absence of doxycycline, as measured by the percentage of infected cells, while high production yields were maintained in its presence. Although viral production was not completely shut off, this proof-of-principle experiment shows that, with proper fine-tuning, the CASwitch v.2 system could represent an effective solution to prevent unintended toxic viral gene expression, thus paving the way for the development of inducible AAV producer cell lines.


Cloud engineering could be a "painkiller" for global warming, study led by University of Birmingham finds – Yourweather.co.uk

Spraying particles into the marine atmosphere can increase cloud cover and have a cooling effect.

Kerry Taylor-Smith, 19/04/2024

Marine cloud brightening (MCB), also known as cloud engineering, involves spraying tiny particles or aerosols into the marine atmosphere, where they mix with clouds. These aerosols help to increase the cloud cover and, consequently, the amount of sunlight clouds can reflect, having an overall cooling effect.

The climate intervention could be a "painkiller", rather than a solution, for global warming, say the scientists, who found that the aerosol-driven increase in cloud cover accounted for 69-90% of the cooling effect, much more than previously thought. Their study has been published in Nature Geoscience.

MCB has garnered much attention in recent years; it could help offset the effects of anthropogenic global warming and buy some time while the global economy decarbonises. Although previous models estimating the cooling effects of MCB focussed on the ability of aerosol injection to produce a brightening effect on the cloud, just how MCB works to create a cooling effect, and how clouds respond to aerosols, is poorly understood.

Researchers, led by the University of Birmingham, investigated this phenomenon by creating a "natural experiment" using aerosol injection from the effusive eruption of Kilauea volcano in Hawaii to study the interactions between these natural aerosols, clouds and climate.

They utilised machine learning and historic satellite and meteorological data to create a predictor to show how the cloud would behave during periods when the volcano was inactive, which in turn helped them to clearly identify the direct impacts of volcanic aerosols on the clouds.
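Conceptually, this counterfactual approach can be sketched in a few lines. In the Python sketch below, the data file, feature names, and choice of regressor are hypothetical placeholders; the study's actual predictors and machine-learning model are not specified here.

```python
# Counterfactual "natural experiment": learn cloud cover from meteorology alone during
# volcanically quiet periods, then compare predictions with observations during eruptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("hawaii_cloud_met.csv")  # hypothetical satellite + meteorological archive
features = ["humidity", "sst", "wind_speed", "lower_tropospheric_stability"]

quiet = df[df["volcano_active"] == 0]
active = df[df["volcano_active"] == 1]

# Predictor of the cloud fraction expected from meteorology alone (no volcanic aerosol).
model = GradientBoostingRegressor().fit(quiet[features], quiet["cloud_fraction"])

# Observed minus counterfactual cloud fraction during eruption periods.
expected = model.predict(active[features])
anomaly = active["cloud_fraction"].to_numpy() - expected
print("mean cloud-fraction increase attributable to volcanic aerosols:", anomaly.mean())
```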

They found cloud cover increased by up to 50% during the periods of volcanic activity, producing a cooling effect of up to -10 W m⁻² regionally (global heating and cooling is measured in watts per square metre, with a negative figure indicating cooling).

"Our findings show that marine cloud brightening could be more effective as a climate intervention than climate models have suggested previously," says lead author Dr Ying Chen of the University of Birmingham. "Of course, while it could be useful, MCB does not address the underlying causes of global warming from greenhouse gases produced by human activity."

"It should therefore be regarded as a 'painkiller', rather than a solution, and we must continue to improve fundamental understanding of aerosols' impacts on clouds, further research on global impacts and risks of MCB, and search for ways to decarbonise human activities," Chen concludes.

"This work adds to the growing evidence that current climate models may underestimate the impact of aerosols on clouds as they don't seem to have a strong enough response," says Professor Jim Haywood, from the University of Exeter and the Met Office Hadley Centre. "More aerosols seem to result in a larger cloud fraction, which cools the climate more than the models predict."

Haywood says marine cloud brightening could be more effective than previously thought: "However, there is still so much that we don't understand about aerosol-cloud interactions, meaning that further investigations are imperative."

Experiments using the technique are already underway in Australia in an attempt to reduce bleaching on the Great Barrier Reef, while a team from the University of Washington recently conducted its first outdoor aerosol experiment from a decommissioned aircraft carrier in Alameda, California.

News reference

Chen, Y., Haywood, J., Wang, Y. et al. (2024) Substantial cooling effect from aerosol-induced increase in tropical marine cloud cover. Nature Geoscience.


Alleged cryptojacker arrested for money laundering, $3.5 million in cloud service fraud ultimately mined less than $1 … – Tom’s Hardware

The U.S. Attorney's office announced the arrest and indictment of crypto miner Charles O. Parks, also known as 'CP3O,' who allegedly defrauded two well-known cloud computing providers, using their servers to mine cryptocurrency worth almost $1 million while running up a bill totaling nearly US$3.5 million. The office describes his scheme as large-scale 'cryptojacking'.

If convicted, Parks could face up to 20 years in prison for wire fraud and money laundering, plus 10 more years for unlawful monetary transactions. The investigation was conducted by the U.S. Attorney's Office for the Eastern District of New York, the FBI, and the New York Police Department.

According to the report, Parks created two corporations, 'Multimillionaire LLC' and 'CP3O LLC', to open multiple accounts with the two cloud computing providers, whose processing power and storage he used to mine Ethereum (ETH), Litecoin (LTC) and Monero (XMR). Parks did this by tricking the providers into approving privileges and benefits for his accounts. He then sold these cryptocurrencies and laundered the money through exchanges, an NFT marketplace, an online payment provider, and banks to cover up ties to himself.

Instead of paying the mounting bills from the respective cloud servers, he spent the laundered money on luxury items, cars, jewelry, and travel expenses. It is unknown whether the respective companies will eventually be able to recover funds from Parks. If not, it will be an expensive lesson in confirming account details before granting special privileges and benefits that could allow computing power to be exploited by crypto miners.

Despite his use of multiple methods to launder his cryptocurrency earnings while leaving bills unpaid, the investigative authorities traced the activity back to Parks, arresting him on April 13th.

Cryptojacking, as the word implies, means hijacking systems to mine cryptocurrencies. Earlier forms of cryptojacking involved infecting users' systems and using their CPUs and GPUs to mine cryptocurrencies unbeknownst to the victims. There were also Chrome extensions which, unknown to their users, used the hijacked computers to mine cryptocurrency without the owners' knowledge. Google eventually delisted these extensions.

Despite a few companies' efforts to deter cryptojacking, some individuals, such as Parks, manage to take advantage. Though the names of the server providers aren't given in the official account, other sources indicate that the servers are located in Redmond and Seattle, both in the State of Washington and home to Microsoft's and Amazon's cloud computing operations.

Cryptojacking has been a nuisance throughout the years, whether it targets Tesla's cloud computers or personal NAS. However, cases like this indicate the relevant authorities will work with multiple departments to investigate, arrest, and indict the involved parties.



The 50 Coolest Software-Defined Storage Vendors: The 2024 Storage 100 – CRN

As part of CRN's 2024 Storage 100, here are 50 vendors bringing software capabilities, services and cloud connectivity to storage technology.

The base of nearly any storage system is hardware. But look closely at the hardware, and a distinct pattern appears: The hardware is likely to be an industry-standard server with industry-standard processors, industry-standard storage media and industry-standard memory.

The real value of storage systems, with few exceptions, lies in their software, not their hardware. Software is where the services provided by storage systems (the ability to store, manage and protect primary and secondary data) are delivered. For this reason, storage system vendors typically have much larger software engineering teams than they do teams focused on hardware. Software teams define the value of storage.

For this reason, the list of the 50 coolest software-defined storage vendors includes companies ranging from the smallest providers to the likes of NetApp and Dell. They are well-recognized as hardware vendors. However, the list also includes providers of cloud storage where the end customers may not even touch hardware. All these companies are united by the fact that software does indeed define the storage.


Advanced Computer & Network Corp.

Gene Leyzarovich

President, CEO

AC&NC provides a wide range of data storage, data protection and data management for NAS, SAN, cloud and hyper-converged infrastructure deployments, as well as storage expansion, servers, switches and adapters to build complete systems. The company also pairs its all-flash and hybrid systems with third-party offerings such as CyberFortress to add data protection and security.

Amax Information Technologies

Jean Shih

President

Amax is a vertically integrated infrastructure manufacturer with a focus on server, workstation and storage technologies under its own brand and on an OEM basis. Amax, which in late 2023 held its IPO on the Taiwan Stock Exchange, last year expanded its capabilities with new liquid and immersion cooling technology and the latest Nvidia GPUs.

Broadcom

Hock Tan

President, CEO

As if Broadcom were not already a big enough developer of storage technologies (thanks to multiple acquisitions, it produces a wide range of storage adapters, controllers, ICs, and storage networking equipment), the company in 2023 became a leader in software-defined storage technology with its massive acquisition of VMware from former parent company Dell Technologies.

Cloudian

Michael Tso

Co-Founder, CEO

Cloudian specializes in the development of Amazon S3-compatible object storage systems aimed at managing the complicated unstructured data requirements of a wide range of businesses. It does so via its HyperStore object storage platform, as well as its HyperStore File Services for capacity-intensive, less frequently used files. The company also offers observability and analytics and load-balancing technologies.

Croit

Martin Verges

CEO

Croit develops storage software appliances based on the open-source Red Hat Ceph technology to provide unified, software-defined, scale-out storage that works with block, file and object formats at a low cost per Gigabyte. The company late last year also started providing storage software appliances using the Intel DAOS technology to provide low-latency storage for high-performance environments.

Ctera Networks

Oded Nagel

CEO

Ctera provides technology for secure file services via a platform that manages enterprises' file storage, control and governance requirements. The Ctera Enterprise File Services Platform unifies endpoint, branch office and cloud file services via a cloud-native global file system to enable multi-cloud data management with full control over data residency, security and edge-to-cloud acceleration.

DataCore

Dave Zabrowski

CEO

DataCore is one of the pioneers in software-defined storage, offering its storage technology in software-only form for use with customers' own hardware platforms. The company most recently extended its software-defined object storage capabilities to the edge via its Agile Containerized Deployments and introduced SANsymphony Adaptive Data Placement for automatic tiering.

DDN

Alex Bouzari

Chairman, Co-Founder, CEO

DDN, the world's largest privately held storage vendor, develops a comprehensive portfolio of storage systems targeting data-intensive workflows across on-premises and cloud infrastructures for use cases in oil and gas, supercomputing, AI, financial services, manufacturing, telecom and more. DDN in November unveiled DDN Infinia, which provides multitenancy, containerization and performance aimed primarily at accelerated computing for generative AI.

Dell Technologies

Michael Dell

Founder, Chairman, CEO

Dell remains the world's largest storage vendor thanks to multiple acquisitions, but it is not resting on its laurels. The company is plunging head-first into as-a-service and subscription storage via its Dell Apex portfolio with elastic file, block and backup storage services. The company's PowerScale storage systems in November were validated on Nvidia DGX SuperPod for AI storage.

Hewlett Packard Enterprise

Antonio Neri

President, CEO

HPE has been the second-largest storage vendor for years, primarily on the strength of its HPE Alletra portfolio. Alletra is the center of its HPE GreenLake cloud and as-a-service offerings, including moves in early 2024 to introduce HPE GreenLake services for block storage and file storage. The company in November also unveiled a collaboration with Nvidia for GenAI.

Hitachi Vantara

Sheila Rohra

CEO

Hitachi Vantara, which in 2017 was formed by combining Hitachi Data Systems' storage and data center infrastructure, Hitachi Insight's IoT, and Pentaho's big data businesses, reorganized again last November to focus on its block, file, object, mainframe, and software-defined storage and hybrid cloud-centric data infrastructure services portfolios. The company also has a leading storage virtualization technology.

Huawei

Zhengfei Ren

Director, CEO

China-based Huawei remains one of the world's largest producers of storage systems despite having virtually no U.S. market share due to concerns about alleged ties to the Chinese government. In late 2023, Huawei introduced two new all-flash arrays: the OceanStor Pacific 9920 scale-out array with up to 768-TB capacity in 2U, and the OceanStor Dorado 2100 all-flash active-active NAS system.

IBM

Arvind Krishna

Chairman, CEO

IBM develops a wide range of storage hardware, software and software-defined storage aimed at general storage, AI, hybrid cloud and data resilience requirements, much of which, starting last year, has been integrated with open-source Ceph technologies from IBM's Red Hat acquisition. The company has most recently added several AI-focused storage systems to its line card.

Icedrive

James Bressington

Founder, CEO

Icedrive is a developer of cloud storage designed to look as if it were attached to a PC. The company provides web-based, desktop and mobile apps for sharing and collaborating on a customer's stored data. Its encrypted cloud storage uses the Twofish algorithm, which the company says is more secure than AES.

Impossible Cloud

Kai Wawrzinek

Co-Founder, CEO

Impossible Cloud develops a decentralized cloud storage architecture built on a global network of enterprise-grade data centers. The company says that its architecture provides secure storage for big data, data backups and archives at a lower cost than hyperscalers can offer. With its S3 API compatibility, it integrates with a wide range of cloud storage applications.

Infinidat

Phil Bullinger

CEO

Infinidat develops enterprise-grade storage technology for AIOps and DevOps, data storage, cyber resiliency, data protection and recovery, business continuity and sovereign cloud storage. All of the company's storage capabilities are based on the same fundamental technology foundation that offers high availability, high performance, and low total cost of ownership at multi-petabyte scale.

Ionir

Jacob Cherian

CEO

Ionir develops a Kubernetes-native storage and data management platform that the company says adapts to technology changes and evolving customer requirements without the need for forklift upgrades. Its technology lets businesses run any application wherever and whenever needed, without having to worry about whether the data is there or not.

iXsystems

Mike Lauth

Co-Founder, CEO

iXsystems develops the open-source storage technology behind TrueNAS, which with over 15 million downloads is the world's most deployed NAS technology, the company says. The TrueNAS platform is based on the ZFS file system, which provides scale-up and scale-out unified storage. iXsystems is a profitable, self-funded company with no outside investors.

Lenovo

Yuanqing Yang

Chairman, CEO


Civo introduces Cloud GPU powered by Nvidia for high-demand workloads – IT Brief Australia

Cloud computing company Civo has unveiled its latest offering, the Cloud GPU. This comprises a range of services powered by Nvidia GPUs that support workloads such as machine learning, large language models, and graphics rendering.

The Cloud GPU is intended to support high-demand workloads and is immediately available from the Civo dashboard. Users will be able to harness Nvidia's cutting-edge technology with access to NVIDIA A100 40GB, NVIDIA A100 80GB and NVIDIA L40S GPUs for both computing and Kubernetes. Additionally, NVIDIA H100s are now available for reservation.

Beyond leveraging the raw computational power of the GPUs, which includes over 312 TFLOPS of FP16 performance, 1,248 Tensor cores and 80GB of HBM2e memory, users will enjoy the ease of integration. With Civo's plug-and-play adaptability, these powerful processing units can be seamlessly integrated into existing infrastructures.

Civo has also focussed on sustainability in their new product offering. Through a partnership with Deep Green, users may opt to run their cloud GPU workloads on Deep Green's servers. These servers utilise the excess heat generated by data centres, distributing this free heat to various community initiatives like heating swimming pools. Civo's commitment to sustainability extends to its refusal to charge premium pricing for sustainable solutions.

Deep Green operates by immersing servers in mineral oil to capture the heat that is generated during operation. This heat is then transferred through a heat exchanger to provide hot water. In Exmouth, Devon, when deployed on-site, these servers share the reclaimed heat with a public swimming pool, thereby reducing the establishment's energy bill and dependence on fossil fuels.

Speaking on the new offering, Mark Boost, CEO of Civo, expressed enthusiasm for the integration of advanced technology into the Civo stack. He said, "With gold-standard NVIDIA GPUs, we are giving our users the high-performance tools they need to power today's demanding cloud workloads, whether training the next LLM or rendering a complex 3D model."

Boost highlighted the company's commitment to fair pricing for these services, acknowledging challenging economic conditions and reinforcing his conviction that organisations should not be barred from the AI revolution due to prohibitive infrastructure costs.

The CEO also underscored the importance of sustainability in Civo's vision: "Sustainability is at the heart of Civo's future as a cloud provider. By reducing our emissions and making it easy for our customers to do the same, we're hoping to take a firm step towards a more sustainable future. Cloud doesn't have to come at a cost to the planet. By funding innovative solutions, we can build a cloud-native landscape that's suitable for the future of the planet."

Civo will discuss its cloud GPU offering and other services at an upcoming event in Tampa, Florida, called Civo Navigate Local.


Apple to Introduce On-Device AI with iOS 18, Bypassing Cloud Servers – elblog.pl

Apple is poised to reshape the smartphone experience with iOS 18, which is expected to unveil a series of groundbreaking features dedicated to on-device artificial intelligence. As reported by expert journalist Mark Gurman, the upcoming software update is set against the backdrop of the WWDC24 event, where the tech giant traditionally announces its latest operating system iterations.

The narrative of Apple's unwavering commitment to user privacy and device security takes a new leap forward with iOS 18. A highlight of the new update is the introduction of AI capabilities that are processed locally on iPhones, without dependency on external cloud servers. This strategic move ensures enhanced protection for users' sensitive personal data and positions Apple's AI as a distinct qualifier within the industry.

Insiders anticipate that the new generative AI functions will enrich a plethora of applications such as the iPhone Spotlight search tool, Siri voice assistant, Safari web browser, and various native apps like Shortcuts, Apple Music, Messages, Health, Numbers, Pages, and Keynote. Siri is expected to gain more advanced cognition to handle complex queries, while Messages might feature predictive text enhancements.

While all signs suggest that Apple is not aiming for a direct counterpart to the likes of ChatGPT within iOS 18, speculation around a potential built-in chatbot based on proprietary or third-party collaboration cannot be entirely ruled out. The tech titan has reportedly been in discussions with various AI leaders, including Google, OpenAI, and Baidu, hinting at a future foray into cloud-based AI functionalities as well. Supply chain analysts such as Ming-Chi Kuo and Jeff Pu have hinted at Apple's substantial investments in specialized AI server hardware, signaling that Apple is gearing up to be a formidable contender in the generative AI space.

Current Market Trends: The integration of on-device AI in mobile operating systems aligns with the current trend toward enhancing user privacy and data security. Major tech companies are increasingly focusing on processing sensitive information locally to address privacy concerns, regulatory pressures, and growing user demand for secure data handling. On-device AI also reflects the trend of providing users with instant and reliable services that do not require a persistent internet connection.

Companies like Google and Apple have been making significant strides in edge computing, where computation is performed on local devices rather than in a centralized cloud-based infrastructure. This shift is driven by advancements in hardware, such as the development of more powerful and energy-efficient processors capable of handling complex AI tasks directly on smartphones.

Forecasts: As AI technology continues to rapidly evolve, it is forecasted that more sophisticated AI and machine learning capabilities will become standard features in mobile operating systems. These advancements will likely foster new applications in areas such as augmented reality (AR), real-time language translation, and personalized recommendations. Additionally, the market may see an increasing number of partnerships between AI technology providers and smartphone manufacturers.

Key Challenges and Controversies: One of the key challenges associated with this shift towards on-device AI is maintaining the delicate balance between user privacy and the functionality of AI services. Although data may be secure on the device, the potential limitations in computational power compared to cloud servers might affect the performance and scope of AI services.

Another challenge is Apple's ability to continue to innovate in AI without compromising its stance on privacy. As competitors may offer more advanced AI features through cloud-based services, Apple needs to ensure its on-device solutions can compete effectively in terms of ability and user experience.

Concerns about potential biases in AI algorithms and the ethical implications of AI decision-making also persist. As AI becomes more ingrained in consumers' daily lives, controversies regarding these biases and the transparency of AI systems will likely intensify.

Advantages and Disadvantages: The advantages of implementing on-device AI include:

Enhanced Privacy: User data is processed locally, reducing the risk of unauthorized access during transmission to and from cloud servers.
Reduced Latency: AI tasks processed on the device can provide quicker responses and a smoother user experience.
Offline Availability: On-device processing allows AI features to be available without an internet connection.

The disadvantages, on the other hand, might include:

Limited Processing Power: Even though smartphones are powerful, they lack the processing capacity of cloud servers, which could limit AI capabilities.
Energy Consumption: Intensive AI tasks may lead to increased battery drain, affecting the device's overall battery life.



Top 5 Amazing Ways AI Coexist With DeFi’s Core Values: Centralization Vs. Decentralization – Blockchain Magazine

The future of DeFi lies in a symbiotic relationship between AI innovation and unwavering commitment to decentralization:

The co-existence of AI and DeFi holds immense promise for the future of finance. By addressing the challenges and fostering a collaborative approach, we can unlock a new era of intelligent DeFi applications that are secure, efficient, and empowering for users. Here's what the future might hold:

The journey towards a future powered by AI-driven DeFi has only just begun. By harnessing the strengths of both technologies while addressing the challenges, we can create a financial system that is not only innovative but also empowers individuals and fosters a more equitable and secure financial landscape.

The question of AI and DeFi's coexistence is not a matter of a simple yes or no. It's a complex dance between innovation, security, and the core principles of decentralization. While the potential benefits of AI in DeFi are undeniable, navigating the centralization paradox requires a nuanced approach.

The Role of the Community

The future of AI-powered DeFi hinges on the active participation of the DeFi community. Fostering open-source development of AI models and promoting transparency in their operation will be crucial. Additionally, advocating for privacy-preserving solutions like federated learning will be essential to ensure users retain control over their data.

Regulation and Governance

Regulation in the DeFi space remains a contentious topic. However, light-touch regulations that promote innovation while mitigating systemic risks could be necessary. Decentralized Autonomous Organizations (DAOs) can play a vital role in establishing fair governance models for AI-powered DeFi applications. These DAOs would be responsible for overseeing the development and implementation of AI models, ensuring they align with the core values of DeFi.

The Human Element

While AI promises to automate many aspects of DeFi, the human element will remain paramount. DeFi communities will need to develop robust educational resources to empower users with the knowledge to navigate AI-driven DeFi applications effectively. Additionally, human expertise will be crucial in areas like ethical considerations, bias detection within AI algorithms, and ensuring responsible development of AI for the benefit of the entire DeFi ecosystem.

The convergence of AI and DeFi presents a unique opportunity to reshape the financial landscape. By embracing innovation while safeguarding the core principles of decentralization, we can create a financial system that is not only secure and efficient but also fosters greater financial inclusion and empowers individuals to take control of their financial future. This future will likely involve a dynamic interplay between human ingenuity and the power of AI, constantly evolving to meet the needs of a global and ever-changing financial landscape.

The journey towards this future will require collaboration between developers, researchers, regulators, and the DeFi community at large. By working together, we can unlock the immense potential of AI-driven DeFi and usher in a new era of financial evolution.


Arbitrum Leaps Towards Decentralization With Launch of Fraud Proofs on Testnet – West Island Blog

Taking a significant stride toward decentralization, Arbitrum, renowned as the largest Ethereum layer-2 scaling solution in terms of total value locked (TVL), recently announced that it has deployed the permissionless version of its fraud proofs, referred to as Bounded Liquidity Delay (BOLD), to testnet. The announcement was made on the 16th of April by Offchain Labs, the developer of Arbitrum.

Ethereum layer-2 solutions have progressively risen in prominence over recent years. As of April 17, data from L2Beat revealed that these platforms maintain control over a staggering $37 billion worth of assets. Such platforms, offering inexpensive transaction options, have enjoyed substantial uptake from protocol developers and users alike. Popular alternatives like Arbitrum, Optimism, Base, etc., are amongst the favorites.

However, the prevalent popularity aside, these platforms bear considerable issues. Most critically, the development of most of their fraud proofs is still underway. This contrasts with common user transactions on conventional chains, where each transaction must undergo thorough confirmation via a network of miners or validators, a process dictated by the chain's consensus mechanism.

Layer-2 platforms work differently: they move transaction execution off-chain, so the validity of queued transactions cannot be directly verified until they are batched and posted on-chain for confirmation.

Herein lies the value of fraud proofs, such as those developed by Arbitrum and other optimistic rollups. They address a pressing problem for layer-2 solutions: ensuring the validity of transactions that are processed off-chain. Once BOLD is integrated into the Arbitrum ecosystem, it will act as a safety net, safeguarding transaction integrity while preserving efficient off-chain processing.
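To make the mechanism concrete, the sketch below models the core of an optimistic fraud proof: an honest watcher re-executes a posted batch and opens a challenge if its result disagrees with the claimed state root. This is a simplified, illustrative TypeScript sketch, not Arbitrum's actual BOLD implementation; the helper names and data shapes are assumptions made for the example.

```typescript
// Conceptual sketch of an optimistic-rollup dispute flow; this is NOT Arbitrum's
// actual BOLD implementation, and the helper names are illustrative only.

interface Batch {
  transactions: string[];   // serialized L2 transactions
  claimedStateRoot: string; // state root the sequencer asserts after executing the batch
}

// Toy stand-in for deterministic re-execution: any honest party running the same
// state-transition function over the same inputs must arrive at the same root.
function recomputeStateRoot(prevRoot: string, batch: Batch): string {
  let acc = prevRoot;
  for (const tx of batch.transactions) {
    acc = `${acc}|${tx}`; // placeholder for applying the transaction to the state
  }
  return acc;
}

// Placeholder for submitting an on-chain challenge (e.g., an interactive bisection game).
function openChallenge(batchIndex: number, expectedRoot: string): void {
  console.log(`challenging batch ${batchIndex}; honest root is ${expectedRoot}`);
}

// Any permissionless watcher can run this: accept the claim silently, or dispute it.
function watchBatch(batchIndex: number, prevRoot: string, batch: Batch): void {
  const honestRoot = recomputeStateRoot(prevRoot, batch);
  if (honestRoot !== batch.claimedStateRoot) {
    openChallenge(batchIndex, honestRoot);
  }
  // If the roots match, the claim simply finalizes after the challenge window.
}

// Example: a batch whose claimed root does not match honest re-execution gets disputed.
watchBatch(42, "root0", { transactions: ["txA", "txB"], claimedStateRoot: "bogus" });
```

The permissionless part is the point: in this model anyone can run the watcher, so security does not depend on a small whitelisted validator set.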

In line with fundamental blockchain principles, BOLD is designed for decentralization. The community will operate the nodes, a departure from Arbitrum's current transaction verification, which is controlled by a small group of validators.

Deploying BOLD on testnet signals Arbitrum's intent to open its secure transaction channels to public participation in maintaining network security and validating Ethereum withdrawals. This step could contribute substantially to a more decentralized ecosystem while strengthening the Arbitrum platform.

This is a significant milestone, making Arbitrum the first Ethereum layer-2 to launch its fraud proofs on testnet. The firm's announcement also prompted Ryan Watts of Optimism to tell the community about plans to establish a decentralized fraud-proof system for the second-largest layer-2 by TVL.

Despite these advancements, ARB's price is holding steady but remains under discernible pressure. The token has fallen 50% from its March 2024 highs at spot rates and continues to face heavy selling. However, if buyers manage to reverse the selling seen on April 12 and 13, the token could stage a strong recovery, possibly toward the $1.50 mark.

See the rest here:

Arbitrum Leaps Towards Decentralization With Launch of Fraud Proofs on Testnet - West Island Blog

Read More..

DeFi Development: Top 10 Intriguing Security Considerations That Can’t Be Ignored – Blockchain Magazine

April 19, 2024 by Diana Ambolis

The financial sector is undergoing a significant transformation fueled by Decentralized Finance (DeFi). This innovative ecosystem empowers individuals to participate in financial activities like borrowing, lending, trading, and asset management without relying on traditional intermediaries. As DeFi continues to gain traction, the demand for skilled developers to build secure and robust DeFi applications (dApps) is skyrocketing. This guide delves into the world of DeFi development, equipping you with the knowledge and resources to navigate this exciting yet complex landscape.

Understanding DeFi: The Core Principles

DeFi applications operate on blockchain networks, fostering transparency, security, and immutability. Here are some fundamental concepts underpinning DeFi:

Decentralization: DeFi eliminates the need for central authorities like banks or financial institutions. Transactions are facilitated through smart contracts, self-executing code stored on the blockchain, ensuring trust and automation within the system.

Permissionless Finance: DeFi promotes open access to financial services. Anyone with an internet connection and a crypto wallet can participate in DeFi protocols, regardless of location or credit history.

Tokenization: DeFi heavily relies on crypto tokens. These digital assets represent tradable units of value within the DeFi ecosystem, facilitating various financial activities.
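To make the smart-contract and tokenization ideas above concrete, here is a minimal sketch of a token ledger whose transfer rule is enforced by code rather than by an intermediary. It is written in TypeScript purely as a conceptual model; real DeFi tokens are implemented on-chain, and none of the names below refer to an actual protocol.

```typescript
// Minimal conceptual model of a token ledger enforced purely by code.
// Real DeFi tokens live on-chain (e.g., as Solidity contracts); this sketch only
// illustrates the idea that the transfer rule, not an intermediary, guards the funds.

class TokenLedger {
  private balances = new Map<string, bigint>();

  constructor(initialHolder: string, supply: bigint) {
    this.balances.set(initialHolder, supply);
  }

  balanceOf(account: string): bigint {
    return this.balances.get(account) ?? 0n;
  }

  // The "contract logic": a transfer succeeds only if the sender has the funds.
  transfer(from: string, to: string, amount: bigint): void {
    const fromBalance = this.balanceOf(from);
    if (amount <= 0n || fromBalance < amount) {
      throw new Error("transfer rejected by contract rules");
    }
    this.balances.set(from, fromBalance - amount);
    this.balances.set(to, this.balanceOf(to) + amount);
  }
}

// Usage: anyone can hold and move tokens; no bank approves the transaction.
const ledger = new TokenLedger("alice", 1_000n);
ledger.transfer("alice", "bob", 250n);
console.log(ledger.balanceOf("bob")); // 250n
```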

The realm of Decentralized Finance (DeFi) is a captivating landscape brimming with innovation and immense potential. As DeFi disrupts traditional financial systems, the demand for skilled developers to construct secure and robust DeFi applications is surging. But before embarking on your DeFi development odyssey, it's crucial to establish a strong foundation in the essential building blocks that underpin these transformative applications. Here's a comprehensive exploration of the core elements that empower you to build the future of finance:

1. Blockchain Technology: The Bedrock of Decentralization

At the heart of DeFi lies blockchain technology. This distributed ledger technology provides the secure and transparent infrastructure upon which DeFi applications are built. Here's a closer look at its significance:

2. Smart Contracts: The Engines of DeFi Applications

Smart contracts are the lifeblood of DeFi applications. Written in programming languages like Solidity (for Ethereum), these self-executing contracts dictate the functionalities and logic behind DeFi protocols. Here's a deeper dive into their role:
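As a small illustration of that role, the snippet below shows how an off-chain application might read state from a deployed contract through its ABI. It assumes the widely used ethers library (v6); the RPC endpoint and addresses are placeholders, and the functions shown are just the standard ERC-20 read-only ones.

```typescript
// Sketch of how an application talks to a deployed smart contract through its ABI,
// using the ethers library (v6). The RPC URL and addresses are placeholders.
import { Contract, JsonRpcProvider, formatUnits } from "ethers";

const provider = new JsonRpcProvider("https://example-rpc.invalid"); // placeholder endpoint

// A human-readable ABI fragment: the contract's public interface, not its source code.
const erc20Abi = [
  "function balanceOf(address owner) view returns (uint256)",
  "function decimals() view returns (uint8)",
];

async function readBalance(tokenAddress: string, holder: string): Promise<string> {
  const token = new Contract(tokenAddress, erc20Abi, provider);
  const balanceOf = token.getFunction("balanceOf");
  const decimals = token.getFunction("decimals");
  const [raw, dec] = await Promise.all([balanceOf(holder), decimals()]);
  // The contract enforces the rules; the client merely reads state and formats it.
  return formatUnits(raw, dec);
}

// Usage (addresses are hypothetical):
// readBalance("0xTokenAddress...", "0xHolderAddress...").then(console.log);
```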

3. Wallets and Decentralized Exchanges (DEXs): The Tools of DeFi Interaction

Users interact with DeFi applications through specialized tools:
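As one illustration of how a decentralized exchange can quote a trade without an order book or a custodian, many DEXs rely on a constant-product automated market maker. The sketch below is a conceptual example of that pricing rule, not the implementation of any specific protocol, and the 0.3% fee is an assumption chosen only because it is a common level.

```typescript
// Conceptual constant-product AMM (x * y = k), the pricing rule popularized by
// Uniswap-style DEXs. This is an illustrative sketch, not production code.

interface Pool {
  reserveIn: number;  // reserve of the token being sold into the pool
  reserveOut: number; // reserve of the token being bought from the pool
}

// Given an input amount, return the output amount that keeps reserveIn * reserveOut
// constant, after a fee (expressed in basis points; 30 bps = 0.3% as an example).
function getAmountOut(amountIn: number, pool: Pool, feeBps = 30): number {
  const amountInAfterFee = (amountIn * (10_000 - feeBps)) / 10_000;
  const k = pool.reserveIn * pool.reserveOut;
  const newReserveIn = pool.reserveIn + amountInAfterFee;
  return pool.reserveOut - k / newReserveIn;
}

// Example: selling 10 units into a 1,000 / 1,000 pool yields a bit less than 10,
// reflecting both the fee and the price impact of the trade.
console.log(getAmountOut(10, { reserveIn: 1_000, reserveOut: 1_000 }));
```

The same formula also explains price impact: the further a swap moves the reserves, the worse the effective rate, which is why large trades pay more.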

4. Tokenomics: The Lifeblood of a DeFi Ecosystem

Tokenomics refers to the design and distribution of tokens within a DeFi application. These tokens play a crucial role in the DeFi ecosystem:

5. Oracles: Bridges Between Blockchains and the Real World

DeFi applications often require access to external data feeds to function effectively. Oracles act as bridges, fetching data from the real world (e.g., stock prices, exchange rates) and securely delivering it to DeFi smart contracts. This allows DeFi applications to react to real-world events and offer functionalities like decentralized price feeds for trading or asset valuation.
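As a hedged illustration, the snippet below shows the kind of sanity checks an application might apply before trusting an oracle-delivered price, such as staleness and quorum checks. The report shape, field names, and thresholds are assumptions made for the example and do not correspond to any particular oracle network's API.

```typescript
// Illustrative sketch of consuming an oracle-delivered price feed with basic sanity
// checks. The PriceReport shape and the thresholds are assumptions for the example,
// not the API of any particular oracle network.

interface PriceReport {
  price: number;     // e.g., ETH/USD
  updatedAt: number; // unix timestamp (seconds) of the last oracle update
  numSigners: number; // how many independent oracle nodes attested to this value
}

const MAX_AGE_SECONDS = 3_600; // treat anything older than an hour as stale
const MIN_SIGNERS = 3;         // require a quorum of independent reporters

function acceptReport(report: PriceReport, nowSeconds: number): number {
  if (nowSeconds - report.updatedAt > MAX_AGE_SECONDS) {
    throw new Error("oracle report is stale");
  }
  if (report.numSigners < MIN_SIGNERS || report.price <= 0) {
    throw new Error("oracle report failed sanity checks");
  }
  return report.price; // only now feed the value into downstream logic
}

// Usage with a made-up report:
const now = Math.floor(Date.now() / 1000);
console.log(acceptReport({ price: 3200.5, updatedAt: now - 60, numSigners: 5 }, now));
```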

6. Security Considerations: Building Trust in a Decentralized World

Security is paramount in DeFi development. Here are some crucial aspects to consider:

By mastering these building blocks, you'll be well-equipped to embark on your journey as a DeFi developer. Remember, DeFi is a rapidly evolving landscape. Stay updated on emerging trends, security best practices, and innovative protocols to ensure your DeFi applications are not only secure but also at the forefront of this transformative revolution.

Also, read Top 5 Amazing Ways AI Coexist With DeFi's Core Values: Centralization vs. Decentralization

The world of Decentralized Finance (DeFi) beckons, offering a revolutionary approach to financial services. As a developer, you hold the key to unlocking this potential by crafting secure and innovative DeFi applications. But where do you begin? This step-by-step guide will illuminate the DeFi development process, equipping you to navigate the journey from concept to a functional DeFi application:

1. Conception: Birthing Your DeFi Idea

Every DeFi application starts with a compelling idea. Here's where you lay the groundwork:

2. Planning and Design: Charting the Course

With a clear idea in mind, meticulous planning lays the foundation for success:

3. Development and Testing: Building and Fortifying Your Creation

With a blueprint in place, it's time to translate your vision into reality:

4. Deployment and Launch: Bringing Your DeFi App to Life

The moment of truth arrives! Here's how to usher your DeFi application into the world:

5. Maintenance and Improvement: Ensuring Continuous Growth

The journey doesn't end with launch. Here's how to ensure your DeFi application thrives:

By following these steps and remaining adaptable in this dynamic environment, you'll be well-equipped to navigate the DeFi development process and contribute to the future of decentralized finance. Remember, building a successful DeFi application requires not only technical expertise but also a deep understanding of the DeFi ecosystem, its challenges, and its immense potential to revolutionize the financial world.

The realm of Decentralized Finance (DeFi) pulsates with innovation and immense potential. However, this nascent landscape also presents unique security challenges. As a DeFi developer, safeguarding user assets and ensuring the integrity of your application is paramount. Here are the top 10 security considerations to fortify your DeFi development process and build trust within the DeFi ecosystem:

1. Smart Contract Audits: A Line of Defense Against Vulnerabilities

2. Secure Coding Practices: Building a Robust Foundation

3. Reentrancy Attacks: Shielding Against Recursive Exploits (see the sketch after this list)

4. Flash Loan Attacks: Plugging the Liquidity Gap

5. Access Control Mechanisms: Delimiting User Permissions

6. Denial-of-Service (DoS) Attacks: Maintaining System Availability

7. Front-Running Attacks: Leveling the Playing Field for Users

8. Rug Pulls and Exit Scams: Building User Trust and Transparency

9. Decentralized Oracles: Mitigating Trusted Third-Party Risks

10. Continuous Security Monitoring: Vigilance in a Dynamic Landscape
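To make item 3 above concrete: reentrancy exploits a contract that pays out before updating its own bookkeeping, allowing the receiver to call back in and withdraw again. The simulation below mirrors the Solidity checks-effects-interactions pattern in TypeScript; it is a conceptual model rather than on-chain code, and the vault and attacker here are purely illustrative.

```typescript
// Conceptual model of a reentrancy exploit and the checks-effects-interactions defense.
// Real reentrancy occurs in on-chain languages like Solidity; this TypeScript simulation
// only illustrates why internal state must be updated BEFORE making an external call.

type ExternalCall = (vault: Vault, amount: number) => void; // stands in for sending funds

class Vault {
  private balances = new Map<string, number>();

  deposit(user: string, amount: number): void {
    this.balances.set(user, (this.balances.get(user) ?? 0) + amount);
  }

  balanceOf(user: string): number {
    return this.balances.get(user) ?? 0;
  }

  // VULNERABLE ordering: the external call (interaction) happens before the balance is
  // zeroed (effect), so a malicious receiver can re-enter and be paid again.
  withdrawUnsafe(user: string, receive: ExternalCall): void {
    const amount = this.balanceOf(user);  // check
    if (amount > 0) {
      receive(this, amount);              // interaction -- attacker re-enters here
      this.balances.set(user, 0);         // effect, applied too late
    }
  }

  // Checks-effects-interactions: zero the balance first, then make the external call,
  // so any re-entry sees an already-updated balance and withdraws nothing extra.
  withdrawSafe(user: string, receive: ExternalCall): void {
    const amount = this.balanceOf(user);  // check
    if (amount > 0) {
      this.balances.set(user, 0);         // effect
      receive(this, amount);              // interaction
    }
  }
}

// A malicious receiver that re-enters the vault once when it gets paid.
let paidOut = 0;
let reentered = false;
const attacker: ExternalCall = (vault, amount) => {
  paidOut += amount;
  if (!reentered) {
    reentered = true;
    vault.withdrawUnsafe("mallory", attacker);
  }
};

const vault = new Vault();
vault.deposit("mallory", 100);
vault.withdrawUnsafe("mallory", attacker);
console.log(paidOut); // 200 -- drained twice from a single 100 deposit
```

The same ordering discipline, or an explicit reentrancy guard, is what item 3 asks you to enforce in your actual contracts.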

By meticulously addressing these security considerations, you can fortify your DeFi application and contribute to a more secure and trustworthy DeFi ecosystem. Remember, security is an ongoing process, not a one-time fix. Continuous vigilance and adaptation are crucial for safeguarding user assets and ensuring the long-term success of your DeFi project.

Layer 2 Scaling Solutions: As DeFi applications gain traction, scalability challenges on base layer blockchains can arise. Layer 2 solutions offer ways to process transactions off-chain while maintaining the security of the underlying blockchain, improving transaction speeds and reducing costs.

Compliance and Regulation: As DeFi matures, regulatory frameworks are likely to evolve. Staying abreast of compliance requirements will be essential for DeFi developers to ensure their applications operate within legal boundaries.

Integration with Traditional Finance (TradFi): Potential bridges between DeFi and TradFi could emerge, allowing for the creation of innovative hybrid financial products and services.

Conclusion: Embracing the DeFi Development Journey

DeFi development presents a unique opportunity to be at the forefront of a financial revolution. By equipping yourself with the necessary knowledge, tools, and security best practices, you can contribute to building a more open, transparent, and inclusive financial system. Remember, the DeFi landscape is constantly evolving, so continuous learning and a commitment to innovation are key to success in this exciting space. So, if you're passionate about technology and finance, DeFi development might be the perfect path for you. Take the plunge, delve into the world of DeFi, and start building the future of finance!

Read the original post:

DeFi Development: Top 10 Intriguing Security Considerations That Can't Be Ignored - Blockchain Magazine

Read More..

Your Engineering Organization Is too Expensive – The New Stack

As central banks worldwide embark on a crusade against inflation, there’s still a lot of uncertainty about the state of the global economy and where prices (and costs) will go from here.

Engineering organizations are facing increased operating expenses (OpEx) on multiple fronts. Cloud costs are growing across all major providers, with Azure and Microsoft Cloud raising prices 15% year-over-year (YoY) in 2023. And that’s not to mention if your technical estate is running on VMware, which reportedly has raised prices between 600% and 1000% since the Broadcom acquisition. These increases compound the issues created by cloud native toolchains that over the past decade grew more complex, more disjointed and more expensive.

At the same time, salary data and talent retention policies are sending mixed signals. While salary increases are no longer accelerating at the pace we saw back in 2021, engineering compensation packages (especially for senior engineers) are still outpacing inflation, with Kubernetes engineers’ salaries growing 10–15% YoY in 2023.

According to Gartner, most employees in the cloud industry estimate they can earn 11% more by simply switching jobs, and they “actually might be underestimating their increased earning potential.” In contrast, Gartner also reports (subscription required):

“35% of organizations say their 2024 merit increase budgets will remain unchanged from 2023, while 19% plan to decrease or have already decreased their budgets. Only 11% of organizations say that they will increase their merit budgets for 2024. Employees, on the other hand, are expecting an increase of over 7%. This is a potential source for disappointment as most employees will expect their increase to match inflation.”

Retaining top talent and balancing salary expectations, while simultaneously addressing the growing complexity of toolchains and cloud bills that can quickly get out of control, are painting a challenging picture for executives in any industry.

What to do? Do you cut headcount and if so, where? Can you afford to consolidate your toolchain and at what price? How much can you save on your cloud bill? How do you retain top talent that will make the difference in growing your overall productivity and market share?

Platform engineering has taken the engineering and cloud native world by storm in the last two years. All major analysts are calling it a key trend in 2024 and years to come, with Gartner forecasting that “80% of all enterprises will have a platform engineering initiative in place by 2026.”

There are good reasons why platform engineering is hyped, and it might very well be the solution that so many executives facing the challenges described above are looking for.

Platform engineering is the discipline of taking the tech and tools floating around your enterprise organization today and binding them into golden paths that remove cognitive load from developers while enabling true self-service. The sum of these golden paths is called an internal developer platform (IDP), which is the end product built by a platform team for their developers.

Platform teams can design clear, security-vetted roads for application developers to consume infrastructure and resources and interact with their cloud setups. This drives standardization by design across the entire engineering organization and has huge implications for all the questions outlined earlier.
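As a purely hypothetical illustration of what a golden path might look like when written down, the sketch below declares one paved road from "new service" to "running in dev". None of the field names come from Humanitec or any other vendor's API; the point is only that the path is defined once, security-vetted, and then self-served by every team.

```typescript
// Purely hypothetical sketch of a "golden path" definition inside an internal developer
// platform. The field names are invented for illustration and do not reflect any vendor's
// actual configuration format.

interface GoldenPath {
  name: string;
  trigger: "new-service" | "new-environment" | "add-database";
  steps: string[];                            // ordered, security-vetted platform actions
  approvedResources: Record<string, string>;  // abstract request -> vetted implementation
}

const provisionService: GoldenPath = {
  name: "backend-service-with-postgres",
  trigger: "new-service",
  steps: [
    "scaffold repository from the approved template",
    "create CI pipeline with mandatory security scanning",
    "deploy to the dev environment behind SSO",
  ],
  approvedResources: {
    database: "managed PostgreSQL with encryption at rest",
    ingress: "company load balancer with TLS and RBAC policies",
  },
};

// Developers self-serve by invoking the path; the platform team owns what it expands to.
console.log(`Golden path "${provisionService.name}" runs ${provisionService.steps.length} steps.`);
```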

Let’s say, for example, you decide to optimize your processes or work structure. Before you can remove people from any process, it’s crucial to standardize and automate the related workflows as much as possible; otherwise, everything will collapse. Rolling out an IDP will not only massively increase your degree of standardization, but it will also accelerate vendor agnosticism, allowing you to avoid vendor lock-in and consolidate your toolchain faster (and with a lot less pain).

A well-designed IDP can also provide transparency and visibility into your cloud costs, allowing you to tag resources and track costs granularly across all your business units and technical estates. This is key to cutting costs without compromising performance.

Companies adopting platform engineering create a much healthier work environment for developers and operations teams because it minimizes conflict. This leads to lower burnout and a more attractive culture that helps retain top performers. Increased developer productivity also means a shorter time to market (with an average 30% drop for teams rolling out enterprise-grade IDPs) and market share growth.

Sounds good right? And it is. The trick here is not to get lost in the process. Many enterprises have bought into the promise of platform engineering, but they are failing to execute properly on it.

Shipping an IDP that’s truly enterprise-ready, meaning it has an orchestration layer that comes with all enterprise features, including single sign-on (SSO) and role-based access control (RBAC), might seem daunting at first. It requires buy-in from multiple stakeholders (devs, ops, execs) and a different approach from what some engineers are used to. The mistake that many platform teams make is to try and please everyone at once. That is the fastest way to lose momentum and drag your platform engineering initiative out for months or even years before it shows any value. At that point, requirements will have likely changed and your IDP will land in the cemetery of failed corporate initiatives.

Successful platform initiatives, on the other hand, start with a minimum viable platform (MVP) designed to quickly show value to all key stakeholders. MVPs follow an established framework that clearly measures impact across the metrics that matter to everyone involved, then iterate from there. An MVP is the proven way to get everyone in an enterprise org on board with the platform initiative within weeks (instead of months or years) and then expand to a full-blown enterprise-grade IDP that can be rolled out across all teams.

Adopting platform engineering, and especially doing it quickly and reliably, is a key differentiator between companies staying competitive vs. the ones falling behind. Humanitec enables teams to roll out IDPs that are enterprise-ready. Talk to our platform architects if you want to learn more about our MVP program.

Read the original here:

Your Engineering Organization Is too Expensive - The New Stack

Read More..