
Tested on Ethereum, StarkWare's Zero-Knowledge Proofs Are Now Live on Bitcoin – Decrypt

The newly launched ZeroSync Association is bringing zero-knowledge proofs (ZKPs) to Bitcoin (BTC), allowing users to validate the state of the network without needing to download hundreds of gigabytes of blockchain history or trust a third party.

Based in Zug, Switzerland, the ZeroSync Association is a non-profit entity supported by various community stakeholders, including core contributors Robin Linus, Tino Steffens, Lukas George, and Max Gillett, as well as supporting partners such as Lightning Labs, among others.

For the first version of its software, ZeroSync is using Cairo, the programming language brought to life by StarkWare, the Israel-based company developing the popular Ethereum layer-2 scaling solutions StarkEx and StarkNet.

"ZeroSync is the first production attempt to radically upgrade the Bitcoin protocol," StarkWare's ecosystem lead Louis Guthmann told Decrypt. "It would transform the way people think about the system at a fundamental level."

Commonly referred to as zk-STARKs, StarkWare's version of ZKPs does not require a potentially vulnerable trusted setup phase and claims to be more scalable and efficient than zk-SNARKs, an iteration of ZKPs used, for example, by the privacy-focused cryptocurrency Zcash.

StarkWare initially deployed zk-STARKs exclusively on the Ethereum blockchain, and seeing them go live on Bitcoin is a logical next step, according to Uri Kolodny, CEO and co-founder at StarkWare Industries.

"This could have a profound effect on how Bitcoin users interact with the network," Kolodny said in a statement shared with Decrypt.

To give Bitcoin developers easy access to ZKPs, ZeroSync is developing a software development kit (SDK) that allows them to generate custom validity proofs depending on individual use cases.

A key part of this SDK is ZeroSync's client, which enables fast initial block download (IBD) and implements the first full proof of Bitcoin consensus.

Syncing the Bitcoin blockchain can be a painful process as, depending on your internet connection speed, downloading the history of transactions can take days or even weeks, with new blocks added every ten minutes on average.

According to ZeroSync, its client can be used not only to sync a full node much faster but also without needing to make any code changes to the Bitcoin Core software.
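To make the contrast concrete, here is a minimal Python sketch, not ZeroSync's actual code, of the difference between a conventional sync that checks every block header and a proof-based sync that accepts one succinct validity proof for the whole chain. The 80-byte header layout and double SHA-256 hashing are real Bitcoin details; the `verify` callback stands in for a STARK verifier and is purely hypothetical.

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    """Bitcoin block hash: double SHA-256 over the 80-byte header."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def naive_header_sync(headers: list[bytes]) -> bytes:
    """Conventional sync: verify every header's link to its parent.
    Work grows linearly with the length of the chain."""
    prev = b"\x00" * 32                   # genesis block has an all-zero parent
    for header in headers:
        assert len(header) == 80          # Bitcoin headers are exactly 80 bytes
        assert header[4:36] == prev       # bytes 4..35 hold the parent block hash
        prev = block_hash(header)
    return prev                           # chain tip the client now trusts

def proof_based_sync(claimed_tip: bytes, proof: bytes, verify) -> bytes:
    """ZK-style sync: accept a single succinct proof that a prover has already
    validated the full history ending at claimed_tip. `verify` is a hypothetical
    stand-in for a STARK verifier; its cost is essentially independent of
    how long the chain is."""
    if not verify(claimed_tip, proof):
        raise ValueError("invalid chain proof")
    return claimed_tip
```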

The technology can also be applied to compress the transaction history of validation protocols such as Taro, a protocol for issuing stablecoins on Bitcoin's Lightning Network, or, for example, to enable Bitcoin exchanges and custodial services to provide proof-of-reserves.

"After years of frustration about slow syncing, users will be able to sync with the network much faster, and with less computation. It's a technological leap akin to the transition from slow dial-up internet to high-speed broadband," said STARKs co-inventor and StarkWare president Eli Ben-Sasson.

While StarkWare, which funds the initiative along with Geometry Research, plans to keep its focus on Ethereum, for Ben-Sasson personally this development closes a circle.

The StarkWare president recalled a Bitcoin conference held in 2013, where he had the eureka moment of recognizing that the cryptography he helped invent could change blockchain.

"But it was clear that the journey needed to start on Ethereum. Now, exactly ten years later, STARKs have proved themselves on Ethereum and are heading to Bitcoin, reaching new horizons," said Ben-Sasson.

Read the original post:

Tested on Ethereum, StarkWares Zero-Knowledge Proofs Are Now Live on Bitcoin - Decrypt


Bitcoin, Ethereum and Litecoin Are Commodities Says CFTC – Trustnodes

The Commodity Futures Trading Commission (CFTC) has declared ethereum and litecoin to be commodities, in addition to bitcoin.

In an action against Binance, one of the world's biggest crypto exchanges, the CFTC said digital assets that are commodities include bitcoin (BTC), ether (ETH), and litecoin (LTC).

This is the first time the commission has explicitly stated that litecoin is a commodity; it is referred to as such numerous times in the complaint.

Litecoin, launched in 2011, is one of the earliest forks of bitcoin. It is largely a copy of bitcoin, except that its blocks arrive every 2.5 minutes rather than every 10 minutes.

Ethereum, of the three, has been subject to the most speculation over whether it could be a security, especially from its detractors.

The chair of the Securities and Exchange Commission (SEC), Gary Gensler, has stated or implied that all cryptos, except for bitcoin, are securities.

The CFTC, however, is making it clear that these three cryptos are not securities but commodities: bitcoin, eth and litecoin.

Because its action against Binance is primarily about the exchange offering commodities futures without registering with the CFTC, the commission has to establish that commodities are in fact traded on Binance, which is why it specifies the classification of the three cryptos.

Some, however, argue that all three are in fact currencies or money, and that's the position of another US agency, FinCEN.

FinCEN requires registration as a money transmitter, treating crypto as a currency, even if you are just selling a few bitcoin, eth or litecoin on something like LocalBitcoins.

The IRS, meanwhile, classifies them as property, whatever that means, and for asset reporting by publicly traded companies, cryptos sit on the balance sheet as intangible assets with indefinite life.

These inconsistencies have led to criticism of regulation by enforcement, but in the case of ethereum in particular, its reiteration as a commodity confirms that a crypto can potentially start off as a security, in this case through an ICO, and eventually become a commodity.

The CFTC does not have oversight over spot trading of crypto commodities, so an exchange offering just the buying and selling of eth, bitcoin or litecoin would not need to register with it, though it would still have to comply with FinCEN.

If, however, it offers futures, options, swaps or other derivatives to US citizens, it has to register with the CFTC.

In the case of Binance, the CFTC said 16% of the accounts on the exchange belong to US citizens, while Binance maintains it takes all necessary measures to prevent Americans from accessing the exchange.

The CFTC also states that Binance itself does not have an executive office, claiming this is designed to keep it outside the applicable regulations of any jurisdiction.

It is slightly more complex, however, because Binance started off with an ICO, and technically it is meant to be owned by BNB token holders across the globe.

It was meant to be run by them as well, through a DAO or some other similar mechanism, all of which is very different from a traditional company.

Some six years after that ICO, however, Binance in its current form is fairly traditional, with a top-down organization, a CEO and employees, and with the DAO part all but nonexistent except as a semi-legal feature of Binance's founding.

It is easy, therefore, for a regulator to say this is the law, but the public first has to decide whether there is any innovation in Binance's corporate design, if we can call it that, and whether it is the law that is outdated and needs to be modified, or whether, regardless of its present or aspiring structure, Binance still has to comply.

As the biggest, and a fairly centralized, attempt to implement the new thinking we call the DAO, Binance has been a confusing entity, certainly to regulators, but also to parts of the public like Bloomberg, which claims Zhao owns all of Binance, when the ICO makes that not quite the case, provided of course that Changpeng Zhao abides by its terms.

Regulators, therefore, and the public, need to start considering just what a DAO is, how it fits within the current regulatory system, and whether some updates need to be made to accommodate experimentation and potential innovation.

Because Zhao is not doing all this just for fun. He could incorporate somewhere, in the Bahamas like FTX or in some other lax jurisdiction, and get it over with. He doesn't, because he is part of a community that since at least 2016 has been wondering whether the company, as a legal form invented some 500 years ago, can be updated or innovated upon in the digital era.

As Binance is a fairly centralized entity, those complex and nuanced arguments are more difficult to make, and you'd think the reaction from regulators is something like: pfff, what.

But hopefully the crypto space at least understands just what is happening with this no-HQ experiment, which is a first as far as we are aware.

And it is potentially a prelude of what's to come once we get to actual DAOs, which are being built, refined and experimented with at the corners of crypto.

As they require significant input, their debut has not yet arrived, in part because this is a very hybrid company model: you do need the centralized aspect of management personnel, and how the dao-nians hire and fire them is a complex matter.

But it's an exciting experiment, and Binance, perhaps in a very small way, is trying to push it forward, which is why the exchange has generally attracted support in this space.

Original post:

Bitcoin, Ethereum and Litecoin Are Commodities Says CFTC - Trustnodes


The Potential 100X Project Uwerx (WERX) And Ethereum (ETH … – The Crypto Basic

The coming bull market will produce hundreds of 100X tokens; all it takes from the investor is the ability to spot a solid value proposition and invest early. Analysts have pointed out that Uwerx could be one such opportunity, and they have already predicted Ethereum (ETH) to make solid gains throughout 2023.

The digital asset space remains one of the last bastions of fair investment. Due to regulations in the United States, ordinary investors are prohibited from investing in early-stage start-ups with these returns instead going to venture capital funds. A bizarre piece of legislation aimed to protect investors. An investor can lose money in many ways; for example, in Las Vegas. Investing in new companies is potentially one of the best investments an individual can make. And crypto still allows this equitable funding model.

Uwerx will launch with a fair presale, allowing all investors the chance to join this project at its initial stage, with the WERX token selling for $0.005. With the potential to become a blue chip project, this could be one of the year's most explosive investment opportunities. Analysts have predicted highs of up to $2.90 by the end of Q3 2023.

Driving Uwerx's growth will be its fundamentally disruptive approach to the freelance economy. Uwerx will launch a decentralized platform for the gig economy, offering a more trusted, secure, and cost-efficient service than its conservative, traditional counterparts. And given the forward-looking nature of freelancers, analysts expect millions may adopt it in the coming months.

Ethereum (ETH) continues to be a dominant market force, and the old idea of Ethereum (ETH) flipping Bitcoin (BTC) has again become popular. Ethereum (ETH) delivers an incredible amount of value to the digital asset space, and Ethereum (ETH) is responsible for the vast amount of liquidity locked in DeFi.

Ethereum (ETH) has received an economic overhaul since the Merge and the introduction of EIP-1559, which has led to bullish calls on Ethereum (ETH) from analysts. Ethereum (ETH) trades at $1,809, with many analysts expecting Ethereum (ETH) to trade well over $2,000 by the end of the year.

According to Velocity Global, almost 15% of workers in England and Wales complete a gig job at least once per week. With continued growth in the number of freelancers and with Uwerx holding a technological lead over its competition, the upside potential remains enormous.


Presale participants will be early backers of a protocol that could entirely disrupt the industry. Uwerx has been audited by InterFi Network and SolidProof and has a twenty-five-year liquidity lock at prelaunch closure. The creators have also announced that they would be giving up ownership of contracts when the project is listed on centralized exchanges. It could go on to be 2023's best presale; get in on the action by following the links below.

Find Out More Here: Website

Presale

Telegram

Twitter


Link:

The Potential 100X Project Uwerx (WERX) And Ethereum (ETH ... - The Crypto Basic


Google Brings PostgreSQL-Compatible AlloyDB To Multicloud, Data Centers And The Edge – Forbes

Google is enabling AlloyDB, the PostgreSQL-compatible database, to run anywhere, including public cloud, on-premises servers, edge computing environments and even developer laptops. Branded as AlloyDB Omni, the engine is the same as AlloyDB, the cloud-based managed database announced last year.


AlloyDB Omni promises compatibility with PostgreSQL, enhanced performance and support delivered by Google Cloud. Compared to the standard, open source PostgreSQL, AlloyDB Omni delivers 2x faster performance and 100x faster analytical queries. This is possible due to how Google has tuned, enhanced and optimized the database engine.
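Because AlloyDB Omni speaks the PostgreSQL wire protocol, standard PostgreSQL drivers and tooling should work against it without modification. The sketch below is a minimal illustration using Python's psycopg2 driver; the hostname, credentials and table are placeholders, not values from Google's documentation.

```python
import psycopg2

# Connection details are placeholders; AlloyDB Omni listens on the standard
# PostgreSQL port of whatever host (or container) it was installed on.
conn = psycopg2.connect(
    host="alloydb-omni.internal.example",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="change-me",
)

with conn, conn.cursor() as cur:
    # Ordinary PostgreSQL DDL and DML run as-is.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            id         bigserial PRIMARY KEY,
            customer   text NOT NULL,
            total_usd  numeric(10, 2) NOT NULL,
            created_at timestamptz DEFAULT now()
        )
    """)
    cur.execute(
        "INSERT INTO orders (customer, total_usd) VALUES (%s, %s) RETURNING id",
        ("acme", 42.50),
    )
    print("inserted order", cur.fetchone()[0])

conn.close()
```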

By analyzing a query's components, such as subqueries, joins, and filters, the AlloyDB Omni index advisor reduces the guesswork involved in tuning query performance. By periodically analyzing the workload on the database, it finds queries that could benefit from indexes, and suggests new indexes that could significantly improve query performance.

Another notable feature of AlloyDB Omni is its columnar engine, which keeps frequently accessed data in an in-memory columnar format for quicker scans, joins, and aggregations. AlloyDB Omni automatically arranges the data and selects between columnar and row-based execution plans using machine learning. This capability delivers better performance without rewriting queries to target different formats.

AlloyDB Omni is packaged as a set of containers that can be deployed on a Debian-based or a Red Hat Enterprise Linux host. In its technical preview, Google is providing a set of shell scripts to automate the deployment. However, there is no guidance on deploying AlloyDB Omni in a Kubernetes cluster through a Helm chart or an operator. This may change as the software moves towards general availability.

Google recommends deploying AlloyDB Omni on a machine or a VM with at least two CPUs and 16GB of memory. The machine should have Docker and Google Cloud SDK installed to pull the images of AlloyDB from Google Cloud Container Registry and the shell scripts uploaded to Google Cloud Storage. On a machine with prerequisites installed, it takes a couple of minutes to get AlloyDB Omni up and running.

Interestingly, Google doesn't mention Anthos as the preferred infrastructure for deploying AlloyDB Omni. Though the software is packaged as containers, it can run on any Linux machine with Docker installed.

AlloyDB Omni also supports the creation of read replicas - dedicated database servers optimized for read-only access. A replica server provides a read-only clone of the primary database server while continuously updating its own data to reflect changes made to the primary server's data. Read replicas significantly increase the throughput and availability of the database.

Google is investing in AlloyDB Omni to attract customers migrating their databases from legacy versions of Oracle and Microsoft SQL Server. With 100% compatibility with PostgreSQL, customers can take advantage of the migration tools and the expertise available in the ecosystem. The other use case is running an optimized database at the edge. Customers can ingest IoT device data into AlloyDB for querying and analyzing the telemetry data of various sensors. Similar to BigQuery Omni, enterprises can run a Google Cloud-managed database in other cloud environments such as AWS and Azure. It will simplify the integration of data services while reducing the bandwidth cost involved in moving the data across clouds.

Google is not the only public cloud provider to bring a cloud-based managed database to multicloud and on-premises. Microsoft announced Azure Arc-enabled SQL Server and Azure Arc-enabled PostgreSQL in 2020. Based on Azure Arc, Microsoft has packaged these databases as Kubernetes deployments. Enterprises with Arc-enabled Kubernetes can easily deploy SQL Server and PostgreSQL on their clusters.

Scaling AlloyDB Omni to the cloud-based version is straightforward. Like any other migration, customers can export the data in a CSV, DMP or SQL format and import that data into an AlloyDB instance running in Google Cloud. For lift-and-shift scenarios, Google recommends using the Database Migration Service, which is currently in preview.
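As a rough sketch of that export/import path, the snippet below streams one table out of a self-managed AlloyDB Omni instance as CSV and loads it into an AlloyDB instance in Google Cloud using psycopg2's COPY support. The hosts, credentials and table name are assumptions for illustration; for full lift-and-shift migrations Google points to the Database Migration Service instead.

```python
import io
import psycopg2

TABLE = "orders"  # hypothetical table to migrate

# Source: self-managed AlloyDB Omni; target: AlloyDB instance in Google Cloud.
# Both connection strings are placeholders.
src = psycopg2.connect("host=alloydb-omni.internal.example dbname=appdb user=app_user password=change-me")
dst = psycopg2.connect("host=10.0.0.5 dbname=appdb user=app_user password=change-me")

buf = io.StringIO()
with src.cursor() as cur:
    # Stream the table out of the source as CSV (kept in memory here;
    # a real migration would write to a file or object storage).
    cur.copy_expert(f"COPY {TABLE} TO STDOUT WITH (FORMAT csv, HEADER)", buf)

buf.seek(0)
with dst, dst.cursor() as cur:
    # Load the same CSV into the cloud instance.
    cur.copy_expert(f"COPY {TABLE} FROM STDIN WITH (FORMAT csv, HEADER)", buf)

src.close()
dst.close()
```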

With a clear migration plan to the cloud-based AlloyDB based on the recently announced Database Migration Service, Google hopes to drive the adoption of its Data Cloud through AlloyDB Omni.

Janakiram MSV is an analyst, advisor and an architect at Janakiram & Associates. He was the founder and CTO of Get Cloud Ready Consulting, a niche cloud migration and cloud operations firm that got acquired by Aditi Technologies. Through his speaking, writing and analysis, he helps businesses take advantage of the emerging technologies.

Janakiram is one of the first few Microsoft Certified Azure Professionals in India. He is one of the few professionals with Amazon Certified Solution Architect, Amazon Certified Developer and Amazon Certified SysOps Administrator credentials. Janakiram is a Google Certified Professional Cloud Architect. He is recognised by Google as the Google Developer Expert (GDE) for his subject matter expertise in cloud and IoT technologies. He is awarded the title of Most Valuable Professional and Regional Director by Microsoft Corporation. Janakiram is an Intel Software Innovator, an award given by Intel for community contributions in AI and IoT. Janakiram is a guest faculty at the International Institute of Information Technology (IIIT-H) where he teaches Big Data, Cloud Computing, Containers, and DevOps to the students enrolled for the Master's course. He is an Ambassador for The Cloud Native Computing Foundation.

Janakiram was a senior analyst with Gigaom Research analyst network where he analyzed the cloud services landscape. During his 18 years of corporate career, Janakiram worked at world-class product companies including Microsoft Corporation, Amazon Web Services and Alcatel-Lucent. His last role was with AWS as the technology evangelist where he joined them as the first employee in India. Prior to that, Janakiram spent over 10 years at Microsoft Corporation where he was involved in selling, marketing and evangelizing the Microsoft application platform and tools. At the time of leaving Microsoft, he was the cloud architect focused on Azure.

See more here:
Google Brings PostgreSQL-Compatible AlloyDB To Multicloud, Data Centers And The Edge - Forbes


Cloud ROI: Getting Innovation Economics Right with FinOps – CIO

Is the cloud a good investment? Does it deliver strong returns? How can we invest responsibly in the cloud? These are questions IT and finance leaders are wrestling with today, because the cloud has left many companies in a balancing act, caught somewhere between the need for cloud innovation and the fiscal responsibility to ensure they are investing wisely and getting full value out of the cloud.

One IDC study shows 81% of IT decision-makers expect their spending to stay the same or increase in 2023, despite anticipating economic storms of disruption. Another 83% of CIOs say that, despite increasing IT budgets, they are under pressure to make their budgets stretch further than ever before, with a key focus on technical debt and cloud costs. Moreover, Gartner estimates 70% overspending is common in the cloud.

The need for cloud innovation amid economic headwinds has companies shifting their strategies, putting protective parameters in place, and scrutinizing cloud value with concerted efforts to accelerate return on investment (ROI), specifically on technology.

While many companies are delaying new IT projects with ROI of more than 12 months, others are reducing innovation budgets while they try to squeeze more value out of existing investments. Regardless of how pointed their endeavors are, most IT and finance leaders are looking for ways to better govern cloud transformation. That's because, in today's economic climate, leaders aren't just responsible for driving ingenuity; they are held accountable for ensuring the company is a good steward of its technology investments, with concentrated emphasis on:

If the past three years were dedicated to accelerated cloud transformation, 2023 is being devoted to governing it. But it's not just today's tumultuous times calling for executives to heed fiduciary responsibility. The cloud also necessitates it, particularly when companies want to achieve ROI faster.

The cloud can make for an uneven balance sheet without proper oversight. It needs to be closely watched from a financial perspective. Why? The short answer: variable costs. When the cloud is infinitely scalable, costs are infinitely variable. Pricing structures are based on service usage fees and overage charges, where even marginal lifts in usage can incur steep increases in cost. While this structure favors cloud providers, it starkly contrasts with the needs of IT financial managers, most of whom have per-unit budgets and prefer predictable monthly costs for easier budgeting and forecasting.

Additionally, companies aren't always good at estimating what they need and using everything they pay for. As a result, cloud waste is now a thing. In fact, companies waste as much as 29% of their cloud resources.

As companies lift and shift their workloads to the cloud, they trade in-house management for outsourced services. But as IT organizations loosen the reins, financial management teams should be tightening their grip. Those who aren't actively right-sizing their cloud assets are typically paying more than necessary. Hence why overspending can easily reach 70%.

Achieving ROI in one year requires tracing where your cloud money goes to see how and where it is repaid. Budget dollars go down the drain when companies fail to pay attention to how they are using the cloud, don't take the time to correct misuse, or overlook service pausing features and discounting opportunities.

But cloud cost management is not always a simple task. The majority of IT and financial decision-makers report it's challenging to account for cloud spending and usage, with the C-suite citing tracing of spend and chargebacks as particular concerns. The key to cost control is to pinpoint and track every cloud service cost across the IT portfolio, yes, even when companies have on average 11 cloud infrastructure providers, nine unified communications solutions, as well as a cacophony of unsanctioned applications consuming up to 30% of IT budgets in the form of shadow IT.

When you factor in these dynamics and consider that cloud providers have little incentive to improve service usage reports, helping clients better balance the one-sided financials of the relationship, you can see why ROI can be slow-moving.

FinOps comes in to bridge this gap.

Cloud services are now dominating IT expense sheets, and when increasing bills delay ROI, IT financial managers go looking for answers. This has given rise to the concept of FinOps (a word combining Finance and DevOps), which is a financial management discipline for controlling cloud costs. Driving fiscal accountability for the cloud, FinOps helps companies realize more business value and accelerate ROI from their cloud computing investments.

Sometimes described as a cultural shift at the corporate level, FinOps principles were developed to foster collaboration between business teams and IT engineers or software development teams. This allows for more alignment around data-driven spending decisions across the organization. But beyond simply a strategic model, FinOps is also considered a technology solution, a service enabling companies to identify, measure, monitor, and optimize their cloud spend, thus shortening the time to achieve ROI. Leading cloud expense management providers, for example, save cloud investors 20% on average and can deliver positive ROI in the first year.

FinOps Best Practices

As the cloud makes companies agile, managing dynamic cloud costs becomes more important. FinOps helps offset rising prices and inserts accountability into organizations focused on cloud economics. Best practices for maximizing ROI include reconciling invoices against cloud usage, making sure application licenses are properly disconnected when no longer necessary or reassigned to other employees, and reviewing network servers to ensure they aren't spinning cycles without a legitimate business purpose.
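To make the first of those practices concrete, here is a toy Python sketch of reconciling a provider's billing export against an internal inventory of resources that have a named owner, flagging spend nobody has claimed. The record shapes and numbers are invented for illustration; real FinOps tooling works from the cloud providers' detailed billing exports.

```python
# Hypothetical billing export: resource_id -> monthly cost in USD.
billed = {
    "vm-web-01":   310.40,
    "vm-batch-17":  88.25,
    "bucket-logs":  12.90,
    "vm-old-demo": 146.00,
}

# Internal inventory: resources an application team has claimed ownership of.
owned = {
    "vm-web-01":   "storefront",
    "vm-batch-17": "analytics",
    "bucket-logs": "platform",
}

# Anything billed but unowned is a candidate for right-sizing or shutdown.
unclaimed = {rid: cost for rid, cost in billed.items() if rid not in owned}

total = sum(billed.values())
waste = sum(unclaimed.values())
print(f"monthly bill: ${total:,.2f}")
print(f"unclaimed spend: ${waste:,.2f} ({waste / total:.0%} of the bill)")
for rid, cost in sorted(unclaimed.items(), key=lambda kv: -kv[1]):
    print(f"  investigate {rid}: ${cost:,.2f}/month with no owner tag")
```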

Key approaches include:

Is the cloud a good investment? Yes, as long as the company can effectively see and use its assets, monitor its expenses, and manage its services. The cloud started as a means to lower costs, minimize capital expenses, and gain infinite scalability, and that reputation should pay out even after being pressure-tested by the masses. With a collaborative and disciplined approach to management, companies of every size can recognize quick ROI without generating significant waste or adding unnecessary complexity.

To learn more about cloud expense management services, visit us here.

Read more:
Cloud ROI: Getting Innovation Economics Right with FinOps - CIO


Strengthening Business Cybersecurity With CASB – WebProNews

The development of cloud computing technology has revolutionized business operations worldwide. Companies use cloud computing to process and store data so employees can access it from anywhere. Unfortunately, this convenience is accompanied by security challenges companies must address to keep sensitive information and intellectual property safe.

A Cloud Access Security Broker (CASB) is a reliable solution to this problem. Cloud Access Security Brokers can provide protection, visibility, and control to cloud-based data and applications.

These features are essential to business cybersecurity because they prevent unwanted parties from accessing vital company data. This reduces the risk of sensitive information leaking to the public or being stolen and sold to a competitor.

Features of a Cloud Access Security Broker

A Cloud Access Security Broker is an intermediary between an organization's IT infrastructure and its cloud-based applications and services. By using CASB, companies will have visibility into their cloud usage. They will also be able to prevent security incidents like DDoS attacks, ransomware attacks, and data breaches. Here is a complete list of features businesses will benefit from integrating CASB into their cloud security framework.

Company executives will have oversight into the usage of their cloud-based applications so they can track employee activities on these applications. This oversight helps management teams identify security risks and take prompt actions to curb them before they get out of hand.

CASBs can detect, block, and report unauthorized entry and data exfiltration attempts to a company's cybersecurity team so they can take other precautionary measures if necessary. A CASB will also give an organization control over the usage of its cloud servers so it can enforce its cybersecurity policies. Controlling employee cloud server access will prevent data loss and help them adhere to government data regulations.

Cloud Access Security Brokers offer protection against cyber attacks. They use machine learning and behavioral analytics to detect suspicious activity and discover signs that indicate the presence of cyber threats. CASBs also scan traffic moving in and out of cloud servers for malware and other harmful content, so they can be blocked and quarantined before reaching their destination.
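As a simplified illustration of that behavioral-analytics idea, the sketch below flags a session when it comes from a country the user has never logged in from, or when the data downloaded is far above that user's historical norm. Real CASBs use much richer models; the thresholds and record layout here are assumptions for the example.

```python
from statistics import mean, pstdev

# Per-user history, assumed to be gathered by the CASB from cloud audit logs.
history = {
    "alice": {"countries": {"US"}, "downloads_mb": [40, 55, 38, 60, 47]},
    "bob":   {"countries": {"DE", "FR"}, "downloads_mb": [5, 9, 7, 6, 8]},
}

def is_suspicious(user: str, country: str, downloaded_mb: float) -> list[str]:
    """Return the reasons (if any) this session should be flagged."""
    profile = history[user]
    reasons = []
    if country not in profile["countries"]:
        reasons.append(f"first login from {country}")
    avg = mean(profile["downloads_mb"])
    spread = pstdev(profile["downloads_mb"]) or 1.0
    if downloaded_mb > avg + 3 * spread:        # crude z-score style check
        reasons.append(f"download volume {downloaded_mb} MB far above normal")
    return reasons

print(is_suspicious("alice", "US", 52))   # [] -> looks normal
print(is_suspicious("bob", "RU", 900))    # two reasons -> block and report
```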

Governments require companies to protect consumer data. Using a CASB to prevent data breaches and unwanted data access ensures an organization complies with the regulations. This will help them avoid hefty fines and sanctions and preserve their reputation in the public eye.

As companies expand their operations, it might become challenging to maintain oversight and control of their cloud servers. Fortunately, CASBs allow for scalability so businesses of all sizes can get the cloud protection they need. They can be integrated with other security tools and service providers to create a more robust cybersecurity system.

Endnote

Many businesses use cloud-based services and applications to streamline their operations and make it easy for employees to access files needed for their jobs. However, this can lead to data leaks and exposure to malware which will endanger the system. Using a security tool like CASB will provide threat protection and protect companies from unauthorized entries to their cloud servers.

Read this article:
Strengthening Business Cybersecurity With CASB - WebProNews


Analysis: Alibaba overhaul leaves fate of prized cloud unit up in the air – Reuters

SHANGHAI, March 31 (Reuters) - Alibaba's (9988.HK) six-way breakup plan has raised questions about the long-term shape of its profitable cloud unit, given that it will have to tackle heavy regulatory scrutiny at a time when competition is intensifying both in China and abroad.

While a split into a standalone unit will give investors a chance to make focused bets on a business estimated by analysts to be worth between $41 billion and $60 billion, the step could put Alibaba's cloud unit even more in the cross-hairs of Chinese and overseas regulators, likely slowing its growth.

Some analysts said external investment and separation from Alibaba's core ecommerce business could help it grow overseas, where it is far behind rivals such as Amazon Web Services. But others see the Chinese state investing in the cloud unit or it even going private, given its dominance in the domestic cloud computing industry.

Alibaba's planned Cloud Intelligence Group, which will house the cloud business AliCloud as well as the tech giant's artificial intelligence and semiconductor research, has a 36% market share in China's domestic cloud computing sector.

Its servers host reams of data from companies ranging from tech peers to retailers, the handling and sharing of which has in recent years drawn increasing scrutiny from Beijing.

"Alibabas business lines have different levels and types of regulatory sensitivity," said Gavekal Dragonomics analyst Thomas Gatley in a note this week.

"For cloud computing, data security is paramount."

Alibaba and China's commerce ministry did not immediately respond to queries sent on Friday.

Receiving state investment and drawing closer to the Chinese government could satisfy regulators in Beijing, who have rolled out new laws regulating the handling of data in China and set up a data bureau to underline their focus on the area.

It could also help AliCloud to compete more effectively in China, where overall demand for cloud computing from internet companies is slowing and growth is mainly coming from governments and state-owned enterprises which have not migrated to the cloud as quickly.

While government entities "will not completely reject" companies like Alibaba, Baidu , and Tencent Holdings (0700.HK) for their projects, "they will have a tendency to choose companies with a government funding and backgrounds," said Zhang Yi, who tracks China's cloud computing sector at research firm Canalys.

In the first half of last year, China's top three telcos - China Mobile (0941.HK), China Unicom (0762.HK), and China Telecom (0728.HK) - collectively surpassed Alibaba's share in the domestic cloud market for the first time, according to brokerage Jefferies, underscoring Beijing's growing reliance on state-backed carriers for data management.

But growing closer to Beijing has a downside, said Michael Tan, a Shanghai-based partner of law firm Taylor Wessing.

"It could backfire at the international level, as it might then face even more attention from the U.S.," he said.

In January, Reuters reported that the Biden administration is reviewing Alibaba's cloud business to determine whether it poses a risk to U.S. national security.

The cloud unit has its own domestic problems to fix.

In 2021, China's Ministry of Industry and Information Technology suspended an information-sharing partnership with AliCloud on the grounds that Alibaba did not report a security vulnerability related to the open-source logging framework Apache Log4j2.

And in December 2022, Alibaba Cloud experienced what it called its "longest major-scale failure" for more than a decade after its Hong Kong and Macau servers suffered a serious outage that affected many services in the region including ones belonging to crypto exchange OKX.

Weeks after the outage, Alibaba group Chairman and CEO Daniel Zhang took over as head of the cloud unit, a role he will continue to hold concurrently even after the split-up.

Another risk from the planned split of the cloud unit, which had sales of around $11.5 billion last year, is that previously captive in-house Alibaba clients start courting rivals, hurting its revenue.

But splitting the cloud unit away could also be a positive for the other Alibaba businesses, some analysts said.

"When all data was put in one basket at Alibaba, there could always be concern about misuse of data within the company to maximise profit," said Tan at Taylor Wessing.

"The restructuring will help avoid this."

($1 = 6.8902 Chinese yuan renminbi)

Reporting by Josh Horwitz; Editing by Brenda Goh and Muralikumar Anantharaman

Our Standards: The Thomson Reuters Trust Principles.

Continued here:
Analysis: Alibaba overhaul leaves fate of prized cloud unit up in the air - Reuters


IBM Furthers Flexibility, Sustainability and Security within the Data Center with New IBM z16 and LinuxONE 4 Single Frame and Rack Mount Options -…

New IBM z16 and IBM LinuxONE Rockhopper 4 options are designed to provide a modern, flexible hybrid cloud platform to support digital transformation for a range of IT environments

Consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on compared x86 servers with similar conditions and location can reduce energy consumption by 75% and space by 67% and is designed to help clients reach their sustainability goals1

ARMONK, N.Y., April 4, 2023 /PRNewswire/ -- IBM (NYSE: IBM) today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.

Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.

"IBM remains at the forefront of innovation to help clients weather storms generated by an ever-changing market," said Ross Mauri, General Manager, IBM zSystems and LinuxONE. "We're protecting clients' investments in existing infrastructure while helping them to innovate with AI and quantum-safe technologies. These new options let companies of all sizes seamlessly co-locate IBM z16 and LinuxONE Rockhopper 4 with distributed infrastructure, bringing exciting capabilities to those environments."


Designed for today's changing IT environment to enable new use cases

Organizations in every industry are balancing an increasing number of challenges to deliver integrated digital services. According to a recent IBM Transformation Index report, among those surveyed, security, managing complex environments and regulatory compliance were cited as challenges to integrating workloads in a hybrid cloud. These challenges can be compounded by more stringent environmental regulations and continuously rising costs.

"We have seen immense value from utilizing the IBM z16 platform in a hybrid cloud environment," said Bo Gebbie, president, Evolving Solutions. "Leveraging these very secure systems for high volume transactional workloads, combined with cloud-native technologies, has enabled greater levels of agility and cost optimization for both our clients' businesses and our own."

The new IBM z16 and LinuxONE 4 offerings are built for the modern data center to help optimize flexibility and sustainability, with capabilities for partition-level power monitoring and additional environmental metrics. For example, consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on compared x86 servers with similar conditions and location can reduce energy consumption by 75 percent and space by 67 percent.1 These new configurations are engineered to deliver the same hallmark IBM security and transaction processing at scale.

Designed and tested to the same internal qualifications as the IBM z16 high availability portfolio2, the new rack-optimized footprint is designed for use with client-owned, standard 19-inch racks and power distribution units. This new footprint opens opportunities to include systems in distributed environments with other servers, storage, SAN and switches in one rack, designed to optimize both co-location and latency for complex computing, such as training AI models.

Installing these configurations in the data center can help create a new class of use cases, including:

Sustainable design: Easier integration into hot or cold aisle thermal management data center configurations with common data center power and cooling

Optimizing AI solutions: With on-chip AI inferencing and the newest IBM z/OS 3.1, whether rack mount, single frame or multi frame configurations, clients can train or deploy AI models very close to where data resides, allowing clients to optimize AI

Data privacy: Support data sovereignty for regulated industries with compliance and governance restrictions on data location, routing local transactions through local data centers with optimized rack mount efficiency

Edge computing: Enable more efficient rack utilization in limited rack space near manufacturing, healthcare devices, or other edge devices

Securing data on the industry's most available systems3

For critical industries, like healthcare, financial services, government and insurance, a secure, available IT environment is key to delivering high quality service to customers. IBM z16 and LinuxONE 4 are engineered to provide the highest levels of reliability in the industry, 99.99999% availability to support mission-critical workloads as part of a hybrid cloud strategy. These high availability levels help companies maintain consumer access to bank accounts, medical records and personal data. Emerging threats require protection, and the new configurations offer security capabilities that include confidential computing, centralized key management and quantum-safe cryptography to help thwart bad actors planning to "harvest now, decrypt later."

"IBM z16 and LinuxONE systems are known for security, resiliency and transaction processing at scale," said Matt Eastwood, SVP, WW Research, IDC. "Clients can now access the same security and resiliency standards in new environments with the single frame and rack mount configurations, giving them flexibility in the data center. Importantly, this also opens up more business opportunity for partners who will be able to reach an expanded audience by integrating IBM zSystems and LinuxONE capabilities to their existing footprints."

With the IBM Ecosystem of zSystems ISV partners, IBM is working to address compliance and cybersecurity. For clients that run data serving, core banking and digital assets workloads, an optimal compliance and security posture is key to protecting sensitive personal data and existing technology investments.

"High processing speed and artificial intelligence are key to moving organizations forward," said Adi Hazan, director ofAnalycat. "IBM zSystems and LinuxONE added the security and power that we needed to address new clients, use cases and business benefits. The native speed of our AI on this platform was amazing and we are excited to introduce the IBM LinuxONE offerings to our clients with large workloads to consolidate and achieve corporate sustainability goals."

IBM Business Partners can learn more about the skills required to install, deploy, service and resell single frame and rack mount configurations in this blog.

Complementary Technology Lifecycle Support Services

With the new IBM LinuxONE Rockhopper 4 servers, IBM will offer IBM LinuxONE Expert Care. IBM Expert Care integrates and prepackages hardware and software support services into a tiered support model, helping organizations to choose the right fit of services. This support for LinuxONE Rockhopper 4 will offer enhanced value to clients with predictable maintenance costs and reduced deployment and operating risk.

The new IBM z16 and LinuxONE 4 single frame and rack mount options, supported by LinuxONE Expert Care, will be generally available globally[4] from IBM and certified business partners beginning on May 17, 2023. To learn more:

On April 4, at 10 am ET, join IBM clients and partners for behind-the-scenes access to the new IBM z16 single frame and rack mount configurations

On April 17, at 10 am ET, join IBM clients and partners for a deep dive on industry trends, such as sustainability and cybersecurity during the IBM LinuxONE single frame and rack mount virtual event

Check out a preview of the newest version of z/OS, which is designed to scale the value of data and drive digital transformation powered by AI and intelligent automation

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com

Media Contact: Ashley Peterson, ashley.peterson@ibm.com

1 DISCLAIMER: Compared IBM Machine Type 3932 Max 68 model consisting of a CPC drawer and an I/O drawer to support network and external storage with 68 IFLs and 7 TB of memory in 1 frame versus compared 36 x86 servers (2 Skylake Xeon Gold Chips, 40 Cores) with a total of 1440 cores. IBM Machine Type 3932 Max 68 model power consumption was measured on systems and confirmed using the IBM Power estimator for the IBM Machine Type 3932 Max 68 model configuration. x86 power values were based on Feb. 2023 IDC QPI power values and reduced to 55% based on measurements of x86 servers by IBM and observed values in the field. The x86 server compared to uses approximately .6083 KWhr, 55% of IDC QPI system watts value. Savings assumes the Worldwide Data Center Power Utilization Effectiveness (PUE) factor of 1.55 to calculate the additional power needed for cooling. PUE is based on Uptime Institute 2022 Global Data Center Survey (https://uptimeinstitute.com/resources/research-and-reports/uptime-institute-global-data-center-survey-results-2022). x86 system space calculations require 3 racks. Results may vary based on client-specific usage and location.

2 DISCLAIMER: All the IBM z16 Rack Mount components are tested via the same process requirements as the IBM z16 traditional Single Frame components. Comprehensive testing includes a wide range of voltage, frequency and temperature testing.

3 Source: Information Technology Intelligence Consulting Corp. (ITIC). 2022. Global Server Hardware, Server OS Reliability Survey. https://www.ibm.com/downloads/cas/BGARGJRZ

4 Check local availability for rack mount here.


View original content to download multimedia:https://www.prnewswire.com/news-releases/ibm-furthers-flexibility-sustainability-and-security-within-the-data-center-with-new-ibm-z16-and-linuxone-4-single-frame-and-rack-mount-options-301789108.html

SOURCE IBM

See the rest here:
IBM Furthers Flexibility, Sustainability and Security within the Data Center with New IBM z16 and LinuxONE 4 Single Frame and Rack Mount Options -...


AWS to boost Australian cloud infrastructure – iTWire

Amazon Web Services plans to spend $13.2 billion to expand its cloud infrastructure in Sydney and Melbourne between 2023 and 2027.

The company said the investment is needed to meet growing customer demand for its services in Australia.

To put the sum into perspective, AWS spent more than $9.1 billion in its Asia Pacific (Sydney) Region between 2012 and 2022.

In addition, AWS launched its Asia Pacific (Melbourne) Region in January 2023.

"For over a decade, AWS has invested billions of dollars into Australia through infrastructure and jobs, and worked closely with the public sector, and local customers and partners, to be a force multiplier across the nation," said AWS managing director for Australia and New Zealand Rianne Van Veldhuizen.

"We are committed to positive social and economic impact, investing in local community engagement programs, workforce development initiatives, cloud infrastructure, and renewable energy project investments. Our plan to invest more than $13 billion into the country over the next five years will help create more positive ripple effects, further solidifying Australia's position in the global economy."

Amazon has pledged to power its operations with 100% renewable energy by 2030, and is on track to achieve this goal by 2025.

In Australia, it has committed to taking a total of 262MW from utility-scale renewable projects located in Suntop and Gunnedah in New South Wales, and one under development in Hawkesdale, Victoria.

Prime Minister Anthony Albanese said "Economic and infrastructure investment from cloud providers like Amazon Web Services helps create jobs, advances digital skills, boosts innovation, and uplifts local communities and businesses. The Australian Government acknowledges AWS's investment into the nation over the past decade, and welcomes its planned investment over the next five years, the full-time jobs supported annually, and contribution to the nation's GDP."

Tech Council of Australia CEO Kate Pounder said "Investments from tech companies like AWS in Australia have an outsized positive impact on the wider economy. Not only do they bring the direct financing and jobs, but their cloud infrastructure has also enabled the growth of a globally competitive Australian software sector, which has become one of the most successful new industries created in Australia in decades. The support for digital skilling also enables our workforce to learn from leading tech companies, with spillover benefits across the Australian economy. The tech sector will be a key driver of future prosperity in Australia, and AWS's contribution will help propel us forward."

AWS's Australian customers include Atlassian, Australian Bureau of Statistics, National Australia Bank, NSW Health Pathology, Qantas, Swoop Aero, and WA Department of Education.


Read this article:
AWS to boost Australian cloud infrastructure - iTWire


Why Cloud Data Replication Matters – The New Stack

Modern applications require data, and that data usually needs to be globally available, easily accessible and served with reliable performance expectations. Today, much of the heavy lifting happens behind the scenes. Let's look at why the cloud factors into the importance of data replication for business applications.

What is data replication? Simply put, it is a set of processes to keep additional copies of data available for emergencies, backups or to meet performance requirements. Copies may be done in duplicate, triplicate or more depending on the potential risk of a failure or the geographic spread of an applications user base.

These multiple pieces of data may be chopped up into smaller pieces and spread around a server, network, data center or continent. This ensures data is always available and performance is unfailing in a scalable way.

There are many reasons for building applications that understand replication, with or without cloud support. These are basic topics that any developer has had to deal with, but they become even more important when applications go global and/or mobile and need ways to keep data secure and efficiently located.

These particular areas are commonly discussed when talking about cloud data replication:

This refers to making sure all data is ready for use when requested, with the latest versions and updates. Availability is affected when concurrent sessions do not share or replicate their data effectively. By replicating the latest changes to other nodes or servers, it should be instantly available to users who are accessing those other nodes.

Keeping a master copy is important, but it is equally important to keep that copy up to date as much as possible for all users. This means also keeping child nodes up to date with the master node so everyone stays up to date.

Data replication helps reduce latency of applications by keeping copies of data close to the end user of the application. Modern cloud applications are built on top of different networks often located in geographic regions where their user base is most active. While the overhead of keeping copies synchronized and copied might seem extreme, the positive impact on the end-user experience cannot be overstated: they expect their data to be close by and ready for use. If local servers have to go around the globe to fetch their data, the outcome is high latency and poor user experience.

Replication is especially important for backup and disaster management purposes, such as when a node goes down. Replicas that were synchronized can then help recover data on new nodes that may be added due to a recent failure. When a data infrastructure requires too much manual copying of data during a failure, there are bound to be issues.

Failover of broken resources can be automated more fully when there are multiple replicas available, especially in different geographic regions that may not be affected by a regional disaster. Applications that can leverage data replication can also take care to preserve user data; otherwise, they risk losing information when a device breaks or a data center is destroyed.
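A minimal sketch of that failover idea, with invented node names: each key has a primary copy and a replica in another region, and reads are redirected to the surviving copy as soon as the primary drops out of the healthy set. A real system would also re-replicate data and reconcile writes, which this sketch ignores.

```python
# Placement map: each key has a primary and a replica in another region.
placement = {
    "user:1001": {"primary": "us-east-1a", "replica": "eu-west-1b"},
    "user:1002": {"primary": "eu-west-1b", "replica": "us-east-1a"},
}

healthy = {"us-east-1a", "eu-west-1b"}

def read_node(key: str) -> str:
    """Pick the node to serve a read, preferring the primary but failing
    over to the replica automatically when the primary is unhealthy."""
    nodes = placement[key]
    if nodes["primary"] in healthy:
        return nodes["primary"]
    if nodes["replica"] in healthy:
        return nodes["replica"]
    raise RuntimeError(f"no healthy copy of {key}")

print(read_node("user:1001"))   # us-east-1a
healthy.discard("us-east-1a")   # simulate a regional outage
print(read_node("user:1001"))   # eu-west-1b -- served from the replica
```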

Some see data replication as something nice to have, but as you can see, it's not only about backup and disaster management; it's also about application performance. There are other benefits as well that you can find as part of enterprise disaster management and performance plans.

The backend systems of a data replication system help keep copies of data spread around and redundant. This requires multiple nodes in the form of clusters that can communicate internally to keep data aligned. Adding a new cluster, a new node or new piece of data would then be automatically synchronized with other nodes to replicate it.

But the application level also needs to understand how the replication works. While a form-based app might just want a set of database tables, it must also understand that the source database has replicas available. Applications must know how to synchronize data they have just collected, as in a mobile app, so other users will have access to it.

The smaller pieces of data that are synchronized are often known as partitions. Different partitions go on different hardware storage pools, racks, networks, data centers, continents, etc., so they are not all exposed to a single point of failure.
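Here is a small sketch of that placement idea: a key is hashed to a fixed partition, and each partition's copies are assigned to nodes in distinct data centers so no single failure holds every copy. The hashing scheme and node list are illustrative only, not how any particular database lays out its partitions.

```python
import hashlib

NUM_PARTITIONS = 8
# Nodes grouped by data center; a partition's replicas must not share one.
NODES = ["dc1-node1", "dc1-node2", "dc2-node1", "dc2-node2", "dc3-node1"]

def partition_for(key: str) -> int:
    """Hash the key to a stable partition number."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def replicas_for(partition: int, copies: int = 3) -> list[str]:
    """Place copies of a partition on nodes in distinct data centers."""
    chosen, seen_dcs = [], set()
    # Walk the node list starting at an offset derived from the partition
    # so different partitions prefer different nodes.
    for i in range(len(NODES)):
        node = NODES[(partition + i) % len(NODES)]
        dc = node.split("-")[0]
        if dc not in seen_dcs:
            chosen.append(node)
            seen_dcs.add(dc)
        if len(chosen) == copies:
            break
    return chosen

key = "order:42"
p = partition_for(key)
print(f"{key} -> partition {p} -> replicas {replicas_for(p)}")
```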

The potential for complexity is often the limiting factor for companies seeking to implement data replication. Having frontend and backend systems that handle it transparently is essential.

As you can see, data replication does not explicitly depend on using cloud resources. Enterprises have been using their internal networks for decades with some of the same benefits. But with the addition of cloud-based resources, the opportunity to have extremely high availability and performance is easier than ever.

Traditional data replication has now been extended beyond just replicating from a PC to a network or between two servers. Instead, applications can replicate to a global network of endpoints that serve multiple purposes.

Traditionally, replication was used to preserve data in case of a failure. For example, replicas could be copied to a node if there was a failure, but replicas could not be used directly by an application.

Cloud data replication extends the traditional approach by sending data to multiple cloud-based data services that stay in sync with one another.

Today's cloud services allow us to add yet another rung on this replication ladder, allowing replication between multiple clouds. This adds another layer of redundancy and reduces the risk of vendor lock-in. Hybrid cloud options also bring local enterprise data services into the mix with the cloud-based providers serving as redundant copies of a master system.

As you can imagine, there are multiple ways to diagram all these interconnections and layers of redundancy. This diagram shows a few of the common models.

(Source: Couchbase)

Though the potential for an unbreakable data solution is more possible than ever, it can also become complicated quickly. Hybrid cloud-based architectures have to accommodate many edge cases and variables that make it challenging for developers to build on their own.

Ideally, your data management backend can already handle this for you. Systems must expose options in an easy-to-understand way so that architects and developers can have confidence and reduce risk.

For example, we built Couchbase from the ground up as a multinode, multicloud replication environment so you wouldn't have to. Built-in options include easily adding/removing nodes, failing over broken nodes easily, connecting to cloud services, etc. This allows developers to select options and architectures they need for balancing availability and performance for their applications.

Couchbase's cross datacenter replication (XDCR) technology enables organizations to deploy geo-distributed applications with high availability in any environment (on premises, public and private cloud, or hybrid cloud). XDCR offers data redundancy across sites to guard against catastrophic data-center failures. It also enables deployments for globally distributed applications.
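From an application's point of view, a geo-distributed deployment with XDCR can look as simple as the sketch below: write to the local cluster and read from a remote one once replication has shipped the change. This assumes an administrator has already configured XDCR between the two buckets (that setup happens at the cluster level, not in SDK code), and it uses the Couchbase Python SDK 4.x; the hostnames, credentials and bucket name are placeholders.

```python
import time

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

def connect(host: str) -> Cluster:
    # Credentials are placeholders for the example.
    return Cluster(f"couchbase://{host}",
                   ClusterOptions(PasswordAuthenticator("app_user", "change-me")))

# Two clusters in different regions, linked by a pre-configured XDCR stream.
us_east = connect("cb-us-east.example.com").bucket("profiles").default_collection()
eu_west = connect("cb-eu-west.example.com").bucket("profiles").default_collection()

# Write to the local cluster...
us_east.upsert("user::1001", {"name": "Ada", "plan": "pro"})

# ...and, once XDCR has shipped the mutation across, read it from the remote
# cluster. XDCR is asynchronous, so a short wait stands in for real retry logic.
time.sleep(2)
print(eu_west.get("user::1001").content_as[dict])
```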

Read our whitepaper, High Availability and Disaster Recovery for Globally Distributed Data, for more information on the various topologies and approaches that we recommend.

Ready to try the benefits of cloud data replication with your own applications? Get started with Couchbase Capella:

Follow this link:
Why Cloud Data Replication Matters - The New Stack
