
Google urged to halt cloud-computing project in Saudi Arabia over human rights concerns – Business Insider

A Silicon Valley tech giant could end up enabling one of the world's worst human rights abusers to better spy on its citizens, human rights campaigners said Wednesday.

When Google announced last year that it had finalized an agreement to build a major new cloud-computing center in Saudi Arabia, the company said the move would allow businesses there to "confidently grow and scale their offerings in this market."

The company opened the first such centers, known as Google Cloud regions, in 2020, starting with the US, Indonesia, and South Korea. It also announced plans to open them in Spain, France, Italy, and Qatar.

But in a statement, critics said that setting up shop in Saudi Arabia could end up bringing more than just faster data transfer speeds to its clients, including Saudi Aramco, a state-owned oil company.

"In a country where dissidents are arrested, jailed for their expression and tortured for their work Google's plan could give the Saudi authorities even greater powers to infiltrate networks and gain access to data on peaceful activists and any individual expressing a dissenting opinion in the Kingdom," Rasha Abdul Rahim, director of Amnesty Tech, said in a press release.

The backlash underscores the difficulties Google faces in its aggressive pursuit of cloud computing, as the push into more markets risks tangling the company up in geopolitical quandaries.

The communique, signed by Amnesty International, Human Rights Watch, and the Electronic Frontier Foundation, among others, calls on Google to "immediately halt" work on the project until the company "can publicly demonstrate how it will mitigate adverse human rights impacts."

The stated fear among campaigners is not that Google will directly assist Saudi authorities' attempts to silence dissent, but that those authorities have shown no qualms about infiltrating technology companies and demanding that they hand over user data. In at least one case, the Saudi government appears to have placed spies within a US social media company, Twitter, to obtain information it could not get through legal means.

The US State Department, in a 2020 human rights report, noted that Saudi authorities "frequently attempted to identify and detain anonymous or pseudonymous users and writers who made critical or controversial remarks." The Saudi government "regularly surveilled websites, blogs, chat rooms, social media sites, emails, and text messages," the report noted, and a counter-terrorism law grants authorities the right to circumvent legal protections to access someone's "private communications."

Saudi Arabia is also a world leader when it comes to beheading citizens it deems enemies of the kingdom. Its top officials also orchestrated the killing of journalist Jamal Khashoggi, using spyware to keep tabs on the dissident and his friends, according to a lawsuit.

Campaigners want Google to come out and set "red lines" concerning requests from the Saudi government with which it will refuse to comply. They also want Google to elaborate on the specifics of the independent human rights assessment the company said it conducted.

"We are saying they should not have any cloud region in Saudi Arabia, unless and until there has been a robust and thorough human rights due diligence process," Michael Kleinman, director of Amnesty International's Silicon Valley initiative, told Insider.

In 2018, after employee backlash over a cloud contract with the Department of Defense, Google published a set of principles around AI that included a commitment to not design or deploy AI that "contravenes widely accepted principles of international law and human rights."

But as Google races to catch Microsoft and Amazon in the cloud wars, deals with some governments risk backlash both inside and outside the company. Earlier this month, some Google employees called on the company to terminate contracts with the Israeli government due to the deadly attacks on Palestinians in Gaza.

The company did not respond to a request for comment.

Have a news tip? Email this reporter: cdavis@insider.com.

Have a tip about Google? Contact Hugh Langley securely using the encrypted messaging apps Signal and Telegram (+1-628-228-1836) or encrypted email (hslangley@protonmail.com).

See more here:
Google urged to halt cloud-computing project in Saudi Arabia over human rights concerns - Business Insider


GDPR: EU privacy watchdog probing the use of AWS and Azure cloud services – ZDNet

EU privacy watchdog the European Data Protection Supervisor (EDPS) has started examining whether the bloc's top institutions and agencies are effectively protecting citizens' personal data when using Amazon's AWS and Microsoft's Azure cloud services.

In a separate investigation, the EDPS will also probe whether the European Commission's use of Microsoft Office 365 is compliant with data protection laws.

The EDPS announced the launch of both inquiries in relation to the Schrems II ruling handed down last summer, which introduced new obstacles to the transfer of personal data between the US, where Amazon and Microsoft are based, and the EU.


In the ruling, the EU Court of Justice concluded that national laws in the US did not match the stringent data protection requirements established by the bloc's General Data Protection Regulation (GDPR), meaning that without additional safeguards, the personal data of EU citizens cannot be safely processed across the Atlantic.

For example, under the Clarifying Lawful Overseas Use of Data Act (CLOUD Act), US authorities are allowed to require US-based storage providers to give them access to information held on their servers, even if that data is located overseas.

An EU-based organization using a US-based cloud provider like AWS or Azure, therefore, might find that some of its data (including personal data about customers or employees, for example) can potentially be made available for US authorities to snoop on.

This is why the EU's Court of Justice invalidated the Privacy Shield, the scheme that was in place to enable personal data to flow freely between the bloc and the US, and ruled that organizations will instead have to implement new privacy-protecting contracts, called Standard Contractual Clauses (SCCs), for each data transfer.

In some cases where even SCCs are insufficient, the data exchange can be suspended.

The EDPS, an independent organization that monitors the processing of personal data by EU institutions, has been closely watching the impact of Schrems II on some of the contracts that tie European offices and agencies to tech companies in the US.

"We identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations," said Wojciech Wiewirowski, the European Data Protection Supervisor.

"We acknowledge that EUIs (European Union Institutions) like other entities in the EU/EEA are dependent on a limited number of large providers. With these investigations, the EDPS aims to help EUIs to improve their data protection compliance when negotiating contracts with their service provider."

In particular, the privacy watchdog will be looking at so-called "Cloud II" contracts agreed between the EU and Microsoft or Amazon for the use of their cloud services.


When EUIs use Azure and AWS, in effect, individuals' personal information can be sent outside of the EU and to the US, and unless appropriate GDPR-compliant measures are taken to protect the data transfer, there is a risk of surveillance from the authorities.

In other words, the EDPS will now be checking whether these GDPR-compliant measures are being taken by institutions in the bloc.

"We will actively support the EU institutions to answer questions raised by the European Data Protection Supervisor and are confident to address any concerns swiftly," a Microsoft spokesperson told ZDNet. "We remain committed to responding to guidance from regulators and will continuously seek to strengthen customer privacy protections." AWS did not respond to a request for comment.

The privacy threats posed by the reliance on foreign ICT providers' cloud services have long been flagged by the EDPS: as early as 2018, the privacy watchdog published guidelines for EU institutions that highlighted EUIs' responsibility in ensuring the protection of personal data in cloud infrastructure.

The message has not gone unheard. Recently, the European Data Protection Board validated the use of a new "EU Cloud Code of Conduct", which acts as a standard certifying that a given cloud service provider is GDPR-compliant. Microsoft Azure and Google Cloud, among others, have already declared adherence to the code of conduct.

What's more: since the Schrems II ruling, cloud providers have come forward to announce changes to their policies to better comply with GDPR restrictions. Both Microsoft and Amazon have promised to contest government requests for access to customer data when they are able to. When required by law, Amazon has also committed to disclosing the minimum amount of information necessary, while Microsoft said that it would provide monetary compensation to the customers affected.

Microsoft has even gone one step further by pledging to enable EU customers to store and process most of their data within the EU by the end of 2022, meaning that personal data wouldn't even need to be sent to the US anymore.

Wiewiórowski recognized that both companies have made improvements, but nevertheless said that the announced measures might not be sufficient to ensure full compliance with EU data protection law, and still require a proper investigation.

"It's not just about law it's also about ethics. There are many social and economic issues that come with relying on only a handful of corporations for your critical infrastructure. If they don't comply to the rules, then your privacy will never be protected," Subhajit Basu, associate professor of information technology law at the University of Leeds, told ZDNet.

But there is also a political dimension to the new investigations, according to Basu. The EU is increasingly keen to re-assert the bloc's "digital sovereignty", especially when it comes to data infrastructure and cloud services.

The majority of the European cloud market, in effect, is controlled by non-European hyperscalers, with recent research showing that more than half of decision makers on the continent use AWS, Microsoft Azure, IBM Cloud and Google Cloud.


In an attempt to regain control over the bloc's digital infrastructure, EU leaders are trying to develop a homegrown cloud initiative called GAIA-X, which will adhere to European principles of data protection and transparency, but the project is stalling and still remains far behind the US-based cloud behemoths.

"This is about the future of cloud services, and making sure that the EU has its share in the pie of cloud business," says Basu. "The whole world is in the cloud nowadays, showing the importance of having a cloud infrastructure."

In addition to probing EUIs' use of US-based cloud services, the EDPS is also investigating the European Commission's use of Microsoft Office 365, another sticking point for the privacy watchdog, given that over 45,000 staff of EU institutions are users of the Redmond giant's products and services.

Last year, the EDPS published a first set of recommendations related to the use of Microsoft's suite, including the imperative of knowing exactly where data is located, what information is transferred out of the EU and whether it is protected by proper safeguards.

For Basu, the move falls in line with both the primary objective of better protecting EU citizens' privacy, and the underlying goal of re-establishing the bloc's digital sovereignty and control over the personal data of its residents.

"What surprises me is it's taken the EDPS this long to launch an investigation," says Basu. "This is good for EU citizens, but it was needed and it should have been done before."

Read more here:
GDPR: EU privacy watchdog probing the use of AWS and Azure cloud services - ZDNet


Recent outages could be taking some of the shine off IBM’s cloud offensive – WRAL Tech Wire

Editor's note: The Skinny blog is written by WRAL TechWire editor and cofounder Rick Smith.

RESEARCH TRIANGLE PARK – It's difficult to tune in a golf event or even some entertainment programs and NOT see ads from IBM touting its cloud computing efforts. But behind the video glitz there are problems that tech companies strive to avoid: outages.

A rash of outages has hit IBM's Cloud in recent months, raising some concerns about what's happening as IBM prepares to split the company, with its cloud focus built around Raleigh-based Red Hat, the Hatters it acquired for $34 billion two years ago in a bid to go all-in on cloud.

"With IBM betting the business on hybrid cloud, the constant issues with its own cloud can hardly be helping its sales team to convince customers that Big Blue has what it takes to help their businesses advance," warns tech news site The Register.

"No cloud is perfect," The Register notes. Yet ...


"All clouds have outages," says the news site, which follows IBM very closely. Google had a biggie last weekend when it reported "multiple Google Cloud Products experienced elevated latencies and/or errors due to an issue with Access Control Lists (ACLs)" intermittently for a duration of 10 hours and 42 minutes.

"But IBM has now had ten significant issues since April 3, by our count," The Register adds.

That number certainly raised the eyebrows of one broadband expert who talked with WRAL TechWire on the basis that he not be identified.

"The recent IBM outages, especially their frequency, would be a cause for concern," he said.

"To be honest," he added, "IBM Cloud is relatively small compared with AWS [Amazon Web Services] and [Microsoft] Azure."

Figuring out how any cloud provider is doing is not easy, he adds.

"It's difficult to assess the performance of the big public cloud providers. They don't publish numbers, for one thing. Another is that outages are usually not an all-or-nothing situation. Their services are so complex that portions may be unavailable for a time. It's also possible for users to contract for specific types of services that don't take advantage of redundancy built into the infrastructure and to take charge of their own reliability. AWS and others do quote reliability stats for their base services that are very impressive but impossible to verify."

The Register has tracked IBM, though, noting that "Severity 1" issues (the rank it uses for incidents that see business-critical systems become unavailable) have occurred on multiple dates.

"Users may experience connectivity issues when trying to access the listed cloud services," IBM warned about a May 25 incident that struck Washington DC, Osaka, London, Dallas, Sydney, Tokyo, and Frankfurt.

The list of affected services is interesting since it includes IBM Watson artificial intelligence, which is the other big piece of the new IBM once the spinoff of its services group is complete.

Here's the list from The Register:

So is this a big problem for IBM?

"I have not heard any buzz in the research and education community. Again, this is probably a reflection of their size (or lack thereof)," the expert said.

"IBM touts their hybrid cloud platform that has components in IBM datacenters and on customer premises. It's impossible to gauge the impact of their cloud incidents on customer operations. It's probably zero for some customers and could be severe for others."

One thing's for sure: IBM sales reps are being quizzed about what's happening.

See original here:
Recent outages could be taking some of the shine off IBM's cloud offensive - WRAL Tech Wire


A SOC Tried To Detect Threats in the Cloud – You Won't Believe What Happened Next – Security Boulevard

A SOC Tried To Detect Threats in the Cloud – You Won't Believe What Happened Next

Now, we all agree that various cloud technologies such as SaaS SIEM help your Security Operations Center (SOC). However, there's also a need to talk about how traditional SOCs are challenged by the need to monitor cloud computing environments for threats. In this post, I wanted to quickly touch on this very topic and refresh some past analysis of this (and perhaps reminisce on how sad things were in 2012).

Back in my analyst days, I noticed that some traditional organizations tried to include their cloud environments in the scope of their security monitoring at some point in their cloud migration journeys. Surprisingly (hey, are you surprised about it? No? Thought so!), some of these projects have not gone well. SOC teams were not equipped to deal with various cloud challenges (old paper on this). There were also cases where both business and IT migrated to the cloud, but security was left behind and had to approach cloud challenges with on-premise tools and practices. Essentially, security was left behind again.

Here, we wanted to quickly summarize some of the challenges, covering the usual range of people, tools, and processes:

Huge thanks to Iman Ghanizada (the Certs Guy) for his contributions to this post.


A SOC Tried To Detect Threats in the Cloud – You Won't Believe What Happened Next was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.

*** This is a Security Bloggers Network syndicated blog from Stories by Anton Chuvakin on Medium authored by Anton Chuvakin. Read the original post at: https://medium.com/anton-on-security/a-soc-tried-to-detect-threats-in-the-cloud-your-wont-believe-what-happened-next-4a2ba0ab5d81?source=rss-11065c9e943e------2

The rest is here:
A SOC Tried To Detect Threats in the Cloud – You Won't Believe What Happened Next - Security Boulevard


Need to migrate legacy business logic to cloud-native? Here’s how – SiliconANGLE News

Translating legacy code to a modern language is only part of the answer. Transitioning to cloud-native architectures is the greater challenge.

For decades, information technology executives were faced with two unpalatable options for legacy application modernization: "rip and replace" or "leave and layer."

When a legacy application still provides value, ripping and replacing is a high-risk option. The replacement may not adequately meet the needs of the organization, and the transition from old to new may be overly disruptive to the enterprise.

However, the leave-and-layer approach isn't much better. True, adding application programming interfaces or connectors to a legacy application can extend its lifetime, as can leveraging robotic process automation to script its user interface.

The problem: Leave and layer adds to technical debt, kicking the modernization can down the road. Any shortcomings of the legacy app remain a ball and chain around the enterprise.

Cloud computing has added a third option: lift and shift. In theory, this approach makes sense: simply migrate the legacy app off of its antiquated platform into the cloud and voilà! It's modernized.

Lift and shift, however, rarely delivers on this promise. Just like leave and layer, it also adds to technical debt. Even more problematic: what if the older app was written in a language the cloud doesn't support? How can you expect lifting and shifting it to work at all?

The obvious solution to the problem of legacy apps written in cloud-unfriendly programming languages is to translate those programs line-by-line into a modern language that will run in the cloud.

Indeed, there have been tools for conducting such translations on the market for years. Such line-by-line translation on its own, however, rarely addresses the business need.

It does nothing about technical debt, and it rarely leads to maintainable code moving forward. "I can't find anybody that has bought two line-by-line translation projects," says Tom Bragg, chief technology officer of ResQSoft Inc.

For many organizations, the goal of legacy code migration is to end up with maintainable modern applications, which means a combination of microservices as well as user interfaces that meet the needs of today's increasingly mobile workforce and customer base.

To achieve these goals, code migration must do more than translate legacy code. It must transition legacy software architecture to microservices architecture, a task that today's tools cannot fully automate. "With line-by-line translation, you get the old architecture," Bragg adds.
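To see why a literal translation preserves the old architecture, consider a hypothetical illustration: a small COBOL-style batch routine translated mechanically into Java. The program and names below are invented for this article, not taken from any vendor's tooling. The result compiles and runs, but it carries over the flat "paragraph" structure and shared working storage of the source program.

```java
// Hypothetical result of a line-by-line COBOL-to-Java translation.
// The flat "paragraph" methods and class-level working storage mirror
// the legacy program's structure rather than idiomatic Java design.
public class InvoiceBatch {
    // WORKING-STORAGE SECTION carried over as shared mutable fields
    private double wsInvoiceTotal = 0.0;
    private double wsDiscount = 0.0;

    // 1000-MAIN-ROUTINE
    public void mainRoutine(double[] lineAmounts) {
        sumLines(lineAmounts);   // PERFORM 2000-SUM-LINES
        applyDiscount();         // PERFORM 3000-APPLY-DISCOUNT
        System.out.println("TOTAL: " + wsInvoiceTotal);
    }

    // 2000-SUM-LINES
    private void sumLines(double[] lineAmounts) {
        for (double amount : lineAmounts) {
            wsInvoiceTotal += amount;
        }
    }

    // 3000-APPLY-DISCOUNT
    private void applyDiscount() {
        if (wsInvoiceTotal > 1000.0) {
            wsDiscount = wsInvoiceTotal * 0.05;
            wsInvoiceTotal -= wsDiscount;
        }
    }

    public static void main(String[] args) {
        new InvoiceBatch().mainRoutine(new double[] {400.0, 700.0});
    }
}
```

The translated program works, but the business rule (the 5% discount over $1,000) remains buried in shared state rather than exposed as an independently testable, deployable service, which is exactly the restructuring work the vendors below layer on top of raw translation.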

The missing piece of this puzzle is human expertise, and as a result all of the vendors in this market offer some combination of automation tooling and professional services, either in-house or via partners.

In many cases, therefore, the migration tools themselves are not standalone products, but rather consultants' tools. "There's a big difference between tool-based professional services and a product," explains João de Oliveira, founder and director at MigrationWare Ltd. "The gap is massive."

I spoke with several providers in this market, and here are the highlights:

MOST Technologies Ltd. executes legacy code migration projects by customizing its automated migration tooling for each project. "Our approach is to customize the tools to automate the migration as needed for each and every project," explains Omry Genossar, vice president of projects at MOST Technologies.

As such, MOST takes an immutable configuration-based approach: If an error crops up in the migrated application, the company will update the customized migration tool and regenerate the target application.

MOST has had particular success modernizing legacy Japanese applications, tackling the difficult challenge of the variety of proprietary Kanji representations.

MOST handles the full migration project lifecycle, including assessment, code translation, data migration and rehosting. MOST's target platform, however, is almost always the mainframe. "It's all mainframe, mainframe, mainframe," quips Hadar Israeli, who also holds the title of vice president of projects at MOST Technologies. "The mainframe is not going anywhere."

Another vendor that takes a metadata-based configuration approach is ResQSoft. ResQSoft extracts appropriate metadata from the source application, including rules, screen layouts, data formats and more. It then uses these metadata to guide its code generation.

ResQSoft also provides automated rewrite assistance, helping developers resolve issues of code structure (for example, eliminating GOTO statements) as well as grouping logically related code elements in order to support the generation of microservices.

To assist its customers in moving to target architectures, ResQSoft leverages templates that provide 90% of the structure necessary for the migration. Templates include Spring Boot for creating standalone Java applications and Hibernate for object-relational mapping.
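As a rough sketch of what such a template-driven target might look like, here is a minimal Spring Boot service exposing a migrated business rule as a REST endpoint. This is a generic illustration assuming the spring-boot-starter-web dependency, not ResQSoft's actual generated output; the class and endpoint names are invented.

```java
// Minimal sketch: a migrated business rule exposed as a stateless
// Spring Boot microservice. Names are hypothetical, not vendor output.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DiscountService {

    // The discount rule, now a stateless method behind a REST endpoint
    // instead of a paragraph mutating shared working storage
    @GetMapping("/discounted-total")
    public double discountedTotal(@RequestParam double total) {
        return total > 1000.0 ? total * 0.95 : total;
    }

    public static void main(String[] args) {
        SpringApplication.run(DiscountService.class, args);
    }
}
```

Compared with the line-by-line translation sketched earlier, the same logic is now stateless and callable over HTTP (GET /discounted-total?total=1100), which is what makes it a candidate for independent deployment and testing.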

Sometimes, a modernization vendor develops a specialty in less common, but still important legacy languages.

MigrationWare, for example, translates applications written in numerous legacy languages to COBOL and other languages, but has developed a particular specialty in IBM's MANTIS rapid application development tool and Broadcom's Easytrieve report generator. "Languages have different characteristics," explains MigrationWare's Oliveira. "It's how you find solutions for different languages that's particularly useful."

In addition to code translation, it offers reimplementation and rehosting of applications. MigrationWare can also migrate legacy databases, file definitions and source data to modern platforms and formats.

The company's particular focus is on rehosting projects to COBOL (along with CICS and JCL) on IBM mainframes, including database conversion and application replatforming.

Another vendor with a particular specialty is Synchrony Systems Inc., which has deep expertise in translating Smalltalk, 3270/5250 green screen apps and IBM VisualAge Generator and Enterprise Generation Language, although it can apply its modernization techniques to any source or target language. (* Disclosure below.)

Synchrony provides automated and transparent modernization while ensuring 100% functional equivalency with no operational interruptions and takes an iterative, automated approach that assumes its customer will want to continue to maintain a modern application.

Many vendors in the code modernization market focus on mainframe applications, either as source applications or, in a surprising number of cases, as target applications as well. However, some vendors work entirely in the distributed computing world, off the mainframe.

Mobilize.net Corp. migrates applications by translating code from legacy Microsoft technologies such as VB6, Winforms and Silverlight to modern C#, VB.NET or Java.

The company also rearchitects monolithic apps by separating the front end from the business logic in a mostly automated fashion. It can also expose modernized business logic via APIs for headless application purposes.

In addition, Mobilize.net has a similar solution for legacy data warehouses from vendors such as Oracle and Teradata. The company migrates SQL code as well as data structures such as tables and views to Snowflake. It also converts legacy stored procedures to modern code that Snowflake can execute.

Another vendor with an even more surprising modernization offering is vFunction Inc., which converts legacy Java (typically Java Enterprise Edition) to modern Java. It automatically converts monolithic legacy Java applications to microservices by giving application architects and senior developers sufficient visibility into application functions to transition legacy apps to modern microservices architectures.

Unlike most legacy code migration tools, vFunction works with the original application binaries, performing both static and dynamic analysis in order to extract the logical relationships within the application. It then leverages the original source code to generate RESTful APIs and microservices, typically transitioning legacy Java EE applications to Spring Boot for deployment in cloud-native environments.

Our final vendor is Blu Age, owned by parent company Netfective Technology SA, which delivers functional-equivalence migration from many legacy languages to modern Java/Spring or .NET Core cloud-native applications. Functional equivalence means that the final application will pass all functional tests that the source application could, thus giving customers a way to guarantee the migration has been completely successful.

Blu Age targets 100% automation of the code transformation part of every project, a goal that's achievable because of its comparison-based functional testing approach. "The idea of Blu Age is to create modern applications," explains Xavier Plot, business development and cloud partner alliances manager at Blu Age.

The last decade has seen a dramatic shift in the economics of legacy migration, making newer, more cost-effective approaches a reality for organizations struggling with legacy technical debt.

Given the pandemic-enhanced urgency of today's digital transformation and IT transformation initiatives, legacy code migration has become a top priority for many enterprises that had previously struggled with their legacy burden. The ongoing maturation of cloud computing is a key enabler of this change. "Over the last three years, the pace of modernization has changed," Plot says. "It's accelerating because of the cloud."

Preserving still-valuable legacy business logic while moving to a modern software platform clearly has appeal, but it has its pitfalls as well. Simply achieving functional equivalence between source and target is only a part of the answer. "Successful migration migrates warts and all," Oliveira quips. "It's hard to validate the success of a migration with a moving target."

Today's IT executives want it all: preservation of legacy business logic, modern cloud-native architectures and maintainable code that helps to pay off legacy technical debt. The good news is that today's legacy code migration offerings help make this wish list a reality.

Jason Bloomberg is founder and president of Intellyx, which publishes the Cloud-Native Computing Poster and advises business leaders and technology vendors on their digital transformation strategies. He wrote this article for SiliconANGLE. (* Disclosure: IBM and Synchrony Systems are Intellyx customers, and Broadcom and Microsoft are former Intellyx customers. None of the other companies mentioned in this article is an Intellyx customer.)


Read more from the original source:
Need to migrate legacy business logic to cloud-native? Here's how - SiliconANGLE News


Akash Network Provides Decentralized Cloud to the Largest Internet of Things (IoT) Network, Helium – PRNewswire

"The blockchain validator community values decentralization and Akash's decentralized cloud meet that key."

With containerization technology and a unique staking model to accelerate adoption, Akash leverages 85% of underutilized global cloud capacity in 8.4 million data centers and servers, enabling developers to set their price for cloud computing. Akash provides a fast, more efficient, and low-cost deployment and hosting solution for the Helium validator software.

"Helium validators running on Akash will have unique advantages such as setting their cost for deployment, scaling up quickly to multiple nodes without building their own Kubernetes cluster, settling payments using cryptocurrency, and switching cloud providers and regions instantly," said Greg Osuri, CEO of Akash Network. "The blockchain validator community values decentralization and Akash's permissionless decentralized cloud meet that key requirement."

Validators will play an integral role in the expansion, stability, and success of the Helium network, acting as the consensus group, and performing functions that include verifying transactions and adding new blocks to the blockchain.

Operating a validator will require hosting the software on a secure and reliable infrastructure. The return on investment for operating a validator is linked to the cost of server hosting and the amount of Helium tokens (HNT) staked. As a secure, decentralized cloud marketplace offering compute at two to three times lower cost than centralized cloud providers, Akash is positioned to onboard and support thousands of potential new validators on the Helium blockchain.

"Migrating the Helium blockchain's consensus group from hosted hotspots to validators is a major upgrade for scalability and performance," said Scott Sigel, Director of Operations at the Decentralized Wireless Alliance, the nonprofit foundation arm of the Helium Network. "From the Foundation's perspective, we want to see Helium validators optimize for diversity of infrastructure and decentralization, which is why we're thrilled to have Akash in the Helium ecosystem. Not only is their performance and cost structure attractive to node operators, but their decentralized cloud aligns with our own ethos of creating permissionless and open systems."

For media inquiries, please contact Kelsey Ruiz at (916) 412-8709 or kelsey(at)akash(dot)network.

About Akash Network: Akash Network, the world's first decentralized and open-source cloud, accelerates deployment, scale, efficiency and price performance for high-growth industries like blockchain and machine learning/AI. Known as the "Airbnb for Cloud Compute," Akash Network provides a fast, efficient and low-cost application deployment solution. Developers leveraging Akash Network can access cloud computing at up to three times less than the cost of centralized cloud providers like Amazon Web Services, Google Cloud and Microsoft Azure. Utilizing containerization and open-source technology, Akash Network leverages 85% of underutilized cloud capacity in 8.4 million global data centers, enabling anyone to buy and sell cloud computing. For more information visit: https://akash.network/.

SOURCE Akash Network

https://akash.network/

Here is the original post:
Akash Network Provides Decentralized Cloud to the Largest Internet of Things (IoT) Network, Helium - PRNewswire


What to Consider When Choosing Between Self-Hosting and Cloud – hackernoon.com

The rise of cloud computing has been one of the biggest transformations in technology over the past decade. The Flexera 2021 State of the Cloud Report found that 92 percent of enterprises now have a multi-cloud strategy and 82 percent have a hybrid cloud strategy.

Even with those overwhelming numbers, there still exists a gap in cloud services. Many businesses, often because of security and regulatory concerns, need instant access to files and can't risk any delay in connecting to the cloud. How can a financial firm, or a manufacturing plant in a remote location, enjoy the same benefits as its counterparts in more populated areas?

Managed services like AWS Outposts and Azure Stack provide a partial solution, creating secure storage solutions for data that need to remain on-premises. Both services create a hybrid environment of local and cloud access, while still ensuring low-latency access.

Companies should consider hosting on their own servers or a private data center. This helps to bring the full benefits to those remote locations with demanding file access and storage requirements.

A self-hosted solution offers the same features and benefits as cloud offerings, but also allows companies to retain complete control over their data. Employees, customers, and clients can have instant access to the network via a web portal or mobile apps, without delays and without using a VPN.

Another emerging concern for companies moving to the cloud is the issue of data sovereignty: the idea that data is subject to the laws of the nation or region where it is collected.

For example, your company might be based in the US and do business internationally, while your data is hosted in Europe. That puts all of your critical business information under the legal rules and regulations of the EU.

Complicating the matter is the fact that countries take different approaches to data privacy, and those approaches often conflict. Without physical control over your data, you're likely facing a number of security concerns.

An ideal solution would allow you to select the region of your choice for storing and processing data. A series of easy switches could activate a local or hybrid environment as well as set the location where your data is stored.

Nearly every organization today runs on data and software. And the question of where its whole IT environment lives has become more and more complex.

There's still no silver-bullet solution for hosting, but it's becoming more evident that forward-thinking companies are opting for solutions that give them the most control.



View post:
What to Consider When Choosing Between Self-Hosting and Cloud - hackernoon.com


Why Salesforce Stock Surged Today – The Motley Fool

What happened

Shares of salesforce.com (NYSE: CRM) jumped on Friday, following the release of the cloud computing leader's fiscal 2022 first-quarter results. As of 3:15 p.m. EDT, the tech stock's price was up more than 5%.

Salesforce's revenue leapt 23% year over year to $6 billion. The software giant's operating cash flow, in turn, soared 74% to $3.2 billion, while its free cash flow rose 99% to $3.1 billion.


Businesses are accelerating their shift to the cloud during the coronavirus pandemic -- a trend that's boosting demand for Salesforce's offerings. "Our performance in the first quarter was strong across all financial metrics," chief financial officer Amy Weaver said in a press release. "We saw record levels of new business and strength across all products, regions, and customer sizes."

These solid results prompted Salesforce to boost its revenue and cash flow forecasts. Management now expects sales to rise roughly 22%, while operating cash flow grows by as much as 13%.

"With incredible momentum throughout our core business, we're raising our revenue guidance for this fiscal year by $250 million to approximately $26 billion and non-GAAP [adjusted] operating margin to 18%," CEO Marc Benioff said. "We're on our path to reach $50 billion in revenue in FY26."

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

See the article here:
Why Salesforce Stock Surged Today - The Motley Fool


Edge Emerges As Third Computing Option – e3zine.com

Edge computing has gained in popularity for several good reasons. So, let's take a look and see where it might fit in your organization.

Technology moves at a breakneck pace, and at certain times, several elements merge, empowering businesses to use their computers in new and more sophisticated, more efficient ways. Such a change is now taking place with the emergence of edge computing.

Historically, businesses employed Information Technology (IT) professionals who bought, installed, and maintained computer hardware, software, and networking equipment. Public cloud computing became popular because vendors, like SAP, took on infrastructure installation and maintenance. Edge is a third system deployment option. In this distributed computing model, some processing occurs near the physical location where data are created rather than on a data center server or in the cloud.

Traditionally, computer processing power has gotten smaller, moving from large mainframe systems to PCs and smartphones. Edge is the next iteration on that theme, usually relying on intelligent sensors, dubbed Internet of Things (IoT) devices. These products generate, collect, and correlate data, increasingly in near real-time. Because these devices place intelligence in new locations, they offer corporations new capabilities, ranging from monitoring a person's heart rate to measuring the wear and tear on a factory-floor conveyor belt.

Thanks to this emerging technology, the volume of data collected by companies is growing at a mind-boggling rate: 55.7 billion connected IoT devices will generate 73.1 zettabytes (ZB; a zettabyte is 1 trillion gigabytes) of data by 2025, up from 18.3 ZB in 2019, according to International Data Corp. (IDC).

As data volumes grow, new challenges arise. Moving all of the information from the source to the destination involves adding a great deal of bandwidth and processing power to enterprise networks. Edge does some (most in some instances) of the processing close to the origination point and sends only consolidated data to the data center.
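As a toy illustration of that pattern (the sensor values and JSON format here are invented for the example), an edge device might summarize a window of raw readings locally and forward only the consolidated record:

```java
// Toy sketch of edge-side aggregation: summarize raw sensor samples
// locally and send only the consolidated record to the data center.
// The sample values and upstream call are stand-ins, not a real device API.
import java.util.DoubleSummaryStatistics;
import java.util.stream.DoubleStream;

public class EdgeAggregator {
    public static void main(String[] args) {
        // One window of vibration samples collected on the edge device
        double[] samples = {0.12, 0.15, 0.11, 0.98, 0.14, 0.13};

        DoubleSummaryStatistics stats =
                DoubleStream.of(samples).summaryStatistics();

        // Only this summary crosses the network, not the raw stream
        String summary = String.format(
                "{\"count\":%d,\"min\":%.2f,\"max\":%.2f,\"avg\":%.2f}",
                stats.getCount(), stats.getMin(), stats.getMax(),
                stats.getAverage());
        System.out.println("Sending to data center: " + summary);
    }
}
```

Shipping one summary record instead of every raw sample is what reduces the bandwidth and processing burden on the enterprise network, while anomalies (like the 0.98 spike above) can still be flagged locally in near real-time.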

Many emerging applications could benefit from this design, like autonomous cars, augmented or virtual reality, industrial automation, predictive maintenance, and video monitoring. Because of these benefits, the edge's future appears bright. In 2018, only 10 percent of corporate data was created outside of legacy systems or the cloud, but in 2025, about 75 percent of all data will be created at the edge, according to Gartner.

Computer technology is constantly evolving. Cloud has become an increasingly popular alternative to legacy computing. Edge offers companies a new variation on computing deployment models, one that fits many emerging applications and seems destined to become an essential element in building enterprise systems.

Original post:
Edge Emerges As Third Computing Option - e3zine.com


Aspiring to Become a Cloud Engineer? Here’s What You Need to Know! – Analytics Insight

The cloud computing sector is flourishing as more businesses recognize the advantages of using cloud services in their day-to-day operations. TechRepublic named cloud engineering one of the most in-demand tech occupations of the year in 2019. Cloud engineering is an appealing career for IT experts wishing to change careers as well as beginners starting into the sector, with features such as good income, the flexibility to work remotely, and much more.

Let's take a closer look at what a cloud engineer actually does, how to become one, the skills required, and the average salary:

Within an organization, cloud engineers can do a range of tasks. Cloud engineers, in a wider sense, are in charge of an organization's cloud-based systems and operations. The following are examples of specific duties that fall under this category:

1. Building architectures with cloud providers such as Amazon Web Services, Microsoft Azure, Google Cloud, and others.

2. Transferring current infrastructures to cloud-based solutions.

3. Handling cloud-based system security and access.

4. Administering, maintaining, and troubleshooting existing operations.

A bachelor's degree in information systems, computer science, or engineering is required to work as a cloud engineer. Furthermore, a candidate can pursue an MBA or a cloud engineering certification from a reputable university, which will prepare her to be an expert in this field.

The following are the skill sets needed to work as a Cloud Engineer:

Cloud engineers can expect to earn good money, with average annual compensation ranging from Rs 12.00 lakhs to Rs 15.00 lakhs. However, the typical wage varies based on where you reside and how long you've worked in the profession.

For example, a cloud engineer with a masters degree and several years of experience may make more than a new graduate with a four-year degree who is just starting out in his career.

Visit link:
Aspiring to Become a Cloud Engineer? Here's What You Need to Know! - Analytics Insight
