
How Cloud Virtualization is Used by Fujifilm, TIM, LIQ, Gateway Technical College, Hello Sunday Morning, and Rustomjee: Case Studies – Enterprise…

Cloud virtualization technology is used by companies to provision, share, and scale IT resources for the organization.

Virtualization of cloud-based IT resources is increasing due to the growth of data and the rising need for flexible storage, access, and management.

See below how organizations in different industries are using cloud virtualization to create IT outcomes:

See all about the Virtualization Market.

LIQ is a customer experience (CX) and customer relationship management (CRM) company whose solutions target several audiences. Through its technology and multichannel format, the company aims to be recognized as the best alternative for companies that want to improve their customer relationships.

The company was using VMware for cloud computing and virtualization services to create virtual machines (VMs). With VMware, they were able to streamline their cloud processes.

"The process is still ongoing, but we have already achieved surprising results," says Nicholas Ramirez, head of technology and innovation, LIQ.

Industry: Professional services

Cloud virtualization solutions: VMware vCenter, Google Cloud


Rustomjee has become a large part of India's real estate market. Their portfolio includes 14 million square feet of completed projects, 12 million square feet of current developments, and 28 million square feet of planned developments.

As the business grew, Rustomjee needed to set up the infrastructure and applications for more real estate development. They decided to move their information over to Google Cloud Platform, which gave the company "the ability to complement every idea [they] had."

With Google Cloud, virtualization helped Rustomjee continue to grow.

"Running the remote desktop service in Compute Engine and using public images enabled us to operate a leaner, simplified desktop-as-a-service environment," says V. M. Samir, head of information, Rustomjee.

"We have been able to deploy business function-specific virtual machine instances on the cloud and compartmentalize data to the business functions that need them."
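As a rough illustration of the pattern Samir describes, the sketch below creates a Compute Engine instance from a Google-maintained public image and labels it with the business function it serves. It is a minimal example under assumed values (the project, zone, machine type, image, and label names are illustrative), not Rustomjee's actual configuration.

# Minimal sketch: one business-function-specific VM built from a public image.
# Requires google-api-python-client and application-default credentials.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

config = {
    "name": "finance-desktop-01",
    "machineType": "zones/asia-south1-a/machineTypes/e2-standard-4",
    "labels": {"business-function": "finance"},  # used to compartmentalize resources per function
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            # A public image family maintained by Google, as the article describes
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

request = compute.instances().insert(
    project="example-project", zone="asia-south1-a", body=config
)
print(request.execute())  # returns an operation object that can be polled for completion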

Industries: Real estate and construction

Cloud virtualization solutions: Google Cloud, Google Compute Engine


TIM is a leading company in Italy and Brazil in the information and communications technology (ICT) sector. They have developed fixed, mobile, cloud, and data center infrastructure, in addition to products and services for entertainment.

TIM partnered with Citrix to explore desktop virtualization, along with Google Cloud to help make improvements in this area.

Alongside Google Cloud, TIM also worked with the company Noovle. The added cloud technology gave TIM more flexibility and better control of its budget.

"The virtualized cloud technology frees us from the constraints of physical hardware, while providing the operating systems, applications, and information we need, when we need it," says Mauro Maccagnani, head of digital enterprise solutions, TIM.

Industry: Technology

Cloud virtualization solutions: Google Cloud, Google Compute Engine, Citrix Virtual Desktop, Kubernetes Engine, Anthos



Hello Sunday Morning is a nonprofit in Australia working to help people change their relationship with alcohol. They created an app called Daybreak to extend that support to more people.

Downloads of the app surged by as much as 52% over previous levels. Hello Sunday Morning reached out to AWS for assistance with their cloud virtualization to handle the increased demand more efficiently.

"Some Australians were going through a very challenging transition during this period, and we're grateful that our early investment in AWS gave us the tools to extend support to more users in their time of need," says Roger Falconer-Flint, head of marketing, Hello Sunday Morning.

"Had we relied on the disparate workloads across cloud service providers (CSPs) prior to going all-in with AWS, we would not have been able to scale the app reliably to meet this surge."

Using the tools AWS provided, they were able to keep serving customer demand for the app.

Industry: Nonprofit

Cloud virtualization solutions: AWS KMS, Amazon EC2, Amazon EKS, Amazon RDS for PostgreSQL



Gateway Technical College is one of the largest members of the Wisconsin Technical College System. Gateway has a nationally recognized, hands-on technical education program.

Gateway wanted to grow their use of technology with a solution that wasn't labor-intensive. The technology they were using previously was not production-ready, and the volume and variety of classes for students was increasing, so they needed better performance and stability.

Gateway began using V2 Cloud, a desktop virtualization product.

"We could not conduct these classes properly without V2," says Allen J. Pearson, an IT instructor at Gateway.

"I have found V2 to be reliable."

Industry: Education

Cloud virtualization solution: V2 Cloud


Fujifilm Medical Systems USA provides imaging equipment such as digital X-ray, mammography, and ultrasound systems. Fujifilm needed disaster recovery (DR) technology in case something happened to its systems.

To solve this problem, they reached out to VMware for help. Using cloud virtualization, Fujifilm was able to test and protect their medical systems.

"Fujifilm was an early adopter of VMware technologies for health care," says Jim Morgan, VP of Fujifilm Medical Systems USA.

"VMware vSphere and the cloud infrastructure suite reflect the gold standard for cloud virtualization and will enable Fujifilm to continue to provide outstanding value utilizing cutting-edge technologies," Morgan says. "VMware's partnership has been invaluable to our company and ultimately to our customers."

With VMware vSphere, Fujifilm was able to use cloud virtualization to protect both its servers and its customers in case of an emergency.

Fujifilm also used VMware vCenter Site Recovery Manager to test a fail-over across two sites.

Industry: Medical

Cloud virtualization solutions: VMware vSphere, VMware vCenter Site Recovery Manager


Originally posted here:
How Cloud Virtualization is Used by Fujifilm, TIM, LIQ, Gateway Technical College, Hello Sunday Morning, and Rustomjee: Case Studies - Enterprise...


Having trouble finding power supplies or server racks? You’re not the only one – The Register

Power and thermal management equipment essential to building datacenters is in short supply, with delays of months on shipments, a situation that's likely to persist well into 2023, Dell'Oro Group reports.

The analyst firm's latest datacenter physical infrastructure report, which tracks an array of basic but essential components such as uninterruptible power supplies (UPS), thermal management systems, IT racks, and power distribution units, found that manufacturers' shipments accounted for just one to two percent of datacenter physical infrastructure revenue growth during the first quarter.

"Unit shipments, for the most part, were flat to low single-digit growth," Dell'Oro analyst Lucas Beran told The Register.

He blamed the delays on challenging supply chain conditions and strong demand from hyperscalers, which are expected to open at least 30 additional regions this year.

Customers hoping to get their hands on what Beran calls long-sales-cycle products (large centralized three-phase UPSes, thermal management systems, and cabinet power distribution systems, for example) may have to wait between six months and a year before they even ship.

And the story isn't much better for products that are typically readily available. Things like single-phase UPSes, rack power distribution units, and IT racks have seen lead times slip to between four and six weeks depending on the vendor.

"Supply chain disruptions aren't going away by any means," Beran said, adding that while he does expect a higher volume of unit shipments in the second half of 2022, the improvement is likely to be marginal.

While supply remains challenged, Beran notes that some emerging technologies are gaining momentum.

He expects liquid and immersion cooling to see robust growth over the next few years as customers warm up to the tech. While liquid and immersion cooling combined accounted for just five percent of total thermal-management spending in 2021, it's a market that's growing rapidly, up roughly 50 percent from the prior year.

What's more, interest in the technologies is at an all-time high, accelerated by investments by large OEMs and chipmakers. Last month, Intel announced a $700 million lab in Oregon to develop novel liquid cooling technologies.

"When Intel throws their weight behind something like that, and not just a little something, but $700 million close to a billion dollars that is a pretty large signal to the datacenter ecosystem that this is a serious technology," Beran said.

Despite the stark supply chain forecast for datacenter physical infrastructure, the sector's revenues surged six percent year-over-year in Q1 as pent-up demand was met with higher per-unit costs.

"When I look at datacenter physical infrastructure as a whole four to five percent of that was driven by price increases," Beran observed, adding that vendors are passing higher costs on to channel partners and customers.

The majority of this growth was realized in the North American, Asia Pacific, and Chinese markets, where Eaton, Riello, and Schneider Electric gained the largest shares during the quarter.

Short-sales-cycle products like single-phase UPS bore the brunt of inflationary pricing pressures during the quarter, according to Beran, who expects higher pricing to begin hitting longer-sales-cycle products in early 2023.

Looking ahead, Beran predicts datacenter physical infrastructure revenues will grow nine percent in full-year 2022, as improving supply chain conditions in the second half of 2022 are met with higher prices.

This growth will be fueled, in large part, by a surge of hyperscale and cloud spending this year. The analyst firm predicts cloud providers will spend upwards of 25 percent more, to $18 billion, on datacenter infrastructure in 2022, following record investments in Q1.

Read more:
Having trouble finding power supplies or server racks? You're not the only one - The Register


VPN service providers to be held liable if violated CERT-In directives: Official – Hindustan Times

New Delhi: Companies offering virtual private network (VPN) or cloud services in India will be held liable if they do not comply with the government's cybersecurity policy, which requires them to collect and maintain extensive, accurate data on their customers for five years, an official familiar with the matter said.

While these companies are not required to inform the Union ministry of electronics and information technology (MeitY) that they are complying with the directives, they may face charges if they fail to provide information about a particular case when it is sought by the Centre, the government official told HT, requesting anonymity.

Earlier this month, Union minister of state for electronics and information technology Rajeev Chandrasekhar said that companies must comply with the laws of the land or they can exit the Indian market. Defending the rules, the government said the information will only be sought on a case-by-case basis and therefore does not violate citizens' right to privacy.

ExpressVPN, one of the leading VPN service providers, has already announced that it is shutting its servers in India, becoming one of the first companies to pare back operations in the country after the Indian Computer Emergency Response Team (CERT-In) issued directives on April 28 that require additional compliance measures.

Several tech companies and experts have claimed that the directives, which came into effect on June 26, open avenues for misuse by mandating VPN service providers to maintain detailed logs of their customers.

ExpressVPN also cited similar reasons for folding its servers in the country. "India has ordered all VPN providers in the country to start logging user activity and storing it for five years. This is incompatible with our commitment to user privacy, so we have made the straightforward decision to stop operating VPN servers within India," Harold Li, vice president of ExpressVPN, told HT in an email on June 2.

The new directives from CERT-In, the government's nodal agency for detecting and responding to cyber incidents, may have far-reaching ramifications for how VPN services are offered and used in the country. The directives state that all cloud service providers and VPN providers will be required to maintain extensive customer information for at least five years, even after any cancellation or withdrawal of the registration by a customer. The information includes customers' validated names, addresses, and contact numbers, the period of subscription, email addresses, the IPs being used, and the purpose for using the services, among other details.

The norms will also apply to data centres and virtual private server (VPS) providers.

"With respect to transaction records, accurate information shall be maintained in such a way that individual transaction can be reconstructed along with the relevant elements comprising of, but not limited to, information relating to the identification of the relevant parties including IP addresses along with timestamps and time zones, transaction ID, the public keys (or equivalent identifiers), addresses or accounts involved (or equivalent identifiers), the nature and date of the transaction, and the amount transferred," the norms stated. Failure to furnish the information or non-compliance with the ... directions "may invite punitive action."
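For a sense of what the directives ask providers to retain, here is a hedged sketch of the kind of subscriber and transaction records described above. The field names are assumptions drawn from the article's wording, not an official CERT-In schema.

# Illustrative data structures only; not an official schema.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class SubscriberRecord:
    validated_name: str
    address: str
    contact_number: str
    email: str
    subscription_start: datetime
    subscription_end: Optional[datetime]  # records kept five years even after cancellation
    allocated_ips: List[str]
    purpose_of_service: str

@dataclass
class TransactionRecord:
    transaction_id: str
    party_ip_addresses: List[str]         # identification of the relevant parties
    timestamp: datetime                   # with time zone information
    public_keys_or_ids: List[str]
    accounts_involved: List[str]
    nature_of_transaction: str
    amount_transferred: str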

Continue reading here:
VPN service providers to be held liable if violated CERT-In directives: Official - Hindustan Times


Launch your IT career with over 225 hours of training on Microsoft 365, Windows & Azure – ZDNet


The following content is brought to you by ZDNet partners. If you buy a product featured here, we may earn an affiliate commission or other compensation.

More and more companies are using Azure for their cloud services lately, which makes sense on many levels. Other Microsoft platforms like Office have been the standard around the workplace for decades, after all.

With all this new tech comes new demand for IT workers who can set it up. The road to Microsoft certification can be long, but the Complete 2021 Microsoft 365, Windows and Azure Bundle is a great resource that can smooth out the path for budding admins.

What you've got here is a roundup of 17 online courses, all pulled from the extensive catalog at iCollege. If you've ever used online learning before, you probably already recognize that name. They're a trusted learning outlet whose educators have prepared thousands of workers for new careers on three continents. Each course is taught in an accessible, hands-on style, and you can tackle them at your own pace.

Depending on your job goals, you can start with one of several intro courses. In a few hours, you'll be able to learn the fundamentals of PowerShell, RDS, Teams, and other essential platforms. After mastering those, you can move on to more targeted classes, each of which serves as a study guide for a different Microsoft certification.

In addition, there are courses on the MS-100 and 101 for Microsoft 365, the MD-100 for Windows and multiple guides that can help you breeze through your first few Azure Administrator and Azure Associate exams. By the time you're done, you'll be an asset to any company that uses MS systems, whether they're working in the cloud or with onsite servers.

There are more than 225 hours of training in the Complete 2021 Microsoft 365, Windows, & Azure Bundle, and all 17 courses are now available for $59.99. That's under $4 per course!

Read the rest here:
Launch your IT career with over 225 hours of training on Microsoft 365, Windows & Azure - ZDNet


ZTE intros ‘cloud laptop’ that draws just five watts of power – The Register

Chinese telecom equipment maker ZTE has announced what it claims is the first "cloud laptop": an Android-powered device that consumes just five watts and links to its cloud desktop-as-a-service.

Announced this week at the partially state-owned company's 2022 Cloud Network Ecosystem Summit, the W600D measures 325mm × 215mm × 14mm, weighs 1.1kg, and includes a 14-inch HD display, full-size keyboard, HD camera, and Bluetooth and Wi-Fi connectivity. An unspecified eight-core processor drives it, and a 40.42 watt-hour battery is claimed to last for eight hours.
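A quick back-of-the-envelope check, using only the figures above, shows the battery and power claims are at least internally consistent:

# Sanity check of ZTE's claims: a 40.42 Wh battery lasting eight hours
# implies an average draw of roughly five watts.
battery_wh = 40.42   # claimed battery capacity, watt-hours
runtime_h = 8        # claimed battery life, hours
print(f"Implied average draw: {battery_wh / runtime_h:.2f} W")  # ~5.05 W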

It seems the primary purpose of this thing is to access a cloud-hosted remote desktop in which you do all or most of your work. ZTE claimed its home-grown RAP protocol ensures these remote desktops will be usable even on connections of a mere 128Kbit/sec, or with latency of 300ms and packet loss of six percent. That's quite a brag.

ZTE's rendering of its W600D 'cloud laptop'

As such, the machine is basically a client end-point connected to ZTE's uSmart cloud PC service, and it is suggested for use in almost any setting, especially when multiple users share a physical machine at home or work.

ZTE already has a cloud PC for the desktop: the W100D, a pack-of-cards-sized device similar to Alibaba's Wuying device.

Alibaba released its virtual computer earlier this year. The Wuying is designed for use with Alibaba Cloud and is available in Singapore or China. Alibaba also suggests its cloudy client device as an option for consumers or businesses.

Desktop-as-a-service is seldom offered to consumers, anywhere. Now two of China's mightiest tech outfits think the nation has an appetite for such services and accompanying devices.

ZTE may struggle to find a market for the W600D outside China, given the company is so distrusted in the US that the FCC will literally reimburse medium and small carriers (or at least promise to, when there's enough money) who remove and replace the company's products.

This does not mean China's PC market is terminal, but it could mean terminals will take a chunk of China's PC market.

Continued here:
ZTE intros 'cloud laptop' that draws just five watts of power - The Register


This startup says it can glue all your networks together in the cloud – The Register

Multi-cloud networking startup Alkira has decided it wants to be a network-as-a-service (NaaS) provider with the launch of its cloud area networking platform this week.

The upstart, founded in 2018, claims this platform lets customers automatically stitch together multiple on-prem datacenters, branches, and cloud workloads at the press of a button.

The subscription is the latest evolution of Alkira's multi-cloud platform, introduced back in 2020. The service integrates with all major public cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud) and automates the provisioning and management of their network services.

"Cloud was supposed to make life easier, but it has grown more complex as customers struggle to manage islands of networking, each with its own rules and tools. They thought they were buying agility, but what arrived was a mountain of complexity and technical debt," Alkira CEO Amir Khan argued in a canned statement.

He argues that today's network architectures were never designed for the level of change that the cloud has introduced. "Until now, enterprises had a choice between shoehorning last-generation technology into the cloud or using orchestration tools to hide the complexity."

Rather than building its own private network as vendors like Aryaka (yes, Aryaka) have done, or relying on telecommunications providers as many SD-WAN vendors do, Alkira piggybacks on the global network backbones that interconnect the public cloud providers' datacenters.

For example, if a customer needs to connect a workload running in AWS to another running in GCP or Azure, the platform automatically configures and connects the virtual networks on each of the respective public clouds.

However, since launching the platform, Alkira has introduced several additional capabilities, including support for branch-to-branch communications and hybrid-cloud networking for customers with a mix of on-prem and cloud infrastructure.

The company has also announced integrations with several large security and network vendors like Cisco, Fortinet, Check Point, Palo Alto Networks, and Aruba to enable customers to deploy the service alongside their existing infrastructure.

Alkira's Cloud Area Networks service consolidates these capabilities into a single platform and adds support for Terraform and REST APIs for integration with customers' continuous integration and continuous delivery pipelines.

Altogether, this functionality has helped the multi-cloud startup secure multiple high-profile contracts with the likes of Warner Music Group, Tekion, and Koch Industries. The latter was one of the company's largest financiers and has deployed Alkira's services to connect its more than 700 locations around the world.

However, Alkira is far from the only vendor vying for a piece of the NaaS market. The business faces competition from many of the same cloud providers on which its service relies.

As more enterprise workloads have made their way into the cloud, AWS, GCP, and Azure have all launched cloud transport services for customers that need to connect workloads running across multiple regions. Many of these services also support using their private networks as an alternative to multi-protocol label switching (MPLS) or broadband connectivity for branch-to-branch communications. Amazon's Cloud WAN service introduced late last year is one such example.

Meanwhile, Alkira also faces competition from traditional SD-WAN vendors like Cisco and Fortinet, which have leaned on these cloud transport services as a means of extending the network architectures customers are already familiar with to multi-cloud networking use cases.

Go here to see the original:
This startup says it can glue all your networks together in the cloud - The Register


Chinese startup hires chip godfather and TSMC vet to break into DRAM biz – The Register

A Chinese state-backed startup has hired legendary Japanese chip exec Yukio Sakamoto as part of a strategy to launch a local DRAM industry.

Chinese press last week reported that Sakamoto has joined an outfit named SwaySure, also known as Shenzhen Sheng Weixu Technology Company or Sheng Weixu for brevity.

Sakamoto's last gig was as senior vice president of Chinese company Tsinghua Unigroup, where he was hired to build up a 100-employee team in Japan with the aim of making DRAM products in Chongqing, China. That effort reportedly faced challenges along the way, some related to US sanctions, others to recruitment.

The company scrapped major memory projects in two cities and was forced into bankruptcy last year, before Beijing arranged a bailout.

While that venture failed, 75-year-old Sakamoto's CV remains hard to match. He was once president of Japan's Elpida Memory, a major Apple supplier with the capacity to produce over 185,000 300mm wafers per month. Micron bought the company in 2013.

Sakamoto's new employer, which he claims will be his last, was established in March with 5 billion yuan ($745 million) of registered capital and is 100 percent controlled by Shenzhen state-owned assets, according to Chinese state media.

Its main products are listed as general-purpose DRAM chips for datacenters and smartphones, developed by teams in Japan and China.

Sakamoto will join Taiwan Semiconductor Manufacturing Co (TSMC) veteran Liu Xiaoqiang, said Chinese state media outlet Global Times. Although Liu left TSMC three years ago, the employment choice raises eyebrows given China's yearning for Taiwanese talent, complete with accusations of poaching and speculation of aggressive methods to obtain it.

Beijing has been extremely eager to achieve tech self-sufficiency amid US sanctions in an already critical supply chain environment. In October 2020, China set a goal of growing all its own tech at home by 2035.

Unfortunately for the Middle Kingdom, that goal seems more elusive by the day. Analyst house IC Insights predicted that by 2026, China will only produce 20 percent of the chips it uses.

Previous attempts to create a steady domestic DRAM stream in China have been thwarted by pesky things like IP laws. In addition to Tsinghua's failure to thrive, state-owned Fujian Jinhua Integrated Circuit Company was indicted on industrial espionage charges in the US and banned from importing semiconductor equipment and materials from the States.

Instead, the market remains dominated by the likes of Korea's Samsung and SK hynix, plus US company Micron. According to IC Insights [PDF], the trio held 94 percent of global DRAM market share in 2021.

More:
Chinese startup hires chip godfather and TSMC vet to break into DRAM biz - The Register


Zscaler bulks up AI, cloud, IoT in its zero-trust systems – The Register

Zscaler is growing the machine-learning capabilities of its zero-trust platform and expanding it into the public cloud and network edge, CEO Jay Chaudhry told devotees at a conference in Las Vegas today.

Along with the AI advancements, Zscaler at its Zenith 2022 show in Sin City also announced greater integration of its technologies with Amazon Web Services, and a security management offering designed to enable infosec teams and developers to better detect risks in cloud-native applications.

In addition, the biz is putting a focus on the Internet of Things (IoT) and operational technology (OT) control systems as it addresses the security side of the network edge. Zscaler, for those not aware, makes products that securely connect devices, networks, and backend systems together, and provides the monitoring, controls, and cloud services an organization might need to manage all that.

Enterprises are looking for ways to protect workloads and data that are increasingly being run, accessed, and created outside the central datacenter, making a legacy perimeter security defense more outdated, Chaudhry opined during his keynote Wednesday.

"Workloads, somewhat like users, talk to the internet," he said. "Workloads talk to other workloads, so zero trust plays an important role."

Zscaler has been banging on the idea of zero trust since the rollout of its first cloud services in 2008. Zero trust essentially operates on the premise that no user, device, or application on the network inherently can be trusted. Instead, a zero-trust framework relies on identity, behavior, authentication, and security policies to verify and validate everything on the network and to determine such issues as access and privileges.
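To make the premise concrete, here is a minimal sketch of a zero-trust style access decision: deny by default, and grant access only after identity, device posture, behavioral risk, and per-app policy all check out. This is a generic illustration under assumed field names, not Zscaler's policy engine.

# Generic zero-trust access decision sketch (illustrative only).
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    device_compliant: bool      # e.g. patched OS, disk encryption enabled
    risk_score: int             # 0 (benign) to 100 (high risk), from behavior analytics
    app_allowed_for_user: bool  # user-to-app segmentation policy

def decide(req: AccessRequest, risk_threshold: int = 70) -> str:
    # Nothing is trusted by default; every condition must hold for access.
    if not req.user_authenticated or not req.device_compliant:
        return "deny"
    if not req.app_allowed_for_user:
        return "deny"
    if req.risk_score >= risk_threshold:
        return "step-up-auth"   # require extra verification rather than extend trust
    return "allow"

print(decide(AccessRequest(True, True, 20, True)))   # allow
print(decide(AccessRequest(True, False, 20, True)))  # deny: non-compliant device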

It's a booming space, with analyst biz MarketsandMarkets recently forecasting the global zero-trust market growing from $27.4 billion this year to $60.7 billion by 2027. Zero trust has also become a buzzword in the industry, with a growing number of vendors claiming they offer such capabilities.

Chaudhry said his company is working to build out an integrated, cloud-based platform that gives enterprises tightly integrated services rather than a collection of point products that need to be pulled together by an organization.

The latest offerings are designed to expand what its Zero Trust Exchange architecture can do. Zscaler's Posture Control agentless offering is integrated into Zero Trust Exchange to prioritize risk, including unpatched vulnerabilities in containers and virtual machines, cloud service misconfigurations and excessive permissions.

It also scans workloads and detects and resolves issues early in the development lifecycle before they become problems in production. Posture Control is the second step in Zscaler's efforts to secure workloads, following the release last year of Cloud Connector, which Chaudhry said eliminated the need for multiple virtual firewalls.
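The sketch below illustrates the general class of checks such posture tooling runs, flagging public storage, wildcard permissions, and unpatched workloads in simplified resource definitions. The resource fields are assumptions for the example, not Zscaler's Posture Control data model.

# Illustrative posture checks over simplified resource definitions.
def find_findings(resources):
    findings = []
    for r in resources:
        if r.get("type") == "storage_bucket" and r.get("public_access"):
            findings.append(f"{r['name']}: bucket is publicly accessible")
        if r.get("type") == "iam_binding" and "*" in r.get("actions", []):
            findings.append(f"{r['name']}: wildcard permission grants excessive access")
        if r.get("type") == "vm" and r.get("unpatched_cves"):
            findings.append(f"{r['name']}: unpatched vulnerabilities {r['unpatched_cves']}")
    return findings

sample = [
    {"type": "storage_bucket", "name": "backups", "public_access": True},
    {"type": "iam_binding", "name": "ci-role", "actions": ["*"]},
    {"type": "vm", "name": "web-01", "unpatched_cves": ["CVE-2021-44228"]},
]
for finding in find_findings(sample):
    print(finding)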

"Workloads need to securely communicate, but in addition to that, when you are launching those workloads, you want to make sure they are configured right there are hundreds and hundreds of configurations around the workloads and you also need to make sure that the right people have the right access, entitlement and permissions," the CEO said. "In addition, you need to make sure the attack surface is minimized."

The new AI and machine learning capabilities integrated into the Zero Trust Exchange are aimed at both improving the user experience and better protecting the network against the rising numbers and sophistication of cyberattacks. According to Zscaler research, there was a 314 percent increase in encrypted attacks between September 2020 and 2021 and an 80 percent increase in ransomware attacks between February 2021 and March 2022, with a 117 percent jump in double-extortion attacks.

There also was a more than 100 percent [PDF] year-over-year rise in phishing attacks in 2021, it claimed.

AI and machine learning technologies are fed by data, and Zscaler's security cloud inspects more than 240 million transactions a day and extracts more than 300 trillion signals that can feed the AI and machine learning algorithms. This now includes AI-powered phishing prevention, AI-based policy recommendations to stop the lateral movement of cyberthreats, and user-to-app segmentation to reduce the attack surface, he said.

There is also an autonomous risk-based policy engine to enhance network integrity and enable customized policies based on risk scores applied to users, devices, apps, and content, as well as AI-driven root cause analysis capabilities to accelerate the mean time to resolution.

Chaudhry said customer demand drove the development of IoT and OT security capabilities in the platform. Enterprises said that many of their plants and factories rely on traditional security components that open them to ever-increasing cyberthreats.

"You can actually define those solutions within the factory floor or you can send telemetry from IoT or OT devices from your data lake at Azure, AWS or wherever else securely without doing VPN devices," the CEO said, noting that the company is partnering with Siemens developing and integrating products in this area.

Go here to see the original:
Zscaler bulks up AI, cloud, IoT in its zero-trust systems - The Register


Mega’s unbreakable encryption proves to be anything but – The Register

Mega, the New Zealand-based file-sharing biz co-founded a decade ago by Kim Dotcom, promotes its "privacy by design" and user-controlled encryption keys to claim that data stored on Mega's servers can only be accessed by customers, even if its main system is taken over by law enforcement or others.

The design of the service, however, falls short of that promise thanks to poorly implemented encryption. Cryptography experts at ETH Zurich in Switzerland on Tuesday published a paper describing five possible attacks that can compromise the confidentiality of users' files.

The paper [PDF], titled "Mega: Malleable Encryption Goes Awry," by ETH cryptography researchers Matilda Backendal and Miro Haller, and computer science professor Kenneth Paterson, identifies "significant shortcomings in Mega's cryptographic architecture" that allow Mega, or those able to mount a TLS MITM attack on Mega's client software, to access user files.

The findings, detailed on a separate website, proved sufficiently severe that Kim Dotcom, no longer affiliated with the file storage company, advised potential users of the service to stay away.

Mega chief architect Mathias Ortmann meanwhile published a blog post announcing a client software update addressing three of the five flaws identified by the researchers, promising further mitigations, and thanked the ETH Zurich boffins for responsibly reporting their findings.

"The first two attacks exploit the lack of integrity protection of ciphertexts containing keys (henceforth referred to as key ciphertexts), and allow full compromise of all user keys encrypted with the master key, leading to a complete break of data confidentiality in the MEGA system," the paper explains. "The next two attacks breach the integrity of file ciphertexts and allow a malicious service provider to insert chosen files into users cloud storage. The last attack is a Bleichenbacher-style attack against MEGAs RSA encryption mechanism."

The major issue here is that Mega's method for deriving the various cryptographic keys used to authenticate and encrypt files fails to check for key integrity. So a malicious server can tamper with the RSA private key and make it leak information.

The first issue is an RSA Key Recovery Attack. It allows an attacker controlling the Mega API, or able to mount a TLS MITM attack on the client, to abuse the authentication protocol to extract the user's private key. This is done by constructing an oracle (a mathematical data leak) to gather one bit of information per login attempt about a factor of the RSA modulus (an integer that's the product of two primes, used to generate the cryptographic key pair).

This attack takes at least 512 login attempts to carry out. Mega in its post cites this figure to suggest the attack is difficult to carry out, but the ETH researchers note that it's possible to further manipulate Mega's software to force the client to log in repeatedly, allowing the attack to fully reveal a key within a few minutes.
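To make the "one bit per login" mechanic concrete, the toy sketch below shows how a comparison oracle lets an attacker binary-search for a secret value, one bit per query. It is a generic illustration under simplified assumptions (a small toy secret and a direct comparison oracle), not Mega's actual protocol; the researchers' 512-login figure also relies on further optimizations not shown here.

# Toy illustration: recovering a secret when each query leaks one comparison bit.
import secrets

BITS = 64                        # toy size; the real attack targets ~1024-bit factors
q = secrets.randbits(BITS) | 1   # stand-in for the secret prime factor

def oracle(x: int) -> bool:
    # Simulates the leak obtained from one "login attempt": is the secret below x?
    return q < x

lo, hi, queries = 0, 1 << BITS, 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    queries += 1
    if oracle(mid):
        hi = mid        # secret lies in [lo, mid)
    else:
        lo = mid        # secret lies in [mid, hi)

assert lo == q
print(f"Recovered the {BITS}-bit secret in {queries} queries")  # roughly BITS queries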

The second is a Plaintext Recovery Attack. "Building on the previous vulnerability, the malicious service provider can recover any plaintext encrypted with AES-ECB under a user's master key," the paper explains.

"This includes all node keys used for encrypting files and folders (including unshared ones not affected by the previous attack), as well as the private Ed25519 signature and Curve25519 chat key. As a consequence, the confidentiality of all user data protected by these keys, such as files and chat messages, is lost."

Attacks three and four allow a malicious service provider to "break the integrity of the file encryption scheme and insert arbitrary files into the user's file storage which pass the authenticity checks during decryption. This enables framing of the user by inserting controversial, illegal, or compromising material into their file storage."

While this may sound outlandish, framing political opponents with fabricated evidence has been documented and represents a real threat.

The fifth attack is described as "a new Guess-and-Purge variant of Bleichenbacher's attack." It relies on a large number of guesses (around 2^17) to decrypt node and chat keys.

Proof-of-concept code for these attacks has been published on GitHub.

Ortmann said Mega intends to release a client fix for attack number four and to remove the legacy code that allows attack number five.

Paterson, via Twitter, said Mega has taken some steps to address these attacks but expressed disappointment that the company hasn't committed to a thorough overhaul of its approach, because its cryptography is "pretty fragile."

"On the other hand, to fix everything thoroughly, all of [Mega's] customers would have to download all their files, re-encrypt them, and upload them again," he said. "With 1000 Petabytes of data to deal with, that's going to hurt."

Paterson and his colleagues argue that companies should work to standardize secure cloud storage to avoid repeated ad hoc implementations that repeat the same errors.

"We believe that this would be the easiest path to avoid attacks stemming from the lack of expert knowledge among developers, and that it would enable users to finally have confidence that their data remains just that theirs," the paper concludes.

Read this article:
Mega's unbreakable encryption proves to be anything but - The Register


Another Issue With Internet Antitrust Bills: Sloppy Drafting Could Lead To Problems For Encryption – Techdirt

from the not-good,-not-good-at-all dept

As the big push is on to approve two internet-focused antitrust bills, the American Innovation and Choice Online Act (AICOA) and the Open App Markets Act, we've been calling out that while the overall intentions of both may be good, there are real concerns with the language of both and how it could impact content moderation debates. Indeed, it seems pretty clear that the only reason these bills have strong support from Republicans is because they know the bills can be abused to attack editorial discretion.

There have been some other claims made about problems with these bills, though some of them seem overblown to me (for example, the claims that the Open App Markets bill would magically undermine security on mobile phones). However, Bruce Schneier now points out another potential issue with both bills that seems like a legitimate concern. They both could be backdoors to pressuring companies into blocking encryption apps. He starts by highlighting how it might work with AICOA:

Let's start with S. 2992. Sec. 3(c)(7)(A)(iii) would allow a company to deny access to apps installed by users, where those app makers have been identified [by the Federal Government] as national security, intelligence, or law enforcement risks. That language is far too broad. It would allow Apple to deny access to an encryption service provider that provides encrypted cloud backups to the cloud (which Apple does not currently offer). All Apple would need to do is point to any number of FBI materials decrying the security risks with warrant proof encryption.

Sec. 3(c)(7)(A)(vi) states that there shall be no liability for a platform solely because it offers end-to-end encryption. This language is too narrow. The word "solely" suggests that offering end-to-end encryption could be a factor in determining liability, provided that it is not the only reason. This is very similar to one of the problems with the encryption carve-out in the EARN IT Act. The section also doesn't mention any other important privacy-protective features and policies, which also shouldn't be the basis for creating liability for a covered platform under Sec. 3(a).

It gets worse:

In Sec. 2(a)(2), the definition of "business user" excludes any person who is "a clear national security risk." This term is undefined, and as such far too broad. It can easily be interpreted to cover any company that offers an end-to-end encrypted alternative, or a service offered in a country whose privacy laws forbid disclosing data in response to US court-ordered surveillance. Again, the FBI's repeated statements about end-to-end encryption could serve as support.

Finally, under Sec. 3(b)(2)(B), platforms have an affirmative defense for conduct that would otherwise violate the Act if they do so in order to protect safety, user privacy, the security of nonpublic data, or the security of the covered platform. This language is too vague, and could be used to deny users the ability to use competing services that offer better security/privacy than the incumbent platform, particularly where the platform offers subpar security in the name of public safety. For example, today Apple only offers unencrypted iCloud backups, which it can then turn over to governments who claim this is necessary for public safety. Apple can raise this defense to justify its blocking third-party services from offering competing, end-to-end encrypted backups of iMessage and other sensitive data stored on an iPhone.

And the Open App Markets bill has similar issues:

S. 2710 has similar problems. Sec 7. (6)(B) contains language specifying that the bill does not require a covered company to interoperate or share data with persons or business users that have been identified by the Federal Government as national security, intelligence, or law enforcement risks. This would mean that Apple could ignore the prohibition against private APIs, and deny access to otherwise private APIs, for developers of encryption products that have been publicly identified by the FBI. That is, end-to-end encryption products.

Some might push back on this by pointing out that Apple has strongly supported encryption over the years, but these bills open up some potential problems, and, at the very least, might allow companies like Apple to block third party encryption apps even as the stated purpose of the bill is the opposite.

As Schneier notes, he likes both bills in general, but this sloppy drafting is a problem.

The same is true of the language that could impact content moderation. In both cases, it seems that this is messy drafting (though in the content moderation case, it seems that Republicans have jumped on it and have now made it the main reason they support these bills, beyond general anger towards big tech for populist reasons).

Once again, the underlying thinking behind both bills seems mostly sound, but these problems again suggest that these bills are, at best, half-baked, and could do with some careful revisions. Unfortunately, the only revisions we've seen so far are those that carved out a few powerful industries.

Filed Under: aicoa, amy klobuchar, antitrust, bruce schneier, encryption, open app markets

Read more here:
Another Issue With Internet Antitrust Bills: Sloppy Drafting Could Lead To Problems For Encryption - Techdirt
