
Google reveals the wheels almost literally fell off one of its cloudy server racks – The Register

Google has revealed that the wheels almost literally fell off some of its servers.

A late Friday post about the virtues of its site reliability engineering (SRE) teams told the story of a recent incident in which its uptime squad found evidence of packet loss, isolated to a single rack of machines.

On closer inspection the servers in said rack were found to be rife with CPU throttling and some border gateway protocol weirdness to boot.

After plenty of remote probing by the SRE team failed to diagnose the problem, a Googler was despatched to endure the indignities of meatspace and inspect the problem rack with their actual eyes.

And here's what they found:

Crushed castors beneath a Google server rack.

"The wheels (castors) supporting the rack had been crushed under the weight of the fully loaded rack," wrote Google Cloud Solutions Architect Steve McGhee. "The rack then had physically tilted forward, disrupting the flow of liquid coolant and resulting in some CPUs heating up to the point of being throttled."

The rack was duly propped back up, and McGhee says Google has since performed a systematic replacement of all racks with the same issue, while avoiding any customer impact, and has also considered how to better transport and install its kit.

The post is of course self-promotion for how seriously Google takes its quest for uptime. But it is nonetheless interesting for revealing that Google has two internal aphorisms. One states that "All incidents should be novel" and should never occur more than once. The other posits that "At Google scale, million-to-one chances happen all the time."

The Register suggests the first is applicable anywhere. And the second is, thankfully, hardly ever a problem for our readers. Until they move into a hyperscale cloud.

One more thing to note: the post includes a photo of the leaning rack, a rare image of a Google bit barn's innards even if it reveals very little.


Read more here:
Google reveals the wheels almost literally fell off one of its cloudy server racks - The Register

Read More..

The Last Hurrah Before The Server Recession – The Next Platform

Excepting some potholes here and there and a few times when the hyperscalers and cloud builders tapped the brakes, it has been one hell of a run in the last decade for servers. But thanks to the coronavirus outbreak and some structural issues with sections of the global economy (let's stop pretending economies are national things anymore, because they clearly are not), this could be peak server for at least a few quarters. Maybe a few years.

We started The Next Platform in 2015, but our experience in the systems market goes back to the aftermath of the 1987 stock market crash that eventually caused a recession in the late 1980s and early 1990s that really didn't get resolved until the dot-com boom came along and injected a whole lot of hope and cash into the tech sector and then into every other sector that needed to become an e-business. When we think about transition points in IT, we think that the Great Recession was the point in time when a lot of different industries pivoted. And thus our financial analysis usually goes back to the Great Recession (when we are able to get numbers back that far) because we want to see how what is going on now compares to the difficult time we were going through then.

According to market researcher IDC, in the fourth quarter of 2019, which is technically a dozen years since the last recession started, server shipments were up 14 percent to 3.4 million units and revenues rose by 7.5 percent to $25.35 billion.

The big reason for that revenue increase was that the hyperscalers and cloud builders invested heavily in machinery in the quarter, with 1.05 million machines being sold by the ODMs who supply iron to these companies, up a stunning 53 percent and driving revenues up 37.9 percent to $6.47 billion. Clearly, with the hyperscalers and cloud builders buying mostly X86 servers and with increasing competition between Intel and AMD, the hyperscalers are getting great deals on processors, with AMD leading the price/performance drive and Intel huffing and puffing to try to keep up without wrecking its profit margins. Don't feel bad for Intel: the chip giant is driving historic revenues and very high operating profits in its Data Center Group even with the competitive pressure. IBM's System z mainframes also perked up in the quarter, driving revenues for Big Blue up 17.6 percent to just a tad under $2.3 billion. Inspur, thanks to a very aggressive X86 and Power server business in China, saw revenues grow by 12.1 percent to $1.74 billion.

The rest of the server makers were either up a few points or down a few points. As we recently discussed in our analysis of the datacenter businesses of Dell and Hewlett Packard Enterprise, these two companies are exemplary of what is happening in the enterprise and among smaller Tier 2 clouds, telcos, and service providers. Dell and HPE have fiscal years that are distinct from calendar years, so IDC reconciles their numbers to the solar cycle for us. In the fourth quarter, IDC reckons that Dell had $3.99 billion in sales, down 9.9 percent, against 549,488 servers shipped out of its factories to the channel or to customers, down 5.4 percent. HPE, including its H3C partnership in China, actually saw shipment growth of 4.7 percent to 507,228 units and raked in $4.14 billion in revenues against that, down 3.4 percent but giving HPE the mantle of top server money maker in the quarter, the first time that has happened in a while. Lenovo had $1.42 billion in server sales, down 2.6 percent, Huawei Technologies had $1.28 billion, up 1.8 percent, and we estimated that Cisco had just under $1 billion in sales, up 6 percent.

To be fair, OEMs had a pretty good fourth quarter in 2018, making it a tough compare to this time around, even as the ODMs saw a pretty steep decline, making it an easier compare.

Here's the table of server revenues, which we have had to estimate in a few points (shown in red bold) for the past two years by source:

And here is the same data extended back to the belly of the Great Recession presented in a chart:

Now, if you do a little math on these numbers from IDC, you will see that if you take out the effect of the ODMs, who together comprised 25.5 percent of the sales in Q4 2019, the rest of the server market was flat as a pancake revenue-wise. That was better than the 8 percent revenue decline in Q2 and the 6.6 percent revenue decline in Q3, but the big difference is really those incremental System z15 sales from IBM. Take that out, and we are back in negative territory for X86 servers in the enterprise, service providers, and telcos once again. (At least as a group. Intel said that its telco and service provider customers from Data Center Group had 14 percent growth in its fourth quarter and enterprises were off 7 percent, which matches this period in server sales being analyzed by IDC.)
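
To make that little bit of math explicit, here is a back-of-the-envelope check using only the Q4 2019 totals and growth rates quoted above; the prior-year values are implied from the growth rates, so they will not match IDC's published figures exactly because of rounding.

```python
# Back-of-the-envelope check of the "flat without the ODMs" claim, using only the
# Q4 2019 totals and growth rates quoted above. Prior-year values are implied from
# the growth rates, so they will not match IDC's published figures exactly.
total_q4_2019 = 25.35e9   # total server revenue, up 7.5 percent year on year
odm_q4_2019 = 6.47e9      # ODM revenue (hyperscalers and cloud builders), up 37.9 percent

total_q4_2018 = total_q4_2019 / 1.075
odm_q4_2018 = odm_q4_2019 / 1.379

rest_2019 = total_q4_2019 - odm_q4_2019   # everyone except the ODMs
rest_2018 = total_q4_2018 - odm_q4_2018

print(f"ODM share of Q4 2019 revenue: {odm_q4_2019 / total_q4_2019:.1%}")   # ~25.5 percent
print(f"Non-ODM revenue growth: {(rest_2019 / rest_2018 - 1):+.1%}")        # ~0 percent, i.e. flat
```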

Thanks in large part to competition between Intel and AMD in the server CPU racket and falling DRAM and flash memory prices, the average cost of an X86 server has been trending downwards, as you can see:

The X86 server platform still represents something north of 98 percent of shipments: X86 shipments grew by 12.9 percent to 3.35 million units (98.5 percent of all shipments), with revenues of $22.44 billion (89 percent of all revenues). Non-X86 server revenues rose by 17.8 percent to $2.91 billion, and IBM's System z and Power Systems machines accounted for 78.8 percent of that non-X86 slice of the server pie.
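
For a rough sense of where that average price sits, the blended figure implied by the numbers above can be computed directly; this is a sketch, since IDC derives its ASP curve from its full dataset rather than from this single division.

```python
# Implied average selling price (ASP) of an X86 server in Q4 2019, derived solely
# from the revenue and shipment figures quoted above; IDC's own ASP series is
# calculated from its full dataset and will differ somewhat.
x86_revenue = 22.44e9    # dollars
x86_shipments = 3.35e6   # units

asp = x86_revenue / x86_shipments
print(f"Implied X86 server ASP: ${asp:,.0f}")   # roughly $6,700 per machine
```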

The amount of compute we are consuming is growing a lot faster than the price is dropping these days, as we have calculated since the Great Recession:

The amount of compute acquired in recent years, mostly due to the hyperscalers and cloud builders, is enormous, as you can see. How much steeper can that curve get if there is a recession? We may find out.

There was not a peep out of IDC about the coronavirus outbreak, and that is to be expected, because the effects of the interruption to the supply chain for servers, as well as the impact on buying patterns for enterprises, governments, service providers of all stripes, hyperscalers, and cloud builders, cannot yet be gauged. And the reason why is simple: No one knows. The error bars on any thought experiment, much less simulation, about the global economy right now are too large because some, many, all (take your pick) of the underlying variables that go into trends in the economy are changing.

What we can say honestly is this: If we do go into a recession, there is no question that some aspects of the platforms that we build will change. This has happened time and time again. Platform transitions are not caused by recessions, but they are often accelerated by them, particularly if companies can save money or do things they have always wanted to do or, better still, never even dreamed of doing. Let's walk through it.

The move to proprietary minicomputers was certainly helped by the recession in the mid-1970s, which lingered for a while, and then there was a mild one in 1980 and then again in 1981 and 1982 after the Iranian Revolution in 1979. Again an oil pricing shock jolted the system, although unlike last week, when we were worried that oil prices would be too low, in those two cases we knew they were going to be too high. IBM's and Hewlett-Packard's proprietary minicomputers took off then because companies wanted to computerize their back offices and factories, but they could not afford mainframes.

In the late 1980s, another oil price shock combined with irrational exuberance on Wall Street shocked the economy and the RISC/Unix transition was there to benefit. The client/server revolution of the late 1980s to early 1990s was not only a reaction to a sluggish economy where central host systems were wildly more expensive than PCs, which were on everyone's desktops at work and which had to be made more useful for the sake of the IT budget, but it was also a precursor to the Internet age, where hybrid computing across PCs and servers became so normal that we don't really talk about it much any more.

The dot-com bubble from around 1995 through 2001 coincided with the Unix revolution and then the rise of Intel iron and Linux and Windows Server, and this was an architectural change that was funded by fear of missing out and being stuck in a personal corporate recession as upstarts blew by you and left you in the ditch of economic ruin. We could argue about how much of the spending in the dot-com boom was wasted on hope and ideas, but the fear was running pretty high and companies like Sun Microsystems, EMC, and Oracle benefited mightily from all the hype and hope.

And after the September 11 attacks in the United States, we had another recession, and that really put the nail in the coffin of RISC/Unix systems and marked the rise of Intel X86 server chips and, within a few years, AMD Opterons, and these systems rose until the Great Recession kicked in during 2009, when Intel essentially copied off AMD's homework and created the Nehalem Xeon architecture that we are still using predominantly in the datacenter today. When that last recession hit, VMware was there with a credible, enterprise-grade server virtualization platform that allowed companies to get their existing iron to run at a higher utilization by consolidating the workloads from many physical servers onto virtual machines on a single physical server, and this helped save the day. AMD had made some architectural compromises and also had some bugs in its chips, and server makers were in no mood to be patient. They all fell in behind the Nehalems, and Cisco Systems went so far as to converge compute and networking, setting off the whole server industry on a tear for converged platforms at the same time Nutanix was being founded to offer us hyperconvergence, which emerged in 2011.

This time around, if a recession should come to pass (and we surely hope that it does not), then AMD, Ampere Computing, and Marvell might be the big beneficiaries. Not Intel.

Read this article:
The Last Hurrah Before The Server Recession - The Next Platform

Read More..

Google Translate's real-time transcription feature is out now for Android – The Verge

Google Translate's new transcription feature, first demoed back in January, is out now for Android users as part of an update to the artificial intelligence-powered mobile app. The feature will allow you to record spoken words in one language and transform them into translated text on your phone, all in real time and without any delay for processing.

The feature will begin rolling out starting today and will be available to all users by the end of the week. The starting languages will be English, French, German, Hindi, Portuguese, Russian, Spanish, and Thai. That means you'll be able to listen to any one of those languages spoken aloud and translate it into any one of the other available languages.

This will work live for speeches, lectures, and other spoken word events and from pre-recorded audio, too. That means you could theoretically hold your phone up to computer speakers and play a recording in one language and have it translated into text in another without you having to input the words manually. Google told The Verge in January that it will not support the option to upload audio files at launch, but listening to a live audio source, like your laptop, should work as an alternative method.

Prior to this feature, you could have used Google Translate's voice option for turning a spoken word, phrase, or sentence from one language into another, including in both text and verbal form. But a Google spokesperson says that part of the app "wasn't well suited to listen to a longer translated discussion at a conference, a classroom lecture or a video of a lecture, a story from a grandparent, etc."

To start, this feature will require an internet connection, as Google's software has to communicate with its Tensor Processing Units (TPUs), a custom type of AI-focused processing chip for use in cloud servers, to perform the transcription live. In fact, a Google spokesperson says the feature works by combining the existing Live Transcribe feature built into the Recorder app on Pixel phones, which normally works offline, with the power of its TPUs in the cloud, thereby creating real-time translated transcription so long as you have that internet connection to facilitate the link.

Google says the new transcription feature will be Android-only at launch, but the company has plans to bring it to iOS at some point in the future. It should show up as its own transcribe option in the app after you've updated it. Google also says you'll be able to pause or restart the transcription by tapping the mic icon, as well as change the text size and customize dark theme options in the Translate settings menu.

More:
Google Translates real time transcription feature is out now for Android - The Verge

Read More..

Data storage in the cloud: 5 ways to make it faster and cheaper – TechGenix

The way cloud computing is becoming ubiquitous and major vendors like Google, Microsoft, and Amazon are competing to stay ahead, it is safe to say that cloud-based services will become more accessible and cheaper to use. In the next few years, not just organizations or the government but numerous smaller businesses and individuals are also expected to adopt the cloud. In such a situation, it becomes important to understand the basic factors that determine the cost and performance of any application you wish to host in the cloud. The key factors are explained here to help you make the right choice for faster and cheaper data storage in the cloud.

The main elements that work together to form the fundamental architecture of the cloud suitable for any application are the frontend, the backend platforms, applications, databases, and software capabilities. Different cloud types, namely private, public, hybrid, and multicloud, have different combinations of these elements and different user controls, making them suitable for different needs. For example, a public cloud is highly scalable, cost-effective, and highly reliable. In contrast, a private cloud is a bit more expensive but provides better security and customization. A hybrid cloud combines public and private cloud solutions into a single storage environment, while multicloud combines multiple public cloud services in a single heterogeneous architecture.

A business can have unique requirements for hosting its applications, so it is important to understand those requirements first. Then, based on requirements and budget, the business can identify the right cloud data storage architecture, one that can deliver storage on demand in a scalable way.

Again, since businesses have different needs and offer various kinds of services to their customers, the nature of their files or data will also be different. For example, an enterprise that offers a streaming service may have a vast amount of media data, so it would probably need large volumes of storage and high bandwidth support.

After choosing the right architecture (public, private, or hybrid), you need to understand how the data will be stored, which can be defined at three levels: file, block, and object-based storage. File-based storage refers to the storage of individual files (documents or spreadsheets) as single entities. It can be used by applications that often need shared access to files and require a file system, and it works well for organizing data in a simple, arranged and accessible way. Block-level storage is used in SAN (storage area network) architectures and refers to individual blocks of raw storage; this format is convenient for enterprise applications like databases or ERP systems. Object-based storage is useful for unstructured data such as videos, audio, photos, and scanned images, and it is suitable for building modern applications from scratch that require scale and flexibility. Selecting the right storage type can help improve the performance of the application.

Some examples of data storage solutions provided by cloud vendors are Azure Storage and Amazon S3. You can get the required amount of storage capacity and other features by paying a monthly or annual subscription fee. For such subscription-based services, public cloud options are often considered economical, but some enterprises are cautious about using them because the stored data is sent outside of their network premises. So, in case the privacy of the stored data is a major concern, the organization can choose a private cloud, where the management of data always remains within the premises of the enterprise's network. Some organizations even use a hybrid cloud, in which some resources are handled in-house while others are handled by third-party cloud providers. Leading enterprise storage vendors who sell these types of services include Dell EMC Enterprise Hybrid Cloud, IBM Elastic Storage Server, and Microsoft Azure Stack, and there are many more to choose from.
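
As a concrete illustration of the object storage model described above, here is a minimal sketch that stores and retrieves an object in Amazon S3 using the boto3 SDK; S3 is simply the example named above, the bucket and file names are placeholders, and Azure Blob Storage exposes equivalent operations.

```python
# Minimal sketch of object storage: upload a file as an object and fetch it back.
# Assumes the boto3 SDK and configured AWS credentials; the bucket name and object
# key below are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "example-company-media"      # hypothetical bucket
key = "videos/product-demo.mp4"       # object key, not a file-system path

# Objects are stored as whole blobs plus metadata, not as files or raw blocks.
s3.upload_file("product-demo.mp4", bucket, key)
s3.download_file(bucket, key, "product-demo-copy.mp4")
```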

Another major concern of data storage in the cloud is security. Cloud offers a less-expensive alternative compared to expanding physical storage, but it also has security-related concerns. Organizations must tackle challenges like security and performance to prevent any data breach or compromise.

To protect such sensitive information, one straightforward option is encryption. All data stored in the cloud is first encrypted so that if a hacker gets access to any sensitive data, they won't be able to misuse it without the correct decryption key. But this method has its own concerns: choosing an outdated or broken algorithm (MD5, for example, rather than a modern hash function such as SHA-3 or strong encryption such as AES) may doom the entire effort. It also impacts the performance of your application, as encryption may slow down transfer rates if the volume of data is high.
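
To make the encrypt-before-upload idea concrete, here is a minimal sketch using the Fernet recipe from the Python cryptography package (AES with authentication under the hood). This is one illustrative choice rather than a prescription; the file names are placeholders, and key management, the genuinely hard part, is out of scope.

```python
# Minimal sketch of encrypting data client-side before it leaves your network.
# Uses the `cryptography` package's Fernet recipe (AES-CBC plus HMAC under the
# hood); safe storage of `key` is the hard part and is not shown here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this secret, outside the cloud
fernet = Fernet(key)

with open("customer-records.csv", "rb") as f:    # hypothetical local file
    ciphertext = fernet.encrypt(f.read())

with open("customer-records.csv.enc", "wb") as f:
    f.write(ciphertext)              # this encrypted blob is what gets uploaded

# Later, after downloading the blob back from cloud storage:
plaintext = fernet.decrypt(ciphertext)
```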

Another major factor is the high availability of data, which can be ensured by opting for georedundancy (physical separation of datacenters between geographic locations). This can ensure that your application will always be available, but this increases the overall cost and network complexity of the system. Also, when opting for this, IT teams should make sure they consider the issues related to regulatory compliance, administration, and cost. Organizations should also consider several factors like latency, performance, and resiliency requirements before making such investments.

There are many other factors you should look for with your cloud storage, such as automated upload and sync of data, auto-scaling options, or capping/notifications for maximum limits. Having auto-upload enabled may result in increased storage cost or may exhaust your existing data storage limit quickly. You also need to consider whether your application requires auto-scaling of storage capacity (for example, automated subscription of extra storage space as soon as usage reaches the existing capacity). Turning on this feature by default may be a very convenient and hassle-free option for your application, but it may easily lead to high operational budgets. Setting up alerts or notifications when storage reaches a threshold capacity gives you ample time to consider whether to expand capacity or clean up existing data and create additional storage space.
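
A minimal sketch of such a threshold alert follows; the usage and quota numbers are placeholders that would in practice come from your provider's storage metrics or billing API.

```python
# Sketch of a storage threshold check; `used_bytes` and `quota_bytes` would come
# from your cloud provider's metrics or billing API, which is not shown here.
ALERT_THRESHOLD = 0.80   # warn at 80 percent of capacity

def check_storage(used_bytes: int, quota_bytes: int) -> None:
    usage = used_bytes / quota_bytes
    if usage >= ALERT_THRESHOLD:
        # In practice this would send an email, a chat message, or a ticket.
        print(f"WARNING: storage at {usage:.0%} of quota - expand or clean up.")
    else:
        print(f"Storage at {usage:.0%} of quota.")

check_storage(used_bytes=850 * 1024**3, quota_bytes=1024**4)   # 850 GiB of a 1 TiB quota
```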

Besides the above-mentioned factors, there are several additional factors to look for, based on special needs. For example, your business might need to hold a two-day online event with thousands of customers joining in, or to run time-based discount schemes for your customers. Such events may create a requirement for additional storage space so your applications can cater to the peak load of traffic. You must ensure that the selected cloud storage provides support for these kinds of special requirements and that you've configured your resources appropriately.

An unsuitable cloud can increase your expenditure or negatively impact the performance of your application. Cloud providers offer many data storage services, and each one of them serves a different purpose. So, individuals and businesses should carefully analyze their requirements and then choose a suitable option. We have listed several considerations that can help you improve your data storage capabilities in the cloud, but staying aware of the latest trends and offerings is surely a major factor in identifying the fastest and cheapest option for your cloud storage.


Follow this link:
Data storage in the cloud: 5 ways to make it faster and cheaper - TechGenix

Read More..

Spectro Cloud Launches With $7.5 Million to Help Enterprises Realize the Promise of Kubernetes – Container Journal

First Company to Hit the Sweet Spot Between Restrictive Managed Kubernetes Offerings and Complex DIY

SANTA CLARA, Calif., March 17, 2020 (GLOBE NEWSWIRE) Today Spectro Cloud, an enterprise cloud-native infrastructure company, emerged from stealth and unveiled its first product: Spectro Cloud. Spectro Cloud provides scalable, policy-based management of Kubernetes for enterprises that need a high degree of control over their infrastructure, whether it is in public cloud, private cloud, bare metal or in any combination. The product has been in private beta since January and will be generally available next quarter.

"Enterprises are struggling to realize the promise of Kubernetes due to its operational complexity. While the managed Kubernetes services solve this problem for those that want/need a completely pre-packaged approach, for the majority, they can become too restrictive for the varied needs that enterprises have. Spectro Cloud has created a flexible solution that provides the scalable automation and ease-of-use of the managed services, but enables enterprises to retain greater control," said Roy Illsley, Distinguished Analyst, Enterprise IT, Omdia.

Spectro Cloud lets enterprises customize a Kubernetes infrastructure stack for specific business needs by using a declarative model to define cluster profiles. Spectro Cloud uses these cluster profiles to automate deployment and maintenance of clusters across the enterprise. Canary deployments, patterns for rolling out releases to a subset of users or servers, ensure Kubernetes upgrades don't break dependencies on other ecosystem components while keeping everything consistent with enterprise-wide standards.

Sébastien Morissette, P.Eng., IT Architect Specialist, Infrastructure, Security and IT Services at Intact Financial Corporation, Canada's largest provider of property and casualty insurance, said: "Our business units end up choosing different Kubernetes providers as they all have different niches and varying maturity levels in different fields like AI, machine learning, public cloud vs on-premises offerings, etc. Operationally, this becomes a nightmare because IT needs multiple support structures to address the different infrastructure stacks."

Morissette continued: "A platform like Spectro Cloud addresses both the day 1 and day 2 operations of our Kubernetes ecosystem by normalizing the way IT deploys, operates and manages Kubernetes clusters over a broad spectrum of endpoints, both on premises and in the cloud. The control IT gets from Spectro Cloud's cluster profiles means they can customize offerings to each business unit while maintaining responsibility for overall operations."

"We've seen enterprises struggle with managed Kubernetes options, and we've also seen them waste time and money trying to do everything in-house. With Spectro Cloud, we're giving enterprises a way to run Kubernetes at scale without having to convert their entire way of working to whatever one large vendor thinks is correct. They've been burned by that approach before," said Tenry Fu, co-founder and CEO of Spectro Cloud. Fu most recently led the architecture for the Cisco CloudCenter Suite and Cisco Container Platform after his previous company, CliQr, was acquired by Cisco. CliQr's technology enabled applications to run more efficiently across public and private clouds.

Instead of converting their entire business to a single way of working, enterprises can experiment with new approaches at the pace that makes sense for them. Developers can work at the speed they need, while security and audit controls are embedded into the process, regardless of where clusters are deployed. Enterprises can make use of public cloud, private cloud, whatever suits their needs at the time, and change their mind as circumstances require.

With Spectro Cloud, the promise of Kubernetes can finally be realized.

Today Spectro Cloud also announced $7.5 million in seed funding led by Sierra Ventures with participation from Boldstart Ventures.

"The market for Kubernetes has crossed the chasm. What we've heard from our CXO Advisory Board of Global 1000 IT executives is that enterprises are still struggling with the operational complexity that comes with Kubernetes. Spectro Cloud's team has a deep understanding of the needs of enterprises and has found a unique way to make Kubernetes easy to use for its rapidly growing customer base," said Mark Fernandes, managing director at Sierra Ventures.

"From our dozens of conversations with Fortune 500s, it was clear that deploying Kubernetes was a top priority but there was still no solution that met their needs. Spectro Cloud is the first company that not only gives customers fine-grained control, flexibility and multi-cloud capabilities for their Kubernetes stack but also the ease of use and scalability of a managed SaaS platform. The team's deep background in cloud infrastructure (they founded CliQr, which was sold to Cisco) and their design-first ethos has been well received by large enterprises, and we're thrilled to be partnered with Spectro Cloud as they redefine the infrastructure ecosystem," said Ed Sim, founder and managing partner at Boldstart Ventures.

About Spectro Cloud

Spectro Cloud is an enterprise cloud-native infrastructure company that makes Kubernetes manageable at scale for enterprises that need superior control and flexibility. Spectro Cloud provides solutions that help enterprises run Kubernetes their way, anywhere. Spectro Cloud is founded by multi-cloud management experts and is backed by Sierra Ventures and Boldstart Ventures. For more information, visit https://www.spectrocloud.com or follow @spectrocloudinc.


See the rest here:
Spectro Cloud Launches With $7.5 Million to Help Enterprises Realize the Promise of Kubernetes - Container Journal

Read More..

Need to build a high-performance private cloud? You need the QNAP TVS-1282T3 Thunderbolt 3 NAS – ZDNet

The other day I covered Synology's new DS220j and DS420j NAS boxes. Great devices for those looking for an entry-level network attached storage box to create a private cloud. But some of you need more. More power. More storage capacity. More performance. More of everything.

If you need more, then you should take a look at the QNAP TVS-1282T3 Thunderbolt 3 NAS.

This is a beast of a system, with performance -- and price -- that isn't for the faint of heart.


There's a lot to the QNAP TVS-1282T3 Thunderbolt 3 NAS.

QNAP TVS-1282T3 Thunderbolt 3 NAS

Tech specs QNAP TVS-1282T3 Thunderbolt 3:

There are a lot of customization options, from picking the processor you need, to the RAM options, to how to load the system out with drives.

The 2.5-inch SSD trays and 3.5-inch hard drive trays have been designed to be tool-less for easy installation and replacement, although if you want to dig deeper into the system you will need to wield a screwdriver. That said, the TVS-1282T3 has been designed to be taken apart and rebuilt, which is nice. Everything is well thought out, and engineered in such a way that makes it easy to take apart and put back together.

The NAS is also fast. Using Thunderbolt 3, the QNAP TVS-1282T3 can achieve file transfer speeds of up to 1,600 MB/s. The Thunderbolt 3 ports are also compatible with USB-C cables and devices and support 10Gbps USB 3.2 Gen 2, allowing compatibility with a broad range of external drives and enclosures.

Thanks to QNAP's Qtier Technology, the TVS-1282T3 is smart, and features Auto Tiering that continuously optimizes storage efficiency across M.2 SSD, SSD and SATA drives by allowing the system to move frequently-used "hot" data to high-performance storage tiers and less-accessed "cold" data to lower-cost, higher-capacity drives. This allows you to get the very best out of your investment in drives.

The system is surprisingly quiet in normal use, although the harder you push it, the more cooling it will require, and the noisier the cooling fans will be.

It's not cheap though. A diskless QNAP TVS-1282T3 with a Core i5 processor and 16GB of RAM will set you back over $3,300. But, if you need power and performance, it doesn't get much better than this.


Continued here:
Need to build a high-performance private cloud? You need the QNAP TVS-1282T3 Thunderbolt 3 NAS - ZDNet

Read More..

Your No. 1 Cloud Threat is ‘Excessive Permissions’ – CXOToday.com

By Nikhil Taneja

Migrating workloads to a public cloud environment opens organizations up to a slate of new, cloud-native attack vectors that did not exist in the world of premise-based data centers. In this new environment, workload security is defined by which users have access to your cloud environment and what permissions they have. As a result, protecting against excessive permissions, and quickly responding when those permissions are abused, becomes the #1 priority for security administrators.

The Old Insider is the New Outsider

Traditionally, computing workloads resided within the organization's data centers, where they were protected against insider threats. Application protection was focused primarily on perimeter protection, through mechanisms such as firewalls, IPS/IDS, WAF and DDoS protection, secure gateways, etc.

However, moving workloads to the cloud has led organizations (and IT administrators) to lose direct physical control over their workloads, and relinquish many aspects of security through the Shared Responsibility Model. As a result, the insider of the old, premise-based world is suddenly an outsider in the new world of publicly hosted cloud workloads.

IT administrators and hackers now have identical access to publicly-hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat.

Workload security, therefore, is defined by the people who can access those workloads, and the permissions they have.

Your Permissions Are Your Attack Surface

One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them, and what permissions they actually use.

All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have far more permissions than they ever use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes.

As a result, cloud workloads are vulnerable to data breaches (i.e., theft of data from cloud accounts), service violation (i.e., completely taking over cloud resources), and resource exploitation (such as cryptomining). Such promiscuous permissions are frequently mischaracterized as misconfigurations, but are actually the result of permission misuse or abuse by people who shouldn't have them.

Therefore, protecting against those promiscuous permissions becomes the #1 priority for protecting publicly-hosted cloud workloads.
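
One practical way to find the gap between granted and used permissions is to query the cloud provider's access reports. The sketch below assumes an AWS environment and the boto3 SDK purely for illustration (the article itself is vendor-neutral), and the principal ARN is a placeholder.

```python
# Hedged sketch: list services a principal is allowed to reach but has never used.
# Assumes an AWS environment, the boto3 SDK, and credentials that can call the IAM
# service-last-accessed APIs; the ARN below is a placeholder.
import time
import boto3

iam = boto3.client("iam")
principal_arn = "arn:aws:iam::123456789012:user/example-user"  # hypothetical principal

job_id = iam.generate_service_last_accessed_details(Arn=principal_arn)["JobId"]

# The report is generated asynchronously, so poll until it is ready.
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Services the principal could call but has never authenticated against are
# candidates for permission removal.
unused = [
    svc["ServiceName"]
    for svc in report.get("ServicesLastAccessed", [])
    if svc.get("TotalAuthenticatedEntities", 0) == 0
]
print("Granted but never used:", unused)
```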

Traditional Protections offer Piecemeal Solutions

The problem, however, is that existing solutions provide incomplete protection against the threat of excessive permissions.

New Approach for Protection

Modern protection of publicly hosted cloud environments requires a new approach.

(The author is Managing Director-India, SAARC & Middle East at Radware, and the views expressed here need not be in sync with those of the publication)

Visit link:
Your No. 1 Cloud Threat is 'Excessive Permissions' - CXOToday.com

Read More..

Bipartisan Senate Judiciary Committee introduces bill that censors online content and attacks encryption – World Socialist Web Site

By Kevin Reed, 17 March 2020

Leading members of the Senate Judiciary Committee formally introduced a bipartisan bill on March 5 that escalates US government censorship of online content and directly attacks encryption of electronic communications under the cover of fighting online child sexual abuse material (CSAM).

Senate Judiciary Committee Chairman Lindsey Graham (Republican of South Carolina), US Senators Richard Blumenthal (Democrat of Connecticut), Josh Hawley (Republican of Missouri) and Ranking Member Dianne Feinstein (Democrat of California) jointly introduced the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act) that purports to encourage the tech industry to take online child sexual exploitation seriously.

The concept behind the law is that tech companies have to earn their protected immunity from prosecution for any illegal content published by users on their platforms by scanning and decrypting every message, image or post. Until now, online service providers were not responsible, under what are known as the Section 230 provisions of the Communications Decency Act of 1996, for anything users publish on websites, social media accounts or cloud servers.

In introducing the bill, Senator Graham said, "This bill is a major first step. For the first time, you will have to earn blanket liability protection when it comes to protecting minors." Senator Blumenthal added, "Companies that fail to comport with basic standards that protect children from exploitation have betrayed the public trust granted them by this special exemption."

While the bill claims to fight online CSAM and has enlisted the support of 70 organizations involved in stopping child sexual exploitation, its actual content shows that its ultimate purpose is an attack on fundamental democratic rights.

The law calls for the creation of a 19-member commission controlled by the attorney general and US law enforcement agencies. The EARN IT commission will establish best practices that must be followed by the technology companies or they will face criminal prosecution if content on their services is found to be illegal.

According to the Electronic Frontier Foundation (EFF), among the "best practices" of the EARN IT Act is a proposal by John Shehan, vice president at the National Center for Missing and Exploited Children (NCMEC), that says online services should be made to "screen their messages for material that NCMEC considers abusive; use screening technology approved by NCMEC and law enforcement; report what they find in the messages to NCMEC; and be held legally responsible for the content of messages sent by others."

Therefore, the EARN IT law will place the tech companies and their users in a Catch-22. The law mandates that tech providers either agree to monitor the content, violating the privacy and free speech rights of their users by screening everything they publish, post or store on the service, or agree to be prosecuted by the state for any illegal content that appears on their site.

EFF further explains that the 19-member commission will be completely dominated by law enforcement and allied groups like NCMEC, and the bill gives Attorney General Barr the power to veto or approve the list of best practices. Even if other commission members do disagree with law enforcement, Barr's veto power will put him in a position to strongarm them.

It is well known that William Barr and the US Justice Department have been advocates of online censorship and of abolishing end-to-end encryption in consumer electronic devices. There is nothing stopping the EARN IT Act from introducing as one of its best practices a provision for law enforcement's back-door access to encrypted communications and data files. Those firms which refuse to comply would then have their Section 230 protections eliminated.

It is a measure of the dishonesty of the American political system that leading Democrats and Republicans can so transparently use the fears and emotions of the public against child exploitation as a means of attacking fundamental rights protected by the Constitution.

On March 11, the Senate Judiciary Committee held a public hearing on the EARN IT Act and took testimony from witnesses on the proposed legislation. Among the speakers were representatives from the NCMEC, Jared Sine of the online dating company Match Group, a child exploitation legal expert, and Elizabeth Banker of the Internet Association. Of these speakers, only the last spoke against the EARN IT Act.

In her presentation, Banker explained that many of the major tech firms today, including Amazon, eBay, Facebook, Google, Microsoft, Twitter and Uber, are members of the Internet Association. She went on to review the multi-faceted measures that tech companies have been taking to combat CSAM on their platforms, going back to the passage of the Communications Decency Act of 1996.

Banker then explained how the EARN IT Act would create numerous problems and hinder the efforts to combat CSAM by violating online users' First and Fourth Amendment rights, because the providers will be acting as agents of the government. She stated, "Under Fourth Amendment jurisprudence, a search performed by an agent of the government is subject to the same requirements as if the government performed the search directly."

Banker also spoke about the implications for freedom of speech in the Senate Judiciary Committee bill: "The EARN IT Act would delegate important decisions concerning security, privacy, and free speech on the internet, weighty and complex matters that directly impact hundreds of millions of consumers, to an administrative body that would be composed of members who are not elected representatives and that would operate with little transparency."

Finally, Banker said that although the bill does not specifically mention encryption, "Requiring companies to engineer vulnerabilities into their services would make us all less secure. Encryption technology stands between billions of internet users around the globe and innumerable threats, from attacks on sensitive infrastructure, including our highly automated financial systems, to attempts by repressive governments to censor dissent and violate human rights."

It could not have been lost on the Democratic and Republican senators or Elizabeth Banker of the Internet Association that the greatest threat of censorship and violation of human rights all over the world, including within the US itself, comes from American imperialism.


More:
Bipartisan Senate Judiciary Committee introduces bill that censors online content and attacks encryption - World Socialist Web Site

Read More..

Business101: 5 Reasons why small and medium businesses should consider cloud technology – IOL

By Ben Bierman Mar 15, 2020


JOHANNESBURG - Cloud technology offers businesses in the small and medium-sized enterprise (SMME) sector access to emerging technology, allowing them to benefit from the same agility as large businesses without having to pay an arm and a leg.

By utilising cloud platforms, businesses can focus on running and growing their core operations without worrying about the ownership or maintenance of physical data servers.

Here are some of the key benefits of using cloud technology in your business:

Cost-effectiveness

Cloud offers a consumption-based model, which allows you to use the technology on demand as and when needed, and businesses only pay for what they use. Added to this, because the data and applications are hosted remotely, there is no cost or burden of buying, maintaining and upgrading servers on your premises.

Security

Cloud-based data storage helps address security concerns by backing up local data to the cloud in real time. Businesses therefore do not need to rely on physical infrastructure and manual backups. There is also the added benefit of a shared security responsibility matrix, where the cloud provider is responsible for managing the infrastructure security aspects and the business only focuses on the application security.

Flexibility

Cloud services are easily accessible as they only require an internet connection and can be accessed anywhere, from any device. This in turn offers employees the flexibility and support to work remotely. Cloud-based office platforms such as Microsoft Office 365 also provide better support for group-based projects by allowing multiple users to work simultaneously, with all changes reflected in real time, which promotes collaboration between employees.

Scalability

Given the fast-paced, competitive business environment of today, it is critical for business owners to be agile in responding to market changes and to constantly evolve. Because cloud-based services are scalable and available on demand, cloud technology enables businesses to create a robust operational environment in which to grow.

Utility

Cloud computing can give businesses the processing power necessary to organise and analyse large quantities of data. Over and above giving business owners the ability to make more informed decisions, this can also help them deliver more relevant and up-to-date information to their customers or adjust to customers' needs in real time.

Ben Bierman is the managing director at Business Partners Limited.

BUSINESS REPORT

See more here:
Business101: 5 Reasons why small and medium businesses should consider cloud technology - IOL

Read More..

NetApp Acquires Talon Storage, Providing Integration with Cloud Services and File Shares – Database Trends and Applications

NetApp, a provider of cloud data services, is acquiring Talon Storage, a provider of next-generation software-defined storage solutions, enabling users to centralize data in the cloud while still maintaining a consistent branch office experience.

Talon's software will integrate with NetApp Cloud Volumes ONTAP, Cloud Volumes Service, and Azure NetApp Files solutions and give customers a faster path to the public cloud at a better total cost of ownership.

"As we grow our cloud data services offerings with solutions like Cloud Volumes ONTAP, Cloud Volumes Service, Azure NetApp Files, and Cloud Insights, we are excited about the potential that lies in front of this new combined team to deliver complete solutions for primary workloads," said Anthony Lye, senior vice president and general manager of NetApp's Cloud Data Services business unit. "We share the same vision as the team did at Talon - a unified footprint of unstructured data that all users access seamlessly, regardless of where in the world they are, as if all users and data were in the same physical location.?And to do this without impacting workflow, user experience and at a lower cost.

NetApp's acquisition of Talon enhances the company's cloud data services portfolio with a solution that solves all the remote office and branch office challenges with file shares.

Talon FAST is a cloud data service that provides a Global File Cache service for ROBO (remote office/branch office) workloads, enabling file server consolidation into a secure, globally accessible file system on a public cloud platform.

For more information about this news, visit http://www.netapp.com.

Read the original here:
NetApp Acquires Talon Storage, Providing Integration with Cloud Services and File Shares - Database Trends and Applications

Read More..