
Recycling the Cloud: Singapore facility gives second life to mega servers – Recycling International

Microsoft has opened a plant to tackle the growing stream of e-scrap from data centres. The Circular Center in Singapore provides services for the reuse of computer components in schools, for job training, and much more.

Microsoft aims to reuse 90% of its cloud computing hardware assets by 2025. The launch of this first facility in Asia is claimed to be an important step towards that goal, while also reducing Microsoft's carbon footprint and creating jobs.

Microsoft Cloud is powered by millions of servers in hundreds of data centres around the world, and demand for cloud services is growing exponentially. At these facilities, decommissioned servers and other types of hardware can be repurposed or disassembled by technicians before the components and equipment move on to another phase of life.

Microsoft's Intelligent Disposition and Routing System (IDARS) uses AI and machine learning to establish and execute a zero-waste plan for every piece of decommissioned hardware. IDARS also works to optimise routes for these hardware assets and provide Circular Center operators with instructions on how to dispose of each one.

Singapore, with strong government and private sector commitments and an agile policy environment, has already laid the foundations for an advanced recycling infrastructure to take advantage of those opportunities. A Microsoft Circular Center in Singapore is in line with this approach, says the tech multinational.

Microsofts first Circular Center opened in Amsterdam in 2020. Since its inception, the company has reused or recycled 83% of all decommissioned assets. Plans are underway to expand the programme in Washington, Chicago, Sydney and in other locations.



Application Server Market to Hit Valuation of $40.96 Billion by 2028 | Increasing Number of Cyberattacks Is a Growing Concern among End-Users -…

Westford, USA, Sept. 08, 2022 (GLOBE NEWSWIRE) -- As the world continues to become more digital, businesses are increasingly looking for application servers that can facilitate large-scale web and mobile deployments. The growth of the global application server market is only expected to increase in the coming years, as market players figure out new ways to stay competitive.

There is a growing demand for application servers and companies are rushing to invest in these technologies in order to meet the needs of their customers. Application servers are now essential for any business that depends on web applications, as well as traditional desktop applications. This demand in the global application server market is due to the popularity of cloud-based services and the need for businesses to reduce IT costs. Many businesses are seeking solutions that allow them to use existing hardware and software infrastructure while offloading some of the processing burden to a third-party. This can be especially beneficial for companies that have limited resources or cannot afford to hire additional IT staff.

To meet this growing demand, vendors in the application server market are investing in new product lines and innovation. For example, IBM introduced its Bluemix platform in 2018, which makes it easier for developers to build cloud-based applications using IBM's hypervisor technology. Hewlett Packard Enterprise has also made considerable investments in its Applied Data Science Platform, which provides databases and analytics capabilities for application development.

Get sample copy of this report:

https://skyquestt.com/sample-request/application-server-market

Why Are Businesses Rapidly Turning to Application Servers?

There are a number of reasons why businesses are turning to the application server market. For one, these systems can help speed up web and mobile deployments by handling the heavy lifting required to run complex applications. Additionally, application servers offer reliability and security benefits that can be invaluable for organizations that depend on their websites and apps for business success.

Today, web-based applications are increasingly replacing desktop applications. In addition, businesses are finding that application servers are a more efficient way to manage their software infrastructure than traditional hosting providers, because they offer higher performance and reliability.

SkyQuest's research on the global application server market found that most businesses use the product to configure and run multiple applications simultaneously without slowing down. This means that businesses can use application servers to run their business applications as well as their own websites and applications.

Also, there's the increasing demand from cloud services providers for application servers. Cloud services providers want to use application servers so that they can provide customers with a scalable infrastructure. By using an application server, a cloud service provider can reduce the amount of time and effort it takes to set up a new service.

SkyQuest's report on the application server market offers insights on market dynamics, opportunities, trends, challenges, threats, pricing analysis, average spend per user, major spenders by company, consumer behavior, market impact, competitive landscape, and market share analysis.

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/application-server-market

High Risk of Ransomware in Application Server Infrastructure Poses a Challenge

Over the last few years, the global application server market witnessed around 129 major attacks on application server infrastructure. The increasing risk of cyber-attacks on application servers is something businesses need to be aware of: these servers are a key part of many organizations, and when they are attacked, it can open up a lot of opportunities for criminals. Cyber risks to application servers have increased in recent years, as attackers have become increasingly skilled at targeting these systems. At the same time, companies are increasingly reliant on these systems to provide critical services, making them targets for hackers. In 2021, the average ransomware attack on an application server cost around $17,000.

SkyQuest recently conducted a survey on the application server market to understand the frequency of cyber-attacks across 150 large and 150 small enterprises. It was observed that 13% of surveyed organizations had suffered at least one cyber-attack in the past two years, and small enterprises were at least 200% more susceptible to cyberattacks. More than 26% of these organizations had suffered two or more attacks during that period. Additionally, 44% of these same organizations reported that their cyber security capabilities were inadequate to respond to the attacks they experienced. Per our findings, 88% of all detected data breaches began with stolen or illegally accessed user credentials.

Top cyberattacks in application server market

SkyQuest has published a report on global application server market. The report provides a detailed analysis of cyberattacks on the application server consumers and their overall performance. The report also offers valuable insights about top players and their key advancements in order to avoid such attacks.

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/application-server-market

Top Players in Global Application Server Market

Related Reports in SkyQuests Library:

Global Electronic Data Interchange (EDI) Software Market

Global Human Resource (HR) Technology Market

Global Smart Label Market

Global Field Service Management (FSM) Market

Global Point Of Sale (POS) Software Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email: sales@skyquestt.com



Improving Splunk and Kafka Platforms with Cloud-Native Technologies – InfoWorld

Intel Select Solutions for Splunk and Kafka on Kubernetes use containers and S3-compliant storage to increase application performance and infrastructure utilization while simplifying the management of hybrid cloud environments.

Executive Summary

Data architects and administrators of modern analytic and streaming platforms like Splunk and Kafka continually look for ways to simplify the management of hybrid or multi-cloud platforms, while also scaling these platforms to meet the needs of their organizations. They are challenged with increasing data volumes and the need for faster insights and responses. Unfortunately, scaling often results in server sprawl, underutilized infrastructure resources and operational inefficiencies.

The release of Splunk Operator for Kubernetes and Confluent for Kubernetes, combined with Splunk SmartStore and Confluent Tiered Storage, offers new options for architectures designed with containers and S3-compatible storage. These new cloud-native technologies, running on Intel architecture and Pure Storage FlashBlade, can help improve application performance, increase infrastructure utilization and simplify the management of hybrid and multi-cloud environments.

Intel and Pure Storage architects designed a new reference architecture called Intel Select Solutions for Splunk and Kafka on Kubernetes and conducted a proof of concept (PoC) to test the value of this reference architecture. Tests were run using Splunk Operator for Kubernetes and Confluent for Kubernetes with Intel IT's high-cardinality production data to demonstrate a real-world scenario.

In our PoC, a nine-node cluster reached a Splunk ingest rate of 886 MBps, while simultaneously completing 400 successful dense Splunk searches per minute, with an overall CPU utilization rate of 58%.1 We also tested Splunk super-sparse searches and Splunk ingest from Kafka data stored locally versus data in Confluent Tiered Storage on FlashBlade, which exhibited remarkable results. The outcomes of this PoC informed the Intel Select Solutions for Splunk and Kafka on Kubernetes.

Keep reading to find out how to build a similar Splunk and Kafka platform that can provide the performance and resource utilization your organization needs to meet the demands of today's data-intensive workloads.

Solution Brief

Business challenge

The ongoing digital transformation of virtually every industry means that modern enterprise workloads utilize massive amounts of structured and unstructured data. For applications like Splunk and Kafka, the explosion of data can be compounded by other issues. First, the traditional distributed scale-out model with direct-attached storage requires multiple copies of data to be stored, driving up storage needs even further. Second, many organizations are retaining their data for longer periods of time for security and/or compliance reasons. These trends create many challenges, including:

Beyond the challenges presented by legacy architectures, organizations often have other challenges. Large organizations often have Splunk and Kafka platforms in both on-prem and multi-cloud environments. Managing the differences between these environments creates complexity for Splunk and Kafka administrators, architects and engineers.

Value of Intel Select Solutions for Splunk and Kafka on Kubernetes

Many organizations understand the value of Kubernetes, which offers portability and flexibility and works with almost any type of container runtime. It has become the standard across organizations for running cloud-native applications; 69% of respondents from a recent Cloud-Native Computing Foundation (CNCF) survey reported using Kubernetes in production.2 To support their customers' desire to deploy Kubernetes, Confluent developed Confluent for Kubernetes, and Splunk led the development of Splunk Operator for Kubernetes.

In addition, Splunk and Confluent have developed new storage capabilities: Splunk SmartStore and Confluent Tiered Storage, respectively. These capabilities use S3-compliant object storage to reduce the cost of massive data sets. In addition, organizations can maximize data availability by placing data in centralized S3 object storage, while reducing application storage requirements by storing a single copy of data that was moved to S3, relying on the S3 platform for data resiliency.
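Because both SmartStore and Tiered Storage address the object tier through the standard S3 API, any S3-compatible client can be pointed at the buckets they populate. The sketch below is a minimal illustration of that idea, not part of the reference architecture: the endpoint URL, bucket name and credentials are hypothetical placeholders for an S3-compatible target such as FlashBlade.

```python
# Minimal sketch: browsing an S3-compatible object store with boto3.
# Endpoint, bucket and credentials are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # assumed S3-compatible endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# List a few of the objects written to the remote volume.
resp = s3.list_objects_v2(Bucket="smartstore-remote-volume", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```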

The cloud-native technologies underlying this reference architecture enable systems to quickly process the large amounts of data today's workloads demand; improve resource utilization and operational efficiency; and help simplify the deployment and management of Splunk and Kafka containers.

Solution architecture highlights

We designed our reference architecture to take advantage of the previously mentioned new Splunk and Kafka products and technologies. We ran tests with a proof of concept (PoC) designed to assess Kafka and Splunk performance running on Kubernetes with servers based on high-performance Intel architecture and S3-compliant storage supported by Pure Storage FlashBlade.

Figure 1 illustrates the solution architecture at a high level. The critical software and hardware products and technologies included in this reference architecture are listed below:

Additional information about some of these components is provided in the A Closer Look at Intel Select Solutions for Splunk and Kafka on Kubernetes section that follows.

Figure 1. The solution reference architecture uses high-performance hardware and cloud-native software to help increase performance and improve hardware utilization and operational efficiency.

A Closer Look at Intel Select Solutions for Splunk and Kafka on Kubernetes

The ability to run Splunk and Kafka on the same Kubernetes cluster connected to S3-compliant flash storage unleashes seamless scalability with an extraordinary amount of performance and resource utilization efficiency. The following sections describe some of the software innovations that make this possible.

Confluent for Kubernetes and Confluent Tiered Storage

Confluent for Kubernetes provides a cloud-native, infrastructure-as-code approach to deploying Kafka on Kubernetes. It goes beyond the open-source version of Kubernetes to provide a complete, declarative API to build a private cloud Kafka service. It automates the deployment of Confluent Platform and uses Kubernetes to enhance the platform's elasticity, ease of operations and resiliency for enterprises operating at any scale.

Confluent Tiered Storage architecture augments Kafka brokers with the S3 object store via FlashBlade, storing data on the FlashBlade instead of the local storage. Therefore, Kafka brokers contain significantly less state locally, making them more lightweight and rebalancing operations orders of magnitude faster. Tiered Storage simplifies the operation and scaling of the Kafka cluster and enables the cluster to scale efficiently to petabytes of data. With FlashBlade as the backend, Tiered Storage has the performance to make all Kafka data accessible for both streaming consumers and historical queries.

Splunk Operator for Kubernetes and Splunk SmartStore

The Splunk Operator for Kubernetes simplifies the deployment of Splunk Enterprise in a cloud-native environment that uses containers. The Operator simplifies the scaling and management of Splunk Enterprise by automating administrative workflows using Kubernetes best practices.

Splunk SmartStore is an indexer capability that provides a way to use remote object stores to store indexed data. SmartStore makes it easier for organizations to retain data for a longer period of time. Using FlashBlade as the high-performance remote object store, SmartStore holds the single master copy of the warm/cold data. At the same time, a cache manager on the indexer maintains the recently accessed data. The cache manager manages data movement between the indexer and the remote storage tier. The data availability and fidelity functions are offloaded to FlashBlade, which offers N+2 redundancy.4

Remote Object Storage Capabilities

Pure Storage FlashBlade is a scale-out, all-flash file and object storage system that is designed to consolidate complete data silos while accelerating real-time insights from machine data using applications such as Splunk and Kafka. FlashBlade's ability to scale performance and capacity is based on five key innovations:

A complete FlashBlade system configuration consists of up to 10 self-contained rack-mounted servers. A single 4U chassis FlashBlade can host up to 15 blades, and a full FlashBlade system configuration can scale up to 10 chassis (150 blades), potentially representing years of data for even higher-ingest systems. Each blade assembly is a self-contained compute module equipped with processors, communication interfaces and either 17 TB or 52 TB of flash memory for persistent data storage. Figure 2 shows how the reference architecture uses Splunk SmartStore and FlashBlade.

Figure 2. Splunk SmartStore using FlashBlade for the remote object store.

Proof of Concept Testing Process and Results

The following tests were performed in our PoC:

For all the tests, we used Intel IT's real-world high-cardinality production data from sources such as DNS, Endpoint Detection and Response (EDR) and Firewall, which were collected into Kafka and ingested into Splunk through Splunk Connect for Kafka.

Test #1: Application Performance and Infrastructure Utilization

In this test, we compared the performance of a bare-metal Splunk and Kafka deployment to a Kubernetes deployment. The test consisted of reading data from four Kafka topics and ingesting that data into Splunk, while dense searches were scheduled to run every minute.

Bare-Metal Performance

We started with a bare-metal test using nine physical servers. Five nodes served as Splunk indexers, three nodes as Kafka brokers and one node as a Splunk search head. With this bare-metal cluster, the peak ingest rate was 301 MBps, while simultaneously finishing 90 successful Splunk dense searches per minute (60 in cache, 30 from FlashBlade), with an average CPU utilization of 12%. The average search runtime for the Splunk dense search was 22 seconds.

Addition of Kubernetes

Next, we deployed Splunk Operator for Kubernetes and Confluent for Kubernetes on the same nine-node cluster. Kubernetes spawned 62 containers: 35 indexers, 18 Kafka brokers and nine search heads. With this setup, we reached a peak Splunk ingest rate of 886 MBps, while simultaneously finishing 400 successful Splunk dense searches per minute (300 in cache, 100 from FlashBlade), with an average CPU utilization of 58%. The average search runtime for the Splunk dense search was 16 seconds, a 27% decrease from the Splunk average search time on the bare-metal cluster. Figure 3 illustrates the improved CPU utilization gained from containerization using Kubernetes. Figure 4 shows the high performance enabled by the reference architecture.

Figure 3. Deployment of the Splunk Operator for Kubernetes and Confluent for Kubernetes enabled 62 Splunk and Kafka containers on the nine physical servers in the PoC cluster.

Figure 4. Running Splunk Operator for Kubernetes and Confluent for Kubernetes enabled up to a 2.9x higher ingest rate, up to 4x more successful dense searches, and a 27% reduction in average Splunk search time, compared to the bare-metal cluster.
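The comparison figures in Figure 4 follow directly from the measurements reported for the two runs; the short calculation below simply reproduces that arithmetic from the numbers quoted above.

```python
# Reproducing Figure 4's ratios from the reported measurements.
bare_metal = {"ingest_mbps": 301, "dense_searches_per_min": 90, "avg_search_s": 22}
kubernetes = {"ingest_mbps": 886, "dense_searches_per_min": 400, "avg_search_s": 16}

print(kubernetes["ingest_mbps"] / bare_metal["ingest_mbps"])  # ~2.9x higher ingest rate
print(kubernetes["dense_searches_per_min"] / bare_metal["dense_searches_per_min"])  # ~4.4x more dense searches
print(1 - kubernetes["avg_search_s"] / bare_metal["avg_search_s"])  # ~0.27, i.e. a 27% shorter average search
```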

Test #2: Data Ingest from Kafka Local Storage versus Confluent Tiered Storage

Kafka's two key functions in event streaming are producer (ingest) and consumer (search/read). In the classic Kafka setup, the produced data is kept on the broker's local storage; with Tiered Storage, Confluent offloads that data to the object store and enables infinite retention. If a consumer asks for data that is no longer in local storage, the data is downloaded from the object storage.
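To make the two roles concrete, the sketch below uses the confluent-kafka Python client; the broker address and topic name are assumptions for illustration. Whether a read is served from the broker's local disk or fetched back from Tiered Storage is transparent to consumer code like this.

```python
# Minimal sketch of Kafka's two roles: a producer writing events and a consumer
# reading them back. Broker address and topic name are illustrative assumptions.
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "broker.example.internal:9092"})
producer.produce("firewall-events", value=b'{"src": "10.0.0.1", "action": "deny"}')
producer.flush()  # wait for the broker to acknowledge the event

consumer = Consumer({
    "bootstrap.servers": "broker.example.internal:9092",
    "group.id": "splunk-connect-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["firewall-events"])

msg = consumer.poll(timeout=10.0)  # local vs. tiered storage is invisible here
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()
```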

To compare the consumer/download performance, we started the Splunk Connect workers for Kafka after one hour of data ingestion into Kafka with all data on the local SSD storage. The Connect workers read the data from Kafka and forwarded it to the Splunk indexers, where we measured the ingest throughput and elapsed time to load all the unconsumed events. During this time, Kafka read the data from the local SSD storage, and Splunk was also writing the hot buckets into the local SSD storage that hosts the hot tier.

We repeated the same test when the topic was enabled with Tiered Storage by starting the Splunk Connect workers for Kafka, which initially read the data out of FlashBlade and later from the local SSD storage for the last 15 minutes. We then measured the ingest throughput and the elapsed time to load all the unconsumed events.

As shown in Figure 5, there is no reduction in the Kafka consumer performance when the broker data is hosted on Tiered Storage on FlashBlade. This reaffirms that offloading Kafka data to the object store, FlashBlade, gives not only similar performance for consumers but also the added benefit of longer retention.

Figure 5. Using Confluent Tiered Storage with FlashBlade enables longer data retention while maintaining (or even improving) the ingest rate.

Test #3: Splunk Super-Sparse Searches in Splunk SmartStore

When data is in the cache, Splunk SmartStore searches are expected to be similar to non-SmartStore searches. When data is not in the cache, search times are dependent on the amount of data to be downloaded from the remote object store to the cache. Hence, searches involving rarely accessed data or data covering longer time periods can have longer response times than experienced with non-SmartStore indexes. However, FlashBlade accelerates the download time considerably in comparison to any other cheap-and-deep object storage available today.4

To demonstrate FlashBlade's ability to accelerate downloads, we tested the performance of a super-sparse search (the equivalent of finding a needle in a haystack); the response time of this type of search is generally tied to I/O performance. The search was initially performed against the data in the Splunk cache to measure the resulting event counts. The search returned 64 events out of several billion events. Following this, the entire cache was evicted from all the indexers, and the same super-sparse search was issued again, which downloaded all the required data from FlashBlade into the cache to perform the search. We discovered that FlashBlade supported a download of 376 GB in just 84 seconds with a maximum download throughput of 19 GBps (see Table 1).

Table 1. Results from Super-Sparse Search

Downloaded Buckets: 376 GB
Elapsed Time: 84 seconds
Average Download Throughput: 4.45 GBps
Maximum Download Throughput: 19 GBps

A super-sparse search downloaded 376 GB in 84 seconds.

Configuration Summary

Introduction

The previous pages provided a high-level discussion of the business value provided by Intel Select Solutions for Splunk and Kafka on Kubernetes, the technologies used in the solution and the performance and scalability that can be expected. This section provides more detail about the Intel technologies used in the reference design and the bill of materials for building the solution.

Intel Select Solutions for Splunk and Kafka on Kubernetes Design

The following tables describe the required components needed to build this solution. Customers must use firmware with the latest microcode. Tables 2, 3 and 4 detail the key components of our reference architecture and PoC. The selection of software, compute, network, and storage components was essential to achieving the performance gains observed.

Table 2. Key Server Components

CPU: 2x Intel Xeon Platinum 8360Y (36 cores, 2.4 GHz)
Memory: 16x 32 GB DDR4 @ 3200 MT/s
Storage (Cache Tier): 1x Intel Optane SSD P5800X (1.6 TB)
Storage (Capacity Tier): 1x SSD DC P4510 (4 TB)
Boot Drive: 1x SSD D3-S4610 (960 GB)
Network: Intel Ethernet Network Adapter E810-XXVDA2 (25 GbE)

Table 3. Software Components

Kubernetes: 1.23.0
Splunk Operator for Kubernetes: 1.0.1
Splunk Enterprise: 8.2.0
Splunk Connect for Kafka: 2.0.2
Confluent for Kubernetes: 2.2.0
Confluent Platform: 7.0.1 (using Apache Kafka 3.0.0)

Table 4. S3 Object Storage Components


3 practical ways to fight recession by being cloud smart – IT Brief New Zealand

As COVID almost starts to feel like a distant memory, you'd think we'd all cop a break. But no, the threat of recession now darkens the horizon. This makes it an excellent time to get smart about how you use cloud and ensure it delivers short- and long-term value to your organisation.

In this article, we suggest three specific ways to nail down some genuine savings or optimise the benefits (and savings) from your cloud and cloud applications.

1. Save more when you choose a cloud-native application

Depending on where you are on your roadmap to cloud adoption, you may want to look sideways at some of your legacy line-of-business applications and ask if they will serve you equally well in your transformed state.

If you have enough budget, practically any application can be retrospectively modernised to work in the cloud. And, unwilling to be left behind, some vendors have re-engineered their applications to run in the cloud with varying degrees of success. But it's important to realise that unless the application was specifically built from the ground up to run on the cloud (i.e., cloud-native), it may not deliver an ROI or enable your business to keep up with the current pace of change.

Think of it this way: it's like adapting your petrol-fuelled car to run on an EV battery. While innovation may prolong your beloved vehicle's life, it will never perform to the standard of a brand spanking new state-of-the-art Tesla.

Cloud-native applications are built from birth to be inherently efficient; to perform to a much better standard than applications with non-native features, and to cost less to run.

Let's break those benefits down a bit:

2. Check out that cloud sprawl

It's easy to rack up spikes on your cloud invoice when your organisation has gone cloud crazy. Cloud sprawl is when your cloud resources have proliferated out of control, and you are paying for them, often unknowingly.

So, how does that happen? It usually comes about because of a failure to eliminate services that are no longer, or never were, part of your overall cloud strategy. It's like still paying a vehicle insurance policy on a Ferrari when you've made a sensible downgrade to a family-friendly Toyota.

Cloud sprawl can come about through departments adding on or trialling cloud applications, then not unsubscribing from them. Or from maintaining unneeded storage despite deleting the associated cloud server instance. Or from services you needed when making the original move to the cloud but never decommissioned.

Make your cloud strategy a living document to ensure you're only paying for what you need and use; one that's shared and compared with the real-life status quo regularly. Implement policies to retire those random or one-off cloud application trials when they're done with. Talk to your technology partner about setting up automated provisioning to shut down old workloads that are no longer of value or could be managed off-peak and, therefore, more cost-effectively.
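What that automation looks like depends entirely on your cloud provider and tagging discipline. As a rough sketch only, assuming an AWS environment and a hypothetical auto-stop tag agreed with your teams, a scheduled script along these lines could stop tagged workloads off-peak and report storage left behind by deleted instances:

```python
# Rough sketch of sprawl cleanup on AWS (an assumed provider for illustration).
# The "auto-stop" tag is a hypothetical convention; adapt it to your own policy.
import boto3

ec2 = boto3.client("ec2")

# Stop running instances that teams have tagged as safe to shut down off-peak.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:auto-stop", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)

# Report volumes still being paid for after their instances were deleted.
for vol in ec2.describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])["Volumes"]:
    print("Unattached volume:", vol["VolumeId"], vol["Size"], "GiB")
```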

And compare every invoice to identify if you are paying for cloud services that you no longer need or use. If it's all sounding a bit hard, a cloud sprawl health check by your managed services partner could provide a great ROI.

3. Get more value from your nowhere-near-dead legacy applications

While cloud-native applications may seem to offer it all, we all know that sometimes it's simply not practical to move on from your investment in a legacy solution. In that case, a lift and shift (think of it as uplifting your house as is, where is - from a slightly down-at-heel suburb to a more upmarket one with better facilities) may be the best option to breathe life into ageing technology without having to invest in renovations (or buy new servers).

When done well, lift and shift is a very cost-effective way to onramp your organisation onto the cloud. Just be aware that while you will save money by not modernising your application, you'll not realise the true cloud benefits from native constructs (i.e., cheaper storage, elasticity, or additional security).

Don't forget to count your savings

If you're wondering where else you can make immediate or long-term savings, don't forget that your original decision to move to the cloud has delivered your organisation a positive ROI since Day One.

And if you've chosen fully managed services, you've saved even more.

You've already walked away from the significant overheads of expensive servers stacked in a dust-free, temperature-controlled environment, the disruption caused by software upgrades or server downtime, and the need for IT resources to manage your environment and safeguard your data from cyberattacks. And you've said hello to a low-risk, secure, highly available environment from anywhere your people work, at any time.

If you'd like to discuss how to optimise your cloud benefits, and get some well-considered, practical answers, contact us here.


Security pros say the cloud has increased the number of identities at their organizations – SC Media

The Identity Defined Security Alliance (IDSA) on Wednesday reported that 98% (the vast majority) of companies surveyed confirmed that the number of identities has increased in their organization, with 52% saying it's because of the rapid adoption of cloud applications.

Other factors increasing identities at organizations are an increase in third-party relationships (46%) and in new machine identities (43%).

Given the growing number of identities in organizations as they migrate to cloud, it makes sense that 84% of respondents report having had an identity-related attack in the past year.

The IDSA report said managing and monitoring permissions at such a high scale and in convoluted environments has become extremely difficult. Attackers are exploiting this challenge and continuously attempt to escalate their attack capabilities.

Identity breaches are by far one of the most common breaches, said Alon Nachmany, Field CISO at AppViewX, who dealt with two breaches of this kind when he was a CISO. Nachmany said the industry slowly evolved to privileged identities and ensured that privileged accounts were a separate identity, but when organizations moved to the cloud, the lines blurred.

The days of managing your own systems with your own systems were gone, Nachmany said. As an example, with on-prem Microsoft Exchange Servers migrating to Microsoft O365 we no longer managed the authentication piece. Our local accounts were now accessible from everywhere. And a lot of security best practices were overlooked. Another issue is that as some companies blew up and more systems came onboard, they were quickly deployed with the thought that we will go back and clean it up later. With the cloud making these deployments incredibly easier and faster, the issues just evolved.

Darryl MacLeod, vCISO at LARES Consulting, said while it's effective to invest in IAM solutions, organizations need to go back to the basics and educate their employees about the importance of security. MacLeod said employees need to understand the dangers of phishing emails and other social engineering attacks. They should also know how to properly manage their passwords and other sensitive information, and in doing so, MacLeod said, organizations can significantly reduce their identity-related risks.

With the growth of cloud computing, organizations are now entrusting their data to third-party service providers without thinking of the implications, MacLeod said. This shift has led to a huge increase in the number of identities that organizations have to manage. As a result, it's made them much more vulnerable to attack. If an attacker can gain access to one of these cloud-based services, they can potentially access all of an organization's data. If an organization doesn't have the right security controls in place, they could be left scrambling to contain the damage.

Joseph Carson, chief security scientist and advisory CISO at Delinea, said the growth in mobility and the cloud greatly increases the complexity of securing identities. Carson pointed out that organizations still attempt to try and secure them with the existing security technologies they already have, which results in many security gaps and limitations.

Some organizations even fall short by trying to checkbox security identities with simple password managers, Carson said. However, this still means relying on business users to make good security decisions. To secure identities, you must first have a good strategy and plan in place. This means understanding the types of privileged identities that exist in the business and using security technology designed to discover and protect them. The good news is that many organizations understand the importance of protecting identities.


Hardcoded API keys threaten data security in the cloud – TechHQ

Mobile apps are ubiquitous. Smartphones do a great job of running software that would have previously meant lugging a laptop around, and that's made mobile apps a popular choice for enterprises. Delving into the software, there are a number of use cases. But a common one is the use of mobile apps as a gateway to accessing information in the cloud -- for example, to query a corporate database (or more likely, several databases). For businesses, the productivity gains are clear. And often software designed to be run on smartphones, tablets, or other devices (vehicles are becoming another popular environment for app developers) turns out to be more widely used than its PC equivalent. So far so good, until security issues get in the way.

Search online for hardcoded credentials on mobile apps and the problem becomes clear (for the MITRE definition, check out CWE-798). In the early stages of software development it can be tempting to write API keys into the code -- for example, to quickly test an app idea, or prototype different solutions. These keys (which are unique and allow servers to identify the application making the request) provide authorization for software on a remote device to read values stored in a database hosted in the cloud. API keys, in themselves, work well and can help servers in other ways too -- for example, by allowing them to rate-limit requests; quenching denial of service (DoS) attacks. But keys are keys, and like the real-world versions in your pocket or bag, you wouldn't want everyone to have access to them while they remain valid.
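The fix for that failure mode is to keep the key out of the shipped code entirely (on a mobile device, ideally by keeping the secret on a backend the app calls rather than on the device at all). A minimal sketch of the contrast in Python, with purely illustrative names:

```python
# Anti-pattern: the credential ships inside every copy of the app and can be
# extracted from the package or binary.
API_KEY = "AKIAEXAMPLEKEY123456"  # hard-coded key (fake value for illustration)

# Safer pattern: read the key from the runtime environment or a secrets service,
# so it never lives in source control or the distributed artifact.
import os

api_key = os.environ.get("BACKEND_API_KEY")
if api_key is None:
    raise RuntimeError("BACKEND_API_KEY is not set; refusing to start")
```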

Security search engines such as CloudSEK's BeVigil found (in April 2021) that 0.5% of mobile apps expose AWS API keys. And given how many apps are out there, that's a lot of keys and a lot of data that is potentially at risk of being breached. It's important to note that AWS is not the story here. AWS is one of the most popular cloud hosts on the planet, so there's no surprise to see its keys being widely used. The problem is at the app level and in the software supply chain that goes with it. More recently, Symantec looked into the issue and reported in September 2022 that in a survey of 1859 publicly available apps (featuring Android and iOS operating systems) it had found:

Worse still, hardcoded credentials are a problem that hasn't gone away; Symantec's team raised the same issue three years ago. One reason for the problem's persistence is that there are numerous ways these API issues can arise. Kevin Watkins, a security researcher at the firm, notes that some companies outsource the development of their mobile apps, which can lead to vulnerable external software libraries and SDKs being unknowingly introduced. Internally, the use of cross-team libraries can also present issues when vulnerabilities haven't been picked up. And shared libraries add to the problem too, where access tokens have been hardcoded.

If the issue lies in an upstream library, vendors may not even realize that they are using hardcoded credentials, which emphasizes the importance of running security scanning and safe coding tools during software development -- Snyk is one example, and there are others too. And touching back on the software supply chain issue raised by Symantec, there are solutions that can be deployed here as well, such as software composition analysis integrations provided by Sonatype.

At this point in the discussion, it's worth noting that adversaries may have to do a little digging to get their hands on the baked-in secrets. But they will find them. And if those API keys open the doors to a treasure trove of sensitive business data, then victims of the breach will be in trouble. There are online guides showing how easy it is to scan code repositories such as GitHub for secrets and credentials. And even simply typing the Linux command strings (which lists the printable strings in a binary) could be enough to reveal clumsily hidden secrets. Tools such as MobSF -- a security framework for analyzing Android and iOS software -- are useful indicators of how good, or bad, the situation is. And Microsoft is very clear in its advice to coders. When a key is hard-coded, it is easily discovered, writes the computing giant. Even with compiled binaries, it is easy for malicious users to extract it.
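Defenders can run the same kind of search that attackers do before a release ships. As a simplistic illustration only (real scanners such as those mentioned above apply far richer rule sets), the sketch below walks a source tree and flags strings shaped like AWS access key IDs:

```python
# Simplistic secret scan: flag strings that look like AWS access key IDs.
# Real scanners apply many more rules; this only illustrates the idea.
import re
from pathlib import Path

AWS_KEY_ID = re.compile(r"AKIA[0-9A-Z]{16}")
SOURCE_SUFFIXES = {".py", ".js", ".ts", ".java", ".kt", ".swift", ".xml", ".json", ".plist"}

for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix.lower() in SOURCE_SUFFIXES:
        text = path.read_text(errors="ignore")
        for match in AWS_KEY_ID.finditer(text):
            print(f"{path}: possible hardcoded key {match.group(0)}")
```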

Data breaches happen for all kinds of reasons, but shutting the door on the use of hardcoded credentials will certainly help to raise defenses. And there are lots of useful cheat sheets on how to implement secure cryptographic key management. Also, cloud providers such as Google and Amazon offer tools for keeping secrets such as API keys safe. Solutions such as AWS's Secrets Manager take much of the heavy lifting out of belt-and-braces approaches, which include key rotation -- a way of further bolstering security. API hubs can help too. RapidAPI has a useful section explaining how to perform API key rotation or reset an API key that you know to be compromised.
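As one concrete flavour of the provider tooling mentioned above, an application can fetch its key from AWS Secrets Manager at runtime instead of carrying it; the secret name below is a placeholder, and the same pattern applies to the equivalent Google Cloud service.

```python
# Fetching an API key from AWS Secrets Manager at runtime.
# The secret name is a placeholder for illustration.
import boto3

secrets = boto3.client("secretsmanager")
response = secrets.get_secret_value(SecretId="prod/mobile-backend/api-key")
api_key = response["SecretString"]

# Rotation can then happen centrally in Secrets Manager, without shipping
# a new build that embeds the new key.
```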


Common Email Security Practices Called Into Question as 89% of Organizations Suffered Breaches In the Past Year – Digital Information World

Email security is one of the foundational aspects of an organization's security protocols because it can keep many threats at bay. Despite this, the most widespread email security practices appear to be failing to protect the organizations that use them, with 89% suffering an email breach over the past year.

This data comes from a survey conducted by Osterman Research at the request of Cyren. Many businesses that use Microsoft 365 participated in the survey, so its findings are thought to be fairly representative. Just under half of organizations said they felt their security protocols were effective, and a similar number felt they were able to successfully ward off potential threats targeting their email servers.

What's more, the number of breaches of this kind has more than doubled in less than three years, with current figures twice those seen in 2019. A truly concerning fact in this report is that 99% of organizations train their users to spot and flag suspicious emails, yet the vulnerabilities still get exploited despite frequent investment in employee training.

That might be because organizations only analyze a quarter of the emails that get flagged, which indicates that they're not going far beyond the standard measures they already know. Organizations need to start being more proactive about things like this, because no matter how many suspicious emails their employees point out, these systems won't improve security if organizations don't play their part by analyzing those emails and looking into how the threats might have worked. Relying on nothing but basic security protocols is why these threats continue to be common.


Hybrid Cloud: Benefits and Barriers – Techopedia

In the beginning, cloud computing was just a utility -- commonly known as Infrastructure as a Service (IaaS).

But issues with primitive public cloud platforms (mainly revolving around security, compliance and customizability) have since caused many businesses to opt for the hybrid cloud model: a combination of the public and private cloud models, which gives organizations the best of both worlds and makes for an efficient cloud strategy. (Also read: Public Cloud vs. Private Cloud: How to Choose.)

However, is hybrid cloud as good as it seems?

Let's find out. But first,

Before we can examine the pros and cons of the hybrid cloud model, we need to know what we're talking about.

In general, hybrid cloud can be defined as a combination of public and private cloud infrastructures.

More specifically, the hybrid cloud model is a cloud solution that meets both of the following criteria:

As cloud computing technology has evolved, the hybrid cloud concept has extended. It now covers multiple heterogeneous on-premise infrastructures -- like private cloud, servers, containers and virtualized infrastructure -- and also includes multiple public clouds from various cloud providers. Moreover, these diverse cloud components are not in silos; they are well-connected and integrated to deliver seamless processing power. A proper hybrid cloud integration also makes it easier to manage and monitor the entire cloud infrastructure, allowing smooth portability for both data and applications.

In short, with hybrid cloud, the name says it all: It's truly hybrid in nature and spans multiple dimensions. (Also read: 3 Key Aspects of Effective Hybrid Cloud Management.)

First, the hybrid cloud optimizes the workload in both the private and public cloud infrastructures it comprises. It balances cost, security, speed, availability, and scalability efficiently.

Other major advantages of hybrid cloud include:

The hybrid cloud helps the enterprise optimize capital expenditure (CAPEX) and operational expenditures (OPEX).

Infrastructure cost is one of the biggest challenges in any enterprise, and hybrid cloud helps address this by bringing a balanced combination of public and private resources. This allows organizations to plan workload distribution properly.

A combination of public and private cloud brings the best combination of security solutions.

That's because the public cloud, by nature, is configured with automated and highly efficient security systems. This reduces error, as human intervention is minimized, and is more cost-effective than traditional cloud security measures. At the same time, the private cloud provides more customized security to protect organizations' sensitive data.

Bringing these benefits together, hybrid cloud gives the enterprise the most bang for its buck in terms of security. (Also read: Cloud Security 101.)

The hybrid model can help the enterprise overcome availability issues.

Public cloud services rarely fail -- but if/when they do, it can be detrimental to client organizations. Private cloud and local data centers can provide the backup for public cloud downtime, but to really ensure airtight availability, organizations distribute their workload between the public and private clouds (i.e., the hybrid cloud). Ideally, store your critical data in the private cloud and/or your local data center so service continuity can be maintained even if there is any downtime in the public cloud infrastructure.

The above factors can also apply to latency; using the hybrid cloud model can help reduce the time it takes for data to travel.

In today's competitive business environment, scaling up to meet growing market demand is the key to success. And the hybrid cloud is the perfect solution.

The private cloud does not scale up quickly, but public cloud infrastructure is highly scalable. Since it combines the two models, hybrid cloud allows the enterprise to scale up the public part of its cloud infrastructure whenever necessary and in a cost-effective way.

Hybrid cloud solutions are easy to manage because they provide efficient and reliable management solutions for the infrastructure as a whole.

Public cloud solutions also provide lots of automation (sometimes AI-based), which is very helpful for managing the infrastructure.

Because the hybrid model is highly cost-effective, organizations can experiment with it without having to invest upfront.

This creates a great opportunity to innovate and grow: With a hybrid cloud, you can take calculated risks, test new ideas and implement them. (Also read: Experts Share the Top Cloud Computing Trends to Watch for.)

Along with great promises, hybrid model brings some challenges. Some key drawbacks to this model include:

There's no denying it: Hybrid cloud implementation is a complex task.

A proper migration strategy and planning are very important. Organizations should run a pilot project before migrating their workload into the hybrid cloud because, sometimes, the distribution of workload between public and private clouds may go wrong, which can have a negative impact on the business.

While the hybrid model can be a cost-effective way to improve security, it can also be a serious concern if not designed properly.

The hybrid cloud has a public cloud component, which comes with unique risks that differ from the on-premise setup. (Also read: Is Your Organization Aware of These 6 Key Public Cloud Risks?)

If proper security tools and technologies are not implemented, data can be at risk with the hybrid cloud model. To protect it, organizations must ensure their entire public/private setup is protected from intrusions.

Again, while the hybrid cloud can also be a great tool to reduce network latency, it can also increase it if not done properly.

Transferring data to the public cloud can be an issue if your internet is slow. To avoid this, implement a backup plan to overcome latency-related issues.

Hybrid cloud management is another critical challenge.

The most important factor is to ensure consistency in multiple areas like infrastructure, operations, resources, and security. So, organizations should look for an experienced and trusted cloud partner for hybrid cloud deployment. Any management failure can cause severe damage to the business.

For a cloud strategy to succeed, an organization must consider two truths:

That's why hybrid cloud is an ideal fit: it allows you to keep the public cloud's scalability and growth potential while housing your most sensitive data on-premise.

However, keep in mind that poor workload planning and improper management can cause even the best hybrid cloud strategies to fail. Public cloud migration is not a simple lift-and-shift process; your hybrid cloud implementation journey should start when an organization is cloud-mature and planning for workload optimization.


StorPool Launches New AWS Capabilities Enabling Migration of Demanding Monolithic Applications to the Public Cloud – Business Wire

SOFIA, Bulgaria--(BUSINESS WIRE)--StorPool Storage today announced a full-featured solution on Amazon Web Services (AWS) metal instances that enables the deployment of performance-intensive enterprise applications on hyperscale public clouds, which is not economically efficient with other technologies.

The new solution was developed in cooperation with Amazon Web Services and is designed to deliver extremely low latency and high IOPS to traditional applications such as transactional databases, monolithic applications and heavily loaded e-commerce websites. In the past, it was difficult and expensive to reach the same levels of storage performance in the cloud that are available in traditional on-premise deployments. With the StorPool on AWS solution, companies benefit from single-instance storage performance that matches or exceeds the performance of high-end, on-premises block storage, yet with all the advantages of the cloud.

Traditional applications that rely heavily on vertical scaling (i.e., scale-up) are harder to lift and shift to AWS, demanding much more storage performance per instance than the cloud provider can deliver, even with the most expensive AWS EBS io2 Block Express. With its cost per IOPS, storage efficiency, advanced features and optimal performance (even when engaging snapshots and other features that decrease performance in other storage architectures), StorPool enables more than 5 times the maximum performance of io2 while reducing latency by 50 percent.

StorPool delivers blazing-fast 1.3+ million balanced random read/write IO/s to a single EC2 r5n instance. This frees businesses of per-instance storage limitations and the performance can be achieved for any compatible instance type with sufficient network bandwidth. StorPool achieves these numbers while utilizing less than 17 percent of client CPU resources for storage operations, leaving the remaining 83 percent for user applications and databases.

In terms of storage throughput, StorPool can deliver more than 10 GB/s to a single client instance. StorPool saturates 100 Gbps network connections with storage operations at block sizes of 8 KB or larger, all while keeping a complete set of benefits from StorPool's advanced features.

"With the StorPool on AWS technology solution, organizations can now move traditional on-premises deployments to AWS to decrease capital expenditures and enjoy the more flexible OpEx model offered by the public cloud," said Boyan Ivanov, CEO at StorPool Storage. "Healthcare providers, insurance companies, SaaS vendors, those in financial services, or someone else with performance-hungry monolithic applications can leverage StorPool on AWS to get the extreme reliability and ultra-fast data access times they need in situations that would otherwise have been cost-prohibitive before."

The StorPool on AWS solution is currently available to partners interested in performing hybrid-cloud deployments of StorPool Storage on AWS i3en.metal instances for applications and use cases requiring exceptional storage performance. StorPool experts will architect, deploy, tune, monitor and maintain each StorPool Storage system so that users can focus on running their business with the peace of mind that their applications' foundation is running optimally. Interested parties can email aws@storpool.com for more information.

You can find more technical details about the StorPool on AWS solution here.

About StorPool Storage

StorPool Storage is a primary storage platform designed for large-scale cloud infrastructure. It is the easiest way to convert sets of standard servers into primary storage systems. The StorPool team has experience working with various clients: Managed Service Providers, Hosting Service Providers, Cloud Service Providers, enterprises and SaaS vendors. StorPool Storage comes as software, plus a fully managed data storage service that transforms standard hardware into fast, highly available and scalable storage systems.


bmobile partners with TTMA to Create More Resilient Businesses through Cloud Solutions – Trinidad Guardian

bmobile has partnered with TTMA in creating one of the most resilient and secure Cloud solutions in the Caribbean. According to this press release, this technological leap will be critical for businesses seeking to grow. Details follow in this release...

bmobile has invested heavily in creating one of the most resilient and secure Cloud solutions in the Caribbean. This technological leap will be critical for businesses seeking to grow and will also be essential as more organisations, including the Government of the Republic of Trinidad and Tobago (GoRTT), seek greater digitisation.

While digitisation allows organisations to benefit in terms of data, speed and efficiency, it also brings inherent risks if not managed properly, as evidenced by recent ransomware attacks in the Caribbean Basin. To help businesses better understand secure Cloud solutions (and the availability of world-class facilities here in Trinidad and Tobago), bmobile partnered with the TTMA to host a webinar as part of the 2022 Trade and Investment Conference (TIC).

The virtual meeting, Creating More Resilient Businesses through Cloud Solutions, allowed several local and international experts to demystify Cloud solutions and showcase how businesses of any size, including SMEs, can benefit from Cloud services.

Tanya Muller, Senior Manager of IT Infrastructure, TSTT, welcomed attendees with several case studies to emphasise the urgency of implementing Cloud solutions. One client of ours was the victim of a ransomware attack which froze their critical financial data. Thankfully, as they had been backed up to our Cloud, we were able to restore their data and have them operational soon after. Another client suffered a hardware failure on their internal servers. Still, as they were backed up via our offsite Cloud facilities, we were able to have them back in operation in just an hour too, Muller highlighted. For customers that already have on-site servers, Muller noted that TSTT could house those on behalf of customers in a secure, climate-managed facility that can extend the life of those servers. Doing so can also significantly reduce in-house expenses consumed through IT staffing, maintenance and electricity usage.

Muller also highlighted the primary benefits of the bmobile Cloud solution to attendees. According to Muller, customers considering Cloud solutions would be interested in three things: availability, flexibility and continuity. We are always available, and fortunately for our customers we had 100% availability over the past 11 years. In terms of flexibility, we can also scale as needed: starting off storing data for ten customers, but if you now have three hundred, we can scale that for you within minutes without you needing to buy more servers. Continuity is also vital. Our Disaster Recovery (DRaaS) service can back up your data easily to avoid events as we shared from our customers' stories.

Muller noted that the bmobile Cloud service is not only designed for large businesses, but most SMEs can also benefit. Data and customer information is important today, and we can automate information that you normally write down manually. From the hairdresser to the hardware, we have solutions and apps that can be designed around your needs to make your life easier by making management of your business easier, Muller added.

TSTT's data centre and facilities are rated and inspected annually and hold several international and independent certifications, some of which are unique in the Western Hemisphere and the Caribbean. The TIA-942 Rated 3 certification merits the annual physical inspection of the facilities. The DCOS-3 rating is given on review of the operations, maintenance and personnel management aspects of the company's Cloud and data centre operations. Extensive background checks are also conducted on staff.

Mauricio Pennini, Senior Business Development Manager, VMware, emphasised that not even TSTT has access to the data it hosts for its clients. The data is more secure in the Cloud than it is in your own facility, he added. VMware is a leading provider of multi-cloud services for all apps, enabling digital innovation with enterprise control, with headquarters in Palo Alto, California.

Sean Koon Koon, Manager of IT Networks and Cloud, TSTT, also noted the extensive security measures that the company puts in place - measures that go well beyond what many companies can do in-house. We have designed our Cloud with collaboration and security architecture at their core. Using the Governance, Operations and Technology framework, the bmobile Cloud solution is built to be used by large clients like the GoRTT, which can share space but have those spaces differentiated by Ministry and be accessible in multiple locations while remaining secure, he added. Or be used by smaller clients just as securely.

Joseba Calvo, Managing Partner, EPI Latam, also shared insights with virtual attendees. In terms of certifications, TSTT has certifications to mitigate what we often call Murphy's Law. And these reliable Cloud services will become more critical in the coming years. The number of internet users is projected to increase from 3.8 billion in 2019 to 5 billion by 2025, he highlighted. As a result, businesses poised to benefit are those with secure and reliable Cloud services, supporting their client databases, apps and operations. EPI is the global certification agency that verifies quality and provides independent audit services with specialisations in the ICT and Data Centre sectors.

Attendees also had the opportunity to ask direct questions of the team of experts assembled by bmobile and explore the Cloud solutions for their businesses. The event was hosted by Khamal Georges, Senior Manager Corporate, Environmental, Social, Reputation Management, TSTT.
