
AlienFox malware caught in the cloud hen house – The Register

A fast-evolving toolkit that can be used to compromise email and web hosting services represents a disturbing evolution of attacks in the cloud, which for the most part have previously been confined to mining cryptocurrencies.

The AlienFox toolkit is being hawked on Telegram as a way to compromise misconfigured hosts on cloud services platforms and harvest sensitive information like API keys and other secrets, according to security shop SentinelOne.

It's a relatively fresh turn in opportunistic cloud attacks, Alex Delamotte, senior threat researcher with SentinelLabs, wrote in a report today.

"AlienFox tools facilitate attacks on minimal services that lack the resources needed for mining," she wrote. "By analyzing the tools and tool output, we found that actors use AlienFox to identify and collect service credentials from misconfigured or exposed services. For victims, compromise can lead to additional service costs, loss of customer trust, and remediation costs."

It can also open the doors to further criminal campaigns. Later versions of AlienFox include scripts that automate malicious operations using the stolen credentials, such as establishing persistence and allowing privilege escalation in AWS accounts. Another script automates spam campaigns through victim accounts and services.

AlienFox lets attackers collect lists of misconfigured hosts from scanning platforms like LeakIX and SecurityTrails, exhibiting an increasingly common trait among threat groups: using legitimate security products, such as the threat emulation tool Cobalt Strike, in their malicious operations.

They can then use multiple scripts in the toolkit to steal sensitive information from misconfigured hosts on cloud platforms such as Amazon Web Services and Microsoft Office 365. While the AlienFox scripts can be used against a range of web services, they primarily target cloud-based and software-as-a-service (SaaS) email hosting services, Delamotte wrote.

Most of the misconfigurations that are exploited are tied to a number of web frameworks, including Laravel, Drupal, WordPress, and OpenCart. The AlienFox scripts check for cloud services and work from lists of targets generated by separate scripts, such as grabipe.py and grabsite.py. The targeting scripts use brute-force methods for IPs and subnets, plus the web APIs of open-source intelligence platforms like SecurityTrails and LeakIX.
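The defensive flip side of that behavior is easy to check for yourself. The short sketch below is a minimal illustration, not part of AlienFox: it assumes the third-party requests library and placeholder hostnames, and simply probes sites you administer for an exposed Laravel-style .env file, the kind of misconfiguration the toolkit hunts for.

```python
import requests  # third-party library; assumed available

# Hosts you administer (placeholders). AlienFox hunts for frameworks such as
# Laravel that accidentally expose a .env file full of API keys and secrets.
HOSTS = ["https://app.example.com", "https://shop.example.com"]

for host in HOSTS:
    url = f"{host}/.env"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"{url}: unreachable ({exc})")
        continue
    # A 200 response containing key=value pairs usually means the file is exposed.
    if resp.status_code == 200 and "=" in resp.text:
        print(f"{url}: possible exposed .env -- rotate any credentials it contains")
    else:
        print(f"{url}: not exposed (HTTP {resp.status_code})")
```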

When a vulnerable server is found, the miscreants move in for the sensitive information. SentinelOne found scripts targeting tokens and other secrets from more than a dozen cloud services, not only AWS and Office 365 but also Google Workspace, Nexmo, Twilio, and OneSignal.

AlienFox is a modular open source toolkit that is highly adaptable. While primarily available via Telegram, some modules can be found on GitHub, which can lead to constant adaptation and multiple variants being used, according to the report.

"The evolution of recurring features suggests the developers are becoming increasingly sophisticated, with performance considerations at the forefront in more recent versions," Delamotte wrote.

Given the massive amounts of sensitive data in cloud-based email and messaging systems that now are at "severe risk of exposure," the threat represented by AlienFox is a worry, according to Dan Benjamin, co-founder and CEO of cloud data security startup Dig Security.

"The emergence of toolkits like AlienFox underscores the increasing sophistication of attacker networks and their collective ability to cause harm and disruption," Benjamin told The Register. "This is a very concerning trend where the attackers behind AlienFox are adapting the tool to be effective across more targets, particularly those in use widely across enterprises."

SentinelOne has detected three versions of AlienFox dating back to February 2022, and some of the scripts found have been tagged as malware families by other researchers, such as Androxgh0st by Lacework.

"It is worth noting that each of the SES-abusing toolsets we analyzed targets servers using the Laravel PHP framework, which could indicate that Laravel is particularly susceptible to misconfigurations or exposures," she wrote.

AlienFox v4 is organized differently than the other versions: each tool gets a numerical identifier, such as Tool1 and Tool2, and some new additions suggest the developers are looking for new users or augmenting what the existing toolkit can do. One checks whether email addresses are linked to Amazon retail accounts; if not, the script creates a new Amazon account using that address. Another automates the generation of cryptocurrency wallet seeds for Bitcoin and Ethereum.

Given its ongoing evolution, it's likely that AlienFox will be around for a while.

"Cloud services have well-documented, powerful APIs, enabling developers of all skill levels to readily write tooling for the service," Delamotte wrote. "The toolset has gradually improved through improved coding practices as well as the addition of new modules and capabilities."

View post:
AlienFox malware caught in the cloud hen house - The Register

A CEO's tactical guide to driving profitable growth – Bessemer Venture Partners

In the software world, a growth-at-all-costs mindset has given way to profitable growth. Building a venture-backed business was easier when only growth mattered, but now CEOs need to drive both growth and profitability. In the public markets, the companies with the highest growth efficiency (which we define as ARR growth rate + free cash flow margin) command the highest multiples.
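As a rough illustration of that growth-efficiency arithmetic (the figures below are hypothetical, not Bessemer benchmarks):

```python
# Growth efficiency = ARR growth rate + free cash flow margin, both as fractions.
def growth_efficiency(arr_growth_rate: float, fcf_margin: float) -> float:
    return arr_growth_rate + fcf_margin

# Two hypothetical companies can reach the same score in different ways:
fast_burner = growth_efficiency(0.60, -0.20)    # 60% growth, -20% FCF margin
steady_earner = growth_efficiency(0.25, 0.15)   # 25% growth, +15% FCF margin
print(f"fast burner: {fast_burner:.0%}, steady earner: {steady_earner:.0%}")  # both 40%
```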

In this guide, we unpack a software profit and loss statement (P&L) into its component parts. Much has been written about how to drive growth, but here, we provide tactical steps for CEOs to follow in order to drive more efficiency and profitability.

Gross margin acts as the limit to the ultimate profitability of your business and has an enormous impact on your valuation. It's a great place to start because improving gross margin rarely comes at the expense of investment in growth.

The median gross margin for high-growth public cloud companies is 77%.

Case example: Take two identical software businesses. One is an 80% gross margin business that operates at 40% profit margins at scale. Holding all other factors constant, that same business with 60% gross margins will have only 20% profit margins. The difference between 80% gross margin and 60% gross margin cuts the value of the business by at least 50%.
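A back-of-the-envelope version of that case (hypothetical $100 of revenue and identical operating expenses for both businesses) shows where the halving comes from:

```python
revenue = 100.0
operating_expenses = 40.0  # held constant across both hypothetical businesses

for gross_margin in (0.80, 0.60):
    gross_profit = revenue * gross_margin
    operating_profit = gross_profit - operating_expenses
    print(f"{gross_margin:.0%} gross margin -> {operating_profit / revenue:.0%} profit margin")

# At the same profit multiple, the 20%-margin business is worth roughly half
# as much as the otherwise identical 40%-margin business.
```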

Cloud Hosting Costs

Implementation

Customer Success and Support

The median amount of ARR that an Enterprise CSM manages is $2 million to $5 million.

Customer Profitability

As a guiding principle, we suggest you use CAC Payback benchmarks to assess your GTM efficiency. CAC payback benchmarks range by scale of the business and whether you are selling into an SMB or enterprise customer base, as sales cycles differ across segments. Here are the good-better-best benchmarks we have aggregated from private cloud companies.
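For reference, a quick sketch of the payback arithmetic those benchmarks are built on (gross-margin-adjusted, in months; the customer figures are hypothetical):

```python
def cac_payback_months(cac: float, arr_per_customer: float, gross_margin: float) -> float:
    """Months of gross-margin-adjusted recurring revenue needed to recoup acquisition cost."""
    monthly_gross_profit = (arr_per_customer / 12) * gross_margin
    return cac / monthly_gross_profit

# Hypothetical enterprise customer: $50K to acquire, $60K ARR, 75% gross margin.
print(f"{cac_payback_months(50_000, 60_000, 0.75):.1f} months to pay back CAC")  # ~13.3
```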

At scale, one of the most powerful drivers of S&M efficiency is improving retention. It is always cheaper to retain and upsell existing customers than to acquire new customers. Moreover, if gross retention is low, refilling a leaky bucket makes it tough to maintain profitable growth.

Sales

Marketing Efficiency

Research & development (R&D) is the most fraught area in which to cut spending to drive profitability: overcutting in R&D can lead to short-term wins but degrade competitive advantage over the long term. Management teams should apply discretion when looking at R&D benchmarks given unique factors such as product complexity and market competitiveness.

The median R&D as a % of revenue is 20% for high-growth public cloud companies.

G&A is a ripe target to drive efficiency given that it is a cost center.

The median G&A as a % of revenue is 12% for high-growth public cloud companies.

Throughout your company, people-related costs tend to be the biggest area of expense. Making sure you are staffed appropriately across the entire organization is critical.

Pricing is one of the most important drivers of revenue growth and profitability. It is one of the most efficient ways to drive margin because any price increase drops straight to the bottom line. If you're a SaaS leader looking for new levers of revenue growth, take our B2B SaaS Pricing course.

The benchmarks we leverage for this article do not include stock-based compensation. However, it is important for founders to understand the impact of stock-based compensation: although it does not immediately impact cash flow and profitability, it inevitably will at a future point. Further, looking at benchmarks inclusive of stock-based compensation also mitigates any noise resulting from different stock vs. cash compensation structures across companies (e.g., excluding stock, a company that pays 70% stock and 30% cash will look much more efficient than one that pays 50% stock and 50% cash). Lastly, stock-based compensation is becoming an increasingly common topic of discussion for public market investors and has an increasing impact on valuation. We recommend benchmarking your business by including stock-based compensation, or benchmarking against companies with a similar equity burn, to fully understand relative cost structure and profitability.

As a CEO, it is important to remember:

In this article, we tried to be as comprehensive as possible in ideating tactics CEOs can implement to drive efficient growth. As we wrote the piece, I wondered if it might be helpful to share how I applied at least some of these 40 different tactics to a real business. The case study below shows that these ideas are very actionable. They changed the trajectory and outcome for SendGrid, and they can for you, too!

Fortunately, I worked with a very talented and mature team at SendGrid. Creating alignment on our need to drive to a healthier rule of 40 was like pushing on an open door, and our whole company (not just the leadership team) got behind this goal.

Ultimately, it was a collective effort from across the organization that enabled SendGrid to drive profitable growth. A few examples of notable initiatives included a finance leader branding a company-wide "Save to Reinvest" campaign, a vice president of support helping us create and monetize new customer support tiers, and a customer success leader helping us launch new add-on services.

Focusing on profitable growth isn't just the job of the CEO. Invite the smart and hard-working teammates of your company, who know your day-to-day operations best, to be part of the solution.

Our "Save to Reinvest" campaign was one of the best examples of internal marketing I'd ever seen. We demonstrated to everyone in the company that we weren't cost-cutting for its own sake, but rather so that we could afford to reinvest in growth levers for the business. This framing is what allowed us to both slash our burn and reaccelerate growth, producing what we later called the "SendGrid smile": the graph of our growth rate over time, which went down, then flat, then up and to the right.

Over the course of six quarters, we took action on the following tactical steps throughout the organization. Incrementally, we moved from -30% to roughly breakeven, and then reaccelerated our growth as we scaled in 2016, ramping toward our IPO in the fall of 2017.

SendGrid's gross margin was in the 60s at the time I joined, in the mid-70s by our IPO, and in the mid-80s when I left Twilio. Unquestionably, some of that improvement was simply due to economies of scale: costs held flat as we increased our output. But it also was the result of many intentional, cost-focused initiatives across a number of areas, including:

General and administrative (G&A) expenses:

a. Vendors: We found it helpful to align the whole team on being more disciplined about vendor costs and negotiations. I personally helped renegotiate our renewal for an event and data analytics platform, which had reached $1 million per year when I arrived, untenable for a $30 million ARR company like ours. This saved us a boatload of money each year. Importantly, it also showed the company that the CEO cared a lot about cost containment.

b. Real estate: When SendGrid expanded from its roots in Boulder to a larger presence in Denver, our CFO and COO championed the consolidation of our two Colorado-based locations. Economies of scale, again, saved us a lot of money as we hired more people into the same physical footprint.

The biggest takeaway for CEOs is to remember that there are opportunities to drive better margins and profitable growth in every aspect of the business. If you're a SaaS leader fundraising in the near future and looking for ways to drive profitable growth, reach out to Brian Feinstein (Brian@bvp.com), Caty Rea (crea@bvp.com), Janelle Teng (jteng@bvp.com), or Sameer Dholakia (sdholakia@bvp.com) to learn more.

Read the original here:
A CEO's tactical guide to driving profitable growth - Bessemer Venture Partners

Why Microsoft Teams has only just launched in China – IT PRO

Microsoft has officially launched Microsoft Teams in China via its local partner 21Vianet.

The tech giant launched the collaboration platform in the country on 1 April, at the same time as upgrading Office 365 to Microsoft 365, which will also be operated by 21Vianet.

"Under the premise of fully satisfying the Chinese market's requirements for data security, personal information protection and other laws and regulations, Microsoft and 21Vianet Blue Cloud have joined hands to officially launch the Microsoft Teams service operated by 21Vianet for the Chinese market," the company said in a blog post.

Microsoft said that Teams has more than 280 million monthly users around the world after being launched globally in 2017.

Users in China were still able to use the service before it officially launched, although they may have experienced some latency due to the country's 'Great Firewall'.

Microsoft has advised businesses in the past on how to operate their Microsoft 365 accounts in the country if they have a branch or office there.

"For enterprises with global Microsoft 365 tenants and a corporate presence in China, Microsoft 365 client performance for China-based users can be complicated by factors unique to China telco's internet architecture," the company said in a notice.

"China ISPs have regulated offshore connections to the global public internet that go through perimeter devices that are prone to high levels of cross-border network congestion. This congestion creates packet loss and latency for all internet traffic going into and out of China," it said.

Users in the country who connect to global Microsoft 365 tenants from locations like their houses or hotels, without using an enterprise network, may have experienced poor network performance in the past. This is because the traffic has to go through China's congested cross-border network circuits.

The new offering of Teams may improve the service within the country, but it may not change anything for global companies that need to communicate with their branches in China. It can be assumed that internet traffic from outside of the country's borders will still need to go through the same cross-border congestion.

Microsoft said it doesn't operate the service itself and it will instead be operated by 21Vianet. This partner will provide hosting, managed network services, and cloud computing infrastructure services.

21Vianet is a Microsoft strategic partner that is in charge of operating Microsoft Azure, Microsoft 365, Dynamics 365 and Power Platform in China.

It also operates Office 365 services in the country, and claims to be the largest carrier-neutral internet data centre service provider in China.

"By licensing Microsoft technologies, 21Vianet operates local Office 365 data centres to provide the ability to use Office 365 services while keeping data within China. 21Vianet also provides subscription and billing services, as well as support," said Microsoft.

"Due to the unique nature of the China services operated by a partner from data centres inside China there are some features that have not yet been enabled, the tech giant added. Customers will see the services come closer to full feature parity over time.

In October 2022, China reportedly upgraded its Great Firewall to crack down on Transport Layer Security (TLS) encryption tools, which citizens had used to evade censorship.

Users had reported that at least one of their TLS-based censorship circumvention servers had been blocked, which was done by blocking the specific port the circumvention services listened on.

The year before, in October 2021, Microsoft decided to shut down LinkedIn in China, replacing it with a standalone job application platform.

Social media platforms in the country have to remove user-uploaded content deemed inappropriate. LinkedIn had already been ordered to perform a self-evaluation and suspend new sign-ups of users inside China after failing to control political content on the platform.

"While we've found success in helping Chinese members find jobs and economic opportunity, we have not found that same level of success in the more social aspects of sharing and staying informed," Mohak Shroff, senior vice-president of engineering at LinkedIn, said at the time.

"We're also facing a significantly more challenging operating environment and greater compliance requirements in China."

The tech giant had been expanding its business in the country, and in June 2021 it revealed it was set to add four new data centres in China by early 2022 to expand its service capacity.

It already had six data centres in the country, and was making the move due to new regulation in China which encouraged domestic and foreign companies to shift to local data management.

Continue reading here:
Why Microsoft Teams has only just launched in China - IT PRO

5 Paths to Legacy Transformation – TechBeacon

It's common to talk about legacy-system transformation as if there's just one path available for modernizing systems. But in reality, legacy transformation is like navigating a sprawling interstate highway network; there are many routes that can potentially get your legacy systems to where you want them to be. The challenge organizations face is identifying which tactics will best help them update older technology systems to align with current business needs while improving performance.

Allow me to explain by discussing five different approaches to legacy-system transformation. As you'll learn, all of these approaches can add value to legacy systems, but they do so in different ways, and it may or may not make sense to try one particular approach or another on your transformation journey.

One of the simplest and most obvious ways to get more value from a legacy system is to update it to a newer version of the system (or to update the platform on which it depends).

For example, if your app is built on top of a legacy ERP platform, migrating to the new version of the platform may well add efficiency, flexibility, and/or scalability to the app, all without requiring you to modify the application itself.

Before taking this approach, it's important to evaluate how much value a platform upgrade would add, and then weigh that against how much time, effort, and money the upgrade requires. Depending on when your last upgrade took place, a major platform upgrade may not create enough value to justify itself. But in other cases, especially if it has been years since you last updated the legacy systems or platforms on which your applications depend, an upgrade is a comparatively fast and easy way to improve application performance and/or manageability. Although upgrading doesn't change the fundamentals of the technology you're using, it is likely to unlock new features and flexibility that help to modernize the application.

Another common transformation approach is moving legacy apps to the cloud. Here, again, moving to the cloud doesn't fundamentally change your system. But it makes it easier in many respects to operate and manage the system because you can take advantage of cloud infrastructure that you can consume on demand. It also frees you from having to acquire, deploy, and maintain your own hosting infrastructure.

In many cases, legacy-platform vendors offer both on-premises and cloud-based versions of their systems. Although both types of offerings typically provide the same core features, migrating to the cloud-based version can simplify application management and increase scalability.

Moving to a cloud platform takes time and effort, so it is important to evaluate whether it is worth it before undertaking a cloud migration. In many cases, though, you may find that it is.

Whether you move your legacy system to the cloud or not, you can, if your legacy-application platform supports it, take advantage of microservices architectures and/or container-based deployment.

A microservice implementation involves breaking complex applications into smaller pieces that operate independently from each other; these smaller pieces are called microservices. This makes applications easier to scale because you can allocate more resources to each microservice on an individual basis. It's also faster to deploy or update a microservice than it is to deploy a larger application.
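As a toy illustration of what one such piece might look like (a hypothetical "inventory" endpoint carved out of a monolith, using only Python's standard library; the names and port are assumptions):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"sku-123": 42, "sku-456": 7}  # stand-in for a real data store

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/inventory":
            body = json.dumps(INVENTORY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each microservice runs, scales, and deploys on its own.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

Because the service owns a single concern, it can be scaled, redeployed, or rewritten without touching the rest of the application.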

Containers are a deployment technology that organizations commonly use to host microservices. You can run a different microservice inside each container, making it easy to keep track of which microservices you have running and to deploy new microservices by deploying new containers to host them.

There is an added benefit to containers. Containers represent a form of virtualization, but they don't come with one of the big drawbacks of other virtualization technologies. Traditional virtualization requires services to run guest operating systems on top of the host operating system. The more operating systems you have running on a server, the more CPU and memory you have to provide to the operating systems, and the fewer you have available for your applications. This is not a problem in containerization because containers do not rely on traditional virtualization technology.

Thus, by taking advantage of microservices and containers, you can deploy legacy applications in a more scalable and efficient way. You are likely in turn to improve performance and reduce hosting costs relative to operating your application as a monolith.

The catch here is that not every legacy system supports microservices and containers, so be sure to check your legacy-system vendor's documentation before assuming you can take advantage of these technologies.

In its narrow definition, DevOps represents the integration of software development and IT operations. More broadly, it refers to a wide range of modern operational techniques and practices, such as the continuous deployment of changes and user-centric application management.

You can leverage DevOps methodologies for legacy apps just as well as you can for modern, cloud-native applications. In so doing, you'll gain more operational flexibility and agility, which translates to higher application availability and an enhanced ability to make changes without disrupting functionality.

Embracing DevOps requires changing the way your organization thinks about software delivery and management; it may require adopting some new tools, too. But the effort is almost always worth it.

Thanks especially to the AI revolution heralded by generative-AI tools such as ChatGPT, artificial intelligence (AI) and machine learning (ML) are transforming all sectors of the technology industry.

This technology is still maturing, and it's too soon to say exactly how it might support legacy-system transformation. But going forward, efficiency-focused organizations might use AI and ML for tasks such as parsing the configurations of legacy systems to detect opportunities for improvement. AI could also power chatbots that help to train end users in navigating new systems following a migration or transformation.

I'm being a little speculative here; again, AI tools designed for specific use cases such as these don't yet exist. But they're easy to envision, and they're likely to become another tool in the legacy-transformation arsenal for businesses going forward.

The fact that there are many viable routes toward legacy-system transformation is a great thing. Organizations can choose which approaches and methodologies best align with their needs and resources.

But this also presents challenges. If you embark on a legacy transformation without knowing how best to arrive at your destination, or, worse, without being sure what your destination even is, you'll likely become bogged down in inefficient strategies that yield lackluster results.

That's why it's critical to establish a road map that lays out your legacy-transformation strategy and helps you gain buy-in from stakeholders. Creating the road map may involve conducting a thorough assessment of the existing landscape, identifying areas for improvement and innovation, and prioritizing initiatives based on business value and impact.

To generate a realistic legacy-transformation road map, you'll likely need to evaluate the development resources you have available within your organization, and then decide on that basis (1) how many changes you can feasibly make to your applications and (2) how quickly your developers can implement the changes. You'll also want to think about what your most serious pain points are (application cost? scalability? reliability? something else?) and prioritize them accordingly.

Along similar lines, it's critical to have a strong team in place to guide your legacy-transformation journey. Your team should have expertise in both the legacy platforms you use and the latest innovations in areas like cloud computing, DevOps, and AI. External partners and consultants may be helpful as well, particularly for organizations that might not have in-house expertise in all the areas needed for a successful legacy transformation.

After all, just as you wouldn't want to set off on a cross-country road trip without knowing anything about the roads you'll be traversing or the pros and cons of different routes, you don't want to start a complex legacy-system transformation without both critical knowledge and guiding insight on hand.

Go here to see the original:
5 Paths to Legacy Transformation - TechBeacon

10 things to know about data-center outages – Network World

The severity of data-center outages appears to be falling, while the cost of outages continues to climb. Power failures are the biggest cause of significant site outages. Network failures and IT system glitches also bring down data centers, and human error often contributes.

Those are some of the problems pinpointed in the most recent Uptime Institute data-center outage report, which analyzes types of outages, their frequency, and what they cost both in money and consequences.

Uptime cautions that data relating to outages should be treated skeptically given the lack of transparency of some outage victims and the quality of reporting mechanisms. "Outage information is opaque and unreliable," said Andy Lawrence, executive director of research at Uptime, during a briefing about Uptime's Annual Outages Analysis 2023.

While some industries, such as airlines, have mandatory reporting requirements, there's limited reporting in other industries, Lawrence said. "So we have to rely on our own means and methods to get the data. And as we all know, not everybody wants to share details about outages for a whole variety of reasons. Sometimes you get a very detailed root-cause analysis, and other times you get pretty well nothing," he said.

The Uptime report culled data from three main sources: Uptime's Abnormal Incident Report (AIRs) database; its own surveys; and public reports, which include news stories, social media, outage trackers, and company statements. The accuracy of each varies. Public reports may lack details and sources might not be trustworthy, for example. Uptime rates its own surveys as producing fair/good data, since the respondents are anonymous, and their job roles vary. AIRs quality is deemed very good, since the database comprises detailed, facility-level data voluntarily shared by data-center owners and operators among their peers.

Theres evidence that outage rates have been gradually falling in recent years, according to Uptime.

That doesn't mean the total number of outages is shrinking; in fact, the number of outages globally increases each year as the data-center industry expands. "This can give the false impression that the rate of outages relative to IT load is growing, whereas the opposite is the case," Uptime reported. The frequency of outages is not growing as fast as the expansion of IT or the global data-center footprint.

Overall, Uptime has observed a steady decline in the outage rate per site, as tracked through four of its own surveys of data-center managers and operators conducted from 2020 to 2022. In 2022, 60% of survey respondents said they had an outage in the past three years, down from 69% in 2021 and 78% in 2020.

"There seems to be a gently, gently improving picture of the outage rate," Lawrence said.

While 60% of data-center sites have experienced an outage in the past three years, only a small proportion are rated serious or severe.

Uptime measures the severity of outages on a scale of one to five, with five being the most severe. Level 1 outages are negligible and cause no service disruptions. Level 5 mission-critical outages involve major and damaging disruption of services and/or operations and often include large financial losses, safety issues, compliance breaches, customer losses, and reputational damage.

Level 5 and Level 4 (serious) outages historically account for about 20% of all outages. In 2022, outages in the serious/severe categories fell to 14%.

A key reason is that data-center operators are better equipped to handle unexpected events, according to Chris Brown, chief technical officer at Uptime. "We've become much better at designing systems and managing operations to a point where a single fault or failure does not necessarily result in a severe or serious outage," he said.

Today's systems are built with redundancy, and operators are more disciplined about creating systems that are capable of responding to abnormal incidents and averting outages, Brown said.

When outages do occur, they are becoming more expensive, a trend that is likely to continue as dependency on digital services grows.

Looking at the last four years of Uptime's own survey data, the proportion of major outages that cost more than $100,000 in direct and indirect costs is increasing. In 2019, 60% of outages fell under $100,000 in terms of recovery costs. In 2022, just 39% of outages cost less than $100,000.

Also in 2022, 25% of respondents said their most recent outage cost more than $1 million, and 45% said their most recent outage cost between $100,000 and $1 million.

Inflation is part of the reason, Brown said; the costs of replacement equipment and labor are higher.

More significant is the degree to which companies depend on digital services to run their businesses. The loss of a critical IT service can be tied directly to disrupted business and lost revenue. "Any of these outages, especially the serious and severe outages, have the ability to impact multiple organizations, and a larger swath of people," Brown said, "and the cost of having to mitigate that is ever increasing."

As more workloads are outsourced to external service providers, the reliability of third-party digital infrastructure companies is increasingly important to enterprise customers, and these providers tend to suffer the most public outages.

Third-party commercial operators of IT and data centers (cloud providers, digital service providers, telecommunications providers) accounted for 66% of all the public outages tracked since 2016, Uptime reported. Looked at year by year, the percentage has been creeping up. In 2021 the proportion of outages caused by cloud, colocation, telecommunications, and hosting companies was 70%, and in 2022 it was up to 81%.

"The more that companies push their IT services into other people's domain, they're going to have to do their due diligence, and also continue to do their due diligence even after the deal is struck," Brown said.

While it's rarely the single or root cause of an outage, human error plays some role in 66% to 80% of all outages, according to Uptime's estimate based on 25 years of data. But it acknowledges that analyzing human error is challenging. Shortcomings such as improper training, operator fatigue, and a lack of resources can be difficult to pinpoint.

Uptime found that human error-related outages are mostly caused either by staff failing to follow procedures (cited by 47% of respondents) or by the procedures themselves being faulty (40%). Other common causes include in-service issues (27%), installation issues (20%), insufficient staff (14%), preventative maintenance-frequency issues (12%), and data-center design or omissions (12%).

On the positive side, investing in good training and management processes can go a long way toward reducing outages without costing too much.

"You don't need to go to a banker and get a bunch of capital money to solve these problems," Brown said. "People need to make the effort to create the procedures, test them, make sure they're correct, train their staff to follow them, and then have the oversight to ensure that they truly are following them."

"This is the low-hanging fruit to prevent outages, because human error is implicated in so many," Lawrence said.

Uptime said its current survey findings are consistent with previous years and show that on-site power problems remain the biggest cause of significant site outages by a large margin. This despite the fact that most outages have several causes, and that the quality of reporting about them varies.

In 2022, 44% of respondents said power was the primary cause of their most recent impactful incident or outage. Power was also the leading cause of significant outages in 2021 (cited by 43%) and 2020 (37%).

Network issues, IT system errors, and cooling failures also stand out as troubling causes, Uptime said.

Uptime used its own data, from its 2023 Uptime resiliency survey, to dig into network outage trends. Among survey respondents, 44% said their organization had experienced a major outage caused by network or connectivity issues over the past three years. Another 45% said no, and 12% didn't know.

The two most common causes of networking- and connectivity-related outages are configuration or change management failure (cited by 45% of respondents) and a third-party network providers failure (39%).

Uptime attributed the trend to today's network complexity. "In modern, dynamically switched and software-defined environments, programs to manage and optimize networks are constantly revised or reconfigured. Errors become inevitable, and in such a complex and high-throughput environment, frequent small errors can propagate across networks, resulting in cascading failures that can be difficult to stop, diagnose, and fix," Uptime reported.

Other common causes of major network-related outages include:

When Uptime asked respondents to its resiliency survey if their organization experienced a major outage caused by an IT systems or software failure over the past three years, 36% said yes, 50% said no, and 15% didn't know. The most common causes of outages related to IT systems and software are:

Publicly recorded outages, which include outages that are reported in the media, reveal a wide range of causes. The causes can differ from what data-center operators and IT teams report, since the media sources' knowledge and understanding of outages depends on their perspective. "What's really interesting is the sheer variety of causes, and that's partly because this is how the public and the media perceive them," Lawrence said.

Fire is one cause that showed up among publicly reported outages but didn't rank highly among IT-related sources. Specifically, Uptime found that 7% of publicly reported data-center outages were caused by fires. In the web briefing, Uptime researchers related the incidence of data-center fires to increasing use of lithium-ion (Li-ion) batteries.

Li-ion batteries have a smaller footprint, simpler maintenance, and a longer lifespan compared to lead-acid batteries. However, Li-ion batteries present a greater fire risk. "A Maxnod data center in France suffered a devastating fire on March 28, 2023, and we believe it's caused by lithium-ion battery fire," Lawrence said. A lithium-ion battery fire is also the reported cause of a major fire on Oct. 15, 2022, at a South Korea colocation facility owned by SK Group and operated by its C&C subsidiary.

"We find, every time we do these surveys, fire doesn't go away," Lawrence said.

Read more here:
10 things to know about data-center outages - Network World

Narrowing Down the Right Lending Management Solution – Banking CIO Outlook

The right LMS platform provides scalability, customization, and data security features.

FREMONT, CA: A lending company has a variety of options when it comes to loan origination and servicing software, and depending on the type of loan your business offers, there are certain features that are important to consider when choosing a loan management system, or even when designing one for your business. You need to provide the best loan origination and servicing service to stand out in the highly competitive lending industry. The right loan process tool can mean the difference between being the best in the market and being just one of many options for borrowers to choose from, with nothing that separates you from the competition.

A loan management system designed to streamline the entire loan process must contain the most important features: those that not only allow you to complete a deal faster, but also provide the accurate customer information and analytical insight you need to thrive in the modern market.

Scalability: Business owners strive to grow and diversify their products and services. Many loan systems are already fixed and hardcoded, making it difficult for the software to support businesses as they grow. For your organization's loan management system to be effective, it should feature scalable modules that adapt as your business grows or allow you to introduce new products and services that meet the latest market needs.

Cloud-based software: Implementing new software may not be possible for some businesses without an on-demand IT team. This is why an easy-to-deploy loan management system is important. A lender can facilitate the deployment process by using cloud-based loan management software. In most cases, cloud-based systems include frequent upgrades and automatic updates, provide quick implementation, give you software flexibility, and come with support from your service provider's IT department. The most important benefit of being in the cloud is allowing your team to access information faster from anywhere with an internet connection.

Easy to use software: Finding a tool that can help your team be up and running quickly is crucial in the lending industry. The LMS you choose must also be easy to use and not require too much training for your staff. You must select the best loan management software for your business to ensure successful user adoption.

Centralized approach: A centralized location is ideal for hosting all the phases of the loan lending lifecycle. Small lending businesses cannot afford to have multiple systems to store and retrieve borrower data, especially since there are not enough systems that work well from beginning to end of the loan process; this is one of the many pain points the industry faces. In addition to saving your organization a lot of money per month, selecting software that supports all loan management system modules, from origination to servicing to syndication, will give your organization a 360-degree view of each loan applicant. All related information, including the lending stage of an application and who is handling the case, can be viewed from anywhere.

See the rest here:
Narrowing Down the Right Lending Management Solution - Banking CIO Outlook

Fedora Workstation Aiming To Improve Encryption, Possibly Encrypted Disk By Default In The Future – Phoronix

Fedora Workstation developers and those involved at Red Hat have been working to improve the state of disk encryption on Fedora, with an end goal of possibly making the installer encrypt systems by default.

While many Linux distributions allow for full-disk encryption these days, not many enable it by default (Pop!_OS being among the rare few that actively encourage it), while it looks like in the future Fedora Workstation could default to having its installer encrypt the disk.

Pop!_OS does a great job actively encouraging encryption on new installs.

The Fedora Workstation plan would be to use the upcoming Btrfs fscrypt support for encrypting both the system and home directories.

Fedora Workstation in the future could by default enable Btrfs FSCRYPT system and home directory encryption and store the keys in the TPM.

Overall this is a good move for Fedora Workstation. Especially for laptops, I have for years actively encouraged making use of disk encryption. With modern processors and storage drives, the performance cost of encryption is very low, and it is worthwhile for those regularly taking their laptops with them, as well as for other desktops/workstations holding sensitive data that could be exposed to physical theft. It will be interesting to see how (and when) the Fedora encryption-by-default plans materialize.

Read the original post:
Fedora Workstation Aiming To Improve Encryption, Possibly Encrypted Disk By Default In The Future - Phoronix

How AWS Wickr’s encryption service is helping at-risk Afghan citizens – About Amazon

For more than a year, Jawid wondered if he would ever reunite with his wife. Originally from Afghanistan, the former interpreter worked with the United States Army before earning his U.S. citizenship. Jawid moved to the States with a plan for his wife, Farzana, to join him once her visa process was complete. Their plan, however, was shattered on August 15, 2021, when the Taliban took over Afghanistan. Farzana, like thousands of other Afghan citizens, was unable to evacuate, and because of her husband's connection with the U.S. Army, she was in danger of Taliban retaliation.

"Day and night, I was thinking about how to get my family out of Afghanistan," Jawid recalled. "My wife was always asking me, 'Did you find a solution?'"

Photo by Operation Recovery

After the Taliban takeover, Jawid sought help from Operation Recovery, a U.S.-based nonprofit with a mission to safely evacuate at-risk Afghan citizens. Farzana was one of more than 7,500 applicants on Operation Recovery's evacuation list. As the nonprofit assembled the tools it needed to coordinate with family members and potential evacuees, as well as volunteers known as shepherds, they realized that communication was a huge challenge due to the sheer volume and the elevated risk of putting people in danger.

"Since the Taliban controls the internet, email is not a reliable way to communicate. They actually use the network to track down people they're searching for," said Jon Collette, president and CEO of Operation Recovery. "We needed secure communications."

To do this, Operation Recovery looked to Amazon Web Services (AWS) Wickr. AWS Wickr is an end-to-end encrypted service that allows secure one-to-one and group messaging, voice and video calling, file sharing, screen sharing, and location sharing. With Wickr, encryption takes place locally on the client endpoint. Every call, message, and file is encrypted with a new random key, and remains indecipherable in transit; only the intended recipients and the customer organization (not even AWS) can decrypt each transmission.
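To illustrate the per-message principle in general terms, the sketch below gives each message its own random key and nonce using the open-source cryptography package; only a holder of that key can decrypt the ciphertext. This is a simplified illustration, not Wickr's actual protocol or code.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    key = AESGCM.generate_key(bit_length=256)  # new random key for this message
    nonce = os.urandom(12)                     # 96-bit nonce, never reused with a key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext

def decrypt_message(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key, nonce, ct = encrypt_message(b"Meet at the north gate at 06:00")
print(decrypt_message(key, nonce, ct))  # only a holder of `key` can read this
```

In a real end-to-end system, the per-message key would itself be wrapped with the recipients' keys so that only their devices, and not the service operator, can recover it.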

"Part of Wickr's mission is to improve the world through privacy," said Chris Lalonde, AWS Wickr software development director. "The kind of encryption that we use is three layers deep and is impenetrable by modern computer systems, even the most sophisticated computer systems that exist today."

AWS Wickr serves a diverse set of customers, allowing businesses and public sector organizations to communicate more securely while also meeting auditing and regulatory requirements. Financial service organizations use Wickr to maintain data retention requirements and secure chain of custody, while safeguarding sensitive IP and intelligence. In addition, enterprises use Wickr to help bolster out-of-band communication during an incident and eradicate the use of shadow IT within their organizations.

Wickr teamed up with consulting firm UNCOMN to develop a solution that would integrate Operation Recovery's existing case management system and provide end-to-end encrypted communication for shepherds and evacuees. To further improve efficiencies and streamline workflow, the solution also includes a bot to answer frequently asked questions surrounding evacuees' case statuses. This gives shepherds the ability to query information from Operation Recovery's systems at any time, without requiring human intervention.

So far, Operation Recovery has used its AWS Wickr solution to coordinate the evacuation of nearly 4,000 at-risk Afghan citizens, including Farzana. After three years apart, she was finally reunited with Jawid in the U.S., where the couple is building a new life.

Photo by Operation Recovery

"We are together and have our best life," said Jawid.

In addition to the continued coordination of evacuations, Operation Recovery and its partners are providing humanitarian aid to individuals across Afghanistan in an effort to help as many people as possible find safety.

Learn more about Operation Recovery's mission and initiatives.

Visit link:
How AWS Wickr's encryption service is helping at-risk Afghan citizens - About Amazon

How Post-Quantum Encryption Mandates Affect Healthcare – BankInfoSecurity.com

A 3-month-old federal law meant to future-proof federal computers from quantum computer decryption will have an effect on healthcare sector entities, too, says Mac McMillan, founder and CEO emeritus of privacy and security consulting firm CynergisTek.

"Data and systems that we have today that use at least the current cryptography standard will no longer be adequate when quantum computing becomes mainstream," he said (see: Biden Signs Law to Safeguard IT Against Quantum Computing).

Ultimately, private sector organizations, including healthcare entities - "whether they like it or not" - also will need to migrate to the new cryptographic standards, which are being hammered out by the National Institute of Standards and Technology, the National Security Agency and others, according to McMillan.

The eventual mass migration to post-quantum cryptography will compel healthcare entities to take "a 100% inventory" of their network ecosystems, he said. "Everywhere you have encryption, you will need to consider upgrading to the new standard in order to protect that data."

"Right now, if I were a CISO at a health system, I would be looking at this legislation and say, 'Even though I'm not a federal agency and it doesn't apply to me directly, I'm going to start working with IT to identity the systems, applications and data that we need to be thinking about for migration and putting together a plan so that by the time the new standards come out, were prepared to do that."

In the interview, McMillan also discusses:

McMillan is co-founder and CEO emeritus of CynergisTek, which was acquired last year by privacy and security consultancy Clearwater. He has more than 40 years of security and risk management experience, including 20 years at the U.S. Department of Defense and its Defense Threat Reduction Agency.

Read more:
How Post-Quantum Encryption Mandates Affect Healthcare - BankInfoSecurity.com

Use backup encryption to protect data from would-be thieves – TechTarget

The key to data integrity is reliability and trust at all times. Backups are a vital part of data and application recoverability and must always be secure.

Encryption is essential to data protection, and backups are no exception. Data backup encryption adds another layer of protection from major threats, including "unauthorized access, exfiltration and unauthorized data restores," said Christophe Bertrand, a practice director at TechTarget's Enterprise Strategy Group (ESG).

"Encrypting backups can aid in regulatory compliance and protect an organization from criminal activity. Many regulations discuss encryption in a broad sense, and the rule of thumb should really be that this applies to backups as well," Bertrand said. "As data is backed up from point A to point B, encrypting the data in flight is highly recommended so that it can't be intercepted."

Encryption in transit involves encrypting data that is moving across the network, said Jack Poller, a senior analyst at ESG. Any web transaction using Secure Sockets Layer/Transport Layer Security, or SSL/TLS -- such as HTTPS -- is encrypted in transit. This protects the data from an attacker that can see data moving across the network, for example, via a Wi-Fi connection.

Encryption at rest involves encrypting data that is stored on disk or in the backup system. This protects the data if an attacker has access to the data storage system. While some backup applications create backup files in a proprietary format, additional protection is necessary to keep potential attackers from easily accessing and reading these files or repositories.

If data backups are not encrypted, an attacker could gain access to the backup system and exfiltrate backup data, Poller said.

"This is a typical method of operation of ransomware actors who double dip by both preventing the organization from accessing their own data and holding exfiltrated data hostage. [It requires] a separate payment to prevent the public exposure of the data," he said.

If data is encrypted, only individuals who hold the keys can make sense of the data. Exfiltrated backup data that is encrypted has no value to cybercriminals because malicious actors and the public can't read the data, Poller said.

This is a last layer of defense, protecting the organization in the worst case, and is part of a defense-in-depth strategy.

In general, most data security and data privacy regulations apply to backup data, just as they apply to any other data sets. Organizations must encrypt any sensitive or regulated information to ensure that data is protected in case of exfiltration or inadvertent public exposure.

Specific regulations that apply to backup data include the following:

When it comes to hardening your cyber-resilience overall, there are no downsides, Bertrand said. Still, there might be tradeoffs. Encryption is computationally expensive, and it affects the time and possibly the cost of the backup and recovery process, he noted.

"In some cases, backup encryption can incur performance penalties, but modern solutions handle security by design in general, including encryption, at scale," Bertrand said.

In addition, encryption alone is not enough to protect data, so organizations must manage multiple encryption keys.

"It's not sufficient to protect all data in the organization with one key -- if an attacker gets access to the key, they get access to all data," Bertrand said. "The same for backups: Get access to the key, get access to all data in the backup data set. Therefore, organizations need to have separate keys for divisible, distinct chunks of data -- including distinct chunks of backup data."

Original post:
Use backup encryption to protect data from would-be thieves - TechTarget
