
While Bitcoin Stagnates, Is Altcoin Season Finally Here? QCP Weighs In – NewsBTC

While the Bitcoin price has failed to sustainably advance into the $29,000 region since mid-March, several altcoins are currently experiencing a strong rally. Bitcoin dominance has risen to as high as 46.5% in recent weeks, but is currently seeing a small retracement.

Singapore-based crypto options trading firm QCP Capital says in an analysis today that both Bitcoin and Ethereum are entering a difficult time for monetization as both are wedged in a very tight range. Altcoins could benefit from this in the second quarter:

"Perhaps Q2 is indeed shaping up to be the quarter of #Alts and #Airdrops, while BTC takes a breather. Pricing has essentially gone nowhere since March 17, when BTC closed at $27.5k and ETH at $1.8k."

According to the company, this is largely due to the tremendous resistance that Bitcoin and Ethereum are facing. Even the major events of the past few days have not been able to get Bitcoin out of its tight trading range. Neither the FOMC meeting on March 22 with the 25 basis point hike nor the CFTC's lawsuit against Binance were able to change that.

According to QCP, the markets have largely dismissed civil lawsuits because they are likely to have the same outcome as the BitMEX lawsuit in 2020. "We tend to agree. It is likely to go the same way as a suit against BitMEX a few years back, where a large settlement was reached to conclude the affair," the analysts wrote.

That's why they saw it as a buying opportunity. But now, after the first sign of a recessionary turn in US data last night, the firm warned investors to pay attention to the recession narrative that will take shape with the macro data coming up this week.

Both the US dollar and bond yields turned sharply lower yesterday following the release of the ISM manufacturing index, which showed its sharpest decline since April 2020 amid the pandemic. And the recession outlook is likely to darken further in the coming days, according to QCP:

"We expect more weak US data to come out this week, further cementing the recession narrative. After many false dawns, we believe this will indeed be the lasting one."

Since Bitcoin and cryptocurrencies in general have never traded through a recessionary environment, the asset class should be considered unproven, according to QCP Capital, and that is even more true of a stagflationary environment.

However, should the Federal Reserve act quickly in a recession, as it did during the banking crisis last month, QCP expects Bitcoin to soar and lead the crypto market once again. But according to QCP Capital, the risk of major headwinds is high for both Bitcoin and altcoins:

"Price-wise all the easy work is now done, and we have gotten to the hard work zone for bulls. Firstly, Q2 tends to be a difficult quarter for risk markets, crypto notwithstanding."

At press time, the bitcoin price was at $28,329 and has fully recovered from the FUD crash over an Interpol Red Notice for Binance CEO Changpeng Zhao.

Featured image from iStock, chart from TradingView.com


Why Businesses and Leaders Need to Think About Digital Value … – CEOWORLD magazine


Business reporting is sometimes guilty of characterizing cloud technology as if it were some mystical, unseen force fundamental to the fabric of the universe, responsible (somehow) for a host of undefined and unexplained tech solutions. The reality is much more mundane, but cloud computing doesn't need to be magical to be impactful.

A new technology's full range of potential applications isn't always obvious at first glance: the Teflon that coats your non-stick pan was once designed as a component of artillery shell fuses. Cloud computing is no exception. The rest of this article will lay out the reasons business decision-makers should recognize cloud computing's scalability and versatility.

Using Cloud Technology to Create Value

Some business leaders underestimate cloud technology's potential, regarding it as just a tech solution. In reality, cloud technology has far more to offer than simply making a business's existing technology better. Just as open source created access to more innovation, the cloud provides access to more services. It can also integrate into business solutions to achieve more significant results for end customers.

So what does that reconceptualization look like? The journey to and within the cloud can and should be seen only through the lens of the end customer's needs and the value your company intends to deliver to them. So it's essential to think of the digital value chains that directly create value for the end customer. This thought process invariably involves thinking about the business transformation agenda, which in turn requires thinking about how technology can enable that transformation.

No technology transformation, whether cloud computing-based or otherwise, is possible without a parallel transformation of the business's operating model. One of the most effective ways to accomplish this is to think about how your products directly impact clients and ensure that the front, middle, and back ends are completely aligned with that.

Rethinking the Cloud

Customers, customers, customers: at the end of the day, cloud computing applications are powerful tools for achieving the primary aim of any business, customer satisfaction. As a first step, every enterprise and enterprise leader should consider driving clear initiatives to improve efficiencies and unlock budgets that can help with the transformation agenda. The next stage is to reorganize around products and platforms that can create not just velocity but clear and measurable impact for the end client. As they do this, they need to rethink how to reduce both technical and technology debt.

How do IT and business teams align? For one thing, it's not enough to focus on how they work together to achieve existing project objectives; you also have to create biz-tech teams that can jointly sketch out future business strategies and the features that matter to end customers. What new revenue streams might you be able to exploit? What new technology is on the horizon? The key to a successful long-term tech strategy is to always think a few steps ahead.

What Are the Benefits of Rethinking Digital Value Chains?

Much as inventions and technological breakthroughs frequently offer more utility than their literal intended purpose, cloud computing creates far more significant value than some vague notion of better tech. Here are three concrete examples of cloud computing's capacity to transform businesses:

Any business in any industry can benefit from cloud computing, but only if they recognize its full potential. In fact, cloud computing technologys versatility might be its greatest selling point. There is no single correct approach to a successful cloud computing strategy, nor is there a predetermined path one must follow in order to implement cloud-based innovation the right way.

You (and your business) are the authors of your particular cloud computing scenario and that scenario's definition, scope, and conclusion. The specific circumstances of your financial situation, economic landscape, and existing infrastructure will naturally vary, as will the goals cloud computing will enable you to achieve. And therein lies the beauty of cloud computing: the cloud is an apt name for this technology because only the sky limits the course of your progress.

Written by Arun Melkote.



Accenture and Microsoft help Unilever with huge cloud transition – CloudTech News

Accenture, Microsoft and Unilever have completed one of the largest and most complex cloud migrations in the consumer goods industry. The migration has helped Unilever, whose 400+ brands are used by 3.4 billion people daily, become a cloud-only enterprise.

Accenture and Microsoft, together with their joint venture, Avanade, worked closely with Unilever to deliver the transformation in just 18 months with minimal disruption to business operations. It has not only helped ensure resilient, secure and optimized operations for Unilever but also provides a platform to drive innovation and growth.

With Azure as its primary cloud platform, Unilever will be able to accelerate product launches, enhance customer service and improve operational efficiency. Additionally, the move to Azure aligns with Unilever's sustainability commitment by helping the company build on the progress it is making towards curbing carbon emissions.

The creation of an agile, high-performing digital core that delivers greater efficiency will provide Unilever with increased computing power to explore new ways of working. Unilevers adoption of a cloud-only approach will significantly improve business resilience, strengthening security and enhancing control of the IT landscape.

Accenture, Microsoft and Unilever have set a new benchmark for cloud transformation in the consumer goods industry including:

Steve McCrystal, chief enterprise & technology officer at Unilever, said: "Unilever is a truly data-powered organization. We're using advanced analytics to make better-informed decisions quicker than ever before. Working with Accenture and Microsoft on this global transformation project, we can respond to ever-changing consumer needs faster, allocate our resources more effectively to focus on what drives growth, and bring services and products to the market faster."

Nicole van Det, senior managing director at Accenture and global account lead for Unilever, said: "The path to business resilience now and in the future is through total enterprise reinvention, which involves the transformation of every part of the business with cloud at the core. With access to the full continuum of cloud capabilities, including generative AI, Unilever has the elasticity to drive innovation faster, accelerate growth and continue to set the pace as a digital powerhouse and leader in its industry."

Judson Althoff, executive vice president and chief commercial officer at Microsoft, said: "Together with Accenture, we're proud to expand our longstanding partnership with Unilever."

"With Microsoft Azure as its cloud foundation, Unilever's end-to-end digitization will enable rapid innovation across its entire business. From embracing the industrial metaverse across its factories to reimagining how its lines of business can do more with tools like Azure OpenAI Service, Unilever's digital-first approach will empower it to grow resiliently and exceed the industry's pace of innovation."



GFT and CloudFrame help industries say ‘cheerio’ to COBOL – CloudTech News

GFT, a digital transformation pioneer, and CloudFrame, a provider of digital transformation pathways for large organisations running mission-critical applications on COBOL, have partnered to help COBOL users transition to more efficient platforms and reduce their overall mainframe costs.

CloudFrame's proprietary technology converts COBOL code into more efficient and future-proof Java. On average, the cost of a mainframe process is reduced by 50% after this conversion. GFT brings its expertise in CloudFrame implementation and mainframe modernisation to the partnership.

In 2023, COBOL, a programming language that debuted in the 1950s, is still ubiquitous in financial institutions, airlines, retail companies and beyond. It is increasingly becoming a problem for users, due to the high costs of mainframe use and a shortage of the experts needed for development and maintenance.

Taking the risk out of a dreaded process

Venkat Pillay, founder and CEO of CloudFrame, said: "Partnering with GFT will help our customers achieve application modernisation of their COBOL systems."

"The combination of CloudFrame's Relocate and Renovate COBOL modernisation products, along with GFT's skilled and comprehensive services, will enable customers to transform COBOL into maintainable Java."

Marika Lulay, CEO of GFT, said: "Even experts are often surprised by how widely COBOL is still being used."

"Migrating legacy applications to a new platform can be a daunting challenge, but it is becoming prohibitively expensive and risky to keep supporting COBOL. With CloudFrame's conversion solutions and our implementation and mainframe expertise, we take the risk out of a process many IT managers dread."



ADA Price Prediction: Demand Pressure at Local Support Sets Cardano Price For 10% Jump – CoinGape

ADA Price Prediction: A sustained rally in the Cardano price has breached several minor resistance levels since last month, each acting as a stepping stone for crypto buyers. As the bull trend continues, the coin price recently offered another breakout opportunity from the $0.38 barrier. The price's retest of the breached resistance, together with the formation of a morning star candle pattern, indicates the altcoin has a good chance of prolonging the ongoing recovery.

Source: TradingView

On March 31st, the rising Cardano coin price delivered a bullish breakout from horizontal resistance at $0.377-$0.38. This breakout is a sign that buyers are confident about pushing higher despite the growing FUD in the market.

Since last week, the coin price has been hovering around the breached $0.38 resistance, giving buyers a new launchpad from which to bounce to higher levels. The rejection of lower prices at this level is a sign of increasing demand pressure as sidelined buyers enter the market.


Thus, with sustained buying, the ADA price will likely increase by 10% and hit the $0.42-$0.427 multi-month resistance. Furthermore, this level also forms the neckline resistance of a well-known bullish reversal pattern, the inverted head and shoulders.

Therefore, a potential breakout from $0.427 would be an early sign of a trend change.

Relative Strength Index: Mirroring the price action, the daily RSI slope, which reflects the current strength of the ongoing trend, is gradually rising deeper into bullish territory. This growth signals sustained buying activity in Cardano.

Bollinger Bands: Long traders can use the midline of the Bollinger Bands indicator as key support indicating that the recovery rally will continue; a breakdown below it would signal a price reversal.


ServiceNow, Inc.: Leading the Way in Enterprise Cloud Computing … – Best Stocks

ServiceNow, Inc.: A Leading Provider of Enterprise Cloud Computing Solutions

In today's digital age, businesses rely on technology to run their operations smoothly and efficiently. One company that has been leading the way in providing enterprise cloud computing solutions is ServiceNow, Inc. (NYSE:NOW). Lindbrook Capital LLC recently increased its position in ServiceNow by 27.2% during the fourth quarter, according to the company's most recent 13F filing with the Securities & Exchange Commission.

Based in Santa Clara, California, ServiceNow provides a single enterprise cloud platform, the Now Platform, to deliver digital workflows. Its product portfolio is focused on providing Information Technology (IT), Employee and Customer workflows. The company was founded by Fred Luddy in 2004.

ServiceNow's most recent quarterly earnings results showed impressive growth for the firm, with revenue figures exceeding analyst estimates and a notable increase in net profit margin. In January of this year, the information technology services provider reported $1.94 billion in revenue compared to analyst estimates of $1.93 billion. Additionally, ServiceNow had a net margin of 4.49%, a return on equity of 9.37%, and exceeded earnings per share (EPS) estimates by $0.28.

This upward trend has not gone unnoticed by investors like Lindbrook Capital LLC who have increased their holdings significantly over recent months as they recognize its potential for continued growth moving forward.

ServiceNow has become increasingly popular among businesses thanks to its Now Platform and its ability to provide efficient digital workflow solutions for IT, employees and customers alike, capabilities seen as essential amid the COVID-19 pandemic, when remote working and digital transformation became increasingly important. The Now Platform includes workflows that automate routine tasks while also allowing users to create custom applications with ease.

Looking ahead, analysts predict that NOW will post an EPS of $2.65 for the year, indicating that the company will continue to experience growth in the coming months.

In summary, ServiceNow has emerged as a leading provider of enterprise cloud computing solutions with impressive revenue growth and increasing investor confidence. Its delivery of digital workflows via the innovative Now Platform has led to widespread adoption among businesses seeking automation and customization. It's clear that this company is one worth watching, with potential for continued success in the future.

ServiceNow, Inc., a leading provider of enterprise cloud computing solutions, has experienced significant investment activity in recent months from major institutional investors and hedge funds. Armstrong Advisory Group Inc., High Net Worth Advisory Group LLC, Romano Brothers AND Company, Vigilant Capital Management LLC, and Motco have all acquired positions in the company worth between $29,000 and $37,000. Institutional investors and hedge funds now own 86.31% of ServiceNow's stock.

Shares of NYSE:NOW opened at $476.05 on Wednesday with a market capitalization of $96.64 billion. The company has a price-to-earnings ratio of 297.53, a PEG ratio of 6.20 and a beta of 1.04. Its product portfolio is focused on providing Information Technology, Employee and Customer workflows under the Now Platform offering digital workflows on a single enterprise cloud platform.

ServiceNow has received target price increases from several research analysts, including Oppenheimer, which raised its price target from $450 to $500 per share and gave the stock an outperform rating following strong financial results in the previous quarter.

Recent insider transaction filings reveal that CEO William R. McDermott sold over 2,400 shares priced at over $455 each on February 1st alone, for a total transaction value exceeding one million dollars.

Despite some analysts downgrading ServiceNow from buy to hold earlier this year while weighing its valuation against ongoing market uncertainty, there is still widespread confidence, both within the industry sector and across the wider investing community, that demand for the company's integrated workplace management solutions and streamlined digital workflows remains robustly high as businesses continue adapting to new operating environments throughout the pandemic era.


Cloud Computing Market in Healthcare Industry Demand will reach … – Digital Journal

PRESS RELEASE

Published April 6, 2023

The global Cloud Computing in Healthcare market is estimated to attain a valuation of US$ 911.6 Bn by the end of 2028, states a study by Transparency Market Research (TMR). Besides, the report notes that the market is projected to expand at a CAGR of 10.8% during the forecast period, 2021-2028.

The key objective of the TMR report is to offer a complete assessment of the global market, including the leading stakeholders of the Cloud Computing in Healthcare industry. The current and historical status of the market, together with the forecasted market size and trends, are presented in the assessment in a simple manner. In addition, the report delivers data on the volume, share, revenue, production, and sales in the market.

Request for a sample of this research report at (Use Corporate Mail Id for Quick Response) https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=802

The report by TMR is the end-product of a study performed using different methodologies, including PESTEL, PORTER, and SWOT analysis. With the help of these models, the study sheds light on the key financial considerations that players in the Cloud Computing in Healthcare market need to focus on to identify competition and formulate their marketing strategies for both consumer and industrial markets. The report leverages a wide spectrum of research methods, including surveys, interviews, and social media listening, to analyze consumer behaviors in their entirety.

Cloud Computing in Healthcare Market: Industry Trends and Value Chain

The study on the Cloud Computing in Healthcare market presents a granular assessment of the macroeconomic and microeconomic factors that have shaped the industry dynamics. An in-depth focus on the industry value chain helps companies identify effective and pertinent trends that define customer value creation in the market. The analysis presents data-driven and industry-validated frameworks for understanding the role of government regulations and financial and monetary policies. The analysts offer a deep dive into how these factors will shape the value delivery network for companies and firms operating in the market.

Buy this Premium Research Report | Immediate Delivery Available at https://www.transparencymarketresearch.com/checkout.php?rep_id=802&ltype=S

Cloud Computing in Healthcare Market: Branding Strategies and Competitive Strategies

Some of the key questions scrutinized in the study are:

The list of key players operating in the Cloud Computing in Healthcare market includes following names:

CareCloud Corporation, ClearDATA Networks, Carestream Health, Inc., AGFA Healthcare, Cisco Systems, Inc., Merge Healthcare, Inc., IBM Corporation, Intel Corporation, Microsoft Corporation, Oracle Corporation, Amazon Web Services, e-Zest, Kinvey, and Salesforce

Request for customization of this research report at https://www.transparencymarketresearch.com/sample/sample.php?flag=CR&rep_id=802

Cloud Computing in Healthcare Market: Assessment of Avenues and Revenue Potential in Key Geographies

Some of the key aspects that the study analyzes and sheds light on are:

More Trending Reports by Transparency Market Research

Antihypertensive Drugs Market: https://www.prnewswire.com/news-releases/antihypertensive-drugs-market-to-expand-at-3-cagr-during-forecast-period-notes-2022-tmr-study-301708951.html

Self-injection Device Market: https://www.globenewswire.com/news-release/2022/12/08/2570451/0/en/Self-Injection-Device-Market-Estimated-to-Reach-Value-of-US-11-3-Bn-by-2026-TMR-Study.html

About Us Transparency Market Research

Transparency Market Research, a global market research company registered in Wilmington, Delaware, United States, provides custom research and consulting services. The firm scrutinizes factors shaping the dynamics of demand in various markets. The insights and perspectives on the markets evaluate opportunities in various segments. The opportunities in the segments based on source, application, demographics, sales channel, and end-use are analysed, which will determine growth in the markets over the next decade.

Our exclusive blend of quantitative forecasting and trends analysis provides forward-looking insights for thousands of decision-makers, made possible by experienced teams of Analysts, Researchers, and Consultants. The proprietary data sources and various tools & techniques we use always reflect the latest trends and information. With a broad research and analysis capability, Transparency Market Research employs rigorous primary and secondary research techniques in all of its business reports.

Contact Us

Nikhil Sawlani
Transparency Market Research Inc.
CORPORATE HEADQUARTER DOWNTOWN, 1000 N. West Street, Suite 1200, Wilmington, Delaware 19801 USA
Tel: +1-518-618-1030
USA - Canada Toll Free: 866-552-3453
Blog: https://tmrblog.com

Transparency Market Research


What is FedRAMP High P-ATO? FedRAMP High Compliance and Certification Explained – Security Boulevard

FedRAMP compliance is a security and compliance accreditation requirement for commercial cloud service providers (CSPs) looking to sell their solutions to US Government agencies. FedRAMP certifications are managed by the GSA, the US Government agency tasked with operating the program. Federal agencies select and procure commercial cloud services based on their security requirements, which map to specific security levels called baselines. There are four major security baselines in the FedRAMP program: High, Moderate, Low, and Low-Impact SaaS (LI-SaaS).

What is FedRAMP Compliance?

FedRAMP is a government-wide program for authorizing cloud services that was established by Congress and is managed by the GSA. The FedRAMP program provides a standardized approach to securing systems, assessing security controls, and continuously monitoring cloud services used by federal agencies. It allows commercial organizations to streamline the compliance and certification process: certify once, use many times across agencies. The program's key participants are the FedRAMP PMO, the JAB, federal agencies, cloud service providers, and third-party assessor organizations (3PAOs). The FedRAMP PMO (Program Management Office) is headed by the GSA and serves as the facilitator of the program. The office's responsibilities include managing the program's day-to-day operations and creating guidance and templates for agencies and cloud service providers to use for developing, assessing, authorizing, and continuously monitoring cloud services per federal requirements.

FedRAMP High Baseline

The FedRAMP High baseline is based on Federal Information Processing Standard (FIPS) 199, which provides the standards for categorizing information and information systems. It is important that commercial cloud service providers understand the impact level of their offering(s) and correlated security categorization when developing their authorization strategy. The baselines are developed across three security objectives: Confidentiality, Integrity, and Availability.

High Impact data is usually in Law Enforcement and Emergency Services systems, Financial systems, Health systems, and any other system where loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals. FedRAMP introduced their High Baseline to account for the governments most sensitive, unclassified data in cloud computing environments.

The FedRAMP Marketplace has around 300 authorized commercial cloud services, of which less than 10% are accredited at the FedRAMP High baseline. This presents a significant competitive advantage for commercial cloud providers looking to offer their services to meet sensitive mission requirements. There are 421 security controls that must be implemented based on the NIST Special Publication 800-53 Rev 4 requirements. The FedRAMP High baseline based on NIST Special Publication 800-53 Rev 5 is expected to have 392 controls.

Accelerating FedRAMP High Compliance and Certification

Conducting market research and getting a sense of options and trends is essential to making an informed decision on selecting the right FedRAMP ATO (Authority To Operate) strategy.

Here are some available links with additional content for research.

How much does it cost to get FedRAMP compliant and obtain an ATO?

This blog post provides details on specific cost line items and critical drivers. The blog post also includes comments from FedRAMP SMEs and CISO/CTOs of companies that have successfully achieved FedRAMP compliance.

Are you interested in FedRAMP certification? Schedule a free consultation to learn more about our FedRAMP Accelerator Assessment that can reduce the time and cost of your project by over 40%.

*** This is a Security Bloggers Network syndicated blog from Blog Archives - stackArmor authored by stackArmor. Read the original post at: https://stackarmor.com/fedramp-high-ato-explained/


Cloud Native Identity and Access Management in Kubernetes – The New Stack

Cloud native applications deserve a cloud native identity and access management (IAM) system. In this article, I will highlight some cloud native principles and discuss their roles in deploying an IAM system. Further, I'll demonstrate how a single IAM system can serve customized APIs in Kubernetes using cloud native principles and also show an API-first approach. Below, IAM is loosely used to refer to any type of IAM system, including a customer identity and access management (CIAM) system.

Cloud native refers to a software approach using the tools, services and technologies offered in cloud computing environments. Examples of cloud native technologies include containers, related platforms such as Kubernetes, and immutable infrastructure and declarative APIs with tools like Terraform. The goal is to take advantage of the scale, agility and resilience provided by modern cloud computing to build, deploy and manage dynamic applications. The following principles apply to any cloud native application but are essential for an IAM system.

Elasticity is the ability of a system to dynamically adapt its capacity in response to changing demands. Elastic systems scale up (grow) and scale down (shrink) depending on the needs and policies. This flexibility is a primary driver of cloud computing. An IAM system may encounter high-demand spikes when many users log in simultaneously, for example, at the beginning of a work day or when launching a campaign. A cloud native approach allows for quickly allocating more resources, such as scaling an IAM system without affecting user experience.

Resilience describes the ability to handle failures and recover from faults. Applications should be designed to tolerate failures such as unexpected network latency. For example, on an architectural level, a resilient system might include redundant deployments in different availability zones, allowing one deployment to take over if the other fails. If the IAM system is down, no users, employees or customers can log in, which could significantly hurt a business's revenue. Therefore, an IAM system must remain functional even when failures occur.

Observability provides visibility into the state of the system. It allows teams and tools to take action if required (like restarting an instance). Observability may be combined with traceability or auditability, and is therefore essential for compliance and security. Not only is it important to know the operational state of the IAM system, but also its security state. Observability is key to detecting fraudulent activity in real time and helping security teams react to security breaches.

Automation is the process of replacing manual tasks with automated processes. It includes DevOps features like continuous integration and deployment to deliver and change software on a regular basis. Such automation enables an IAM system to be deployed in a repeatable manner. In particular, it should be possible to automatically scale the IAM system and quickly replace or update instances.

To fulfill the above principles, cloud native applications must adopt certain characteristics. For cloud native IAM systems, these common traits include:

Cloud native is tightly coupled to microservice architecture, in which the features of an application are implemented by loosely coupled services. Each service has its own codebase and explicitly defined dependencies. Since the services are loosely coupled, a microservice architecture can scale horizontally. To scale horizontally means that one or more services can be duplicated to increase capacity if necessary. This task becomes particularly easy with containerized services.

Within an IAM system, these microservices typically include:

All of these services should be able to scale independently, and any administrative or maintenance task should run separately from the application. That is, an IAM system should have a separate administrative service to manage the configuration. The administrative service could include a self-service portal for developers to register their OAuth 2.0 clients, for example. It can also be part of the continuous integration/continuous delivery (CI/CD) pipeline that triggers an update of the IAM system when configuration changes.

Microservices communicate over APIs, and a service commonly relies on many other backend services as part of its normal operation. A cloud native application should not make any distinction between local or third-party backend services but communicate over standard interfaces and protocols. In this way, it can maintain the loose coupling between services and resources.

With loose coupling and standard interfaces, any resource can be replaced during runtime without updating the related services. For an IAM system, these resources commonly include databases like credential stores, token stores or user stores. An IAM system must also integrate with email or SMS providers to send users one-time passwords (OTPs) or activation links.
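To illustrate the loose coupling described above, here is a minimal Python sketch of hiding an outbound dependency (an email or SMS provider used to deliver OTPs) behind a small interface. The class and method names are hypothetical and not taken from any particular IAM product.

```python
# Hypothetical sketch: callers depend only on the OtpSender interface,
# so the concrete provider can be swapped at runtime or via configuration
# without changing the services that send one-time passwords.
from abc import ABC, abstractmethod


class OtpSender(ABC):
    @abstractmethod
    def send(self, recipient: str, otp: str) -> None:
        """Deliver a one-time password to the recipient."""


class EmailOtpSender(OtpSender):
    def send(self, recipient: str, otp: str) -> None:
        # A real implementation would hand the message to the configured
        # email provider over SMTP or its HTTP API.
        print(f"Emailing OTP {otp} to {recipient}")


class SmsOtpSender(OtpSender):
    def send(self, recipient: str, otp: str) -> None:
        # A real implementation would call the SMS gateway's API.
        print(f"Texting OTP {otp} to {recipient}")


def deliver_otp(sender: OtpSender, recipient: str, otp: str) -> None:
    # The authentication service never references a concrete provider.
    sender.send(recipient, otp)
```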

As mentioned above, observability is vital for an IAM system. Therefore, a cloud native IAM system must support integration with observability tools. That includes publishing metrics in a standardized form, sending logs to the log-management solutions, and providing tracing and audit information.

Note that the IAM system itself is an important backend service for applications. As such, it should provide a standardized API. In other words, it should support standard protocols that other services can integrate with. OAuth 2.0 and OpenID Connect are two examples of protocols an IAM system is expected to support.
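As a rough illustration of what integrating over such standard protocols can look like, the Python sketch below fetches an IAM system's OpenID Connect discovery document to locate its endpoints. The issuer URL is a hypothetical placeholder, not a real deployment.

```python
# Hypothetical sketch: a backend service locates the IAM system's endpoints
# through the standard OpenID Connect discovery document instead of
# hard-coding vendor-specific paths.
import requests

ISSUER = "https://idsvr.example.com/oauth"  # placeholder issuer URL


def discover_endpoints(issuer: str) -> dict:
    """Fetch the OpenID Connect discovery metadata for the given issuer."""
    response = requests.get(
        f"{issuer}/.well-known/openid-configuration", timeout=5
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    metadata = discover_endpoints(ISSUER)
    # Only standard metadata keys are used, so the IAM backend can be
    # replaced without changing this integration code.
    print(metadata["token_endpoint"])
    print(metadata["jwks_uri"])
```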

Each microservice should manage its own data, meaning that services should communicate via APIs and not share data stores. Therefore, the IAM's authentication service, token service and user management service should store credentials, token data and user account data in separate locations. In addition to isolating data, cloud native applications should keep any stateful data, such as session data, in backing services shared by the instances rather than inside the instances themselves. In this way, instances of services become disposable. They can start, stop or be replaced without losing data.

Stateless and disposable components are essential to automation and are key when deploying containers. Moreover, they simplify routing rules because a load balancer does not need to keep track of the state either. Consequently, authentication or authorization can continue seamlessly, even when a node of the IAM system gets torn down.

When working with cloud native applications, one recommendation is to separate the steps for building, running and deploying. Use a CI/CD tool to automate builds and deployments. For example, create container images for every build and deploy a new version of the system based on the new image.

Aim for the different environments (development, stage and production) to be as identical as possible. Once more, containers are a great tool for that purpose because you can easily reuse the same image in different environments.


Another good approach is to share the configuration between the environments but keep environment-specific configuration parameters in environment variables. If the configuration is file-based and does not contain secrets, it can easily be put into version control.

That way, the configuration can be changed outside the running application, and the CI/CD pipeline can take over administrative tasks. Then there is no need to enable administrative support in the IAM application, thus reducing the attack surface and minimizing the risk of the environments diverging. At the same time, the version control system provides auditability for the configuration file.
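As a minimal sketch of that approach, the Python snippet below resolves environment-specific parameters from environment variables at startup so the same configuration can be promoted unchanged between environments. The variable names are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: environment-specific values and secrets come from
# environment variables (for example, injected from Kubernetes Secrets),
# so the shared configuration file can live in version control.
import os


def load_runtime_settings() -> dict:
    """Resolve deployment-specific parameters when the instance starts."""
    return {
        "base_url": os.environ["BASE_URL"],
        "database_url": os.environ["DATABASE_URL"],
        # Never commit this value; it is provided at runtime only.
        "signing_key_passphrase": os.environ["SIGNING_KEY_PASSPHRASE"],
        # Optional values can fall back to sensible defaults.
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }


if __name__ == "__main__":
    settings = load_runtime_settings()
    print({k: v for k, v in settings.items() if k != "signing_key_passphrase"})
```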

As mentioned above, containers are the ultimate tool for cloud native apps. When used correctly, containers are self-contained, disposable, resilient and scalable. Kubernetes is the de-facto platform for managing containers and containerized applications. It is, therefore, the first choice for managing a cloud native IAM system as well.

When deploying a cloud native IAM in Kubernetes, you get more power and control than when consuming a SaaS product. Obviously, you can choose the product for the IAM system. Some vendors, like Curity, use an elastic licensing model that does not add extra licensing costs when scaling automatically.

In addition, you can also control the deployment in Kubernetes and reduce the attack surface. For example, you can configure Kubernetes to expose only certain endpoints to the internet and keep other endpoints private.


An IAM system should expose status information and metrics. Health-status endpoints and metrics help to implement auto-healing and scaling for the containers. Kubernetes can replace a broken container automatically if health checks fail.

If certain metrics values pass a threshold, new instances can be added to increase the capacity. A cloud native IAM system must support some integration with the Kubernetes Control Plane, observability tools and auto-scaling features of the cloud platform to improve availability and resilience.
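To make this concrete, here is a minimal Python sketch of the kind of health and metrics endpoints that a liveness probe or a metrics scraper could poll. The paths and the payload format are illustrative assumptions, not those of any specific IAM product.

```python
# Hypothetical sketch: a tiny HTTP server exposing a health endpoint for
# Kubernetes probes and a metrics endpoint a scraper could poll to drive
# auto-scaling decisions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS_TOTAL = 0  # stand-in for real runtime metrics


class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUESTS_TOTAL
        REQUESTS_TOTAL += 1
        if self.path == "/healthz":
            # If this stops returning 200, Kubernetes restarts the container.
            status, body = 200, b"OK"
        elif self.path == "/metrics":
            status = 200
            body = json.dumps({"requests_total": REQUESTS_TOTAL}).encode()
        else:
            status, body = 404, b"Not Found"
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```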

Typically, each IAM service is deployed in a separate container and together they form a cluster. Kubernetes takes care of service discovery and DNS resolution, among other duties within the cluster.

Consequently, new services are automatically detected, and routing rules are automatically set up to enable services to receive requests and responses. This feature is important for the magic behind auto-healing and scaling but also for update procedures, where one part of the IAM system after the other gets replaced with a new version to keep things working.

As an option, a service mesh can be added to the cluster to protect and improve inter-service communication. Typically, each container in a service mesh is accompanied by a proxy that can, for example, encrypt communication between the services, handle load balancing or enforce policies.

Security policies for the IAM cluster are configured in the same manner as for any other application running in a Kubernetes cluster. In particular, you can make use of technologies such as SPIFFE to automatically manage workload identities of the IAM system.

It may require several services to satisfy the diverse requirements of API consumers. For this and security reasons, place an ingress controller or API gateway at the edge of the IAM cluster. Not only can an API gateway obfuscate the internals of the cluster and provide customized APIs by packaging (micro) services into different products, but it also provides security measures such as throttling traffic or request validation.

The IAM system is ultimately a specialized API that must meet its consumers' requirements despite standards-based constraints. Therefore, the API-first approach is applicable even for an IAM. In an API-first approach, you start designing the API according to the needs of its (future) consumers.

Now, I strongly discourage you from writing your own IAM system. However, I still want to stress the importance of selecting an IAM solution that lets you design an IAM-specific API as part of the deployment. Ensure that the API you expose via the API gateway meets the demands of your consumers (clients, in OAuth 2.0 terms). For example, the IAM system's authentication and token services are typically considered external services, whereas user management typically serves an internal audience.

At Curity, we recommend issuing so-called opaque access tokens to external clients for security reasons. An organization cannot control external clients and should limit the impact of leaked access tokens. Opaque tokens are just random strings with no further meaning outside the IAM system.

Consequently, the impact of such a token being lost or stolen is limited. Internal consumers, on the other hand, may receive structured tokens like JWTs (JSON Web Tokens). This approach is called the phantom token pattern. The idea is to have pairs of opaque and structured tokens where a reverse proxy or an API gateway uses the opaque token as a reference to fetch the structured token that it can forward to the APIs of the application.


Internal consumers can benefit from the JWTs without compromising security. The pattern obfuscates the details from the client, hence the name phantom token. It only requires simple plugins in the Kubernetes ingress controller, for example.
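As a rough sketch of such a plugin, the Python snippet below performs the exchange at the heart of the phantom token pattern: the gateway introspects the opaque token (RFC 7662) and forwards the JWT it receives to internal APIs. The URL, credentials, and the use of an application/jwt response are illustrative assumptions rather than a definitive implementation.

```python
# Hypothetical sketch: a gateway-side helper that swaps the client's opaque
# access token for its structured (JWT) counterpart via token introspection.
from typing import Optional

import requests

INTROSPECTION_URL = "https://idsvr.example.com/oauth/introspect"  # placeholder
GATEWAY_CLIENT_ID = "api-gateway"    # placeholder credentials the gateway
GATEWAY_CLIENT_SECRET = "change-me"  # uses to authenticate itself


def exchange_opaque_for_jwt(opaque_token: str) -> Optional[str]:
    """Return the JWT for an opaque token, or None if the token is not active."""
    response = requests.post(
        INTROSPECTION_URL,
        data={"token": opaque_token},
        headers={"Accept": "application/jwt"},  # ask for the JWT representation
        auth=(GATEWAY_CLIENT_ID, GATEWAY_CLIENT_SECRET),
        timeout=5,
    )
    response.raise_for_status()
    if response.headers.get("Content-Type", "").startswith("application/jwt"):
        # Forward this value upstream, e.g. in the Authorization header.
        return response.text
    return None  # revoked, expired, or unknown token


if __name__ == "__main__":
    print(exchange_opaque_for_jwt("opaque-example-token"))
```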

Not all features and services of an IAM system are equally suitable for all types of clients. For example, user management is typically reserved for internal clients and not hosted next to common APIs for external clients. However, it is not enough to simply divide an IAM system into static services.

For an API-first approach, the services of the IAM system must be customizable. For example, the authentication service can serve many authentication methods, but for various reasons, some methods should be only accessible for certain types of clients.

The token service publishes endpoints for different OAuth 2.0 or OpenID Connect flows. You may need to configure the endpoints differently and only expose some for a certain group of clients and others for another group, or you may want to be able to scale different flows independently because some are more requested than others. The custom services and IAM system configuration should be manageable even for complex setups.


Preferably, there should be only one configuration applicable for all (customized) services of the IAM system.

To achieve that, the Curity Identity Server offers three runtime service types: an authentication service, a token service and a user management service. Each service provides a list of endpoints. Which endpoints a service includes depends partly on the supported features and can be adapted individually.

A service does not automatically expose all endpoints, but endpoints are mapped to a service role. A runtime instance of the Curity Identity Server is then assigned a service role and deployed in a container that publishes the listed endpoints of that role. In that way, the overall set of features of a service can be divided and spread over (specialized) service roles and containers. The containers can scale independently and serve different clients.

You can add new instances or remove old ones dynamically without affecting others. This is possible because the Curity Identity Server follows the principles of independent and stateless services.

A cluster of the Curity Identity Server may include runtime instances with different service roles. This is called an asymmetric cluster. However, even when runtime instances expose different endpoints and run different service roles, they all use the same (global) configuration.

Runtime instances are independent of each other and, in particular, of the admin service. They can operate completely isolated as long as they have a working configuration. The configuration may be parameterized with environment variables to easily be ported from one environment to another. With that approach, it is feasible to add the configuration file under version control, track and audit changes, and let the CI/CD pipeline handle the deployment completely automated, as you would expect working with cloud native applications.

Cloud native principles work well for an IAM system. If implemented properly, they improve the performance, reliability, and security of the deployment. By sticking to a standard model, such as containers and Kubernetes, it is possible to deploy the IAM system in any cloud computing environment, including private clouds.

To get the best out of an IAM system, ensure it is flexible and apply an API-first approach. This means first considering your requirements and designing the IAM system to suit them. Standardized interfaces and customizable, independent services, such as those offered by the Curity Identity Server, help on the way to running a cloud native IAM system that fits your needs.


Global Disaster Recovery-as-a-Service Market Expected to Grow … – PR Newswire

The global disaster recovery-as-a-service market is expected to grow primarily due to the various benefits of the cloud computing service model of disaster recovery. The public cloud sub-segment is expected to flourish immensely. The market in the North America region is predicted to grow at a high CAGR by 2031.

NEW YORK, April 3, 2023 /PRNewswire/ --

Global Disaster Recovery-as-a-Service Market Forecast Analysis:

As per the report published by Research Dive, the global disaster recovery-as-a-service market is expected to register a revenue of $60,404.3 million by 2031 at a CAGR of 23.9% during the forecast period 2022-2031.

Segments of the Disaster Recovery-as-a-Service Market

The report has divided the disaster recovery-as-a-service market into the following segments:

Operating Model: managed DRaaS, assisted DRaaS, and self-service DRaaS

Service Type: real-time replication, backup & restore, data protection, and others

Deployment Mode: public cloud and private cloud

Organization Size: large enterprise and small & medium-sized enterprises

End-use Industry: Banking, Financial Services, and Insurance (BFSI), IT & Telecommunication, government & public sector, healthcare, retail & consumer goods, and media & entertainment, and others

Region: North America, Europe, Asia-Pacific, and LAMEA

To get access to the Complete PDF Sample of Disaster Recovery-as-a-Service Market Click Here!

Key segment and sub-segment highlights:

Operating Model: Managed DRaaS (fastest growth by 2031). The ability of managed DRaaS to handle routine management chores such as firewalls, load balancing, monitoring, antivirus, and operating system patching is expected to push the growth of this sub-segment further.

Service Type: Real-time Replication (highest market share in 2021). The wide range of applications offered by real-time replication, including data distribution to other servers for application processing, is expected to propel the sub-segment forward.

Deployment Mode: Public Cloud (most dominant in 2021). The advantages offered by public cloud storage, such as scalability, agility, and economic flexibility, are anticipated to push the growth of this sub-segment in the forecast period.

Organization Size: Large Enterprise (highest market share in 2021). Growing investments by large enterprises in advanced technology and their adoption of cutting-edge DRaaS solutions to boost overall efficiency and effectiveness support this sub-segment.

End-Use Industry: Banking, Financial Services, and Insurance (BFSI) (highest CAGR by 2031). The growing need for the BFSI sector to have a comprehensive disaster recovery plan that guarantees business continuity is anticipated to offer numerous growth opportunities to the sub-segment in the forecast period.

Region: North America (most profitable by 2031). The growing number of businesses in the telecommunications, information technology, and financial services industries is predicted to propel the market in the forecast period.

Check out COVID-19 Impact on Disaster Recovery-as-a-Service Market. Connect with an Expert Analyst or Schedule a call

Dynamics of the Global Disaster Recovery-as-a-Service Market

The various benefits, such as cost-effectiveness, offered by the cloud computing service model of disaster recovery are expected to be the primary growth driver of the disaster recovery-as-a-service market in the forecast period. Additionally, the growing scope of large-scale internet services such as data backup and information recovery is predicted to propel the market forward. However, according to market analysts, data breaches and security issues due to increasing data volumes might become a restraint on the growth of the market.

The flexibility offered by disaster recovery-as-a-service in facilitating efficient recovery of crucial data during different disasters such as floods, hurricanes, earthquakes, and wildfires is predicted to offer numerous growth opportunities to the market in the forecast period. Moreover, the growing popularity of cloud-based services is expected to propel the market growth in the coming period.

COVID-19 Impact on the Global Disaster Recovery-as-a-Service Market

The Covid-19 pandemic disrupted the routine lifestyle of people across the globe, and the subsequent lockdowns adversely impacted industrial processes across all sectors. The disaster recovery-as-a-service market, however, was positively impacted by the pandemic. Lockdowns and travel restrictions during the pandemic led to an increase in online transactions, which caused a growth in demand for data security solutions. This helped the market register a positive growth rate during the pandemic.

Key Players of the Global Disaster Recovery-as-a-Service Market

The major players of the market include

These players are working on developing strategies such as product development, merger and acquisition, partnerships, and collaborations to sustain the market growth.

For instance, in January 2022, Dataprise, a leading IT service management firm, announced the acquisition of Global Data Vault, a disaster recovery-as-a-service (DRaaS) and Backup-as-a-service (BaaS) provider. This acquisition is expected to expand the market share of the acquiring company, i.e., Dataprise in the coming period.

Request Customization of Disaster Recovery-as-a-Service Market Report as per your Definition and Format & Avail of Amazing Discount

What the Report Covers

Apart from the information summarized in this press release, the final report covers crucial aspects of the market including SWOT analysis, market overview, Porter's five forces analysis, market dynamics, segmentation (key market trends, forecast analysis, and regional analysis), and company profiles (company overview, operating business segments, product portfolio, financial performance, and latest strategic moves and developments.)

More about Disaster Recovery-as-a-Service Market:

Some Trending Reports:

About Research Dive

Research Dive is a market research firm based in Pune, India. Maintaining the integrity and authenticity of its services, the firm provides services that are solely based on its exclusive data model, compelled by the 360-degree research methodology, which guarantees comprehensive and accurate analysis. With unprecedented access to several paid data resources, a team of expert researchers, and a strict work ethic, the firm offers insights that are extremely precise and reliable. Scrutinizing relevant news releases, government publications, decades of trade data, and technical & white papers, Research Dive delivers the required services to its clients well within the required timeframe. Its expertise is focused on examining niche markets, targeting their major driving factors, and spotting threatening hindrances. Complementarily, it also has seamless collaborations with major industry aficionados, which further give its research an edge.

Contact:
Mr. Abhishek Paliwal
Research Dive
30 Wall St. 8th Floor, New York NY 10005
(P) +91-(788)-802-9103 (India)
+1-(917)-444-1262 (US)
Toll Free: 1-888-961-4454
E-mail: [emailprotected]
Website: https://www.researchdive.com
Blog: https://www.researchdive.com/blog/
LinkedIn: https://www.linkedin.com/company/research-dive/
Twitter: https://twitter.com/ResearchDive
Facebook: https://www.facebook.com/Research-Dive-1385542314927521

Logo: https://mma.prnewswire.com/media/997523/Research_Dive_Logo.jpg

SOURCE Research Dive
