
Logicdrop, Vaadin Tout Cloud Native Java Runtime Quarkus – The New Stack – thenewstack.io

When the cloud began to take over enterprise application hosting, many developers also found that they needed new technologies, languages and databases to do the job, hence the growth of JavaScript, NoSQL and Kubernetes.

Alex Handy

Alex is a technical marketing manager at Red Hat. In his previous life, he cut his teeth covering the launch of the first iMac before embarking upon a 20-plus-year career as a technology journalist.

Now that the cloud is a permanent fixture inside enterprise development shops, it's only natural to bring legacy applications to the cloud. But deciding just how to do that remains an issue of contention and discussion for many teams.

For enterprises with a large variety of Java applications in their environments, Spring Boot has long been the path forward to the cloud for enterprise Java users. But over the course of 2021, Quarkus, the Kubernetes-native pure Java stack, began to gain momentum as an alternative path to the cloud for Java applications.

Now, in 2022, that growth is spurring Quarkus to become one of the most popular topics for Java developers. It's also allowing companies to adopt a new stack.

That includes developers like KimJohn Quinn, co-founder of Logicdrop. His company makes a business automation platform that abstracts complexity away from users, creating a low-code platform for building business rules in the cloud.

Quinn said his company has been using Java since it was founded in 2010. He said he and the team were attracted to the productivity and time savings that Quarkus provides.

Quinn said the team at Logicdrop had adopted Quarkus less than a year ago. "In six months, we ported almost 70% of our platform over to Quarkus, with two engineers doing most of that work."

Quinn is now a full believer in Quarkus and has begun planning the future of the company's Java applications through its lens.

"Where at one time we were seriously considering other languages to replace most of our existing Spring Boot-based platform, we now think Quarkus is the future and have standardized on it as our primary stack. Quarkus lets us use the Java we all know and love, as well as accepted standard APIs. We have a much more cohesive microservice environment that developers of any skill set are comfortable working in, it fits very well into our CI/CD process, configuration is simple and straightforward, and the developer tools are solid. Adding icing to the cake, we get native executables comparable to, if not smaller than, other alternatives such as Node, Python or C# to boot. Quarkus has been a refreshing change, and it really does make developing in Java fun again."

"The Java world is rich with frameworks and APIs, each having their own merits," said Quinn. "For any developer, this plethora of choices can be overwhelming, especially when building a product that needs to be performant, maintainable and flexible. Spring is almost always the obvious go-to choice because of the richness of its libraries, the length of time it has been around and the familiarity many developers have with it, but it is those very reasons we sought out an alternative. We wanted the simplicity of Google Guice for DI/IoC but also a robust platform with enough supporting libraries to meet our needs in a more conventional Java way. Quarkus literally has everything we need."

Since Spring Boot has a rich battle-tested ecosystem supporting it, the company had hoped that it would enrich the development process and simplify integration with various Java and cloud technologies, making it faster and easier.

"In the end, as our platform grew, the combination of opinionated views on configuration, injection magic, and very seldom definitive answers on how to perform everyday tasks beyond the obvious meant we found ourselves dealing with an increasingly complicated and bloated system," Quinn said.

"Our core product, originally based on Spring Boot, bent Drools to our will before Quarkus and Kogito were publicly available. Later we added new technologies such as Reactive streams, Camel integration and messaging," he said.

"Still, a good portion of our platform, probably like many others, relied on common boilerplate approaches for security, MVC (model-view-controller) and CRUD. Spring was overkill in terms of what we needed versus what we used versus how we used it. Further complicating this was our head-on adoption of Kubernetes and Knative. Using Spring Boot, our container sizes and startup times were too heavy to play nicely in our environment."

That was one of the major victories from moving to Quarkus, said Quinn.

"When we made the jump to Quarkus, it dramatically changed how we did things for the better, and it played with Kubernetes and Knative out of the box. Everything we needed, and more, was available in the Quarkiverse, and in short periods of time, we could prototype features that would normally have taken us much longer to just get our arms around," said Quinn.

Quinn said that Quarkus is now the basis for the company's core application and some of its other projects as well.

"We shifted to Quarkus, and even in our early experimentation we started to see a dramatic improvement in everything. We went from one- to two-week first merge cycles to a matter of hours. Even to this day, we are pushing things out in hours. Features are flying in and out lightning fast, and we've been able to bring in developers who were formerly Node developers and Python developers, and we have a common understanding across our whole Quarkus platform by everybody. There's productivity across the board. Quarkus has let us focus on what we need to do rather than how to get there first," said Quinn.

Quinn particularly likes the way that Quarkus "just works," something not necessarily familiar to Java developers.

"Getting set up was no problem. What pushed us over the top coming from Spring Boot: we took the OpenID Connect extension, hooked it into Keycloak and it worked. As an example, we said, 'Let's throw it into Auth0 and see what happens.' We were sure it was going to explode. We sent it, and I got a 401 HTTP error. That's exactly what I'm looking for. We said, 'Let's get a token,' and it logged in and executed. When a single property changed, we bounced between Keycloak and Auth0 and everything ran," said Quinn, explaining how easy it was to tie the Quarkus landscape into Kubernetes authentication procedures.
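Assuming the standard Quarkus OIDC extension Quinn describes, swapping identity providers can come down to a single setting such as `quarkus.oidc.auth-server-url`. A minimal sketch of the configuration (every URL, realm, client name and secret here is hypothetical):

```properties
# Point Quarkus at a Keycloak realm (hypothetical host and realm name)
quarkus.oidc.auth-server-url=https://keycloak.example.com/realms/my-realm
quarkus.oidc.client-id=my-app
quarkus.oidc.credentials.secret=${OIDC_SECRET}

# ...or switch the same application to Auth0 by changing the server URL:
# quarkus.oidc.auth-server-url=https://my-tenant.auth0.com
```

The extension discovers the provider's token and authorization endpoints from that base URL, which is what makes bouncing between providers a one-property change.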

Matti Tahvonen, senior developer advocate at Vaadin, said Quarkus has increased productivity among its developers as well. The company offers a web framework for Java applications, enabling modern UX development for modern browsers while still using Java behind the scenes.

Because Vaadin offers a UI framework for Java developers, the company can spot trends among them fairly easily, and it sees Quarkus gaining steam.

"To date, Spring Boot has the deepest integrations with Vaadin, created because Spring Boot was the most popular path to the cloud for many Java developers. Vaadin also has lots of users on Java EE servers, as well as some companies that want to build their own stack on plain servlet containers. Lately there have been requests for Quarkus as well," Tahvonen said.

But the reason Vaadin is now providing official integrations for Quarkus is not simply because customers are asking for it: the company had already chosen to build its big product with Quarkus as well. Indeed, some customers had already built community integration tools for Vaadin and Quarkus before Vaadin had a chance to build its own. "The demand is there," said Tahvonen.

The customers he has seen transitioning to Quarkus are coming mostly from Java EE servers.

"So if they have been using JBoss in the past, now they are looking into Quarkus as a replacement for Spring Boot, because Spring Boot has totally changed how people package and deploy their applications, and Quarkus is doing something similar, so the server is part of the application and not the other way around," he said.

And Quarkus is pure Java, like Vaadin, another thing Tahvonen and his team like about it.

"Quarkus and Vaadin are a great combination," said Tahvonen. "Vaadin is pure Java, only Java. There's no XML, no HTML, no CSS, unless you want to go to a lower abstraction level. You can work in one single language. The biggest benefit of Vaadin is that you don't need to do context switches between languages and execution platforms."

While Java may be viewed as an older language, Quarkus is helping companies around the world modernize their core applications for cloud deployment. Decathlon was able to ramp up to Quarkus from Spring Boot in a single week. Abraxas used Quarkus to build the backbone of its new tax solution for Swiss government tax agencies. Vodafone Greece migrated dozens of applications and improved performance over Spring Boot.

Quarkus offers a host of modern luxuries for the enterprise Java developer. From faster boot and REST response times to smaller memory footprints, Quarkus can be used in containers without overwhelming the host server with mundane and traditional Java overhead. And it's open source, so developers can contribute to it if they want to give back to the community.


WD My Cloud Home: My Support Nightmare – The Mac Observer

Way back in 2017, I wrote about My Cloud Home, a (then) new Network Attached Storage (NAS) device from Western Digital (WD).

What's a NAS? Like Dropbox, Google Drive, iCloud Drive, and other cloud-based storage services, a NAS provides many of the same remote storage features with a delightful difference: no monthly charges.

I was impressed by the then-new My Cloud Home, which was reasonably priced, easier than most to configure and use, and surprisingly full-featured for its price (starting at $159 for a 2TB model). I recommended it without hesitation and have used it for personal network storage ever since.

It performed well for the first couple of years before becoming less and less reliable. Its icon, which had appeared on my desktop for months at a time without intervention, began disappearing and requiring me to log in manually. It was annoying, but I was always able to log in and access my files. At least until recently.

Last week when I attempted to log in, instead of mounting its icon on my desktop, I received an error message: "My Cloud is having trouble connecting to the server. Check your internet connection and try again."

My internet connection worked flawlessly for everything except my My Cloud Home, so I tried again. And again. When I was still unable to connect to my My Cloud Home after 24 hours, I contacted WD support and explained that my network-attached storage device is worthless if it's unable to connect to Western Digital's servers.

After four days without a response, I submitted a second support ticket and asked to escalate my issue. I received an automated reply that they would escalate my case and follow up as soon as possible.

It's been six days since I could access my files from the desktop, and I'm still awaiting a response.

To be fair, while I can't log in and mount an icon for my My Cloud Home device on the desktop, I can access my files via a web browser or the iOS app. It's inconvenient, but I can see and download my files when necessary.

Still, I got the device because it behaves like a directly connected storage device on my Mac, displaying an icon on my desktop as though it were a USB or Thunderbolt drive. Managing its files through a web browser is awkward, to say the least.

My point is that after six days without a response from Western Digital support, I no longer recommend WD products. Waiting a week or more to hear from a support rep is unconscionable. I'm not sure other storage vendors are better at support, but they couldn't be any worse.

Caveat Emptor.


Fujitsu is ending its mainframe and Unix services – TechRadar

Fujitsu has quietly revealed its plans to shutter both its mainframe and Unix server system business by the end of this decade.

In a notice posted to the Japanese IT giant's website, the company announced its plans to stop selling its mainframes and Unix server systems by 2030, though support will continue for an additional five years.

Fujitsu will stop manufacturing and selling its mainframe systems by 2030 as well as discontinue its Unix server systems by the end of 2029. As support services for both portfolios will extend for another five years, 2034 will mark the end of support for its Unix servers while 2035 will be the end of its mainframes.

In its notice, Fujitsu argues that everything in society will be connected by digital touchpoints in the near future which will require new, robust digital infrastructure. As such, businesses will need to reevaluate their existing core systems and embrace a fully digital, hybrid IT model to remain competitive and sustainable.

Fujitsu's plan also includes a timetable for shifting its mainframes and Unix servers to the cloud as part of a new business brand called Fujitsu Uvance.

Through this new brand, the company aims to provide businesses access to computing resources such as HPC using an as-a-service model that will give them access to advanced capabilities when needed.

While the move makes sense for the future of Fujitsu, the company's mainframe customers now have a deadline before which they will need to migrate their mainframe applications to another platform or rebuild them from scratch on modern infrastructure. However, mainframes are a long-term investment for organizations that often handle their most mission-critical applications.

On the Unix server side, customers have things a bit easier as their workloads can be transitioned to Linux without too much of a hassle.

We'll likely hear more from Fujitsu as the company begins winding down both its mainframe and Unix server businesses.

Via The Register


The North American PC And PC/ABS In IT Server Industry is Expected to Reach $81+ Million by 2028 – Yahoo Finance


Dublin, Feb. 23, 2022 (GLOBE NEWSWIRE) -- The "North America PC And PC/ABS In IT Server Market Size, Share & Trends Analysis Report Product By Application (Polycarbonate, Polycarbonate/Acrylonitrile Butadiene Styrene), By Country (U.S., Canada, Mexico), And Segment Forecasts, 2021 - 2028" report has been added to ResearchAndMarkets.com's offering.

The North America PC and PC/ABS in IT server market size is expected to reach USD 81.95 million by 2028, according to a new report by the publisher. The market is expected to expand at a CAGR of 4.3%, in terms of revenue, from 2021 to 2028. Rising demand for cloud storage and internet services from the growing population has increased the installation of data centers and servers, hence propelling the demand for Polycarbonate (PC) and Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS) across the IT server component manufacturers in the region.
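Compounding backwards from the 2028 forecast at the stated CAGR gives the implied market size at the start of the forecast period (a back-of-envelope check, not a figure taken from the report):

```python
# Implied market size at the start of the forecast period, working
# backwards from the 2028 forecast at the stated 4.3% CAGR.
forecast_2028 = 81.95   # USD million, per the report
cagr = 0.043            # 4.3% per year
years = 2028 - 2021     # 7-year forecast horizon

implied_2021 = forecast_2028 / (1 + cagr) ** years
print(f"Implied 2021 market size: ~${implied_2021:.1f} million")
```

This puts the 2021 starting point at roughly $61 million, consistent with 4.3% annual growth reaching $81.95 million by 2028.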

Polycarbonate (PC) is a highly durable, flexible, and thermally resistant resin suitable for manufacturing the twinwall sheets used for thermal separation across the aisle in server rooms. Data centers are a significant component of modern technology, especially technologies that rely on cloud computing, and components such as housings for cooling systems require high heat resistance, for which the aforementioned resins are highly suitable.

Major tech companies such as Google LLC; Amazon.com, Inc.; and HSN, Inc. utilize polycarbonate-based twinwall and multiwall for containment systems to provide reliable IT environments, allowing maximum cooling for servers and components along with physically separating a group of servers and batteries from each other. Furthermore, high conductivity characteristics of polycarbonate and polycarbonate/acrylonitrile butadiene styrene are suitable for the manufacturing of connectors used across the IT industry.

Server racks such as wall mount, floor-standing, open-frame, among others are designed for mounting, organizing, and securing equipment including servers, routers, hubs, switches, and audio/video components. In addition, they provide cable management and enable optimized airflow for increased operational efficiency and prolonged equipment life. Hence, polycarbonate and polycarbonate/acrylonitrile butadiene styrene are suitable for manufacturing server racks.

North America PC And PC/ABS In IT Server Market Report Highlights


In terms of revenue, the twinwall sub-segment accounted for a prominent share in the market in 2020 across both polycarbonate (PC) as well as polycarbonate/acrylonitrile butadiene styrene (PC/ABS) segment due to its high utility for creating a physical barrier across the aisle to manage heat generated from the servers

In 2020, the U.S. accounted for the major market share across North America in terms of volume and is estimated to be more than 80.0%. This is due to the increasing demand for data centers and cloud storage from the rising population across the country

The presence of major tech giants including International Business Machines Corporation (IBM); Google LLC; Facebook, Inc.; and Microsoft Corporation has increased the demand for servers and server components across the country, thereby propelling the demand for polycarbonate and polycarbonate/acrylonitrile butadiene styrene for manufacturing servers and server components

Increasing demand for PC and PC/ABS across the IT server industry has attracted plastic molding companies to the region. Companies such as Server Technology; Tenere Inc.; AIC Inc.; IT Creations, Inc.; and Jameco Electronics are the prominent plastic injection molders providing services for the IT industry across North America

Key Topics Covered:

Chapter 1 North America PC and PC/ABS in IT Server Market: Product by Application Estimates & Trend Analysis
1.1 North America PC and PC/ABS in IT Server Market: Product by Application movement analysis, 2020 & 2028
1.2 Polycarbonate
1.2.1 North America PC and PC/ABS in IT Server market estimates and forecasts, By Polycarbonate, 2017 - 2028 (Tons) (USD Million)
1.3 Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS)
1.3.1 North America PC and PC/ABS in IT Server market estimates and forecasts, By Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)

Chapter 2 North America PC and PC/ABS in IT Server Market: Country Estimates & Trend Analysis
2.1 North America PC and PC/ABS in IT Server Market: Country movement analysis, 2020 & 2028
2.2 U.S.
2.2.1 U.S. PC and PC/ABS in IT Server market estimates and forecasts, 2017 - 2028 (Tons) (USD Million)
2.2.2 U.S. PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
2.2.3 U.S. PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)
2.3 Canada
2.3.1 Canada PC and PC/ABS in IT Server market estimates and forecasts, 2017 - 2028 (Tons) (USD Million)
2.3.2 Canada PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
2.3.3 Canada PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)
2.4 Mexico
2.4.1 Mexico PC and PC/ABS in IT Server market estimates and forecasts, 2017 - 2028 (Tons) (USD Million)
2.4.2 Mexico PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate, 2017 - 2028 (Tons) (USD Million)
2.4.3 Mexico PC and PC/ABS in IT Server market estimates and forecasts, by Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS), 2017 - 2028 (Tons) (USD Million)

Chapter 3 North America PC and PC/ABS in IT Server Market: Market Drivers
3.1 Polycarbonate and PC/ABS Plastics are Utilized Vs. Other Thermoplastics
3.2 Comparative Analysis of Plastic Properties

Chapter 4 North America PC and PC/ABS in IT Server Market: Price Trend Analysis
4.1 North America PC and PC/ABS in IT Server Market: Polycarbonate (PC) Price Trend Analysis, 2017 - 2021 (USD/Kg)
4.1.1 Polycarbonate Pricing Analysis, by Manufacturers, 2020 & 2021 (USD/Kg)
4.2 North America PC and PC/ABS in IT Server Market: Polycarbonate/Acrylonitrile Butadiene Styrene (PC/ABS) Price Trend Analysis, 2017 - 2021 (USD/Kg)

Chapter 5 North America PC and PC/ABS in IT Server Market: Competitive Landscape
5.1 Vendor Landscape
5.1.1 List of IT Server Manufacturers (Market Ranking Analysis)
5.1.2 List of Plastic Injection Molders for IT Servers & IT Server component manufacturers
5.1.3 List of PC and PC/ABS Product Types offered by SABIC for IT Servers

For more information about this report visit https://www.researchandmarkets.com/r/kyiouz


Cloud storage cost challenges and how to tackle them – ComputerWeekly.com

Cloud storage is often positioned as a way to save money. Firms can reduce their overheads by using the public cloud to replace datacentres, hardware and even staff, or so the story goes.

In reality, those cost savings can be hard to achieve. Some of the features that make the cloud attractive, including its ability to scale up quickly, can make costs harder to control. And with more organisations pursuing a multicloud or hybrid cloud strategy, CIOs need to be sure that operational flexibility doesn't come at too high a price.

A recent research report, the Enterprise cloud index from Nutanix, found that 64% of organisations expect to operate in multicloud environments within three years. At the same time, 43% said that managing costs across environments is a challenge. Those surveyed also said they expected moving workloads between clouds would increase costs.

The cloud offers almost unlimited capacity, and the flexibility for organisations to pay for the resources they need. These are key benefits of the cloud. Firms are not constrained by the need to build datacentres, and they don't, or shouldn't, pay for capacity they do not need.

But, as with any other rental agreement, there is a price for flexibility. And cloud providers reward customers who can commit up front, or commit for a longer period of time, with discounts. This makes it harder for IT departments to predict pricing, and, some experts argue, overturns the pay-as-you-go model of cloud.

"The economic model of the cloud is essentially that the more you can commit to use, the better the deal will be," says Stephen Edwards, a digital expert at PA Consulting. "It is not really that different to a mobile phone contract. If you commit to a higher [volume], they will give you a better discount."

This, says Patrick Smith, chief technology officer (CTO) for Europe, Middle East and Africa (EMEA) at storage provider Pure Storage, is pushing buyers into signing larger contracts. They are potentially over-buying, or moving too quickly to the cloud. "One thing with those [contracts] is organisations have to adopt at a certain rate," he says. "If you don't adopt fast enough, the discount diminishes." If organisations fail to onboard to the cloud quickly enough, they face additional costs.
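Edwards's phone-contract analogy and Smith's adoption-rate warning can be made concrete with a toy model: a committed contract is cheaper only above a utilization break-even point, and under-adoption means paying for capacity that is never used. All prices, quantities and the discount rate below are invented for illustration:

```python
# Illustrative only: compares pay-as-you-go spend with a committed-use
# contract at a discount, across different adoption (utilization) rates.
on_demand_rate = 1.00      # cost per unit-hour, pay as you go (made up)
committed_units = 1000     # units committed up front (made up)
discount = 0.30            # 30% discount for committing (made up)

def monthly_cost(used_units: int) -> tuple[float, float]:
    """Return (pay-as-you-go cost, committed-contract cost)."""
    payg = used_units * on_demand_rate
    # The committed fee is owed regardless of how much is actually used;
    # usage beyond the commitment is billed at the on-demand rate.
    committed = committed_units * on_demand_rate * (1 - discount)
    committed += max(0, used_units - committed_units) * on_demand_rate
    return payg, committed

for used in (500, 700, 1000):
    payg, committed = monthly_cost(used)
    print(f"{used} units used: pay-as-you-go {payg:.0f}, committed {committed:.0f}")
```

In this sketch the break-even adoption level is 700 units (the commitment times one minus the discount): below that, the "discounted" contract actually costs more than paying on demand, which is Smith's point about adopting fast enough.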

Added to this is a tendency for the larger cloud providers to promote proprietary technologies that lock buyers in. If it is harder for firms to move between suppliers, they are less likely to be able to consolidate their requirements with a single supplier or use the threat of switching to drive down costs.

This is reinforced by the different architectures and approaches used by cloud providers. Suppliers have their own views of best practice that make it harder to move between clouds and to create a setup that works equally well across multiple clouds.

The situation is made worse by the ease of cloud purchasing: because almost anyone can spin up cloud resources, it is hard for IT departments to maintain control.

"The key challenge in cloud cost management is visibility into the consumption and use of cloud services. For many organisations, adoption of cloud services happens in a decentralised way," says Nick Heudecker, a director at Cribl, a company that helps organisations sift through their IT data.

"Another aspect of visibility is simply understanding the bill at the end of the month from your chosen cloud service providers," he says. "The decentralised nature of cloud consumption in most organisations makes deciphering bills difficult, if not impossible."

Then there is the issue of the underlying pricing itself. Cloud pricing is far from transparent.

According to Aran Khanna, CEO of Archera, a cloud pricing specialist, Amazon's pricing data alone is a 2GB JSON file. This makes comparisons difficult. "You can't just load that up and then put in the Azure pricing sheet," he says. "This is why, when you look at large enterprises' cloud management teams, they have three or four data scientists sitting there. It is no longer an Excel job."

Perhaps the biggest challenge facing firms using the cloud is accurately forecasting demand, and then ensuring what is bought is actually being used.

"From a cloud perspective, the real challenge is cost predictability," says Paul Walker, EMEA technical director at iManage, which provides tools for knowledge workers on top of Azure.

"The key questions that CIOs get asked when trying to build a business case for moving to the cloud are: how much will it cost us annually, over two years and so forth? Also, does a scalable solution mean that our costs will fluctuate month on month and year on year?"

A key source of overspending is buying more capacity than the project or business needs. Forecasting helps here, although it is an inexact science, as part of the point of the cloud is to be able to add capacity quickly.

"Can you make [usage] more predictable? You can do planning based on what you expect to use on the platform, but the issue is it's just too easy to add capacity," warns Oscar Arean, head of operations at cloud service provider Databarracks.

In the past, if someone needed additional compute, they'd ask the IT department, which might decide it was not worth the cost, he added. With the cloud, unless there are clear purchasing controls, it is all too easy for developers and others to add compute, memory or even storage.

Waste also occurs through keeping capacity running once a task has finished, or through failing to use the scalability features of the cloud platform. "One of the big buckets we look at is waste," says Aran Khanna at Archera. "You have to shut stuff down or resize it."

This means finding redundant test-and-dev servers and taking them offline. Firms should look to scale down systems out of hours or during less busy periods. Extending this to storage, firms can move less frequently accessed data to cheaper storage tiers or dedicated archiving products. Business units face hardware limits with on-premise technology; with the cloud, it is too easy to add capacity without considering the cost.
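Moving colder data down a tier is typically automated with a lifecycle policy rather than done by hand. A sketch of an S3-style lifecycle rule, expressed as the configuration dict a client such as boto3 would accept (the bucket name and day thresholds are hypothetical, not recommendations):

```python
# Sketch of an S3-style lifecycle policy: objects move to a cheaper
# infrequent-access tier after 30 days and to an archive tier after 90.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "tier-down-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With boto3 this would be applied roughly like so (not executed here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_policy,
# )
```

The right thresholds depend on access patterns and on retrieval costs for the archive tier, which is exactly the kind of analysis the cost-management teams described above do.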

"This needs active management to flex services up and down according to business requirements," says Terry Storrar, managing director at Leaseweb UK.

Organisations also need to update their architectural approaches to suit the cloud. Not all firms are ready for cloud-native technology such as Kubernetes.

Nonetheless, the worst option is to recreate existing infrastructure in the cloud. Unlike local hardware, cloud systems do not need to be built for usage peaks as they can scale quickly. Virtualisation is less useful than it is on-premise, and it can be costly when virtual machines are using resources all the time. The cloud provides easier and cheaper backup and business continuity systems, even across geographies.

Experts suggest that potential savings of 30-40% are possible through better planning and management, optimising architecture for the cloud, and better purchasing.

"One of the advantages of cloud architecture is that solutions can be very easily procured," says iManage's Paul Walker. "The challenge is to ensure a level of adoption that delivers the return on investment."


Backup heads to cloud as ransomware hits 76% and RTOs/RPOs fail – ComputerWeekly.com

In a disaster recovery scenario, most organisations can't recover the data they want or do it quickly enough.

Meanwhile, ransomware attacks, now firmly on the list of potential disasters, have been suffered by 76% of organisations in the past 12 months, with successful malware entries mostly down to users clicking links and compromised admin systems.

Those are some of the findings of the Veeam Data protection trends report 2022, which questioned more than 3,000 IT decision-makers, mostly in organisations of more than 1,000 employees and in 28 countries.

Top-level findings in the survey included that the average outage lasts 78 minutes and the estimated average cost is $1,467 per minute, or $88,000 per hour, with 40% of servers suffering one unexpected outage a year.
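Those per-minute and per-hour figures are mutually consistent, as a quick bit of arithmetic shows:

```python
# Cross-checking the survey's outage-cost figures.
cost_per_minute = 1_467   # USD, survey estimate
avg_outage_minutes = 78   # survey's average outage length

cost_per_hour = cost_per_minute * 60
avg_outage_cost = cost_per_minute * avg_outage_minutes

print(f"Per hour: ${cost_per_hour:,}")          # matches the ~$88,000 figure
print(f"Average outage: ${avg_outage_cost:,}")  # cost of a 78-minute outage
```

So the rounded $88,000-per-hour headline is simply 60 times the per-minute estimate, and the survey's average outage implies a cost of roughly $114,000.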

A key finding was that recovery time objectives and recovery point objectives (RTOs and RPOs) are not being achieved. That is, there is an availability gap and a protection gap, according to the Veeam view of the survey results.

When asked whether their organisation can recover applications as quickly as service-level agreements (SLAs) demand and whether they can restore all data that SLAs specify, the answers were resoundingly that they couldn't.

Nine out of 10 (90%) said they could not recover data as quickly as they wanted, and 89% said they couldnt recover all they wanted.

Backup is increasingly making use of the cloud, and by 2024, 79% of organisations expect to use the cloud in some form for backup purposes. That is a projected increase from 67% this year.

Meanwhile, disaster recovery (DR) is also expected to undergo a big shift to use of the cloud, according to the survey. While 34% managed DR using their own datacentres in 2022, the expectation of respondents was that 53% would be done via the cloud and a disaster-recovery-as-a-service (DRaaS) provider by 2024, although 28% of data would still be held on customers own sites.

How to recover from a disaster varied, with most (61%) saying they would restore to on-premise sites, while 39% would recover to the cloud. A significant portion in both cases (40% and 20% of all who responded to these questions) said reconfiguring networking would be manual.

And with servers, 29% expect to manually reconfigure them, while 45% will use pre-written scripts, and 25% have orchestrated workflows.

But the datacentre is not dead: that is the Veeam take on results that saw the proportion of virtual machines hosted in the cloud already close to half (47%), according to those questioned, with this expected to rise a little by 2024 (52%). Still, the datacentre will remain vital, with the remainder split equally between physical and virtual servers staying on-site.

Ransomware attacks have been suffered by 76% of those questioned. Only a quarter (24%) had suffered no ransomware attack in the past 12 months, but 23% had been the victim of two, while 19% suffered three and 16% only one.

When asked about causes in more detail, malicious links were the most common means of entry (25%), followed by compromised credentials such as logins, passwords and remote desktop protocol (RDP) vulnerabilities (23%). One-fifth (20%) of ransomware attacks gained entry via an infected patch or software update, while 17% came from spam email and 12% from an insider threat.

When it comes to recovering data from a ransomware attack, an average of 64% of data was restored. More than one-third (36%) of respondents got more than 80% back, while 19% restored between 61% and 80%, 20% restored between 41% and 60%, and 18% recovered between 21% and 40%.

Containers are a small but significant growth area as a means of running applications that is cloud-native and portable between locations. According to the Veeam survey, 56% of respondents use containers in production, and 35% plan to.

But the survey found an uneven set of responses to the question of who was responsible for container application data protection. Just under one-fifth (19%) said it is handled by the main backup team, while 21% said it is handled by those who manage Kubernetes. Meanwhile, 27% said backup was handled by the application owners and 28% by the team that manages storage for components used by Kubernetes.

According to Veeam, snapshots, taken throughout the working day, need to be used in conjunction with backups, which are usually run once a day. That is because, according to the survey results, there is not much difference between the importance of high-priority data, of which 55% has a downtime tolerance of one hour, and normal data, of which 49% can stand the same delay.

But according to the survey, most respondents do that anyway. Nearly one in five (19%) protect high-priority applications constantly, while 13% do the same for normal applications. Both are protected every 15 minutes by 17% and every hour by 19%. About one-fifth (18% high priority, 20% normal) protect data no less frequently than every two hours. Those figures become 14% and 16% for between two and four hours. Every four to six hours is 7% and 8%, six to 12 is 3% and 12 to 24 hours is 2% and 4%.

The survey also asked about digital transformation projects, and found the pandemic had tended to speed things up, often by accelerating already-planned modernisation. Most (73%) said they had speeded up digital transformation initiatives, while 18% said they were unaffected, and 9% said things had slowed.

That said, there are obstacles. The most commonly cited are lack of skills (54%), dependence on legacy systems (53%), focus on maintaining operations due to the pandemic (51%), lack of management buy-in (43%), lack of time (39%) and lack of money (35%). Only 8% said nothing stood in the way of their digital transformation initiatives.

Read more here:
Backup heads to cloud as ransomware hits 76% and RTOs/RPOs fail - ComputerWeekly.com


(New Report) Cloud Fax Market In 2022 : The Increasing use in Individual and Home Office, Small and Medium Enterprises, Large Enterprises, Global is…

[98 Pages Report] Cloud Fax Market Insights 2022: Cloud Fax is a simple, cost-effective cloud-based alternative to traditional fax machines and servers. Suitable for organizations of all sizes, it provides a streamlined faxing process, while substantially lowering your overall faxing costs. It lets you send and receive faxes to and from anywhere in the world via email and is fully compatible with all email platforms. It is secure, reliable and eliminates the need for fax machines, toner, paper, fax servers or dedicated fax lines.

Cloud Fax is mainly used by three groups: individual and home office, small and medium enterprises, and large enterprises. Large enterprises are the most widely used group, taking up about 49.48% of the global market in 2016.

North America has been the largest sales region of the Cloud Fax market in the world in the past few years. The North America market took up about 56.91% of the global market in 2016, while Europe accounted for 22.26%.

North America is now the key developer of the Cloud Fax market, home to several companies such as OpenText, CenturyLink, eFax Corporate and TELUS.

OpenText, CenturyLink, Esker, eFax Corporate, Biscom, TELUS and Retarus are the key suppliers in the global Cloud Fax market. The top three took up about 56.59% of the global market in 2016.

Scope of the Cloud Fax Market Report :

In 2019, the worldwide Cloud Fax market size was USD 516.7 million, and it is expected to reach USD 1,034 million by the end of 2026, with a CAGR of 10.3% during 2021-2026.

Get a Sample PDF of report https://www.360researchreports.com/enquiry/request-sample/15068552

Leading key players of Cloud Fax Market are

Cloud Fax Market Type Segment Analysis (Market size available for years 2022-2026, Consumption Volume, Average Price, Revenue, Market Share and Trend 2015-2027): Fax from the Desktop, Fax from Email, Fax from Web

Regions that are expected to dominate the Cloud Fax market are North America, Europe, Asia-Pacific, South America, Middle East and Africa and others

If you have any questions about this report, or if you are looking for any specific segment, application, region or other custom requirements, connect with an expert for customization of the report.


For More Related Reports Click Here :

Bi-Metal Band Saw Blade Market In 2022

Auto-Dimming Mirror Market In 2022

Read the original post:
(New Report) Cloud Fax Market In 2022 : The Increasing use in Individual and Home Office, Small and Medium Enterprises, Large Enterprises, Global is...


Ninja Van Goes the Extra Mile with Google Cloud to Fulfill Vision of Tech-Enabled, End-to-End Logistics Management for Businesses in Southeast Asia -…

Ninja Van Goes the Extra Mile with Google Cloud to Fulfill Vision of Tech-Enabled, End-to-End Logistics Management for Businesses in Southeast Asia

Extended collaboration will support Ninja Van's strategy for secure and sustainable expansion, while maximizing the potential of its technology talent

SINGAPORE, Feb. 24, 2022 /PRNewswire/ -- Ninja Van, Southeast Asia's leading logistics provider, is extending its multi-year collaboration with Google Cloud to help businesses seize digital growth opportunities and overcome supply chain disruptions. By running its platform and applications on Google Cloud's scalable, secure and open-source infrastructure, Ninja Van aims to strengthen its leadership in last-mile courier services and expand upstream into supply chain management solutions.

Backed by the likes of Europe's largest parcel delivery network GeoPost / DPDgroup and global investment firm B Capital, Ninja Van has operations in Singapore, Malaysia, Indonesia, Thailand, Vietnam, and the Philippines. It is now the trusted delivery partner for close to two million businesses and handles around two million parcels daily across Southeast Asia. To facilitate onshore or nearshore production and distribution for businesses, Ninja Van recently launched Ninja Direct, a procurement concierge that covers supplier sourcing and management, customs clearance, financing, and shipments tracking.

"Retailers have shifted toward e-commerce strategies as opposed to selling through physical stores, especially with the pandemic forcing consumers to shop online more than ever before. With disruptions to the traditional flow of raw materials and finished goods, there's also an urgent need for adaptive micro supply chains that allow businesses to 'make where they sell' and fulfill orders quicker," said Shaun Chong, Co-Founder and Chief Technology Officer, Ninja Van. "We chose Google Cloud because of its proven ability to help us scale reliably and innovate at high velocity, as we address the region's end-to-end logistics management needs."

Freeing Up Talent to Drive Innovation and Impact

Ninja Van consistently evaluates whether it is making efficient use of its technical talent. This means determining how the cloud can empower its 150 engineers, developers and data scientists to build products that add value to the business, instead of spending time managing complex IT infrastructure.

With Google's open cloud approach, Ninja Van's technical teams are free to choose the tools they need to accelerate software development and scale more efficiently while also reducing technology risk. For instance, by using Google Cloud's open-source data processing platform which integrates seamlessly with its data scientists' preferred external data visualization tools, Ninja Van's teams can comfortably process terabytes of data daily to support the company's business needs.

To create its NinjaChat chatbot and simulate the quality and immediacy of in-person interactions, Ninja Van's developers turned to Google Cloud's open-source and pre-built virtual agents to bring the feature to life in a month, rather than spend three months building a machine learning framework from scratch.

"From a chatbot that enhances customer experiences to algorithms for fuel-saving route optimization these are amongst the hundreds of new features released by Ninja Van each day. By automating application deployment and upgrades using Google Kubernetes Engine (GKE), our technical teams can avoid engaging in manual backend configurations and stay laser-focused on innovation," said Chong.

"We're actively hiring to bolster our technology teams in the region. Once we have the talent in the door, it makes zero sense to have them recreate code that exists and is ready-to-use, or manually select servers with the right vCPUs or RAM to deploy each application. Google Cloud puts open-source at the center of its solutions, while GKE does a great job at making sure things happen automatically. These allow us to avoid lock-in, reduce costs and truly give our in-house talent the ability to make more meaningful contributions to the business," added Chong.

Security and Zero Downtime to Enable 24/7 Operations

Southeast Asia's e-commerce gross merchandise value is expected to reach US$234 billion by 2025. As Ninja Van continues to help retailers compete in this fast-growing market and expand its services upstream, reliability and data protection remain top priorities.

Demand for Ninja Van's services has grown threefold during the pandemic. Whether it is preparing for major online sales events like Singles' Day, responding to sudden surges in demand caused by merchants running ad hoc campaigns, or making thousands of software upgrades each year, Google Cloud's dynamic autoscaling capabilities enable Ninja Van's website and mobile applications to handle 10 times normal traffic with a smooth user experience, before scaling down to reduce costs when the additional computing resources are no longer needed.

"Having benefited from security solutions that are designed by default into Google Cloud's infrastructure, such as end-to-end encryption and automated patching, we will now work with Google Cloud's security specialists on additional ways to reinforce our zero trust security model," said Chong.

"We're proud to have played a part in Ninja Van becoming a leading end-to-end logistics management provider and one of the most admired technology unicorns," said Ruma Balasubramanian, Managing Director, Southeast Asia, Google Cloud. "By investing in world-class talent and relentless innovation, the company is well-positioned to deliver the just-in-time production and distribution capabilities that businesses need to satisfy contemporary consumers' desire for product variety and immediacy. Google Cloud will continue to uphold our high standards in reliability, collaboration, openness and security, as we support Ninja Van's vision of connecting Southeast Asia to a world of commerce possibilities."

About Google Cloud

Google Cloud accelerates organizations' ability to digitally transform their business. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, all on the cleanest cloud in the industry. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

About Ninja Van

Ninja Van is a leading tech-enabled express logistics company providing supply chain solutions for businesses of all sizes across Southeast Asia. Launched in 2014, Ninja Van started operations in Singapore as a last-mile logistics company. Since then, it has become one of the region's fastest-growing tech logistics companies, powering businesses with innovative solutions that optimize e-commerce opportunities. Today, Ninja Van has grown its network to cover six countries: Singapore, Malaysia, Indonesia, Thailand, Vietnam, and the Philippines. For more information, visit http://www.ninjavan.co.

SOURCE Google Cloud

Read the original post:
Ninja Van Goes the Extra Mile with Google Cloud to Fulfill Vision of Tech-Enabled, End-to-End Logistics Management for Businesses in Southeast Asia -...


Ad Insertion Servers Market 2021 Industry Size, Business Growth, Demand, and Forecast to 2028 | Adobe Systems, Anevia SAS, ARRIS International,…

The latest market research report analyzes Ad Insertion Servers market demand by segment size, share, growth, industry trends and forecast to 2028, describing a systematic picture of the market and providing an in-depth explanation of the various factors that are expected to drive market growth. The Ad Insertion Servers market research report is a high-quality report based on in-depth market research studies. It presents a definite solution for obtaining market insights, with which the marketplace can be visualised clearly and important decisions for the growth of the business can be taken. All the data, facts, figures and information covered in this business document are backed up by well-renowned analysis tools, including SWOT analysis and Porter's Five Forces analysis. A number of steps were used in preparing the Ad Insertion Servers report, taking inputs from a dedicated team of researchers, analysts and forecasters.

Get Access to PDF Sample of Global Ad Insertion Servers Market Research Report with Opportunities and Strategies to Boost Growth- COVID-19 Impact and Recovery @: https://globalmarketvision.com/sample_request/133853

The projected sales of a product are also included in this Ad Insertion Servers market report, which aids market participants in bringing new product releases to market and avoiding errors. It suggests which parts of the business should be enhanced in order for the company to prosper. It also makes it simple to discover new opportunities to stay ahead in the market, and this market study report presents the most recent trends to assist you in positioning your company in the market and gaining a significant advantage.

Major Market Players Profiled in the Report include:

Adobe Systems, Anevia S.A.S, ARRIS International, Beijing Topreal Technologies, Brightcove, Cisco Systems, DJC Media Group, Edgeware, Harmonic, Imagine Communications, Nokia Corporation, SeaChange International, Ericsson.

One of the crucial parts of this report comprises a discussion of the Ad Insertion Servers industry's key vendors, covering brand summaries, profiles, market revenue and financial analysis. The report will help market players build future business strategies and understand worldwide competition. A detailed segmentation analysis of the market is done by producer, region, type and application in the report.

Market Segmentation:

On the basis of type:

Cloud-based, On-premises

On the basis of application:

the market can be split into Small and Medium Enterprises (SMEs) and Large Enterprises

The study analysis was carried out worldwide and presents current and historical growth analysis, competition analysis and the growth prospects of the central regions. With industry-standard accuracy in analysis and high data integrity, the report makes an excellent attempt to highlight the key opportunities available in the global Ad Insertion Servers Market to help players build strong market positions. Buyers of the report can access verified and reliable market forecasts, including those for the overall size of the global Ad Insertion Servers Market in terms of sales and volume.

Key Benefits for Stakeholders

Table of Content:

Chapter 1: Introduction, market driving forces, objective of study and research scope of the Ad Insertion Servers market

Chapter 2: Exclusive Summary: the basic information of the Ad Insertion Servers Market

Chapter 3: Displaying the Market Dynamics- Drivers, Trends and Challenges of Ad Insertion Servers

Chapter 4: Presenting Ad Insertion Servers Market Factor Analysis: Porter's Five Forces, Supply/Value Chain, PESTEL analysis, Market Entropy, Patent/Trademark Analysis

Chapter 5: Displaying the market by Type, End User and Region, 2013-2018

Chapter 6: Evaluating the leading manufacturers of Ad Insertion Servers market which consists of its Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profile

Chapter 7: To evaluate the market by segments, by countries and by manufacturers with revenue share and sales by key countries in these various regions.

Chapter 8 & 9: Displaying the Appendix, Methodology and Data Source

Conclusion: At the end of the Ad Insertion Servers Market report, all the findings and estimations are given, along with major drivers, opportunities and regional analysis. Segment analysis is also provided in terms of both type and application.

Get Research Report within 48 Hours @ https://globalmarketvision.com/checkout/?currency=USD&type=single_user_license&report_id=133853

If you have any special requirements, please let us know and we will offer you the report at a customized price.

About Global Market Vision

Global Market Vision consists of an ambitious team of young, experienced people who focus on the details and provide the information as per customers' needs. Information is vital in the business world, and we specialize in disseminating it. Our experts not only have in-depth expertise, but can also create a comprehensive report to help you develop your own business.

With our reports, you can make important tactical business decisions with the certainty that they are based on accurate and well-founded information. Our experts can dispel any concerns or doubts about our accuracy and help you differentiate between reliable and less reliable reports, reducing the risk of making decisions. We can make your decision-making process more precise and increase the probability of success of your goals.

Contact Us

Sarah Ivans | Business Development

Phone: +1-3105055739

Email: [emailprotected]

Global Market Vision

Website: http://www.globalmarketvision.com

Read the original:
Ad Insertion Servers Market 2021 Industry Size, Business Growth, Demand, and Forecast to 2028 | Adobe Systems, Anevia SAS, ARRIS International,...


Enabling Data Security with Homomorphic Encryption | ITBE – IT Business Edge

Regardless of the strength of data's encryption, more and more potential vulnerabilities surface in data security as more people are granted access to sensitive information. However, a relatively new encryption protocol poses a unique solution to these types of mounting privacy exposures.

Homomorphic encryption enables users to edit data without decrypting it, meaning the broader dataset is kept private even as it is being written. The technology may not be an ideal solution for everyone, but it does have significant promise for companies looking to protect huge troves of private data.

Homomorphic encryption was proposed in 2009 by Craig Gentry, then a graduate student, who described his concept through an analogy of a jewelry store owner.

Alice, the owner, has a lockbox with expensive gems to which she alone has the key. When Alice wants new jewelry made from the gems, her employees wear special gloves that allow them to reach into the closed box and craft the jewelry using the gems without being able to pull them out of the box. When their work is done, Alice uses her key to open the box and withdraw the finished product.

In a conventional encryption model, data must be downloaded from its cloud location, decrypted, read or edited, re-encrypted, and then reuploaded. As files expand into the gigabyte or petabyte scale, these tasks can become increasingly burdensome, and they expose the greater dataset to wandering eyes.

By contrast, data that is encrypted homomorphically can have limited operations performed on it while it's still on the server, with no decryption necessary. The final encrypted product is then sent to the user, who uses their key to decrypt it. As with end-to-end encryption, only the receiver can access the decrypted message.
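The additive flavor of this idea can be sketched with a toy version of the Paillier cryptosystem, a classic additively homomorphic scheme. The article does not name a specific scheme, so Paillier here is an illustrative choice, and the tiny primes below offer no real security; real deployments use keys of 2,048 bits or more.

```python
import random
from math import gcd

# Toy Paillier cryptosystem -- additively homomorphic.
# Tiny primes for illustration only; not secure.
p, q = 17, 19
n = p * q                                      # public modulus
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), private
g = n + 1                                      # standard generator choice
mu = pow(lam, -1, n)                           # modular inverse (Python 3.8+)

def encrypt(m):
    """Encrypt m (0 <= m < n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m using the private values lam and mu."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# The server can add two plaintexts by multiplying the ciphertexts,
# without ever seeing 41 or 58 in the clear:
c_sum = (encrypt(41) * encrypt(58)) % n2
assert decrypt(c_sum) == 99
```

The key property is the last step: multiplying ciphertexts corresponds to adding plaintexts, so a server holding only ciphertexts can still compute an encrypted sum for the key holder to decrypt.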

Also read: Data Security: Tokenization vs. Encryption

AI-driven healthcare analytics have come a long way in recent years, with AI being able to predict disease and other health risks from large sets of medical data.

Today, services like 23andMe allow customers to hand over sensitive medical information for genetic testing and ancestry information. But these companies have been hit with accusations of selling this personal information, or providing it to third parties such as the government, without customer knowledge or consent.

If that data was protected through homomorphic encryption, the company would still be able to process the data and return its results to the customer, but at all times that information would be completely useless until it is decrypted by the customer, keeping his or her information entirely confidential.

Within the last two years, Microsoft, Google, and many other of the largest names in tech have been investing in developing the technology, even freely offering their open-source implementations.

In the case of Google, the company may be pursuing the technology as a means of complying with privacy regulations such as the European GDPR. With homomorphic encryption, Google could continue to build an ad profile, based on large volumes of personal data that it collects through various means, and compile it into an encrypted database with limited usage or applications that only the end user might experience.

For instance, a user may search Google for restaurants near them. The query would hit the homomorphic black box, privately process the users preferences and location, and return tailored results.

There are three common iterations of this technology, and one size does not fit all: partially homomorphic encryption (PHE), which supports a single type of operation; somewhat homomorphic encryption (SHE), which supports limited combinations of operations; and fully homomorphic encryption (FHE), which supports arbitrary computation.
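The partially homomorphic (PHE) end of that spectrum can be seen in a scheme as familiar as RSA: textbook RSA without padding preserves multiplication, so the product of two ciphertexts decrypts to the product of the plaintexts. A minimal sketch with small, insecure parameters (unpadded RSA is never used in practice):

```python
# Textbook RSA (no padding) is multiplicatively homomorphic:
# E(m1) * E(m2) mod n decrypts to m1 * m2, provided m1 * m2 < n.
p, q, e = 61, 53, 17
n = p * q                  # 3233, public modulus
phi = (p - 1) * (q - 1)    # 3120
d = pow(e, -1, phi)        # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# Multiply ciphertexts; the result decrypts to the product of plaintexts.
c = (enc(6) * enc(7)) % n
assert dec(c) == 42
```

This is why PHE schemes are the simplest and fastest of the three: they inherit one algebraic operation for free from the underlying math, but cannot mix additions and multiplications the way SHE and FHE can.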

Homomorphic encryption has yet to see widespread adoption. However, it's not uncommon for encryption protocols to spend a decade in development.

There are community standards that need to be established. Public confidence that the technology is safe, secure, solid, and not exploitable needs to be reached. APIs need to be implemented. And lastly, perhaps the biggest hurdle for homomorphic encryption is that the technology needs to perform well.

No one wants to adopt a more secure protocol only to discover that system performance has taken a massive hit. From an end-user standpoint, that will feel more like a massive setback than a step forward. While the protocol has become massively more efficient since its inception in 2009, it still lags behind today's conventional encryption methods, particularly as users move from PHE to SHE to FHE.

While the computational overhead is too large for many businesses that dont need the added security, homomorphic encryption may yet become the go-to standard for sensitive industries like finance and healthcare.

Read next: Best Encryption Software & Tools

View post:
Enabling Data Security with Homomorphic Encryption | ITBE - IT Business Edge
