
The next iteration of cloud transformation: a cloud-like experience, anywhere – IT World Canada

When people think about digital transformation, they often think public cloud. However, the public cloud isn't suited to every application or every project. There is massive appeal to the cloud experience: not having to pay anything up front, only paying for what you use, and having a monthly bill that is metered up and down. The cloud experience has also become impactful in this new working age of remote, hybrid, and onsite workers, where data needs to be accessible everywhere and at any time while still being secure. So, if not everything can be moved to the public cloud, how can we bring a cloud-like experience to data and applications, no matter where they are housed?

Perhaps we should stop thinking about cloud as a destination, and start thinking about cloud as an experience (or operating model): one where you can scale your technology requirements up and down on demand, and only pay for what you use. I posit that the multi-cloud experience is the next iteration of digital transformation, one where data can be stored in a combination of places: public and/or private, on premises, in colocation centres, at the edge and in the cloud, all accessible through a single platform.

A decade after the public cloud emerged, more than 70 per cent of applications remain outside the public cloud due to challenges related to compliance, data privacy, latency, and app entanglement. For these workloads, the multi-cloud model will be a worthy solution, especially in an age where working anywhere at any time has become table stakes.

In the past, clients had to balance running their main business with managing a datacentre and managing the real estate to house that datacentre. They had to predict their needs several years in advance of use, and pivoting for any changes was a major challenge. In the months spent ramping up a project, they would have costly servers sitting dormant. The RFP process could be daunting in this regard: determining needs for a project that hadn't yet started, and finding ways to maximize datacentres while launching new ideas and projects.

With a decade of buzz surrounding cloud models, the technology of the past will ultimately not be replaced by public cloud on its own, but by the multi-cloud model, which will reign for the next decade at minimum. This new archetype of a mixed approach to data processing and storage, with some on-premises technology for workloads under regulatory or latency pressures, some at the edge (for example, in factories, branch offices, or on distant oil rigs) and some in the public cloud, will prove to be the most competitive, effective, and flexible solution that organizations will adopt for their needs.

The multi-cloud model helps prepare clients for what they know they have coming and the unknowns that the future can bring. The benefits of a multi-cloud model are many.

In Canada, we have a unique opportunity to not only bring this multi-cloud approach to our clients' applications and data, but to help them monetize their data in new and unique ways. By establishing their digital core in highly available real estate with global software-defined interconnection capabilities, networks can be rearchitected on demand, optimizing data transmission from remote sites to cloud apps in minutes, not months. So, not only are clients experiencing a hyper-scalable and metered costing system that provides them full control of their operating expenses, they're also seeing value from data sharing at the edge.

More:
The next iteration of cloud transformation: a cloud-like experience, anywhere - IT World Canada

Read More..

Why mTLS Should Be Everywhere in Kubernetes and Not Just Entry and Exit – Security Boulevard

Kubernetes has rapidly become one of the most widely used tools for managing containerized applications. The 2021 Cloud Native Computing Foundation Annual Survey found that 96% of the more than 2300 Kubernetes-specific respondents were either already using Kubernetes or evaluating it. In addition, the survey identified 5.6 million Kubernetes users, an increase of nearly 70% in a single year.

Accordingly, Kubernetes is also a common target for cyberattacks. In a survey by Veritas Technologies, 94% of respondents expressed concern about ransomware attacks on Kubernetes environments, and 56% have already suffered at least one such attack.

Unfortunately, many organizations are rushing ahead with Kubernetes deployments before fully understanding all relevant security issues. By doing so, they are unnecessarily increasing their attack surface and exposing themselves to hacks.

There are many steps companies can take to secure their Kubernetes workloads. One best practice that Kubernetes itself recommends is the extensive use of transport layer security (TLS). TLS helps prevent traffic sniffing in a client-server connection by verifying the server and encrypting the traffic between the client and server.

An even better option that developers should apply everywhere possible is mutual TLS (mTLS). This article discusses the benefits of mTLS and how developers can use it to their advantage to frustrate would-be attackers.

Mutual TLS takes TLS to the next level by authenticating both sides of the client-server connection before exchanging communications. This may seem like a common-sense approach, but there are many situations where the client's identity is irrelevant to the connection.

When only the server's identity matters, standard unidirectional TLS is the most efficient approach. TLS uses public-key cryptography, requiring a private and public key pair for encrypted communications. To verify the server's identity, the client challenges the server to prove possession of the private key matching the public key in the server's TLS certificate (in older RSA key exchanges, by sending a secret encrypted with that public key; in modern TLS, by having the server sign the handshake). Only a server holding the appropriate private key can complete the exchange, so a successful handshake authenticates the server.
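To make the mechanics concrete, here is a minimal sketch of that one-way arrangement in Go, using only the standard library; the endpoint is a placeholder and error handling is deliberately sparse.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// One-way TLS: the client verifies the server's certificate against
	// the system's trusted CA roots; the client presents no identity.
	// "example.com:443" is a placeholder endpoint.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
		MinVersion: tls.VersionTLS12,
	})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	// The handshake succeeds only if the server proved possession of the
	// private key matching the certificate it presented.
	state := conn.ConnectionState()
	fmt.Printf("TLS version: %#x\n", state.Version)
	fmt.Println("server subject:", state.PeerCertificates[0].Subject)
}
```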

Bi-directional authentication would require that all clients also have TLS certificates issued by a certificate authority. Because of the sheer number of potential clients (browsers accessing websites, for example), generating and managing so many certificates would be extremely difficult.

However, for some applications and services, it can be crucial to verify that only trusted clients connect to the server. Perhaps only certain users should have access to particular servers. Or maybe you have API calls that should only come from specific services. In these situations, the added burdens of mTLS are well worth it. And if your organization reinforces security with zero trust policies where every attempt to access the server must be verified, mTLS is necessary.

mTLS adds a separate authentication of the client following verification of the server. Only after verifying both parties to the connection can the two exchange data. With mTLS, the server knows that a trusted source is attempting to access it.
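As an illustration of how little is needed to switch this on, the following is a hedged sketch of an mTLS-enforcing HTTPS server in Go; the certificate file names and port are assumptions for illustration, not anything prescribed by Kubernetes or this article.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust only clients whose certificates chain to our own CA.
	// "ca.pem" is a placeholder path.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	clientCAs.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs: clientCAs,
			// RequireAndVerifyClientCert turns ordinary TLS into mTLS:
			// the handshake fails unless the client presents a valid,
			// CA-signed certificate of its own.
			ClientAuth: tls.RequireAndVerifyClientCert,
			MinVersion: tls.VersionTLS12,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// At this point both sides of the connection are authenticated.
			w.Write([]byte("hello, verified client\n"))
		}),
	}
	// server.pem / server-key.pem are the server's own certificate pair.
	log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```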

mTLS is generally valuable for defeating a variety of attacks, including man-in-the-middle interception, spoofing, and impersonation of trusted services.

With Kubernetes, many different communication pathways can benefit from mTLS, particularly communications between microservices and communications between microservices and any API server. Using mTLS secures these communications without needing any specific identity management or verification process at the application level.
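The client side of such a service-to-service call is symmetrical: the calling service presents its own certificate and trusts the shared internal CA. A sketch, assuming hypothetical service names and file paths:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// The calling microservice presents its own certificate; the file
	// names are placeholders for whatever your PKI issues.
	clientCert, err := tls.LoadX509KeyPair("service-a.pem", "service-a-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{clientCert}, // our identity
				RootCAs:      roots,                         // who we trust
			},
		},
	}

	// The handshake authenticates both sides before the request is sent;
	// the application code needs no extra login or token exchange.
	resp, err := client.Get("https://service-b.internal:8443/orders")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```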

Containerized applications may include many different microservices that exchange data, including sensitive customer data. mTLS keeps hackers from intercepting these communications and creating a data breach.

mTLS also gives you added visibility into potential attacks. Just as network traffic analysis tools (like BlueHexagon) or website alteration and defacement monitors (like Visualping) give you real-time information on anomalous behaviors, reviewing mTLS audit logs lets you quickly pick up unauthorized activity.

Finally, mTLS need not create added difficulties for the user. Instead, properly implemented, mTLS can give users an added sense of security without extra complexity, enhancing the overall user experience. When mTLS acts at the platform level, only a single authentication action is necessary. Users don't have to reauthenticate for every microservice within the application.

While using mTLS in your Kubernetes environment can give you an added comfort level about security, it comes with a cost. That cost comes from the need for an effective certificate provisioning and management system.

When applying mTLS widely in a Kubernetes container deployment, recall that many clients are individual services. Each of these services will need its own certificate, and there will be many.

In addition, Kubernetes services, by their very nature, are impermanent. With replicas of Kubernetes services being created and destroyed dynamically, the challenge of managing certificates can be daunting.

Your certificate management system must also be robust enough to handle the deprovisioning and reprovisioning of certificates according to your internal security policies. If, like many, you rely on certificate rotation (giving certificates limited lifetimes and issuing new certificates on expiration) to minimize the chances that a hacker can exploit them, you must be able to assign new certificates to all affected services quickly.
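One common way to make rotation painless (a general pattern, not any particular product's mechanism) is to resolve the certificate at handshake time, so a freshly rotated certificate on disk is picked up without restarting the service. A Go sketch, with placeholder file names:

```go
package main

import (
	"crypto/tls"
	"errors"
	"log"
	"net/http"
	"sync"
	"time"
)

// certReloader re-reads a certificate pair from disk periodically so that a
// rotated certificate takes effect without restarting the service.
type certReloader struct {
	mu   sync.RWMutex
	cert *tls.Certificate
}

func (c *certReloader) watch(certFile, keyFile string, every time.Duration) {
	for {
		if cert, err := tls.LoadX509KeyPair(certFile, keyFile); err == nil {
			c.mu.Lock()
			c.cert = &cert
			c.mu.Unlock()
		} else {
			log.Printf("reload failed, keeping previous cert: %v", err)
		}
		time.Sleep(every)
	}
}

// getCertificate is wired into tls.Config, so every new handshake sees the
// most recently loaded certificate.
func (c *certReloader) getCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if c.cert == nil {
		return nil, errors.New("certificate not loaded yet")
	}
	return c.cert, nil
}

func main() {
	r := &certReloader{}
	go r.watch("server.pem", "server-key.pem", time.Minute)

	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{GetCertificate: r.getCertificate},
	}
	// Empty file arguments: certificates come from GetCertificate instead.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```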

Fortunately, many different tools are available to help you easily manage certificates, even if you have a massive number of certificates for your Kubernetes services.

As more and more businesses transition to containerization and to container orchestration services like Kubernetes, the need will grow for effective and efficient methods of securing data flows between the services in those containers. By applying mTLS at every point where it is possible, developers can reduce an application's attack surface and minimize the risk that a hacker can access sensitive data and systems.

Continued here:
Why mTLS Should Be Everywhere in Kubernetes and Not Just Entry and Exit - Security Boulevard

Read More..

Synology ups the surveillance with new Station and cloud backup – Manila Bulletin

Synology recently announced the general availability of Surveillance Station 9.0, the latest update to its cornerstone solution for comprehensive and scalable surveillance. Together with the on-premises offering, Synology is also introducing C2 Surveillance, a companion cloud service for footage backup and sharing.

"With over 500,000 active installations and more than 2.5 million cameras managed, Surveillance Station has earned its spot as one of the most popular and trusted VMS solutions among both private and enterprise customers," said Tony Lin, product manager for surveillance at Synology. "Today's releases embody our commitment to continuously upgrading the capabilities of our existing products and preparing them for increasingly complex implementations."

Designed for larger and more sophisticated deployments, Surveillance Station 9.0 features a redesigned user interface that seamlessly blends camera feeds, maps, playback controls, and alerts into a single dashboard to boost situational awareness.

Users can now set up hundreds of cameras within minutes using the new streamlined wizard that enables batch configuration by either importing a prepopulated spreadsheet or by copying settings from already deployed cameras.

Surveillance Station 9.0 enables businesses to add a separate layer of encryption to recorded footage for added protection in case the NVR, drives, or administrative credentials are compromised or stolen. Video streams and management data can also be ingested over the encrypted SRTP/HTTPS protocol to maximize privacy and protection against insider attacks.

Remote update management for online and offline recording servers in Centralized Management System (CMS) makes it simpler to keep multi-site deployments secure and resilient, ensuring that all devices receive the latest security patches and functionality improvements.

"As we release yet another iteration of one of our signature products, we are proud to say that users large and small will find securing their premises an altogether more pleasant and integrated experience," Lin said. "In Surveillance Station 9.0, everything is simply more within reach."

Taking protection a step further, Surveillance Station 9.0 supports dual recording to stream surveillance footage simultaneously to C2 Surveillance. Designed specifically to minimize potential data loss to mere seconds, C2 Surveillance ensures that footage is always accessible, even after a catastrophic event or in case of theft of systems from the premises they protect.

The new service's online portal allows users to review recordings from anywhere, without requiring restoration of large amounts of data from a backup server and, if required, to share clips with authorities efficiently, whether or not the recording server is still available and functioning.

"C2 Surveillance protects surveillance footage when it is most vulnerable," Lin said. "Home users and businesses can now achieve peace of mind without investing in costly and complicated solutions: complete data protection is really just a click away."

Surveillance Station 9.0 is available starting today for all Synology systems. Users looking to upgrade their system can find more details in the release notes.

C2 Surveillance launches today with plans starting from US$1 per month for each camera. Users will be able to try the service free of charge until October 31, 2022.


See original here:
Synology ups the surveillance with new Station and cloud backup - Manila Bulletin

Read More..

GigaIO Announces Series of Composability Appliances Powered by AMD, First Edition Purpose-Built for Higher Education and Launched at ISC – Business…

SAN DIEGO--(BUSINESS WIRE)--GigaIO, provider of the world's only open rack-scale computing platform for advanced scale workflows, today announced the launch of a new composability appliance. The GigaIO Composability Appliance: University Edition, powered by AMD, is a flexible environment for heterogeneous compute designed for Higher Education that can easily accommodate the different workloads required for teaching, professor research, and grad-student research. Future iterations of the appliance will bring the benefits of composability to Manufacturing and Life Science users over the coming year.

"With the launch of this rack-scale appliance, we are bringing easy-to-use infrastructure to the classroom, where composability can provide students a wide array of flexible technology for learning and growth," said Alan Benjamin, CEO of GigaIO. "AMD is the perfect partner for this venture because we share a commitment to create an open, industry standards-based platform. We are keen to make it easy for people to avail themselves of this new technology, and with the experience and success that the company has had in the Higher Ed space, our first joint product with AMD is well positioned for critical success."

"Composability can supply students with access to a range of the equipment they will use in the real world, so they will be better prepared for the job market," said Brock Taylor, Director, Global HPC Solutions, AMD. "Our recent joint deployments of composable infrastructure at the San Diego Supercomputing Center at the University of California San Diego and the Texas Advanced Computing Center at the University of Texas, Austin demonstrate the promise of composability to solve complex computational problems."

The GigaIO Composability Appliance: University Edition, powered by AMD, was built with ease of use in mind, so that it can be used in a classroom or laboratory setting without requiring dedicated IT expertise. It is a complete, highly efficient, future-proofed composable infrastructure solution that provides cloud-like agility to on-prem infrastructure, allowing cloud bursting as needed within a single interface. Flexibility and composability mean that systems don't remain idle while not being used for teaching; they can instead be reconfigured for actual simulation work and swapped back into teaching mode as needed.

For ease of use, the GigaIO Composability Appliance: University Edition is delivered with NVIDIA Bright Cluster Manager pre-installed, combining its ability to easily build and manage clusters with GigaIO's ability to connect AMD accelerators, AMD-powered servers, and other devices in a seamless dynamic fabric. Native integration of GigaIO's universal dynamic memory fabric, FabreX™, within NVIDIA Bright Cluster Manager allows owners to easily assign configurations prior to use, dividing hardware among students to allow them the experience of running actual simulation workloads on the same compute infrastructure they will utilize upon graduation.

FabreX enables an entire server rack to be treated as a single compute resource, handling all compute communication, including server-to-server traffic (such as MPI and NVMe-oF). Resources normally located inside a server, including accelerators, storage, and even memory, can now be pooled in accelerator or storage enclosures, where they are available to all of the servers in a rack. These resources and servers continue to communicate over a native PCIe memory fabric for the lowest possible latency and highest possible bandwidth performance, just as they would if they were plugged into the server motherboard.

GigaIO Composability Appliances are designed to accommodate a variety of accelerator types and brands and provide a truly vendor-agnostic environment. The University Edition units are container-ready and easily composed via bare metal, and feature AMD EPYC™ processors and AMD Instinct™ MI210 accelerators. The GigaIO Composability Appliance: University Edition, powered by AMD, is offered in three configurations and is available now. Learn more.

About GigaIO

Headquartered in Carlsbad, California, GigaIO democratizes AI and HPC architectures by delivering the elasticity of the cloud at a fraction of the TCO (total cost of ownership). With its universal dynamic infrastructure fabric, FabreX, and its innovative open architecture using industry-standard PCI Express (and soon CXL) technology, GigaIO breaks the constraints of the server box, liberating resources to shorten time to results. Contact info@gigaio.com, visit http://www.gigaio.com, or follow on Twitter and LinkedIn.

AMD, the AMD Arrow logo, EPYC, AMD Instinct, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

See the original post:
GigaIO Announces Series of Composability Appliances Powered by AMD, First Edition Purpose-Built for Higher Education and Launched at ISC - Business...

Read More..

Facilities Management Market Size to Grow by USD 660.29 billion | Increasing Demand for Cloud-based Facility Management Solutions to Drive Growth |…

The report analyzes the facilities management market by the end-user (commercial, government, and residential) and geography (Europe, North America, APAC, South America, and MEA)

NEW YORK, May 30, 2022 /PRNewswire/ -- The facilities management market will be driven by factors such as the increasing demand for cloud-based facility management solutions. These solutions enable secure hosting of critical data and offer other advantages such as improved security, scalability, and quicker disaster recovery. Companies can recover critical server data from backups stored on a shared or private cloud host platform. They can also increase security and collaboration among their teams and subsidiaries present in multiple locations, thereby reducing operating costs.

Technavio has announced its latest market research report titled "Facilities Management Market by End-user and Geography - Forecast and Analysis 2021-2025".

The facilities management market is expected to grow by USD 660.29 billion from 2020 to 2025. Moreover, the growth momentum of the market will accelerate at a CAGR of 8.3% during the forecast period.

Request a Sample Report to learn about additional factors impacting the growth of the market

Facilities Management Market: Major Segmentation

By end-user, the commercial segment will have significant market share growth during the forecast period. The growth in the number of multinational conglomerates (MNCs) and small and medium businesses (SMBs) has increased the demand for commercial office spaces across the world, which is expected to drive the demand for facility management. The commercial segment is one of the major contributors to the global facilities management market. A major part of the demand arises from the business services, information technology (IT), industrial and manufacturing, real estate, and healthcare sectors.

Facilities Management Market: Major Trend

The adoption of green cleaning products is a trend in the facilities management market. Many vendors are offering green and sustainable cleaning agents owing to the increasing awareness about the benefits of green and eco-friendly products among commercial and industrial users. Green cleaning products are also safe to use, as they do not involve toxic chemicals or corrosive materials. They are derived from natural essential oils, such as basil, lavender, lemon, and other plant sources.


Gain more insights into the global trends impacting the future of the facilities management market. Request a Sample Report Now!

Facilities Management Market: Vendor Analysis

The facilities management market is fragmented, and the vendors are deploying growth strategies such as price, service, and brand name recognition to compete in the market. Some of the key vendors operating in the market include Aramark Corp., International Business Machines Corp., Interserve Group Ltd., ISS AS, Johnson Controls International Plc, OCS Group Ltd., SAP SE, SIS Ltd., Serco Group Plc, and Sodexo Group, among others.

Reasons to Buy Facilities Management Market Report:

CAGR of the market during the forecast period 2021-2025

Detailed information on factors that will assist facilities management market growth during the next five years

Estimation of the facilities management market size and its contribution to the parent market

Predictions on upcoming trends and changes in consumer behavior

The growth of the facilities management market across Europe, North America, APAC, South America, and MEA

Analysis of the market's competitive landscape and detailed information on vendors

Comprehensive details of factors that will challenge the growth of facilities management market vendors

This report can be personalized according to your business needs. Speak to our Analyst

Related Reports

Food Waste Management Market by Method and Geography - Forecast and Analysis 2022-2026

Smart Waste Management Market by Application and Geography - Global Forecast and Analysis 2022-2026

Facilities Management Market Scope

Page number: 120

Base year: 2020

Forecast period: 2021-2025

Growth momentum & CAGR: Accelerate at a CAGR of 8.3%

Market growth 2021-2025: USD 660.29 billion

Market structure: Fragmented

YoY growth (%): 4.07

Regional analysis: Europe, North America, APAC, South America, and MEA

Performing market contribution: APAC at 43%

Key consumer countries: US, China, India, Germany, and UK

Competitive landscape: Leading companies, competitive strategies, consumer engagement scope

Companies profiled: Aramark Corp., International Business Machines Corp., Interserve Group Ltd., ISS AS, Johnson Controls International Plc, OCS Group Ltd., SAP SE, SIS Ltd., Serco Group Plc, and Sodexo Group

Market dynamics: Parent market analysis, market growth inducers and obstacles, fast-growing and slow-growing segment analysis, COVID-19 impact and future consumer dynamics, and market condition analysis for the forecast period

Customization purview: If our report has not included the data that you are looking for, you can reach out to our analysts and get segments customized.

Table of Contents

1 Executive Summary

2 Market Landscape

3 Market Sizing

4 Five Forces Analysis

5 Market Segmentation by End-user

6 Customer Landscape

7 Geographic Landscape

8 Drivers, Challenges, and Trends

9 Vendor Landscape

10 Vendor Analysis

11 Appendix

About Us
Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning across 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

Contact
Technavio Research
Jesse Maida
Media & Marketing Executive
US: +1 844 364 1100
UK: +44 203 893 3200
Email: media@technavio.com
Website: http://www.technavio.com/


View original content to download multimedia: https://www.prnewswire.com/news-releases/facilities-management-market-size-to-grow-by-usd-660-29-billion--increasing-demand-for-cloud-based-facility-management-solutions-to-drive-growth--technavio-301556653.html

SOURCE Technavio

See the article here:
Facilities Management Market Size to Grow by USD 660.29 billion | Increasing Demand for Cloud-based Facility Management Solutions to Drive Growth |...

Read More..

Re-platforming critical payments infrastructure to the cloud – Finextra

This is an excerpt from Finextra's report, "The Future of Payments 2022: The cutting edge of digital payments".

From mainframes to cloud

Like many century-old industries, financial institutions established their organisational structures around a physical space, the office. The infrastructure needed to handle their critical business functions followed suit, and with the first computers being the size of a room, it seemed natural for the computer to live in a physical workspace.

Fast forward to the 1990s: server rooms became the technological pulse of business, with every trade, payment, deal and email relying on them. With advances in technology moving at a rapid rate, so too did the demand grow for banks to digitalise their offerings. The multitude of business software needed was becoming as large as the hardware to manage it, with offices being home to endless racks of servers that continuously called for updates and maintenance.

In response to advances in technology and the growth of digital, software hosting became the new normal, with datacentres popping up all over the place enabling banks to host their business applications. Several decades later, this still continues today for many financial institutions around the world.

However, the advent of cloud has seen a steady adoption of cloud-based services that have now become mainstream across multiple industries. Now we are seeing the adoption of cloud for the very services at the core of banks, including payments. On the latest leg of this journey, cloud-native technology has become the new standard for managing and deploying applications which are built for the cloud from the ground up.

In today's everything-instant world, offices have turned virtual and organisational structures are built around web addresses, not physical ones. No longer do banks need to keep hosting and maintaining their ageing software applications themselves; they can remove the infrastructure burden completely by outsourcing the entire payment processing, clearing and settlement to a cloud-native, payments-as-a-service platform instead.

Why re-platform?

With the continued rise in both adoption and volume of instant payments, the direction of travel in this space is clear: towards an always-on, 24/7 payments landscape. This increased need to process payments in real-time, as well as to scale capacity to accommodate the significant growth in payment volumes, puts a tremendous strain on traditional back-office technologies.

Unfortunately, with this increase in payment volumes comes an inevitable increase in potential fraud. Whilst it is often touted that "a payment is a payment", banks now need to ensure their own payments become smarter. The ability to implement new technologies like real-time fraud screening, or overlay services like confirmation of payee, is key to avoiding the weaknesses found in traditional payment systems that can be exploited by fraudsters, a trend we are seeing more and more in these times.

All financial institutions running on legacy technology are currently facing the same problem: they need to re-think legacy architectures that are increasingly costly, risky to maintain and slow to develop. Year on year, as customers expect ever more reliable, easy-to-use digital offerings from their banking services (with challenger banks setting out the realities of the art of the possible), legacy banks look to improve their services to rival those of the challengers.

They then face the problem that their legacy infrastructure, already outdated after a lengthy implementation, is based upon decades-old technology that is hard to update. Increasing or decreasing physical capacity isn't possible, so increases in demand lead to downtime for customers. When updates are planned, the scheduled downtime inevitably impacts customers. Finally, banks that are tasked to operate and update these systems in the face of regulatory changes and scheme updates are inevitably over-reliant on the expertise of the original implementation teams, whose professional services fees reflect this harsh reality.

By moving to a cloud-first platform, banks can access the functionality and speed to market required to adapt and survive. They need to be architected differently, from the core foundations up, if they want to really benefit from new banking and payments services. This doesn't simply mean technologically; they need to be architected for constant change across the organisation, from the way they interact with their customers to the way they develop, test and deploy code.

Through embracing re-platforming to the cloud, banks will be able to leave these constraints behind.

Banking software of the past is predominantly based on mainframe services architecture. The software itself is a large program with one input point and one output point. If you need to add new capacity, it is a case of linear scaling: you need to add more servers to run more instances of the software in order to have more capacity, or sometimes the only option is to scale vertically by upgrading the servers. Such tasks are traditionally manual and can take months to complete.

This means more material cost to deal with higher demand, potential downtime and a physical process of increasing your capacity. Thus, the critical infrastructure and one of the core components of every bank's output (the ability to send and receive money) is based on physical infrastructure that isn't able to adapt.

Removing barriers to change

With financial institutions being aware that their current payment processing capabilities are unlikely to be fit for purpose in the near future, they then have a whole new challenge to solve: not only transitioning the technology and creating a more sustainable operating model, but transitioning the organisational mindset, its structure, its teams and, in doing so, its business agility.

This is completely different from the traditional command-and-control governance and waterfall delivery of so many large banks. Questions of risk and regulatory scrutiny are very important; with banks being a critical part of nationwide infrastructure, even a small risk of a service blackout is enough to bring transformation projects to a halt.

Culturally, there needs to be a shift from the old ways to a new way of working, which requires a mindset change across the organisation. Moving critical payments infrastructure to a PaaS model will vastly improve technology capability, but it also optimises processes and, crucially, changes organisational thinking. And that means spending time on clarity of purpose, gaining buy-in and thorough design, as well as organising collaborative teams to build a sufficient plan around testing and control mechanisms that enables a smooth transition with fewer bumps in the road.

Back-end platforms as enablers of front-end innovation

Huge advances in technology innovation are transforming the very core of financial services, challenging banks to reassess their front- and back-end platform architecture. It is not surprising that financial institutions around the world have started to examine payment platforms and the capabilities of the cloud as a key first step in moving away from the restrictions and barriers inherent in their existing legacy payment solutions.

One of the key reasons that platform technology can be so effective is its agility. A central element of this is the speed with which the systems can be improved. Paired with the power of the cloud, these new agile back-end systems can provide banks the ability to upgrade and enhance on a monthly basis as opposed to previous yearly timeframes. This platform agility is the foundation on which the payment innovations of tomorrow will be built.

Read more from the original source:
Re-platforming critical payments infrastructure to the cloud - Finextra

Read More..

ASU IT event aims to empower communities: Those we serve and those we belong to – ASU News Now

Starting local, thinking global

Throughout the full week of Empower, ASU IT community members volunteered with organizations that have missions to better the lives of Arizonans. Areas of support included food donations, technology access for seniors and more.

One such project included hosting workshops with senior residents. There, ASU IT professionals partnered with local seniors to create online grocery shopping accounts. Together, they set up an account and got to shopping using the $10 gift certificate provided to each resident. Seniors also got to ask tech questions about their devices.

"It was powerful to see our teams use their skills in the local community, like working with senior residents to better navigate their devices for real-world tasks," said Breanna Smith, event coordinator for Empower. "In doing so, our impact reaches beyond UTO, beyond ASU and into the communities we live in and serve."

In addition to local volunteer opportunities, ASU's IT community is advancing a series of initiatives that serve the broader Arizona community.

During Empower, ASU Chief Information Officer Lev Gonick took the stage to share examples of this work in action, starting with the Digital Equity Initiative. In partnership with Watts College of Public Service and Community Solutions' Maryvale One Square Mile Initiative, ASU's IT community is helping to bring high-speed, reliable internet access to local families in Phoenix through the use of millimeter wave technology.

Gonick also shared projects like the university's use of chatbots to enhance students' interactions when, for instance, seeking financial aid information. He announced the T4 Leadership Academy, which cultivates IT leaders who are globally engaged and locally attuned to the role of technology for social benefit, and invested in designing the intergenerational workforce of the future.

Then a panel of six ASU, industry and local leaders took the stage to expand upon the theme of community, diving into their shared and unique experiences across the workforce.

Neal Lester, founding director of Project Humanities at ASU, challenged participants to disrupt the notion of "the community" and realize that there are many communities around the world of which we can feel included and a part. He explained that he came to that realization when he saw places where he was included, but felt excluded or invisible.

"So, community is when I felt and knew that I was connected and being heard and being seen," said Lester.

With a greater and more diverse definition of community shared by the panelists, teams were primed to tackle eight IT areas to transform society, spanning digital trust, communications, data architecture and learning technologies.

Panelist and ASU Chief Research Information Officer Sean Dudley contextualized the development of helpful technology within these spheres at the university.

"For those of us who are proficient in technology, we can lose sight of some of the basics, which can truly be transformative for people," Dudley said, adding that innovation must be human-centered and not just for the sake of technical improvements.

For example, as Debbie Esparza, chief executive officer of YWCA Metropolitan Phoenix, put it in regards to YWCA's Meals on Wheels program, there was an assumption seniors couldn't access technology. But that assumption was wrong, and new technology interfaces have been implemented as a result.

When it comes to creating a sense of community for ASU's IT professionals, it's about creating an environment where all feel empowered.

"We are intentional about the way that we designed the (ASU IT) community, the way we actionalize and operationalize the community, and find ways to sustain the community," Gonick said.

The Empower event turns this notion into action for the ASU IT community.

Teams spent the second half of the day connecting with colleagues and developing new ideas around the eight focus areas during World Cafe-style discussions. The World Cafe Method pulls from integrated design principles that make discussion simple and effective for large group conversations.

"It was an excellent opportunity to engage with so many amazing colleagues across our community," said Eddie Garcia, director of law information technology for the Sandra Day O'Connor College of Law at ASU. "I truly enjoyed this humanizing and thought-provoking event."

For the past five years, the University Technology Office has hosted the annual event to give Sun Devils time to foster a stronger sense of community amongst the university's IT network. This fifth Empower emphasized that connection, as more than 500 Sun Devils joined together last week at the Student Pavilion on Tempe campus.

When asked what community means to them, ASU's IT professionals used words like belonging, equality, respect, happiness, connection and kindness. By exploring IT themes through the lens of human impact, teams were able to build connections and more closely collaborate to better serve the ASU community and beyond.

Special thanks to the leadership panelists and to the community partners.

Go here to read the rest:
ASU IT event aims to empower communities: Those we serve and those we belong to - ASU News Now

Read More..

FDT Group Introduces the FDT Unified Environment |ARC – ARC Advisory Group

FDT Group, an independent, international, not-for-profit industry association supporting the evolution of FDT technology, introduced the FDT Unified Environment (UE) and developer tools based on the new FDT 3.0 standard. These announcements are intended to deliver a next-generation FDT industrial device management system and device solutions for field-to-cloud IT/OT data harmonization, analytics, services, and mobility, based on user-driven requirements for smart manufacturing in the process, hybrid, and discrete markets.

Driven by digital transformation use cases to support new Industrial Internet of Things (IIoT) business models, the standard has evolved to include a new distributed, multi-user FDT Server application with built-in, pre-wired OPC UA and Web servers, enabling an FDT Unified Environment (FDT 3.x) that merges IT/OT data analytics and supports service-oriented architectures. The new server environment, deployable in the cloud or on premises, delivers the same use cases and functionality as the previous-generation FDT hosting environment, but now provides data storage for the whole device lifecycle at the core of the architecture, allowing information modeling and data consistency to be exposed to authenticated OPC UA and browser-based clients (tablets and phones) for modern accessibility to address the challenges of IIoT.

FDT UE consists of FDT Server, FDT Desktop, and FDT DTM components. System and device suppliers can take a well-established standard they are familiar with and easily create and customize standards-based, data-centric, cross-platform FDT 3.0 solutions, expanding their portfolio offerings to meet requirements for next-generation industrial control applications. Each solution auto-enables OPC UA integration and allows the development team to focus on value-added features that differentiate their products, including WebUI and App support. FDT Desktop applications are fully backward compatible, supporting the existing install base.

FDT 3.0 specification license agreements and developer toolkits are now available on the FDT website (www.fdtgroup.org/resources).

Originally posted here:
FDT Group Introduces the FDT Unified Environment |ARC - ARC Advisory Group

Read More..

The Rise Of Sovereign Computing With Personal Servers – Bitcoin Magazine

Pascal Hügli is a lecturer at the University of Applied Sciences in Business Administration in Zurich where he teaches students about Bitcoin.

In human affairs, action is a fundamental force. It was the great Austrian economist, Ludwig von Mises, who stated that action is axiomatic to human conduct. Humans undeniably act, as non-action or the denial thereof is an action in and of itself.

Consequently, human beings cannot escape acting. Being social beings by design, every interpersonal action is either exchanging words (communication), exchanging produce (property) or exchanging value (money). It is these primitives that make human beings into the homo sapiens that they are.

As humans, we have become ever better at practicing these primitives thanks to the use of technology. As a matter of fact, technological progress has been the single most important enabler of the human species in fostering the exchange of words, produce and value. But as powerful as technology is, it is also very demanding; human beings must learn, understand and adapt to ever-evolving technological change. Because technology inherently comes with complexity attached, middlemen of all sorts have emerged to handle this complexity on behalf of individual human beings.

While the existence of middlemen has been an empowering force helping human society actualize technology's vast potential, the inevitable intermediation they bring has been a force of concentration and centralization. Consequently, as a species, we are increasingly subject to centralized powers monitoring and controlling more and more aspects of our lives.

This has become particularly obvious in todays digital age. With human interaction around communication, property and money being continuously more digital, everyday human interaction has also grown more intermediated to the point where the digital life of a typical homo digitalis is entirely dependent on third parties.

Alarmingly, the negative effects of this development become ever more prevalent: from content censorship or outright deplatforming, to personal data exploits and general privacy infringement, to slight user manipulation and relentless user monetization, the pitfalls of centralization are manifold. There is no denying that as inhabitants of the digital world, we are completely and utterly at the mercy of powerful intermediaries.

The desire to shed the shackles of today's digital overlords has never been greater. While incidents like #DeleteWhatsApp or #DeleteFacebook campaigns serve as undeniable proof of this urge, the spell of today's highly convenient digital applications remains unforgivingly strong nonetheless. Because of convenience, network effects, and a lack of alternatives, hardly anyone manages to break free from contemporary tech monopolists.

Are the prospects for humanity really that bleak? Not if you take a domain that was in the seemingly firm hands of an unrelenting monopolist: money. The course of this technology has been dictated by the state for ages. Only recently has a potent escape valve emerged in the form of non-sovereign money called bitcoin. It is Bitcoin that has wrested the government's power over money, giving it to individuals, thereby equipping them with sovereignty over their own money.

With the existence of bitcoin as non-sovereign money, exchanging value can be done digitally in a peer-to-peer fashion. No intermediaries are needed for digital value transfer between any two parties, be it friends or strangers. Interacting with money, no matter what form it takes, has become entirely free from any centralized third parties.

Bitcoin's success in providing ordinary people with self-sovereignty in monetary matters has inspired entrepreneurs and developers to extend the self-sovereignty to the area of general computing. The start of more self-sovereign computing was initiated with the emergence of personal computers. Before, computers (so-called mainframes) were largely owned by corporations, and only the rise of the personal computer made it possible for every regular person to have a computer at home.

As it turned out, having a personal computer has not been enough, especially not in a globally-connected web of computers talking to one another over the internet. Software as a service (SaaS) companies established themselves as indispensable mediators between humans and their computers. As such, they enabled a convenient and smooth user experience for the interconnected web of computers by running server farms on behalf of individuals. And the presence of these servers, today mostly in the form of cloud computing, let centralization become the norm.

Although many people inherently think so, servers don't need to be centrally run by large corporations. Open-source and free operating systems like Linux or Ubuntu allow for the operation of private servers. Companies that don't want to be dependent on other SaaS companies are running their own servers thanks to Linux and Ubuntu. Unfortunately, these operating systems have not been made for everyday users to run their own servers, as they require a high degree of technical competency and attention, and as an individual, it's difficult to just hire a systems administrator or DevOps engineer.

Things are changing though. What has been missing for self-sovereign computing to take off is now being developed: new types of open-source, free, and permissionless operating systems that are vastly more accessible than Linux or Ubuntu. They come in the form of plug-and-play services and represent one-stop shops for all sorts of self-hosted computer applications. At the click of a button, these new personal servers can be bootstrapped while being smoothly operated through a convenient, customer-friendly user interface. As a consequence, computing is shifting from rooms full of servers (commonly called data centers) owned by corporations to personal servers run at home and owned by regular individuals.

I have been testing the two most prominent personal server solutions currently on the market: Umbrel and Start9. Both of these projects offer a plug-and-play operating system for personal servers. Behind Umbrel is a company with the same name, while Start9 is the company behind the Embassy. Also common to both projects is the fact that they have each raised capital from investors who value privacy as well as self-sovereignty.

What makes these projects so interesting is the fact that they have taken a generalized approach to running self-hosted software in an easy-to-use way. By doing this, they severely weaken the number one argument against personal servers, which is that everyday people are never going to use such devices. And while terms like sovereign computing or personal server might still be foreign to the general public, we are beginning to see that self-hosted servers like Umbrel or Embassy are increasingly run outside of tech-heavy circles by ordinary people who want privacy as well as self-sovereignty when it comes to operating their online lives. I am one of them.

In terms of differences between these two solutions, a few are worth mentioning. While Umbrel calls its server-side applications "apps", Start9 refers to them as "services". Hence, on a Start9 device, a user will find a service marketplace, as opposed to Umbrel, where an app store can be found.

As for the user experience, both platforms are very straightforward. However, there are some differences in the architecture of each solution. For now, with Umbrel, there is no way of updating single apps. It's either all or nothing. This is different from Start9's Embassy. If a service needs any (security) update, the new version can be installed without having to update the entire Embassy.

Furthermore, alternative marketplaces can be hosted on Start9's Embassy, whereas Umbrel has only one app store, and this is provided by Umbrel itself. An Embassy also creates a complete and encrypted backup of your entire system, which is a matter of clicking Create Backup in the user interface and selecting a target destination. As of now, this is not possible with Umbrel.

An important distinction is that there is no built-in health check system for apps on Umbrel. With Embassy, Start9 developers define what constitutes health for a given service and write scripts to test for it. An Embassy performs these health checks on a continuous basis, presenting results to the user inside the user interface. This way users can immediately tell whether or not a service is running smoothly.

In their current stages of development, there are also some areas where Umbrel seems to have the upper hand. For one, Umbrel currently has more services and they are more widely known in the Bitcoin space. This is witnessed by their actively engaged Twitter community that is doing a lot of free marketing for the product. Also, their design is sexier.

While there are differences between Umbrel and Start9, in the grand scheme of things, these two competitors work towards the common goal of making personal servers as widely accepted and used as possible. Because they both offer convenient solutions that are simple to use, the odds to achieve this goal have never been better.

Besides the convenience factor, which developers can work towards on their own, another paradigm shift is underway that will play into their hands and most likely boost the personal server revolution. Right now, we witness the first innings of how SaaS companies will have to increasingly alter their business model.

As a matter of fact, the days of the freemium model are numbered. Today's users have long since discovered that they are not actually the customer, but rather the product. This has led more and more users to abandon the most pernicious services for other services that purport to be more mindful of users' data privacy and less dependent on a business model functioning around the monetization of user data. While this is the first step to taking control over one's digital activities, more and more users will figure out that such services are still prone to the traditional architecture of Web 2.0 and therefore cannot offer the privacy and integrity that is expected. In another iteration, this will only drive such people to the up-and-coming personal server solutions.

Moreover, today's freemium setups will increasingly have to be turned into subscription-based models. Because online users have grown weary of the fact that their data is being monetized, giants like Apple are advocating for the ability to retain users' privacy. At the same time, through regulations like the GDPR (General Data Protection Regulation) in Europe, regulators are making it ever more difficult for SaaS companies to monetize data.

As these companies will no longer be able to mine and monetize user data in the same way that they always have, the question is: How will SaaS companies make money? There really is only one option: subscriptions. This means that traditional apps will increasingly come with a cost attached that users will have to consistently pay. This again will alter the situation for another swath of people who have grown up with the belief that software apps are just free.

While this shift will not happen overnight, more and more people along the way will likely reach for alternatives in the form of free personal server solutions. As opposed to traditional applications, the use of services on a personal server will have a one-time cost attached in the beginning but can then be used free of charge for the rest of one's lifetime, because there are no middlemen involved that could charge any subscription fees. This is quite a value proposition, indeed. So, while the personal server revolution will surely be driven by ideology and conviction, there will also be tangible economic incentives that will drive people to adopt personal servers.

Until these intensifying circumstances push a greater herd of regular people into using personal servers, it's privacy-conscious individuals and Bitcoin aficionados that are acting as pioneers in this field. Many of them have been using either Umbrel, Start9's Embassy or both on a daily basis.

Both Umbrel and Start9's Embassy come with a Bitcoin node as well as a Lightning node integrated. This way, popular Bitcoin wallets can easily be paired with either of these nodes. Also, software services like BTCPay Server, Ride The Lightning, ThunderHub or Sphinx Chat can be run on both devices.

When it comes to non-Bitcoin services, there are a few differences. With Umbrel, some of the concrete examples the product has to offer are a private cloud service (Nextcloud), an ad blocker (Pi-hole), a self-hosted photo and video library (PhotoPrism) and instant messaging (Matrix/Element).

Similar services for photos or messaging are offered by Start9, like Photoview or Synapse. At this point in time, Start9's Embassy has some additional services. One of them is Embassy Pages. This allows for the one-click hosting of a static website as an anonymous onion URL on Tor. Furthermore, Start9's Embassy is offering a password manager called Vaultwarden. With it, the master password to all of one's internet logins can be conveniently but secretly stored on one's personal server. Also, there is File Browser, which can be used to create, upload, download, edit, organize and share files of all sorts. These files can also be shared with multiple users. And Syncthing, another of Start9's services, integrates seamlessly with File Browser to turn an Embassy into a private cloud backup solution that automatically synchronizes data across all devices.

Umbrel also has an app called Home Assistant. Through it, one's home can be automated, as Home Assistant connects to all devices and shows them in a unified dashboard for better management. But make no mistake: this is only the beginning. As of now, the connected devices that are being gathered into one dashboard are still driven by third-party cloud computing and subscription costs.

The ultimate vision, as laid out by Start9, is to create solutions for a sovereign smart home installation. This way, sovereign individuals will be able to build out and profit from the convenience a smart home delivers, without having the home report back to Google, Amazon or any other tech giant. As a matter of fact, the dystopian internet-of-things and robot future will not have to be so dystopian after all, thanks to self-hosted personal servers. The homo digitalis will turn into homo superanus: the sovereign individual.

This is a guest post by Pascal Hügli. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc. or Bitcoin Magazine.

Follow this link:
The Rise Of Sovereign Computing With Personal Servers - Bitcoin Magazine

Read More..

The top five benefits of SD-WAN – ITWeb

The COVID-19 crisis has accelerated digital transformation initiatives to offer an improved customer experience and personalised services, all while tackling increasingly sophisticated cyber security threats. More than ever, organisations need flexibility to keep pace with business change, but they are often hindered by a sluggish and rigid network infrastructure.

Jacob Chacko, Regional Director Middle East, Saudi & South Africa at Aruba (a Hewlett Packard Enterprise company), describes the five key benefits of implementing an advanced SD-WAN solution and how it can accelerate business growth:

1. Improve business agility while reducing overall WAN cost

To connect branch offices to the corporate data centre, organisations have traditionally used expensive MPLS lines. As more bandwidth is required to support increasing connectivity demands, MPLS lines become cost-prohibitive, preventing organisations from fully satisfying their business needs. New MPLS circuits can take up to four months to be provisioned, greatly slowing down the ability to spin up a new branch. This hinders business changes, agility and flexibility.

"An SD-WAN leverages less expensive internet and 5G connections by virtualising and bonding network links, creating secure tunnels from the branch offices to the data centre and to the cloud. With SD-WAN, organisations can realise the agility needed by the business while reducing costs," says Warren Gordon, Aruba/HPE Business Unit Manager at Duxbury Networking, local distributors of Aruba/HPE technology.

2. Increase security and seamlessly transition to a SASE architecture

Organisations with an MPLS router-based architecture are not able to easily enforce security policies in branches due to the rigidity and the complexity of their network, especially in hybrid cloud environments. The security perimeter is dissolving as users and devices now connect from anywhere.

Traditional security measures such as VPNs are limited, as a VPN doesn't support enforcement of granular security policies. Indeed, once a user has been identified and authenticated with a VPN connection, they can access critical resources inside the network even if they shouldn't.

SD-WAN is the foundational component to implement a robust SASE architecture. By choosing best-of-breed security capabilities with a tight SD-WAN integration, organisations ensure maximum protection to their employees and other stakeholders accessing the network.

3. Enable a cloud architecture

Organisations are migrating their applications to the cloud and use software as a service (SaaS) cloud-hosted business applications such as Microsoft 365, Salesforce, Box, Dropbox, ServiceNow and many more instead of hosting them in the data centre. However, organisations with traditional router-based WAN architectures continue to backhaul cloud-destined traffic from branch locations to the data centre, mainly for security reasons, severely impacting the performance of cloud applications at the branch.

SD-WAN enables organisations to embrace the flexibility of a cloud architecture while improving cloud application performance by steering traffic directly to the cloud using local internet breakout.

4. Simplify WAN infrastructure

Over the years, organisations have built their network infrastructure as their business grows. Branch offices often ended up with a stack of appliances in their facilities including routers, firewalls, VPN concentrators and WAN optimisation devices. Updating a business or security policy such as moving an application to the cloud or improving quality of service often requires manually reconfiguring multiple devices. Not only does equipment sprawl require advanced networking skills to maintain and manage, but it also results in multiple maintenance contracts to administer.

SD-WAN enables organisations to move to a thin-branch model by reducing the amount of equipment in branch locations, streamlining the network architecture and significantly reducing WAN management overhead.

5. Centrally manage network operations and get visibility

Very often, organisations must manage their network operations on a local basis, resulting in a lack of flexibility. The network deployment of new remote sites can be tedious and can take several weeks to accomplish. Corporate IT departments often don't have complete network visibility to comprehensively monitor transport throughput, packet loss, latency and jitter. An advanced SD-WAN continuously monitors network health and automatically adapts to changing conditions to always deliver optimal application performance.
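As a toy illustration of what "monitoring health and adapting" can mean (the numbers and weights below are invented for illustration, not any vendor's actual algorithm), this sketch scores two links from probe samples of loss, latency and jitter and steers traffic to the better one:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// pathStats is a toy model of the per-link health an SD-WAN controller
// tracks: probe round-trip times and how many probes went unanswered.
type pathStats struct {
	name string
	rtts []time.Duration // successful probe round-trips
	sent int             // probes sent
}

// loss is the fraction of probes that received no reply.
func (p pathStats) loss() float64 {
	return 1 - float64(len(p.rtts))/float64(p.sent)
}

// latency is the mean round-trip time of the successful probes.
func (p pathStats) latency() time.Duration {
	var sum time.Duration
	for _, r := range p.rtts {
		sum += r
	}
	return sum / time.Duration(len(p.rtts))
}

// jitter is the mean absolute difference between consecutive samples,
// roughly in the spirit of RFC 3550's interarrival jitter.
func (p pathStats) jitter() time.Duration {
	var sum float64
	for i := 1; i < len(p.rtts); i++ {
		sum += math.Abs(float64(p.rtts[i] - p.rtts[i-1]))
	}
	return time.Duration(sum / float64(len(p.rtts)-1))
}

// score folds the three metrics into one figure of merit; lower is better.
// The weights are arbitrary stand-ins for a real traffic-steering policy.
func (p pathStats) score() float64 {
	return p.loss()*1000 +
		float64(p.latency().Milliseconds()) +
		2*float64(p.jitter().Milliseconds())
}

func main() {
	mpls := pathStats{"mpls", []time.Duration{20 * time.Millisecond, 21 * time.Millisecond, 20 * time.Millisecond}, 3}
	inet := pathStats{"internet", []time.Duration{12 * time.Millisecond, 30 * time.Millisecond}, 3}

	best := mpls
	if inet.score() < mpls.score() {
		best = inet
	}
	fmt.Printf("steering traffic over %s (loss %.0f%%, latency %v, jitter %v)\n",
		best.name, best.loss()*100, best.latency(), best.jitter())
}
```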

With SD-WAN, new branch offices are set up quickly and easily, and security policy changes can be automatically distributed to hundreds or thousands of branches in minutes while minimising errors. Network administrators can monitor network health through a single pane of glass and dashboards.

For more information, contact Duxbury Networking, (+27) 011 351 9800, info@duxnet.co.za, http://www.duxbury.co.za.

Link:
The top five benefits of SD-WAN - ITWeb

Read More..