
Three Reasons Why You Should Invest in Cloud-based Email – My TechDecisions – TechDecisions

Look no further than the COVID-19 pandemic and the global movement toward remote work to illustrate the importance of cloud computing and the ability to access your messages and critical information from anywhere.

We don't need to tell you the importance of email and communication: most of the developed world uses email to communicate with coworkers, customers, clients, vendors and more to carry out their duties and run a company.

Now comes the question about where to deploy that critical business application.

Cloud-based email services, as opposed to in-house email servers, allow organizations to deploy the critical business communication application at a rapid pace with minimal technical support and no burdensome upfront capital costs.

If your organization invests in an on-premises email server but is quickly growing and needs to add capacity, you'll have to spend to upgrade and manage that hardware.

With a cloud-based email service like Microsoft 365 or Google Workspace, your organization doesn't have to worry about scaling its data center appropriately.

Instead, that work is done by the email provider, affording your organization more time and energy to focus on what really matters: being profitable and serving your customers.

"Cloud email allows an organization to deploy the needed application of messaging, at a rapid pace, with minimal technical support, and no upfront capital expenditure," says Tommy Mullins, senior vice president of sales at IT provider 1Path.

"Since there's no additional hardware you need to introduce, or an on-site server to install, the speed of deploying a cloud-based solution is much faster and less expensive," Mullins says.

The cloud server is managed by the provider, so your internal IT team can take up other projects and leave the maintenance to the vendor.

Most cloud-based email providers like Microsoft and Google have integrated up-to-date cybersecurity features like security filters, virus scanning and phishing protections.

Cloud computing is more advanced than it has ever been: providers now follow a set of cybersecurity standards and deploy security patches and updates routinely, taking the burden off your internal IT staff to keep your organization's email secure.

And, according to Mullins, cloud-based email solutions will help your organization recover from downtime or system compromise faster than on-premises solutions.

"Natively, it includes some key benefits for backup of your email data, as it is already in the cloud," Mullins says.

Cloud-based email platforms have redundancy built in, so users can recover their data stored on the email system much faster than they could on an on-site solution. In some cases, recovery of on-premises solutions isn't possible at all.

Read Next: Tips for Buying Cloud Email Technology

Cloud email can also give your organization the convenience of being able to access email accounts from anywhere, not just the office.

As the world begins to recognize that the remote work brought on by the COVID-19 pandemic isn't going anywhere anytime soon, more organizations will need to invest in cloud-based email to support their remote workers.

To access your cloud-based email account, all you need is an internet-ready device and an internet connection.

That means your employees can catch up on their work over the weekend or outside normal working hours, making them more productive.

Because of that aforementioned redundancy, cloud-based email servers are inherently more reliable than on-premises solutions.

If an employee's endpoint succumbs to a BSOD, they only need to sign in from a loaner device, or perhaps even their own mobile device, and they're right back to work.

These days, any disruption to your business could have a detrimental effect on the entire organization and on the livelihood of your employees and customers.

The cost of doing business is getting more expensive each day, cyber attacks are increasing and more and more workers want the flexibility of working from home as an option.

Migrating your business operations, including email, to the cloud is now the sensible approach.

More:
Three Reasons Why You Should Invest in Cloud-based Email - My TechDecisions - TechDecisions


No way to go but up as cloud solutions shape the future of business – CNN Philippines

Metro Manila (CNN Philippines) Cloud computing has definitely proven to be a worthy investment and a seemingly inevitable way forward for businesses to ensure longevity.

The pandemic has proven how efficient cloud computing is for companies forced to shut down office operations and suddenly migrate to work-from-home arrangements due to the local COVID-19 outbreak. Early migrators found their investment working to their advantage.

Businesses count heavily on proactive and extensive planning for stability, and enterprise decision-makers should know that cloud solutions are a viable and cost-efficient option for digitalization.

"It's quite affordable in the sense of scalability," Globe Business Senior Vice President Peter Maquera told CNN Philippines' The Exchange with Rico Hizon. "Whether you're one user or 10,000 users, you only pay for what you need. In the old world, you would buy in advance a lot of hardware and over-capacitize. It's a huge difference, since [the Cloud is] much more optimal in the way you operate."

What used to require spacious, air-conditioned rooms for costly servers, or even old-school filing cabinets for storage, can be replaced by a virtual space that takes up virtually no weight or real estate for an agile office.

"Things that used to take six months, we can literally do in minutes," Maquera said. "We are much more careful when we deploy, but you really just reduce your time to market quite significantly."

Jared Reimer, Chief Executive Officer and Founder of Cascadeo, also set the record straight about cybersecurity fears on the cloud.

"Fundamentally, cloud computing is inherently more secure than conventional corporate IT," Reimer explained. "The companies that you are building on in the States, that would be Google, Microsoft, and Amazon they are some of the largest, most powerful technology companies on the planet. They have effectively unlimited resources, the best expertise, infrastructure, software."

"You're basically standing on the shoulders of the giants," he added. "If you do cloud incorrectly meaning bad architecture, bad operations, it can cause a disaster just as is the case in a traditional data center... But if you do it right, it's vastly superior."

Cascadeo is Globe Business' partner in cloud consulting and managed services. Reimer took the now-popular video conferencing platform Zoom as an example of cloud technology as an enabler.

During the first quarter of 2020, Zoom's users increased from 10 million in December 2019 to 300 million daily during the pandemic. Because Zoom is hosted on the cloud, people can use it simultaneously, regardless of how many users there are. Without the cloud, a company like Zoom would have to build or buy physical infrastructure and new servers and hire manpower, all of which would have taken years.

Cloud solutions also spell a definitive step toward the future of business: a virtual space where people can easily collaborate, keep documents, strengthen relationships, and streamline operations in a world of surging digital transactions and e-commerce.

"If you are going to build data analytics infrastructure and get good at artificial intelligence and machine learning which are must-haves in the future for most businesses building those on a cloud platform is really important," Reimer said. "When you combine 5G, the Internet of Things, data from anything and anywhere all the time, you get a business advantage that in the future which I think you can't live without."

"The sooner you begin this process, the more likely you are to be competitive in a post-pandemic world..." he added. "These changes are going to stick. Getting good at this stuff now is an imperative for almost every company."

View post:
No way to go but up as cloud solutions shape the future of business - CNN Philippines


The global cyber insurance market is expected to reach a value of $70,671.9 million by 2030, from $5,573.2 million in 2019 – Yahoo Finance


Read the original post:
The global cyber insurance market is expected to reach a value of $70,671.9 million by 2030, from $5,573.2 million in 2019 - Yahoo Finance


How Digital Twins Accelerate the Growth of IoT – IoT For All

The Internet of Things (IoT) represents a digital mesh of internet-connected devices. IoT devices come in various forms and sizes: they could be the smart virtual assistant in your living room, a smart home security system, or the car in your garage. In the larger scheme of things, they take the form of smart cities with traffic signals connected to the internet.

Statistics suggest that every second, 127 new IoT devices are connected to the web, and by 2021 at least 35 billion IoT devices will be installed globally. Such is the rapid growth of IoT. Unknown to many, there is a silent force enabling this rapid ascent: the digital twin.

A digital twin is a virtual replica of a physical device. Digital twins are used by IoT developers, researchers, and scientists to run simulations without having the physical device at hand. In a way, digital twins can be given credit for the mushrooming growth of IoT.

An IoT device exists as a physical object in the real world. A digital twin is the virtual representation of that physical device in a system: it replicates the physical dimensions, capabilities, and functionalities of the IoT device in a virtual environment.

The sensors attached to the IoT device gather data and send it back to its digital twin. IoT developers and researchers use that data to create new schemas and logic and test them against the digital twin. Once vetted, the working code is pushed to the IoT device through over-the-air updates.
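
To make that loop concrete, here is a minimal Python sketch of a digital twin object that mirrors reported device state and stages changes for an over-the-air rollout. The class, field names and the wearable example are illustrative assumptions rather than any particular IoT platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DigitalTwin:
    """Virtual replica mirroring the last reported state of one IoT device."""
    device_id: str
    reported_state: dict = field(default_factory=dict)   # last telemetry received from the device
    desired_state: dict = field(default_factory=dict)    # changes staged for the next OTA update
    last_seen: Optional[datetime] = None

    def ingest_telemetry(self, reading: dict) -> None:
        """Called whenever the physical device's sensors push new data."""
        self.reported_state.update(reading)
        self.last_seen = datetime.now(timezone.utc)

    def propose_update(self, key: str, value) -> None:
        """Developers vet a change against the twin before it goes over the air."""
        self.desired_state[key] = value

    def pending_changes(self) -> dict:
        """Diff between what the device reports and what we want it to run."""
        return {k: v for k, v in self.desired_state.items()
                if self.reported_state.get(k) != v}

# Illustrative use: a wearable reports its sampling rate; a higher rate is vetted on the twin first.
twin = DigitalTwin(device_id="wearable-042")
twin.ingest_telemetry({"heart_rate": 72, "sampling_hz": 1})
twin.propose_update("sampling_hz", 4)
print(twin.pending_changes())   # {'sampling_hz': 4} -> candidate for an over-the-air rollout
```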

Digital twin use cases exist in every industry and corner of IoT. From delicate healthcare to mechanical manufacturing, digital twins can act as a pillar of support for IoT initiatives in every industry. Just as AI-based chatbots have found use cases across industries, so can the digital twin.

IoT in healthcare takes the form of patient wearables, fitness trackers, motion trackers, etc. Digital twins enable developers to test out new functionalities, make devices take accurate readings, and invent new ways to exchange data between the device and the servers. In fact, doctors can also use a patient's digital twin to monitor their vital stats in real time.

For example, a doctor can visualize a patient's vital health signs, like heart rate and blood pressure, using a digital twin. The digital twin eliminates the need to create separate physical records of the patient's data, thus eliminating the possibility of errors. Also, with patient wearables connected to cloud servers, the data can be transmitted to the doctor's system without requiring the patient to be physically present for examination.

Oil and gas equipment, factory equipment, assembly lines: these are sophisticated pieces of utility equipment. Thanks to IoT, these sprawling installations have become data-spewing smart devices. Digital twins enable developers to have an as-designed, as-built, as-operated version of the utilities in a virtual environment, which drastically reduces the possibility of mishaps that can cause downtime.

A classic example of this is managing power grids in an urban environment, or even in a manufacturing plant for that matter. Digital twins can serve as virtual depictions of the actual power grid that help monitor real-time power consumption, manage assets, and predict and repair power outages, all without having to station personnel on site.

How can a digitally connected city become smart? Digital twins help look at the possibilities from multiple angles and suggest future plans. Developers can also toy with innovative ways to make IoT devices work. For example, in the event of a disaster, motion sensors can be used to identify locations with the most activity and risk.

In fact, the student community in the UK and the Northumbrian Water authorities are already working together to create a digital twin of the city. The project, led by post-graduate students from Newcastle University, will create a virtual twin of the city.

Chris Kilsby, professor of hydrology and climate change in Newcastle University's School of Engineering, says, "The digital twin will not only allow the city to react in real time to such freak weather events but also to test an infinite number of potential future emergencies."

From augmenting the ability to run diverse experiments to giving real-time insights, a digital twin helps IoT in a number of ways. Some of them are detailed below:

What will happen if the workflow is tweaked? Will it get more data, will it result in consuming less energy, will it result in better user experience? These are some insights that a digital twin can give in an IoT environment. All this without having to push updates for the physical device working in a production environment.

Experiments of any kind are difficult to begin with. They consume expensive resources and, if they do not work out as planned, can cost even more than budgeted. IoT is a relatively new technology, and there is an abundant need for experimentation, but that experimentation needs to be carried out with judicious use of resources. Digital twins provide the virtual infrastructure to conduct countless experiments even when there are not many physical devices available.

IoT's most popular benefit is that it gives access to a large population of devices at the same time. This, in turn, is also a downside: a minor security flaw can give hackers and unauthorized personnel room to gain access to the IoT network. The risk is magnified when actual physical devices deployed in production are used for experimentation.

Digital twins take away that risk. They make it possible for developers and researchers to safely toy with multiple scenarios before arriving at a final one that is secure and operationally feasible.

Having an exact replica of anything can be slightly troublesome. It gives room for misuse and is also considered dangerous from a security point of view. In other words, it is the classic Dr. Jekyll and Mr. Hyde scenario.

A digital twin is assured to be Dr. Jekyll. It helps IoT professionals conduct diverse experiments without the need for a physical device. It spares a lot of physical resources and also results in cost savings. Additionally, it reduces the risk of mishaps that could happen if updates were pushed straight into live production.

See the original post:
How Digital Twins Accelerate the Growth of IoT - IoT For All


Moving to cloud-native applications and data with Kubernetes and Apache Cassandra – JAXenter

Moving your applications to run in the cloud is attractive to developers. Who doesn't like the idea of being able to easily scale out and have someone else worry about the hardware? However, making use of cloud-native methodologies to design your applications is more than migrating them to a cloud platform or using cloud services.

What does this mean in practice? It involves understanding the role that containers and orchestration tools play in automating your applications, how to use APIs effectively and how other elements like data are affected by dynamic changes to your application infrastructure. More specifically, it means running your application using virtually unlimited compute and storage in the cloud alongside a move to distributed data. Apache Cassandra was built for cloud data and is now becoming the choice of developers for cloud native applications.

SEE ALSO: Practical Implications for Adopting a Multi-Cluster, Multi-Cloud Kubernetes Strategy

Let's look at how we got to today. Over the past twenty years, there have been several big trends in distributed computing. Reliable, scalable networking was the big area of focus in the 2000s, which enabled the linking of multiple locations and services together so that they could function at the velocity and volume the Internet demanded. This was followed in the 2010s by moving compute and storage to the cloud, which used the power of that distributed network to link application infrastructure together on demand with elasticity. That works well for the application itself, but it does not change how we have been managing data.

Managing a distributed database like Cassandra can be complex. To manage transactions across multiple servers, it takes some understanding of the tradeoffs presented in Brewer's theorem, which covers Consistency, Availability and Partition tolerance (CAP): how a database manages data across nodes, the availability of that data, and what happens across different locations, respectively. More importantly, how does the database react when non-ideal conditions are present: the inevitable failures that happen in a system with multiple parts?

Not only does your database have to manage failure cases, it also has to do this while maintaining data consistency, availability and partition tolerance across multiple locations. This is exactly what Cassandra was built to do, and it has proven itself in just those tough conditions. Being rooted in a distributed foundation has given Cassandra the ability to support hybrid cloud, multi-cloud or geographically distributed environments from the beginning. As applications have been built to withstand failures and scalability problems, Cassandra has been the database of choice for developers.

Today, we have more developers using microservices designs to decompose applications into smaller and more manageable units. Each unit fulfills a specific purpose which can scale independently using containers. To manage these container instances, the container orchestration tool Kubernetes has become the de-facto choice.

Kubernetes can handle creating new container instances as needed, which helps scale the amount of compute power available for the application. Similarly, Kubernetes dynamically tracks the health of running containers: if a container goes down, Kubernetes handles restarting it and can schedule its replacement on other hardware. You can rapidly build microservice-powered applications and ensure they run as designed across any Kubernetes platform. The ability for an application to run continuously and avoid downtime, even while things are going wrong, is a powerful attribute.

In order to run Kubernetes together with Apache Cassandra, you will need to use a Cassandra Operator within your Kubernetes cluster. This allows Cassandra nodes to run on top of your existing Kubernetes cluster as a service. Operators provide an interface between Kubernetes and more complex processes like Cassandra to allow them to be managed together. Starting a Cassandra cluster, scaling it and dealing with failures are handled via the Kubernetes Operator in a way that Cassandra understands.

Since Cassandra nodes are stateful services, you will need to provision additional parts of your Kubernetes cluster. The storage requirements of Cassandra can be satisfied by using PersistentVolumes and StatefulSets to guarantee that data volumes are attached to the same running nodes across any restart event. Containers for Cassandra nodes are built with the idea of external volumes in mind, which is a key element in the success of a cluster deployment. When properly configured, a single YAML file can deploy both the application and data tiers in a consistent fashion across a variety of environments.
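
As a rough illustration of the application tier talking to a Cassandra cluster that an operator has deployed inside Kubernetes, the sketch below uses the DataStax Python driver. The in-cluster Service DNS name, credentials and keyspace are assumptions; substitute whatever your Cassandra Operator actually exposes.

```python
# Minimal connection sketch using the DataStax Python driver (pip install cassandra-driver).
# The Service name, keyspace and credentials below are assumptions about how the operator
# exposes the cluster inside Kubernetes; adjust them to match your deployment.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

auth = PlainTextAuthProvider(username="app_user", password="app_password")  # hypothetical
cluster = Cluster(
    contact_points=["cassandra.default.svc.cluster.local"],  # assumed in-cluster Service DNS name
    auth_provider=auth,
)
session = cluster.connect()

# Replication is set per keyspace; SimpleStrategy is enough for a single-datacenter test cluster.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS orders
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("orders")
session.execute("""
    CREATE TABLE IF NOT EXISTS order_events (
        order_id uuid, event_time timestamp, status text,
        PRIMARY KEY (order_id, event_time)
    )
""")
cluster.shutdown()
```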

SEE ALSO: Successful Containerized, Multi-Cloud Strategy: Tips for Avoiding FOMO

As you look at adopting microservices and using application containers, you can take advantage of fully distributed computing to help scale out. However, to really take advantage of this, you need to include distributed data in your planning. While Kubernetes can make it easier to automate and manage cloud-native applications, using Cassandra can complete the picture.

Bringing together Apache Cassandra and Kubernetes can make it easier to scale out applications. Planning this process involves understanding how distributed compute and distributed data can work together, in order to take advantage of what cloud-native applications can really deliver.

Original post:
Moving to cloud-native applications and data with Kubernetes and Apache Cassandra - JAXenter


Bluebeam expands its global Studio data infrastructure – Planning, BIM & Construction Today

Studio is a cloud-enabled collaborative space accessed from within Bluebeam Revu that allows project teams to annotate documents and collaborate in real time from anywhere in the world, whether it be the jobsite, the trailer or, most recently, their homes.

Bluebeam began expanding Studio's global data infrastructure in August 2020, with a new Australian server joining the existing US- and UK-based servers, and plans to bring an additional server online in Germany in Q4 2020.

Local Studio servers allow users to collaborate in real-time through Studio Sessions and manage documents in Studio Projects faster, while storing data locally and meeting local data storage laws and requirements.

"Online collaboration requires much more than just an internet connection," said Bluebeam CTO Jason Bonifay.

"Although the concept of cloud computing isn't new, data security and accessibility are more important than ever.

"The legal and regulatory framework that defines how we connect internationally online has matured significantly over the last few years, and the push for increased performance and security is driving many countries to implement data residency regulations that require new infrastructure.

"The new cloud infrastructure we're putting in place will address the immediate issue of data residency, while also providing a more robust platform as more builders begin to collaborate digitally."

"Digital collaboration has never been more important than it is now, with teams separated not only globally, but locally as well," said Bluebeam CPO Roger Angarita.

"Given the specific needs of builders working in and across different regions, we've applied our customer-centric approach to solve the problem, working directly with customers to gain a deeper understanding of the issues they face.

"By understanding exactly how the increasing regulatory demand for local data residency was affecting their organisations' ability to collaborate digitally, we could partner with them to outline local solutions that addressed their global problems.

"And it shouldn't be a surprise that we were able to improve overall Studio performance locally as a result."

More information about the new Studio server in Germany can be found here.

Access to this new server will be automatic for all Revu 20 or Revu 2019 users in the DACH region.

See original here:
Bluebeam expands its global Studio data infrastructure - Planning, BIM & Construction Today


The journey to a cloud BSS – Ericsson

BSS sits at the center of the telco network: this is where product, order, revenue and customer management take place in order to transform operators' assets into revenue. Born of technology that is now decades old, traditional BSS didn't give vendors much choice. Complex applications spanning multiple services running on bare-metal servers with local databases are still present in many CSPs around the world. Proprietary, tightly coupled integrations were built to serve as the glue that connects all these necessary functions.

In recent years there has been much progress in the way BSS applications interact with the underlying infrastructure, mainly through virtualization. But this was not enough to meet the growing CSP demands for automation, speed, lower operational costs and agility. We are at a critical juncture, especially now that we are entering a new era with 5G monetization challenges and opportunities. A re-architecture to cloud BSS is the answer.

Cloud BSS is the evolved BSS application architecture that leverages all the benefits of the cloud infrastructure, such as deployment automation, automatic scalability, efficient resource usage and built-in high availability. Despite the lack of consensus around what constitutes cloud-native BSS applications, there are several principles that must be followed to simplify BSS architecture and get all the benefits of cloud architectures.

Even though some standards and architectural principles existed, in the past traditional BSS was created and evolved mainly to solve customer needs as quickly as possible as network technology rapidly enabled new products and services. In many cases, new development was done wherever it was easiest and fastest to deliver a new feature in order to get products and services to market. The result is that today many CSPs have big infrastructure footprints tied to their BSS applications, with complex architectures to guarantee high availability, databases distributed across different systems, and large modules handling different capabilities that didn't have much to do with each other. Their software systems became a patchwork of home-grown systems and systems bought from multiple vendors over a long period of time (some more than 30 years ago). This puts a special burden on CSPs in terms of upgrading both these systems and the skills of their staff. CSP businesses have many moving parts that require substantial planning and execution effort, with associated capex and opex burdens.

If we compare the traditional and cloud BSS side-by-side, it is quite impressive to see that even the terms and descriptions are getting simpler:

TRADITIONAL BSS | PRINCIPLE | CLOUD BSS
Coupled hardware and software | Choices | IaaS
Monoliths, highly customized | Decomposition | Mini and microservices
Complex, expensive HA solutions | Resiliency | 99.999% availability
Local databases, distributed states | State optimization | Stateless applications
Complex O&M with long maintenance windows | Orchestration and automation | Continuous Integration / Continuous Delivery (CI/CD)
Proprietary customized integrations | Openness | Open APIs

There are multiple benefits to moving BSS to a cloud architecture. Software application development using a microservices architecture (or miniservices, a more pragmatic approach to microservices for BSS focused on achieving business objectives) is currently the fastest way to develop and deploy software applications while separating functionalities, integrations and databases. It allows the broad adoption of DevOps software engineering principles, which enable optimized and automated deployment and operation and maintenance (O&M). In addition, it becomes much simpler and faster to deploy high availability and resiliency by bringing containers up and down. As a result, CSPs can save capex and opex and shorten the time it takes to launch new services to the market.

The benefits of cloud BSS are quite impressive, but there is no one route forward: the journey depends on the unique combination of components and connective tissue in place today. Most operators are still using many legacy business support systems that were designed for on-premises deployment in CSPs' operational environments. Customizations are everywhere. Implementing a completely new, side-by-side stack while maintaining the legacy one isn't really an option. A gradual, stepwise re-architecture of BSS components to a cloud architecture is needed in order to get the benefits of the cloud while safeguarding existing revenues.

The first step is to identify which BSS applications offer a compelling case to migrate. For example, front-end digital systems are a good place to start because they are usually new and their speed of adoption is unpredictable, so a quick payback is possible from the elasticity that cloud delivers. By starting with this outer layer, CSPs can develop the expertise and competence they need to tackle more challenging migrations at a deeper level and at a more gradual pace, such as OCS (online charging system) and billing, which account for most of their monetization, are heavy on customizations, and have critical real-time and/or availability requirements. In these cases, decomposing modules into mini and microservices enhanced with new cloud capabilities, along with separation of data, is the way to introduce a cloud architecture alongside existing legacy functionality. Finally, to simplify integration between front and back-end systems, new cloud-based layers can sit between front and back-end applications to decouple them, making full use of TM Forum Open APIs to simplify and speed up the integration of multiple vendors' systems. Multiple small steps in tandem will allow the stepwise introduction of CI/CD pipelines across the whole delivery process.

Modernizing BSS is a critical component on the journey to becoming a digital service provider. As digital service providers seek competitive advantage, they need a consistent application foundation to foster innovation and speed. The introduction of 5G, its new use cases and monetization models are already posing challenges to existing traditional BSS. At the same time CSPs must balance the needs of the existing customer base that brings them the revenues they need to invest in the new technologies, new business models and new revenues opportunities.

That's what we have in mind here at Ericsson while evolving our telecom BSS products to the cloud. By evolving our portfolio, we want to leverage CSPs' previous investments with a clear path to 5G monetization. This allows migration at a flexible pace to a cloud-native, microservices-based BSS architecture, integrating legacy applications as needed. Then, slowly but surely, the benefits of cloud BSS will transform the monetization capabilities of the telecom business.

Read what operators are saying about 5G monetization. Download the MIT report

Read more about Telecom BSS

More:
The journey to a cloud BSS - Ericsson


How do we protect the hybrid workplace? – TechHQ

With lockdowns coming back into force in the UK and other countries in Europe, uncertainty still abounds in 2020. But it's safe to say that many organizations won't be returning to the rigidity of the physical office for good, even when, or if, the dust begins to finally settle and social distancing becomes less necessary.

Instead, having experienced the viability of remote working first-hand, along with the benefits of a wider talent pool, reduced need for physical office space, and employee productivity, the world of desk-based work seems to be on course towards an era of hybrid work, where time is split between a shared workplace and the employee's own remote working situation.

Businesses are increasingly reliant on cloud services for collaboration and digital resilience, and while the adoption of technology has been a boon in changing the way we work, data is now more spread out than before.

But this easy access to data has also led to an influx of threats. While cybersecurity may have seemed like another budget to cut as the pandemic hit, it is instead more important than ever. Businesses have moved to adapt, but the rapid adoption of IT solutions has also resulted in technology stacks that feel more akin to patchwork quilts with multiple, fragmented cloud-native applications that are difficult to secure.

Businesses have learned at a rapid pace since the pandemic began, reworking security and device policies, and quickly adapting to the new work practices.

The threat landscape is becoming much more dynamic: a recent McAfee Labs COVID-19 threat report noted that threats targeting cloud services increased by 630%, with attackers using credentials harvested from phishing campaigns to exploit the anonymous, decentralized nature of cloud applications.

Today, malicious players have shifted their focus away from targeting IT infrastructure, which is usually heavily defended. They have revised their strategies, which now revolve around exploiting employees through methods such as phishing.

Here are three things businesses need to do in order to ensure that they are resilient against threats targeting a scattered workforce, and a changing threat landscape:

As hybrid work looks to stay consistent in the long run, and employees work from different locations, data will move between a greater number of devices. This includes office servers, company devices, and even personal IoT devices such as routers, or even public hotspots, which present a security risk, as malicious players have a larger surface area to attack.

There are also possible backchannels where data may end up, such as the use of shadow IT solutions or devices that are not approved by companies and are difficult for IT teams to track and manage, let alone ensure the security of.

While cybersecurity policies for devices and data management may seem like a static set of rules, they are not the end-all. As remote work moves from a stopgap measure to becoming the future of work, IT teams must continuously revisit these policies to ensure that their company can stay safe.

Additionally, an often-overlooked angle that can compromise organizations is personal data protection. In order to speed up user experience during the rapid transformation to WFH, organizations are starting to implement hybrid networks that allow users to access cloud SaaS applications directly, without having to connect to corporate VPNs.

However, many organizations are neglecting to address data protection for the cloud SaaS applications that they are rapidly deploying, creating potential future issues with personal data protection legislation, and other liabilities that may arise from lax management of their workforce's personal data.

Traditional cybersecurity follows the concept of a moat for defense. While any attempts to access data from outside the moat need to be verified, all users inside the moat are assumed to be trusted. However, as cloud technology becomes more and more commonly used, it is difficult for businesses to keep data secure, as it is spread out over multiple areas, and no longer in a single one.

Zero trust security instead assumes the possibility of attackers both inside and outside the moat; thus, any attempt to access data needs to be authenticated. By reducing the amount of data each employee can access and keeping information on a need-to-know basis, the likelihood of phishing attacks succeeding is lowered.

Likewise, segmenting networks into microsegments, each requiring separate authentication to access, will ensure that threats are contained within one segment and cannot reach data in, or otherwise affect, other segments of the network.
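
As a minimal sketch of the "authenticate every request" idea, the snippet below checks a signed token and a need-to-know segment claim before granting access. It uses the PyJWT library; the signing key, audience and the custom "segments" claim are illustrative assumptions, not a reference to any specific zero trust product.

```python
# Minimal "never trust, always verify" check using PyJWT (pip install pyjwt).
# The signing key, audience and the custom "segments" claim are illustrative assumptions.
import jwt

SIGNING_KEY = "replace-with-your-issuer-key"      # placeholder
ALLOWED_SEGMENT = "finance-reporting"             # the microsegment this service belongs to

def authorize_request(token: str) -> bool:
    """Every request is verified; being 'inside the network' grants nothing by itself."""
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience="internal-api")
    except jwt.InvalidTokenError:
        return False
    # Need-to-know: the caller must be explicitly entitled to this segment.
    return ALLOWED_SEGMENT in claims.get("segments", [])
```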

Today, while IT teams work hard to manage business IT infrastructure, they also need to contend with an ever-growing number of threats. The most dangerous threats are not the ones that have been previously detected, and instead are those which are yet to be discovered. As technology becomes integral to business, prevention is quickly becoming more important than a cure and the same applies to cybersecurity.

Businesses should remove barriers, whether organizational or resource-related, that hinder them from taking advantage of the latest advancements in fighting cyber threats; these advancements include technology such as predictive AI and big data, which are capable of analyzing threats by making use of global pools of information to help identify exploits and defend against zero-day attacks.

As IT teams find themselves responsible for an increasing number of endpoints to manage, automation can prevent IT burnout, and prevent attacks from malicious players in a hybrid work environment.

Despite remote work being a common topic for the past few months, as an economy, we are only just at the beginning of a new era of work.

Similar to how digital transformation projects were enacted in the past, businesses need to consistently take stock and see if their solutions fulfill their needs sufficiently. The cybersecurity landscape is constantly shifting, and businesses must likewise shift with it by exploring different options while maintaining their vigilance; only then will we be ready for the next step towards the future of work.

This article was contributed by Jonathan Tan, managing director, Asia, McAfee.

Continue reading here:
How do we protect the hybrid workplace? - TechHQ


Edge computing strategies will determine the next cloud frontier – TechTarget

With the latest improvements in networking technology and code portability, hyperscale cloud providers have extended their hybrid cloud offerings to the edge. But these technological advancements have also given rise to the next batch of competitors.

Recent Forrester research highlights the edge as the next cloud frontier. It explores how a mix of networking and colocation vendors plan to compete in this emerging space. They're pursuing an edge computing strategy that bundles their spare compute capacity and new technologies to offer cloud-like compute services that could give the hyperscalers a run for their money.

"In three to five years, the edge will become the next hybrid cloud target architecture as firms seek to act on their customers' behalf using voice, image and video at scale," Forrester analyst Brian Hopkins writes in a recent report, "Trend: Cloud Strategies Shift Towards the Edge." (The full report is available here, though it's paywalled for members only.)

The public cloud vendors will respond by expanding their converged edge infrastructure and existing partnerships, according to the report. For the most part, the major cloud providers have viewed the edge as an extension of their hybrid cloud architectures -- you run their same cloud-hosted services, on premises and now at the edge. Specifically, AWS and Microsoft extended edge capabilities to packaged hardware and software through offerings like AWS Outposts and Azure Stack, respectively. IT teams can also use newer cloud services like Azure Arc and Google Anthos to centrally and uniformly manage edge computing as part of a broader IT footprint.

Competition is good for users, Hopkins notes. Containerization technologies like Kubernetes and Docker enable code portability, so organizations can deploy the same code to different locations. These advancements open the door to more edge computing possibilities, and cloud and edge vendors alike have embraced containers to capitalize on this market.

The networking vendors and colocation providers are making the case that the best place to deploy at the edge isn't with the cloud providers but with them, since they've been operating at the edge for years. Let's look at how this edge computing strategy has evolved and how it differs from the approach of the big cloud providers.

Hopkins says the vendors that could challenge the big cloud providers cover three broad categories -- content delivery networks (CDN), colocation and telecommunication. To compete with the major cloud providers, these edge vendors have realized they can offer their own compute services.

Over the past five years, CDN vendors -- such as Akamai, Fastly, Limelight Networks and CenturyLink -- have quietly added cloud-like services IT teams can flexibly provision at scale, Hopkins said.

These vendors rely on points of presence, which are clusters of compute capability, prepositioned close to the user to cache content and provide high performance throughput, Hopkins said. These CDN vendors have developed points of presence all over the world to support existing usage such as video streaming. But customers eventually needed additional services for things like disk caching, load balancing and security.

"And then what they discovered is clients who were using all those services also wanted to be able to build code and deploy that code to their points of presence," Hopkins said in an interview. Services such as CenturyLink's CDN Edge Compute and Akami EdgeWorkers work in a similar fashion to serverless and function as a service (FaaS) offerings, which can be used to spin up web apps and power streaming content close to the user.

Colocation vendors have similarly evolved to offer cloud-like compute services. These companies, such as Equinix, started as an alternative place to host your servers so you don't have to worry about wiring, air conditioning and other considerations associated with owning your own data center. And to improve connectivity to the cloud, colocation vendors placed data centers close to where the main cloud data centers reside, Hopkins said.

Colocation companies then dropped data centers in secondary cities and eventually created a web of facilities around the world. Through high connectivity, whether wired or wireless, this web operates as a fabric that enterprises can build services on, Hopkins said. Now, you can deploy code to these vendors' fabric of data centers, in a similar way as with CDN services.

Finally, telcos like AT&T and Verizon, which tried and failed to build a portfolio of cloud services in the 2010s, are utilizing excess edge compute capacity in the servers in their mobile base stations, smart cable boxes and homes, Hopkins said. They can offer that compute as cloud-like services for running software. And when available, 5G should only boost these capabilities.

The big cloud providers haven't been idle, but their edge computing strategy is fundamentally different. "The cloud vendors -- Azure and AWS, specifically -- their edge strategy is primarily to make their cloud services available in more of these localized edge environments," Hopkins said.

Through AWS Outposts and Azure Stack Edge, they can push cloud compute to the edge. This is a converged hardware strategy, extending their cloud -- their services and infrastructure -- to the edge.

They've also partnered with telcos to offer their own edge services like AWS Wavelength and Azure Edge Zones. And in a bid to counter telco's 5G, cloud vendors have looked to launch and connect their cloud to satellites, which could offer the connectivity of 5G, without laying miles and miles of millimeter wave antennas, Hopkins said.

"Now, we're not saying that the big cloud providers are going to completely lose their dominance in the 2020s, but we think that is a disruptive threat [from these other competitors]," Hopkins said.

Google and IBM also have partnerships with the major telcos around 5G, but they've otherwise taken a different approach to competing in the edge computing market -- at least for now. Rather than selling their own appliances and trying to compete directly against network and colocation vendors, they've emphasized containers and flexible deployment models that theoretically work in any environment.

And while Microsoft has emphasized the use of converged hardware with Azure Stack, it could ultimately follow a similar path as Google. Azure Arc extends Microsoft's cloud services and management to edge locations through software. However, only parts of the service are generally available as of publication, so it's too soon to say exactly what impact it could have on the market.

The hyperscalers have firmly established themselves in the enterprise market, but these other providers have some strategic advantages with their global network of locations and experience at the edge. Ultimately, customers will have to evaluate the price and availability of the various services, and decide which edge compute services are the best fit for their specific edge computing strategy.

Follow this link:
Edge computing strategies will determine the next cloud frontier - TechTarget


Evolution of File Sharing and its method – InfotechLead.com

File sharing is a method of transferring data from one computer or device to another, whether to a colleague, a friend, or a team member in a different geographic location. Files can be shared over a local network at the workplace or over the internet. The evolution of file sharing has been impressive, and the impact of technology on it has been enormous.

Due to advancements in technology, file sharing methods have evolved quickly. The one big challenge that comes with file sharing is data protection. Remember, protecting data is as important as being able to transfer your files to other geographic regions or devices in a few steps.

File sharing has evolved from posting documents to a desired location to sending files as email attachments in a single click, and from protecting files in a locker room to storing them on a server behind a user ID and password. Various file sharing methods have come into existence, which can broadly be classified into FTP, P2P, cloud services, portable devices, etc.

And there is no doubt that email has remained a widely preferred tool for sharing smaller files; it is constantly improving its features to meet users' expectations and the challenges from its competitors in the file sharing industry.

File sharing can be performed using multiple methods, but the most common techniques are as follows.

FTP is a client-server protocol established between the client and the server over a computer network. It is one of the most well-known methods of transferring data across networks, using either a command prompt window or a tool with a user interface.

Under FTP, files are stored on the server; a user must log in to the FTP server with a username and password to download a file from it. Using FTP, a client can upload, download, edit, delete, rename, and move files on a server. On some servers, however, users can access data directly without any authentication, which is known as anonymous FTP.

FTP can be used to transfer large and multiple files, including directories. However, FTP is considered a non-secure way of transferring files because the data, along with the username and password, is sent in plain text, which can easily be intercepted by hackers.

Some common examples of FTP clients are FileZilla, CoffeeCup Free FTP, and Core FTP.
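
For illustration, the following sketch uses Python's built-in ftplib to perform the log-in and download flow described above. The host, credentials and file name are placeholders, and because plain FTP sends credentials unencrypted, FTPS (ftplib.FTP_TLS) or SFTP is preferable where the server supports it.

```python
# Download a file over FTP with Python's built-in ftplib.
# Host, credentials and file name are placeholders; note that plain FTP sends them unencrypted,
# so prefer ftplib.FTP_TLS (FTPS) or SFTP where the server supports it.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login(user="demo_user", passwd="demo_password")   # anonymous FTP: ftp.login() with no args
    ftp.cwd("/reports")
    with open("q3-report.pdf", "wb") as local_file:
        ftp.retrbinary("RETR q3-report.pdf", local_file.write)
```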

Peer-to-peer (P2P) is a decentralized way of transferring files from one computer to another without the use of a server. P2P allows users to obtain files such as photos, videos, ebooks, and other media securely. Under P2P file sharing, individual clients connect to a distributed network of peers and transfer files over their own network connections. The basic purpose of P2P is to eliminate the role of a centralized server that stores the data.

The P2P method of file transfer is very helpful if you want to share files quickly, and it also eliminates the cost of a separate server. However, with this method, maintaining backups and performing data recovery is challenging. Some common examples of P2P file sharing are Bluetooth, LimeWire, Skype, Telnet, etc.
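
A bare-bones illustration of the direct, serverless transfer idea is sketched below using raw TCP sockets in Python: one peer listens and writes the incoming bytes to disk, the other connects and streams a file. Addresses, ports and file names are placeholders, and real P2P systems layer peer discovery, chunking and integrity checks on top of this.

```python
# Bare-bones peer-to-peer transfer over a raw TCP socket: one peer sends, the other receives.
# Addresses, ports and file names are placeholders; real P2P networks add discovery and hashing.
import socket

def send_file(path: str, peer_host: str, peer_port: int = 9000) -> None:
    with socket.create_connection((peer_host, peer_port)) as sock, open(path, "rb") as f:
        sock.sendfile(f)                      # streams the file directly to the other peer

def receive_file(save_as: str, listen_port: int = 9000) -> None:
    with socket.create_server(("", listen_port)) as server:
        conn, _addr = server.accept()
        with conn, open(save_as, "wb") as f:
            while chunk := conn.recv(65536):  # read until the sender closes the connection
                f.write(chunk)
```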

Portable drives like memory cards, USB drives, external hard drives, SSDs, etc. are commonly used physical storage devices. These are non-volatile storage drives used to store and transfer thousands of files such as photos, raw images, videos, Microsoft Office files, etc., depending on their capacity.

These physical storage devices are commonly used to back up important files. The data stored on these drives is prone to corruption or deletion in various scenarios, such as accidental formatting, file deletion or loss, malware attacks, bad sectors, etc. If you know how to recover deleted files using reliable methods, you can easily get your files back; otherwise, you can turn to a trustworthy website like https://www.remosoftware.com/ that offers secure data recovery software, available for both Windows and Mac operating systems, including Windows 10 and macOS Catalina.

The data stored on a physical storage device like a pen drive or SSD can be accessed without internet connectivity. However, there are various scenarios under which one might lose the data saved on it; if you happen to break or lose the drive, the data saved on it is permanently lost.

Under this method of file sharing, the owner of the content uploads the data to a server hosted by a third party. With the help of a link, others can directly download the files from the cloud server. The owner of the file can also set permissions on the data uploaded to the cloud, such as view-only, edit, or download.

The cloud service method is best suited to taking backups of important files and folders. Dropbox, Box, OneDrive, Google Drive, iCloud, etc. are commonly used tools to back up and transfer files. The common drawback of cloud file sharing services is their dependency on an internet connection; with no or poor connectivity, accessing files is nearly impossible.
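
As a rough sketch of programmatic cloud file sharing, the snippet below uploads a backup file with the requests library, assuming the provider has issued a pre-signed upload URL out of band. The URL is a placeholder; in practice Dropbox, Google Drive, OneDrive and the rest each offer their own SDKs with the permission controls (view-only, edit, download) mentioned above.

```python
# Upload a backup file to cloud storage via a pre-signed URL using requests (pip install requests).
# The URL is a placeholder issued out of band by your provider; most services also offer SDKs
# with richer permission controls (view-only, edit, download).
import requests

PRESIGNED_UPLOAD_URL = "https://storage.example.com/upload?signature=PLACEHOLDER"

with open("family-photos.zip", "rb") as f:
    response = requests.put(PRESIGNED_UPLOAD_URL, data=f, timeout=60)

response.raise_for_status()
print("Backup uploaded; share the download link with view-only permissions.")
```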

Basically, when someone says file sharing, the first thing that comes to mind is email. This is understandable, since it is one of the most popular methods of file transfer and our daily work lives depend on it. However, file sharing cannot be limited to email: there are various file sharing methods available today that can prove more beneficial than email in certain circumstances. But no one can deny the importance of email as a tool to transfer files in a few simple clicks.

Email works on basic principles, wherein the sender attaches a file to an outgoing email addressed to the recipient. This method of file transfer is convenient when you want to send smaller files. The most popular tools are Gmail, Microsoft Outlook, etc.
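
For completeness, here is a minimal sketch of that attach-and-send flow using Python's standard smtplib and email modules. The SMTP host, port, addresses and credentials are placeholders for your own provider's settings.

```python
# Send a small file as an email attachment with Python's standard library.
# SMTP host, port, addresses and credentials are placeholders for your own provider's settings.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Quarterly report"
msg.set_content("Report attached.")

with open("q3-report.pdf", "rb") as f:
    msg.add_attachment(f.read(), maintype="application", subtype="pdf", filename="q3-report.pdf")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("sender@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)
```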

Baburajan Kizhakedath

The rest is here:
Evolution of File Sharing and its method - InfotechLead.com
