
Cloud services are convenient, but they hurt smart home longevity – Digital Trends

So many of the smart home products people know and love rely on the cloud.

Alexa, Google Assistant, and Siri all rely on the cloud for language processing. Even many smart lights need the cloud to turn on or off. It's one thing when these cloud services provide some additional function (like storage or automation presets), but relying on them for basic functionality? That's a terrible idea.

While it does feel like smart home technology has been around for a while, the industry is still relatively young. Many of the companies that sprang out of the initial wave of interest in the early 2010s were startups, and it's safe to say that quite a few were created only to later fail.

These companies produced products that are now little more than expensive paperweights. And it's not just smaller companies, either. Lowe's started the Iris smart home platform, only for it to shut down in March 2019.

Insteon announced the death of its cloud servers several months back, and iHome has shut down its services too. While iHome didn't have the broadest product range, Insteon was well-known. It was one of the easiest ways to convert an older home into a smart home due to its use of in-wall wiring to send smart signals. While there is some hope left for Insteon customers (their devices will become, at worst, dumb switches), the same can't be said for a lot of other products.

It seems like everything is cloud this and cloud that, but a cloud server isn't an easy thing to maintain. A single server can cost as much as $400 per month to maintain, and a large company will have multiple servers. An average back-end infrastructure might be $15,000 or more per month for even a medium-sized company.

If a product isn't generating enough revenue (or a company is relying on Kickstarter or Indiegogo funding), then cloud servers will be one of the first things to go when a company looks to cut costs. When this happens, it's most often the customer that suffers.

Features get cut, functionality thins out, and products offer a lot less benefit than you initially expected when you bought them.

While there are a lot of benefits to using cloud servers, there are just as many downsides. They're not as secure as on-device processing, for one.

If smart home devices rely on local processing, the entire system improves. I shouldn't need internet access to tell the lights in my home to turn off, especially if I have to dedicate a port on my router to a smart hub. If the hub can't relay a basic on/off command, what's the point of it?

Alexa and Google Assistant devices could translate your commands into actions without the need to relay them through an external server. Natural language processing isn't as difficult as it once was, and new-and-improved chips offer dramatically more power without an increase in size. (This is also a good time to mention that HomeKit should pick up the pace and implement the new M1 chips into Siri processing to close the gap with the competition.)

Perhaps the biggest indication that on-device processing is the way to go is that if a company shuts its doors, the products still retain functionality. Customers won't be cheated out of their money in two or three years just because financing fell through or startup funding ran out.

Insteon, iHome, and Iris are just the tip of the iceberg. There's already enough skepticism about smart home devices as it is. If customers can't feel like they're making a good investment, the industry won't grow, and development will stall. On-device processing can provide customers with some assurance in their investment and continuing functionality, even if the company doesn't produce anything new.

More here:
Cloud services are convenient, but they hurt smart home longevity - Digital Trends

Read More..

TikTok has moved all its US traffic to Oracle’s cloud servers – Protocol

The Code, which was developed by 34 signatories including Meta, Google, TikTok and Microsoft, is essentially a list of disinformation-fighting practices tech companies can employ if they want to demonstrate they're at least trying to mitigate risk and stay in compliance with the Digital Services Act in Europe.

"To be credible, the new Code of Practice will be backed up by the DSA, including for heavy dissuasive sanctions," Thierry Breton, European Commissioner for Internal Market, said in a statement on Thursday. "Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6% of their global turnover."

The list of signatories also includes Twitter, Twitch, Vimeo, Clubhouse, Adobe and a range of civil society, research and fact-checking groups. Notably missing from the list, however, are other tech giants, including Apple and Telegram, which have played a particularly key role in the spread of misinformation around the war in Ukraine. Amazon is also largely missing, with the exception of livestreaming platform Twitch, which the company owns.

Not every company that did sign on has committed to every line item in the code, leading to some ongoing conflict even among signatories. In some cases, that could be because the commitment just isn't relevant to their business. In others, it could mean tech platforms are picking and choosing the commitments that are the easiest for them to pull off.

Still, the list of companies that have signed up and what they have signed up for is significant, and could lead to dramatically more transparency into some of the world's biggest platforms.

Companies now have a six-month window to implement the code. Here are a few of the biggest promises they're making:

The Code would increase pressure on platforms to not only cease carrying disinformation but also avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies.

The companies committed to creating dedicated searchable ad repositories and ensuring that political ads come with a disclaimer and details about how much an ad cost and how long it ran. Meta and Google already offer this, but the Code would encourage even more platforms that want to stay on the right side of the DSA to provide this visibility. (Of course, it could also, alternatively, push some platforms to cut off political ads altogether, as Twitter, LinkedIn and Twitch already do, a move some argue has only made it harder for small campaigns and advocacy groups to get their messages out.)

The Code requires companies to offer researchers automated access to non-personal, anonymized, aggregated or manifestly made public data.

"This is potentially huge," Mathias Vermeulen, director of European data rights agency AWO, said in a tweet. "It could entail the development of a Crowdtangle platform for all these companies."

"In the words of Joe Biden, it's a big fucking deal, and potentially an inflection point in the history of social media," tweeted CrowdTangle founder Brandon Silverman, who has become an outspoken advocate for transparency in the tech industry. "But whether that's true will be determined in all the work that happens from this point on ... and there's a lot."

Under the Code, very large platforms (defined as having more than 45 million average monthly active users in the EU) will have to report every six months on their progress implementing the Code. Other companies will report on an annual basis.

The signatories agreed to work more closely across platforms to compare notes on manipulative user behavior they're encountering. That's a potentially meaningful shift, which would give smaller companies operating in Europe the benefit of visibility into what the largest players with the most resources are seeing and what they're doing about it.

The hardest part of regulating tech is that innovation often outpaces the law itself. The Code establishes a task force, which will review and adapt the commitments in view of technological, societal, market and legislative developments.

More here:
TikTok has moved all its US traffic to Oracle's cloud servers - Protocol

Read More..

Google recasts Anthos with hitch to AWS Outposts – The Register

Google Cloud's Anthos on-prem platform is getting a new home under the search giant's recently announced Google Distributed Cloud (GDC) portfolio, where it will live on as a software-based competitor to AWS Outposts and Microsoft Azure Stack.

Introduced last fall, GDC enables customers to deploy managed servers and software in private datacenters, at communication service provider facilities, or on the edge.

Its latest update sees Google reposition Anthos on-prem, introduced back in 2020, as the bring-your-own-server edition of GDC. Using the service, customers can extend Google Cloud-style management and services to applications running on-prem.

For example, customers can use the service to provision and manage Google Kubernetes Engine (GKE) clusters on virtual machines or bare-metal servers in their own datacenters, and do it all from the Google Cloud Console.

GDC Virtual doesn't appear to introduce any new functionality not already found in Anthos on-prem.

"Customers of Anthos on-premises will continue to enjoy the consistent management and developer experience they have come to know and expect, with no changes to current capabilities, pricing structure, or look and feel across user interfaces," Chen Goldberg, GM and VP of engineering for cloud-native runtimes at Google, said in a statement.

The announcement marks the latest evolution to the Anthos hybrid cloud platform, which launched in early 2019 following Thomas Kurian's appointment as CEO of Google Cloud.

Anthos was initially conceived as a way to extend a consistent management plane to applications running in multiple clouds (GCP, AWS, Azure, etc.) or workloads that customers weren't ready to see leave the corporate datacenter.

The idea was that customers could manage their workloads wherever they were deployed and migrate them to GCP with minimal retooling. The platform quickly picked up additional features, including integration with VMware's vSphere VM management suite, and a migration tool designed to re-wrap virtual machines to run in containers on GKE.

Google's motivations don't appear to have changed much in that regard. The company cites customers with significant investment in their own VM environments or those wishing to migrate their applications to the cloud as the target market for GDC Virtual.

Despite the emphasis on GDC, we're told the platform isn't so much the spiritual successor to Anthos, but rather a consolidation of SKUs powered by the platform aimed at simplifying customer journeys. Or put another way, making Anthos a little less confusing for customers.

Only time will tell whether we'll see Anthos subjected to the same fate as so many of Google's products. I'm looking at you, Google Talk, I mean Hangouts, or wait, is it Chat now?

The rest is here:
Google recasts Anthos with hitch to AWS Outposts - The Register

Read More..

Cloud hosting group sees clear skies of recovery as market begins to normalise | TheBusinessDesk.com – The Business Desk

Liverpool-based cloud hosting provider SysGroup said it is starting to see a normalisation of market conditions following the pandemic, and anticipates further acquisition possibilities, following two recent additions to the group.

The business announced its annual results today, for the year to March 31, 2022, which revealed a fall in revenues, but a sizeable improvement in pre-tax profits.

Sales of £14.75m compared with £18.13m the previous year, but pre-tax profits soared 192% to £600,000, which chief executive Adam Binks said reflected the strength of the group's business model.

Adjusted EBITDA of £2.82m was slightly down on last year's £2.91m figure.

Net cash stood at £2.99m, up from £1.88m a year ago.

During the reporting period the group completed its project to deliver a unified platform of systems, Project Fusion, which has resulted in significant benefits across all operations.

It achieved the successful migration to SysCloud 2.0, the group's multi-tenanted cloud platform, which went fully live in May 2022, delivering higher client performance and group efficiency with greater capacity from less physical space.

A unified sales and marketing hub opened in Manchester, with a number of highly targeted campaigns planned for fiscal year 2023 to drive new customer engagement and continue to build its sales pipeline. Customer approval scores were comfortably ahead of the 97% target throughout the entire year.

Also, its office rationalisation was completed, with a refurbishment programme delivered in Newport and closure of the Telford site, which will generate a small operational saving.

In the first quarter of the current financial year, the business acquired Edinburgh-based Truststream Security Solutions, a fast-growing provider of cyber security solutions which enhances SysGroup's security services and gives the group a presence in Scotland from which to grow, and Independent Network Solutions, trading as Orchard Computers, further enhancing the group's presence in the South West region and complementing its South Wales-based operations.

Both acquisitions are expected to be immediately earnings enhancing.

Adam Binks said: "The adjusted EBITDA performance and strong cash generation in a year when turnover was impacted by COVID highlights the strength of our business model.

"We have invested to drive future growth whilst maintaining prudent financial discipline throughout the business. Operationally, the group is ideally placed to take advantage of conditions as they begin to normalise and we have started to see the early green shoots of such a recovery."

He added: "The acquisitions of Truststream and Orchard added further customers, expertise and geographical reach and demonstrate our ongoing commitment to be consolidators in this highly fragmented market.

"M&A activity in our sector is picking up and we believe there will be further opportunities that we can take advantage of during the course of this year. With a clear strategy for both organic and inorganic growth, the board is confident in the future."

And he revealed: "Towards the end of the last financial year we began to see the green shoots of recovery for new business, with existing clients beginning to engage on projects and an increasing pipeline of opportunities from new potential clients.

"Whilst these are still early days and we must remain cautious, I am confident that we will see improvements to both revenue and EBITDA performance in this new financial year."

Read more from the original source:
Cloud hosting group sees clear skies of recovery as market begins to normalise | TheBusinessDesk.com - The Business Desk

Read More..

StorPool Named ‘Storage Optimization Company of the Year’ At Storage Awards XIX – Business Wire

SOFIA, Bulgaria--(BUSINESS WIRE)--StorPool Storage today announced that it won the Storage Optimization Company of the Year award at the 2022 Storage Awards. The company was previously honored by the Storage Awards with wins for Software Defined Storage (SDS) Vendor of the Year in 2020 and One to Watch Product in 2017.

The "Storries" awards are a premier IT sector event that recognizes the industry's finest products, companies and people. Winners were chosen via online voting by Storage Magazine readers with results presented at a black-tie gala dinner on June 9 in London.

StorPool accelerates the world by storing data more productively and helping businesses streamline their operations. StorPool storage systems are ideal for storing and managing the data of demanding primary workloads: databases, web servers, virtual desktops, real-time analytics solutions, and other mission-critical software. Under the hood, the primary storage platform provides thin-provisioned volumes to the workloads and applications running in on-premise clouds. The native multi-site, multi-cluster and BC/DR capabilities supercharge hybrid- and multi-cloud efforts at scale.

"While awards cannot necessarily tell whether products, services or companies are truly better than others, they serve as a good barometer of quality and success when presented from those within the industry," said Boyan Ivanov, CEO at StorPool Storage. "Being nominated for this year's Storage Awards and being chosen by the readers of Storage Magazine as the Storage Optimization Company of the Year is especially satisfying because of the third-party validation of our ongoing efforts. We are thankful to all who took time to recognize us and look forward to continuing our work delivering our next-generation primary storage platform for demanding workloads."

About StorPool Storage

StorPool Storage is a primary storage platform designed for large-scale cloud infrastructure. It is the easiest way to convert sets of standard servers into primary or secondary storage systems. The StorPool team has experience working with various clients: Managed Service Providers, Hosting Service Providers, Cloud Service Providers, enterprises and SaaS vendors. StorPool Storage comes as software plus a fully managed data storage service that transforms standard hardware into fast, highly available and scalable storage systems.

View original post here:
StorPool Named 'Storage Optimization Company of the Year' At Storage Awards XIX - Business Wire

Read More..

Linux Foundation Announces Open Programmable Infrastructure Project to Drive Open Standards for New Class of Cloud Native Infrastructure – Yahoo…

Data Processing and Infrastructure Processing Units (DPU and IPU) are changing the way enterprises deploy and manage compute resources across their networks; OPI will nurture an ecosystem to enable easy adoption of these innovative technologies.

SAN FRANCISCO, June 21, 2022 /PRNewswire/ -- The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the new Open Programmable Infrastructure (OPI) Project. OPI will foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks based on DPU and IPU technologies. OPI is designed to facilitate the simplification of network, storage and security APIs within applications to enable more portable and performant applications in the cloud and datacenter across DevOps, SecOps and NetOps.

Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat, with a growing number of contributors representing a broad range of leading companies in their fields, ranging from silicon and device manufacturers, ISVs, test and measurement partners, and OEMs to end users.

"When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them," said Mike Dolan, senior vice president of Projects at the Linux Foundation. "DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies."

DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications like 5G, AI/ML, Web3, crypto and more because of their flexibility in managing resources across networking, compute, security and storage domains. Instead of the servers being the infrastructure unit for cloud, edge or the data center, operators can now create pools of disaggregated networking, compute and storage resources supported by DPUs, IPUs, GPUs, and CPUs to meet their customers' application workloads and scaling requirements.


OPI will help establish and nurture an open and creative software ecosystem for DPU and IPU-based infrastructures. As more DPUs and IPUs are offered by various vendors, the OPI Project seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor's hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate. The project intends to:

Define DPU and IPU,

Delineate vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solutions,

Enable the creation of a rich open source application ecosystem,

Integrate with existing open source projects aligned to the same vision such as the Linux kernel, and,

Create new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, host node, and the remote provisioning and orchestration of software

With several working groups already active, the initial technology contributions will come in the form of the Infrastructure Programmer Development Kit (IPDK) that is now an official sub-project of OPI governed by the Linux Foundation. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that runs on a CPU, IPU, DPU or switch. In addition, NVIDIA DOCA, an open source software development framework for NVIDIA's BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms.

For more information visit: https://opiproject.org; start contributing here: https://github.com/opiproject/opi.

Founding Member Comments

Geng Lin, EVP and Chief Technology Officer, F5: "The emerging DPU market is a golden opportunity to reimagine how infrastructure services can be deployed and managed. With collective collaboration across many vendors representing both the silicon devices and the entire DPU software stack, an ecosystem is emerging that will provide a low friction customer experience and achieve portability of services across a DPU enabled infrastructure layer of next generation data centers, private clouds, and edge deployments."

Patricia Kummrow, CVP and GM, Ethernet Products Group, Intel: "Intel is committed to open software to advance collaborative and competitive ecosystems and is pleased to be a founding member of the Open Programmable Infrastructure project, as well as fully supportive of the Infrastructure Processor Development Kit (IPDK) as part of OPI. We look forward to advancing these tools, with the Linux Foundation, fulfilling the need for a programmable infrastructure across cloud, data center, communication and enterprise industries making it easier for developers to accelerate innovation and advance technological developments."

Ram Periakaruppan, VP and General Manager, Network Test and Security Solutions Group, Keysight Technologies: "Programmable infrastructure built with DPUs/IPUs enables significant innovation for networking, security, storage and other areas in disaggregated cloud environments. As a founding member of the Open Programmable Infrastructure Project, we are committed to providing our test and validation expertise as we collaboratively develop and foster a standards-based open ecosystem that furthers infrastructure development, enabling cloud providers to maximize their investment."

Cary Ussery, Vice President, Software and Support, Processors, Marvell: "Data center operators across multiple industry segments are increasingly incorporating DPUs as an integral part of their infrastructure processing to offload complex workloads from general purpose to more robust compute platforms. Marvell strongly believes that software standardization in the ecosystem will significantly contribute to the success of workload acceleration solutions. As a founding member of the OPI Project, Marvell aims to address the need for standardization of software frameworks used in provisioning, lifecycle management, orchestration, virtualization and deployment of workloads."

Kevin Deierling, vice president of Networking at NVIDIA: "The fundamental architecture of data centers is evolving to meet the demands of private and hyperscale clouds and AI, which require extreme performance enabled by DPUs such as the NVIDIA BlueField and open frameworks such as NVIDIA DOCA. These will support OPI to provide BlueField users with extreme acceleration, enabled by common, multi-vendor management and applications. NVIDIA is a founding member of the Linux Foundation's Open Programmable Infrastructure Project to continue pushing the boundaries of networking performance and accelerated data center infrastructure while championing open standards and ecosystems."

Erin Boyd, director of emerging technologies, Red Hat: "As a founding member of the Open Programmable Infrastructure project, Red Hat is committed to helping promote, grow and collaborate on the emergent advantage that new hardware stacks can bring to the cloud-native community, and we believe that the formalization of OPI into the Linux Foundation is an important step toward achieving this in an open and transparent fashion. Establishing an open standards-based ecosystem will enable us to create fully programmable infrastructure, opening up new possibilities for better performance, consumption, and the ability to more easily manage unique hardware at scale."

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. It is the world's leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation's projects are critical to the world's infrastructure including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: http://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. Red Hat is a registered trademark of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Marvell Disclaimer: This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this press release. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.

Media Contact: Carolyn Lehman, The Linux Foundation, clehman@linuxfoundation.org


View original content:https://www.prnewswire.com/news-releases/linux-foundation-announces-open-programmable-infrastructure-project-to-drive-open-standards-for-new-class-of-cloud-native-infrastructure-301571791.html

SOURCE The Linux Foundation

Visit link:
Linux Foundation Announces Open Programmable Infrastructure Project to Drive Open Standards for New Class of Cloud Native Infrastructure - Yahoo...

Read More..

How to install the latest version of Nextcloud on Ubuntu Server 22.04 – TechRepublic

Jack Wallen takes a slightly easier route for the installation of the latest version of the Nextcloud cloud platform.

For those that aren't in the know, Nextcloud is a cloud-based suite of tools that includes things like document and file management, calendar, chat (video and audio), email, forms and contacts. In fact, for those interested, Nextcloud could easily become a drop-in replacement for the likes of either Google Workspace or Microsoft 365. I've been using Nextcloud since its early days and I am confident that just about anyone can benefit from this platform.

I want to show you how to install the latest version of Nextcloud (v24) on Ubuntu Server 22.04. This time around, however, I'm going to make use of their installer script. Although this script doesn't strip away all of the manual installation steps, it does make things slightly easier.

With that said, let's get to the installation.

SEE: Hiring Kit: Cloud Engineer (TechRepublic Premium)

You'll only need two things to make this work: a running instance of Ubuntu Server 22.04 and a user with sudo privileges. That's it. Now let's make like Kate Bush and do some cloudbusting.

The installer script doesn't handle the installation of the dependencies, so we have to take care of that first. To begin, let's install the full LAMP stack. Log into your Ubuntu Server and issue the command:

sudo apt-get install lamp-server^ -y

When that installation completes, take care of the PHP requirements with:

sudo apt-get install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp -y
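
If you want to confirm the required PHP extensions are active before moving on, a quick optional check works well. This is a minimal sketch (module names can differ slightly between releases):

# optional: list loaded PHP modules and filter for the ones Nextcloud needs
php -m | grep -Ei 'gd|curl|mbstring|intl|imagick|xml|zip|bcmath|gmp|mysql'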

Restart Apache with:

sudo systemctl restart apache2

Next, we'll secure the database installation with:

sudo mysql_secure_installation

Make sure the document root is owned by the Apache group with:

sudo chown -R www-data:www-data /var/www/html

Next, we must create a database. Log in to the MySQL console with:

sudo mysql -u root -p

Create the database with:

CREATE DATABASE nextcloud;

Next, create the Nextcloud database user with the command:

CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'PASSWORD';

Where PASSWORD is a unique/strong password.

We now need to give the nextcloud user the necessary permissions with the command:

GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost';

Flush the privileges and exit the console with the two commands:

FLUSH PRIVILEGES;

exit
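
If you would rather script the database setup than type each statement at the MySQL console, the same statements can be fed to MySQL in one pass. This is a minimal sketch reusing the PASSWORD placeholder from above (substitute your own strong password); you will still be prompted for the MySQL root password:

# create the Nextcloud database, user and privileges in one non-interactive pass
sudo mysql -u root -p <<'SQL'
CREATE DATABASE nextcloud;
CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'PASSWORD';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost';
FLUSH PRIVILEGES;
SQL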

Change into the document root with:

cd /var/www/html

Download the installer with the command:

wget https://download.nextcloud.com/server/installer/setup-nextcloud.php
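
Because /var/www/html is now owned by www-data, a plain wget run as your own user may be refused. If that happens, one workaround is to download with sudo and hand the file to the web server user so the browser-based installer can write to the document root (a small sketch using the same path and URL as above):

# download into the document root and give the file to the Apache user (www-data on Ubuntu)
sudo wget -P /var/www/html https://download.nextcloud.com/server/installer/setup-nextcloud.php
sudo chown www-data:www-data /var/www/html/setup-nextcloud.php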

Open a web browser and point it to http://SERVER/setup-nextcloud.php, where SERVER is the IP address or domain of the hosting server. You will be greeted by the first window in the installer wizard. Click Next and you should see a window indicating all dependency checks have passed (Figure A), which allows you to set the document root for the installation.

Figure A

You can either create a new directory to house Nextcloud (by typing a name in the field) or type a . to install it in the web server document root. Do one or the other, then click Next and the script will download and unpack everything necessary. This will take anywhere from two to 10 minutes depending on the speed of your network connection and the power of your server.

Once Nextcloud has been installed, you will be prompted to click Next again, where you'll be delivered to the database setup window (Figure B).

Figure B

First, create an admin user and make sure to select MySQL/MariaDB as the database. You will then fill out the database information using the values created earlier: the database user (nextcloud), the password you set for that user, and the database name (nextcloud).

Leave localhost as is and then click Install. Once the database is taken care of, you'll be asked if you want to install the recommended apps (Figure C).

Figure C

Click Install Recommended Apps, and when that finishes, you'll be presented with the Nextcloud main window (Figure D).

Figure D

You can now further customize your installation by installing more apps or simply start working with your new Nextcloud cloud platform. Congratulations on taking your productivity to the next level.
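
One common follow-up tweak: if you later reach the server by a hostname other than the one used during setup, Nextcloud will reject the request until that name is added to its trusted_domains list. Assuming the web-root install chosen above, a sketch using the bundled occ tool looks like this (cloud.example.com is only a placeholder hostname, and the index may differ on your system):

# run occ as the web server user from the Nextcloud install directory
cd /var/www/html
sudo -u www-data php occ config:system:set trusted_domains 1 --value=cloud.example.com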

Subscribe to TechRepublic's How To Make Tech Work on YouTube for all the latest tech advice for business pros from Jack Wallen.

Read more here:
How to install the latest version of Nextcloud on Ubuntu Server 22.04 - TechRepublic

Read More..

Know the difference and benefits of web apps vs cloud apps – Times of India

To contend in a fast-paced world, businesses are focusing on developing their own apps. As the technology experiences incremental demand, more and more features are becoming available in the market to add to these platforms. According to a survey by Gartner, almost 91% of companies surveyed have developed and implemented their own applications for customers. The qualities of a good application are a rigid infrastructure, a rich user interface, and trouble-free customization. However, businesses sometimes face a dilemma in choosing the right technology type between web and cloud applications. Although they perform similarly, there is a thin line between them, and each comes with its fair share of benefits as well as differences. From a technical perspective, several noteworthy differences can be spotted.

Web apps: Software programs running on the web browser

Web applications are software programs that are designed to run in the web browser and have a website as the interface through which the user interacts. They are built with coding languages such as PHP, HTML, and Java. The architecture of such applications includes client-side scripting and server-side scripting: the client side of the web app is the web browser, whereas the data and the source code sit on the server side. To access the application, a user needs an internet connection and a system with a web browser installed. For web applications to work, three major components are required: a user, a server, and a database. Users access and interact with the app through a web UI (user interface) that connects to the web app server, which fulfils the user's request via the database.

Benefits of Web applications

Over the years, web applications have become more secure and more compatible across platforms. Whether the user runs macOS, Windows, or Linux, these applications work on every operating system and browser available today. Another advantage of web apps is their hassle-free management: because the data is stored on remote web servers, regular maintenance is less of a burden, and updates to web applications happen without interfering with the user. Deployment is also straightforward, as every web browser supports it, which creates a trouble-free working environment. With such applications, the user's live data is secured by the myriad of web app security (WebAppSec) options available on the market, including SSL/TLS certificates and Web Application Firewalls (WAF).

Cloud Apps: Storing and accessing data through the cloud

Cloud apps are software programs that store and access data through a cloud environment instead of a computer hard drive. The distinguishing feature of such apps is that they can be accessed offline with the help of locally cached data and do not depend on the browser to function. The user benefits from storing data in cloud storage, which provides secure backup, and the applications themselves can also be installed on the user's device. Since they run on cloud infrastructure, they consume less local space.

Cloud applications operate in three different service models: private cloud, public cloud, and hybrid cloud. Private clouds are owned by a single organization as infrastructure, whereas public cloud-based apps deliver low-cost data storage and computing facilities and can be used as SaaS or PaaS by multiple companies. Hybrid models, on the other hand, use API technologies to combine private and public cloud environments. By switching to a hybrid model, companies can reduce latency when required and improve flexibility, as portable applications can move from public to private cloud.

Cloud Apps with remarkable performance

Although all cloud apps are web applications, not all web applications are cloud apps. The main attraction of cloud applications is reduced cost: as cloud space is scalable, customers only pay for the capacity they use. Due to the ease of data sharing, synchronization, and editing, cloud applications are preferred by companies for industry collaborations. Their better mobility allows data to be accessed from remote locations, and they can also be integrated with API analytics solutions to capture customer data and gather valuable insights.

Web Apps vs Cloud Apps

Although the functionality of these apps is similar, the technical aspects reveal a few notable differences. Web applications run in the browser and do not need to be downloaded to the user's system, while cloud apps can work in the cloud independently with minimal human intervention. Web apps have no compatibility issues with systems or browsers and can be updated without reinstalling the application; cloud apps do not depend entirely on the browser to operate. In terms of security, cloud apps keep user data secure in the cloud, whereas web applications secure user data on their web app servers. Both kinds of application have scalable structures, within certain limits on where they apply, and can be customized to business needs.

Summing up!

Web apps and cloud apps can be used together to deliver a complete solution for businesses. Custom-built and multi-tenancy apps are the choice of companies catering to a large user base. Both kinds of application come with their fair share of benefits and limitations, so choosing the right one depends on the type of business, the market it operates in, its customer base, its operations, and so on. These applications are readily available in the market for businesses to function optimally and improve their performance, user engagement, and revenues. In a digital era where speed of functionality is a top priority, companies need to weigh user-friendliness against freedom of customization.

Views expressed above are the author's own.


See the rest here:
Know the difference and benefits of web apps vs cloud apps - Times of India

Read More..

Cloud Outsourcing and Disaster Recovery Bundle 2022: Focus on Having Security and Business Continuity Solutions that Utilize Cloud Processing as a Top…

DUBLIN, June 21, 2022 /PRNewswire/ -- The "Cloud Outsourcing and Disaster Recovery Bundle" report has been added to ResearchAndMarkets.com's offering.

The 2022 editions of the Cloud, DR/BC, and Security Templates address those needs directly.

CIOs, CSOs, and IT managers have eagerly implemented Internet-based applications to reap their many benefits, including lower hardware, infrastructure, and energy costs. However, recent cyberattacks have shown how these applications put companies at risk.

Now the focus is on having security and business continuity solutions that utilize cloud processing. The recent pipeline cyberattack has made protection against, and recovery from, such attacks a top priority for many CIOs. They understand that protection and business continuity plans need to be more resilient.

But when disaster strikes, some IT managers find their disaster recovery techniques and hardware configuration have not kept pace with their changing production environment, and they're stuck, along with their recovery times, in the pre-cloud era.

They falsely believe the improved day-to-day resilience of their cloud environment lessens their need for disaster recovery (DR) planning. In fact, the opposite is true: catastrophic hardware failures in cloud environments bring down many more applications than in non-virtualized environments, making DR planning and implementation more critical, not less.

Protecting business means protecting ongoing access to functional applications, servers and data; traditionally that means backing up data. However, backing up the data is only part of the equation. If you can't restore the data, the backup effort is useless. If a business relies on tape backup alone, restoration is easy only for the simplest failure, and only if everything goes perfectly.

If a hard disk fails and all the backup tapes are good and the staff is practiced at doing the repair and restore, then you might be able to simply buy a replacement part and get things up within a couple of hours - though the data will be from last night's backup. If the problem is more complicated and involves a replacement server, for instance, you will probably need a day or two to get new hardware in place before you even begin to recover.

The right way to evaluate the quality of your system and data protection is to evaluate the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). These metrics define how long you think it will take to get back online and how current the data has to be. For example, a four-hour RTO with a one-hour RPO means systems must be usable again within four hours of an outage, with no more than one hour of data lost.

The best way to ensure a fast recovery is to have replacement equipment standing by at an off-site location with the necessary software and configuration to quickly transfer users and data. The best practice includes a remote data center with servers, storage, networking equipment and internet access.

Restoring to this remote data center from backup tapes will likely take too long, assuming the tapes were not affected by the original problem, and still leaves the risk of only recovering old data. Instead, replication software can be used to keep the backup systems constantly updated.

The Cloud Outsourcing and Disaster Recovery Bundle includes, in editable Microsoft Word and PDF formats:

A four-hour RTO and RPO requires:

For more information about this report visit https://www.researchandmarkets.com/r/poj4wx

Media Contact:

Research and Markets: Laura Wood, Senior Manager, [emailprotected]

For E.S.T. Office Hours, call +1-917-300-0470. For U.S./CAN Toll Free, call +1-800-526-8630. For GMT Office Hours, call +353-1-416-8900.

U.S. Fax: 646-607-1907. Fax (outside U.S.): +353-1-481-1716.

SOURCE Research and Markets

Originally posted here:
Cloud Outsourcing and Disaster Recovery Bundle 2022: Focus on Having Security and Business Continuity Solutions that Utilize Cloud Processing as a Top...

Read More..

Alibaba Cloud challenges AWS with its own custom smartNIC – The Register

Alibaba Cloud offered a peek at its latest homegrown silicon, which it calls Cloud Infrastructure Processing Units (CIPU), at its annual summit this week.

The data processing units (DPUs), which we're told have already been deployed in a handful of the Chinese giant's datacenters, offload virtualization functions associated with storage, networking, and security from the host CPU cores onto dedicated hardware.

"The rapid increase in data volume and scale, together with higher demand for lower latency, call for the creation of new tech infrastructure," Alibaba Cloud Intelligence President Jeff Zhang said in a release.

However, the tech is hardly new, even for Alibaba Cloud. SmartNICs, IPUs, DPUs (call them what you want) have been knocking around hyperscale and cloud datacenters for years now. Alibaba's CIPU appears to be an evolution of the company's X-Dragon smartNIC, designed to compete head-on with Amazon Web Services' Nitro cards.

The exact architecture underpinning Alibaba's CIPUs is unclear; however, the cards do use a standard PCIe card form factor.

Alibaba claims the accelerator is capable of reducing network latency to as little as 5 microseconds, while improving compute performance in data-intensive AI and big-data Spark deployments by as much as 30 percent, according to the company's internal benchmarks.

The Register reached out to Alibaba Cloud for more details; we'll let you know if we hear back.

DPUs have become a hot topic over the past few years as we've seen an influx of products from the likes of Intel, Marvell, Fungible, Nvidia, and AMD's Xilinx and Pensando business units, to name just a few.

All of these devices share a common goal: accelerate input/output intensive workloads common in networking, storage, and security applications by offloading them to specialized domain-specific accelerators, freeing CPU resources to run tenant workloads in the process.

In this regard, Alibaba's CIPUs are nothing new, but still notable as being developed in-house as opposed to using third-party smartNICs as Google Cloud Platform and others have done.

Alibaba's CIPU comes just months after the cloud provider detailed an in-house microprocessor.

Developed by the company's T-Head development branch, the Yitian 710 is based on a 5-nanometer manufacturing process, boasts 128 Armv9-compatible CPU cores clocked at 3.2 GHz, and supports the latest DDR5 and PCIe 5.0 standards.

According to Alibaba's internal benchmarks, the chip is 20 percent more powerful and 50 percent more efficient than the current crop of server processors on the market.

Customers can deploy workloads on the chips now on Alibaba's Elastic Compute Service (ECS) g8m instances.

Alibaba's efforts closely mirror those taken by American rival Amazon, which was among the first to pursue custom silicon as a differentiator for its public cloud.

AWS offers a full suite of instances using any combination of its Graviton CPUs, Nitro smartNICs, and Trainium and Inferentia AI processors.

Alibaba is hardly the only cloud vendor now warming up to the idea of custom cloud infrastructure. Microsoft is actively deploying Ampere's Arm-based Altra CPUs in Azure and is rumored to be working on a custom processor of its own.

View post:
Alibaba Cloud challenges AWS with its own custom smartNIC - The Register

Read More..