
How a glitch in the Matrix led to apps potentially exposing encrypted chats – The Register

The Matrix.org Foundation, which oversees the Matrix decentralized communication protocol, said on Monday multiple Matrix clients and libraries contain a vulnerability that can potentially be abused to expose encrypted messages.

The organization said a blunder in an implementation of the Matrix key-sharing scheme, which is designed to allow a user's newly logged-in device to obtain the keys to decrypt old messages, led to the creation of client code that fails to adequately verify device identity. As a result, an attacker could fetch a Matrix client user's keys.

Specifically, a paragraph in the Matrix E2EE (end-to-end encryption) Implementation Guide, which described the desired key-handling routine, was followed in the creation of Matrix's original matrix-js-sdk code. According to the foundation, this SDK "did not sufficiently verify the identity of the device requesting the keyshare," and this oversight made its way into other libraries and Matrix chat clients.

"This is not a protocol or specification bug, but an implementation bug which was then unfortunately replicated in other independent implementations," the foundation insisted.

To exploit this vulnerability, an attacker would need access to the message recipient's account, either via stolen credentials or by compromising the victim's homeserver.

"Thus, the greatest risk is to users who are in encrypted rooms containing malicious servers," the Matrix.org Foundation said in a blog post. "Admins of malicious servers could attempt to impersonate their users' devices in order to spy on messages sent by vulnerable clients in that room."


At the moment, this risk remains theoretical as the foundation said it has not seen this flaw being exploited in the wild. Among the affected clients and libraries are: Element (Web/Desktop/Android, but not iOS), FluffyChat, Nheko, Cinny, and SchildiChat.

A handful of other applications that haven't implemented key sharing are believed not to be vulnerable. These include: Chatty, Hydrogen, mautrix, purple-matrix, and Syphon.

Matrix's key-sharing scheme was added in 2016 as a way to let a Matrix client app ask a message recipient's other devices or the sender's originating device for the keys to decrypt past messages. It also served to provide a way for a user to log into a new client and gain access to chat history when devices with the necessary keys were offline or the user hadn't backed the keys up.

The recommended implementation, as taken in matrix-js-sdk, involved sharing keys automatically only to devices of the same user that have been verified.

"Unfortunately, the implementation did not sufficiently verify the identity of the device requesting the keyshare, meaning that a compromised account can impersonate the device requesting the keys, creating this vulnerability," explained the Matrix.org Foundation.

Patches for affected software have been made available in the relevant repositories. The foundation said it intends to review the key sharing documentation and to revise it to make it clearer how to implement key sharing in a safe way. The group also said it will revisit whether key sharing is really necessary in the Matrix protocol and will focus on making matrix-rust-sdk a portable reference implementation of the Matrix protocol, so other libraries don't have to reimplement logic that has proven to be difficult to do properly.

"This will have the effect of reducing attack surface and simplifying audits for software which chooses to use matrix-rust-sdk," the foundation said.

Read more from the original source:
How a glitch in the Matrix led to apps potentially exposing encrypted chats - The Register


Secure cloud storage: which are the most secure providers? – ITProPortal

The best cloud storage platforms are designed to enable you to store files, data, and other information in a secure environment. Once you've created an account and uploaded your files to your chosen secure cloud storage platform, you will be able to access them from anywhere with an internet connection.

However, some services really don't perform well on the security front. In theory, your files may be encrypted and stored away from hackers and other malicious third parties, but things aren't always as good as they seem. For example, many of the most popular platforms actually control your encryption keys, which essentially means that they can access your data if required.

They may be forced to do this by law enforcement, or hackers may cause a data breach resulting in leaked information. Fortunately, truly secure cloud storage solutions do exist. These generally use zero-knowledge encryption, which means that you have full control over who can view your files.

Versatile administrator controls are usually available, and all data is stored in highly secure, well-maintained data centers. For those looking for the best cloud storage for business, these elements are particularly important for keeping confidential, business-critical data and information protected.

Below, we take a close look at the leading secure cloud storage platforms on the market today. We focus on encryption, data safety, and all-around security practices, alongside other noteworthy features.

1. IDrive: the best secure cloud storage provider
IDrive offers lots of storage for incredibly reasonable prices, end-to-end and at-rest encryption for files, and a private key that can be created to enable zero-knowledge encryption too. It supports unlimited devices and provides extensive file versioning, alongside other top features including data center security measures.

2. pCloud: a security leader in cloud storage
pCloud provides encryption services across the board, not least via its pCloud Encryption add-on, which includes zero-knowledge architecture as well as client-side encryption. For business plans, meanwhile, user and access controls are available, with the encryption add-on only $4.99 a month on top of subscriptions.

Our pick of the best secure cloud storage providers available is IDrive, thanks to its range of excellent secure storage solutions for individuals and businesses. Configurable storage and backups, alongside multi-device compatibility, are top features only enhanced by zero-knowledge and at-rest encryption on all files.

pCloud follows closely behind, its pCloud Encryption paid add-on providing advanced zero-knowledge and client-side encryption for an extra $4.99 a month, while business plans benefit from user and access controls as well as multi-device capabilities.

SpiderOak, meanwhile, is the leader in zero-knowledge storage, with advanced end-to-end encryption complementing its zero-knowledge policy, which means the company and its staff cannot access any of your data or information. We also recommend considering Sync.com, Tresorit, MEGA, NordLocker, and IceDrive when deciding which secure cloud storage solution might be right for you or your business.

Best configurable secure cloud storage

Automatic backups: Yes | Zero-knowledge encryption: Optional | At-rest encryption: Yes | Support: Phone, live chat, email, online form submission

Compatible with various devices

Uses full end-to-end encryption

Configurable backups

Support for unlimited devices

User interface can be a little confusing

Upload and download speeds are a little slow

IDrive is a leading cloud storage provider, and it offers excellent secure storage solutions for businesses of all sizes. It's known for its configurability, which essentially enables you to specify exactly how you would like files to be stored and how backups should work.

In addition, IDrive offers excellent multi-device compatibility. In fact, accounts can be used with unlimited devices, including on mobile and desktop. End-to-end and at-rest encryption is used throughout, and you can create a private key to enable zero-knowledge encryption.

All of IDrive's data centers are located within the USA. They are designed with multiple failsafes, and they employ industry-standard security measures to prevent physical data breaches.

There's a basic free plan with 5GB of storage, but you will need to upgrade to a premium subscription for full access to all tools and features. Prices start from $59.62 a year for a single user license with 5TB of storage.

It is worth noting that IDrive does have a few small flaws. Upload and download speeds can be slower than average. The user interface is also a little confusing, and you may find it hard to navigate at the beginning.

Find out more in our comprehensive IDrive review; across our comparison features pitting IDrive vs Backblaze and IDrive vs OneDrive; and in our interview with IDrive's CEO Raghu Kulkarni, who discusses its most important recent successes, the impact of COVID-19 on the sector, and the future.

Best overall secure cloud storage platform

Automatic backups: Yes | Zero-knowledge encryption: With add-on | At-rest encryption: Yes | Support: Email

Generous 10GB of free storage

Excellent file-sharing tools

Leading client-side encryption practices

Fast and easy to use

Support is a little basic

The free plan has limited tools

Zero-knowledge encryption is a premium add-on

Swiss-based pCloud is one of the world's leading cloud storage providers. It's one of our top choices when it comes to secure cloud storage, and it should be easy to see why.

For starters, pCloud provides all of the expected encryption services across the board. Advanced zero-knowledge and client-side encryption is available through the pCloud Encryption add-on, which costs a relatively small $4.99 a month.

The cheapest business plans also offer excellent value for money. Prices start from just $9.99 per user a month for 1TB of storage per user. Admin team members will benefit from a suite of user and access controls, and there are numerous other tools available to streamline the cloud storage process.

In addition, pCloud offers excellent multi-device capabilities. It's available across all popular mobile and desktop operating systems, and its user interface is streamlined and intuitive across the board.

On the downside, collaboration tools are notably lacking. The free version is a little limited, and customer service is basic, at best. Learn more in our full pCloud review, and in our interview with the company's Ivan Dimitrov, who covers the company's future plans, its growth amid a larger industry, and the impacts of COVID-19.

Excellent zero-knowledge storage solutions

Automatic backups: Yes | Zero-knowledge encryption: Yes | At-rest encryption: Yes | Support: Email, live chat

Support for unlimited devices

Point-in-time recovery tools

Tight security all-around

Tidy desktop app

Quite expensive

Phone support is absent

Limited mobile support

SpiderOak offers advanced secure cloud storage solutions through its SpiderOak One product. This enables you to create full backups of all of your files and other data, storing it in a safe cloud environment.

Like most of the providers on this list, SpiderOak offers advanced end-to-end encryption. It has a strict No Knowledge policy, which means that the company and its employees will never have access to your files or any information associated with them. The point-in-time recovery tools are excellent, enabling you to restore previous versions of files or folders.

In addition, all plans come with support for unlimited devices. Prices are a little high, though, with the base 150GB plan costing $6 a month. There's a 21-day free trial that you can use to test the platform.

Unfortunately, there's very limited mobile support. The desktop client is attractive and beginner-friendly, though, which is nice to see. Read our detailed SpiderOak review to find out more.

Advanced zero-knowledge encryption

Automatic backups: Yes | Zero-knowledge encryption: Yes | At-rest encryption: Yes | Support: Email

Excellent zero-knowledge encryption

Streamlined file sharing

Unlimited storage options

Support is limited to email

Few third-party integrations

Limited collaboration tools

Sync.com is a clear industry leader, and it focuses on data security and privacy across the board. It's known for its advanced end-to-end, zero-knowledge encryption, which basically means that no one will be able to access your data except for you.

The secure sharing tools on offer here stand out as excellent. You can set clear access permissions and control which users have what sort of access. For example, you can set permissions to read-only or read-write as necessary.

On the security front, Sync.com offers advanced two-factor authentication tools. It's compliant with regulations in various parts of the world, including the USA, Canada, and the EU. All data centers are highly secure and protected by tight controls.

Prices start from $5 per user, per month for 1TB of secure storage. Unlimited storage can be accessed for $15 per user a month. There's also a free version that you can use to test the platform.

On the downside, there's only email support. In-app collaboration is limited, and there's only a small number of third-party integrations. To find out more, read our Sync.com review.

Versatile secure cloud storage for businesses of all sizes

Automatic backups: Yes | Zero-knowledge encryption: Yes | At-rest encryption: Yes | Support: Live chat, phone, email

Excellent encryption tools

Encrypted file sharing available

Real-time collaboration tools

Slow upload and download speeds

Expensive compared to some alternatives

Tresorit is known for its advanced cloud storage solutions which are backed by a suite of collaboration and other productivity features. It uses zero-knowledge end-to-end encryption across the board.

The secure file sharing tools also stand out as excellent, particularly for those dealing with sensitive data. All links are encrypted, and you can set clear access permissions to ensure files are only available to selected people.

In addition, Tresorit boasts full compliance with various regulatory bodies. It's fully HIPAA compliant, and its Swiss roots enable it to offer leading privacy features.

The collaboration tools also stand out as excellent. You can work alongside other team members to edit files. All changes will be tracked, and you can mark files that you're working on as "editing" to notify your colleagues.

Some users will be concerned by the slow download and upload speeds, though, which are somewhat lower than we would expect with a leading cloud storage provider. Prices are also a little high, with the cheapest plan starting at $14.50 per user a month for 1TB of storage.

Our full Tresorit review covers the service in more detail.

Mega has a great free forever plan

Automatic backups: Yes | Zero-knowledge encryption: Yes | At-rest encryption: Yes | Support: Email

Very competitively priced

Great free forever plan

Tidy user interface

Built-in team messaging tools

Slow upload and download

Limited support options

Few third-party app integrations

MEGA is our clear choice for those looking for a free secure cloud storage platform. It offers 20GB of storage with its free forever plan, which is backed by a full range of premium tools.

As expected, all files are protected by zero-knowledge end-to-end encryption. Two-factor authentication is available, and you can set clear link permissions to ensure only the right people can access shared files.

On top of this, the MEGA user interface is tidy and packed full of advanced features. The collaboration tools are excellent, enabling you to work alongside your colleagues and other team members. There's a built-in secure chat tool, and there's even a MEGAdrop tool that enables third parties to upload files to your cloud.

The lack of support options will be a little concerning for some, as will the limited number of third-party app integrations. Upload and download speeds are also a little slow. Learn more in our full MEGA review.

Competitively priced secure cloud storage

See the original post here:
Secure cloud storage: which are the most secure providers? - ITProPortal


WhatsApp is finally allowing users to encrypt chat backups uploaded to iCloud and Google Drive – Buzz.ie

WhatsApp has announced that all users will soon be able to store end-to-end encrypted backups of their chat history on Google Drive in Android or Apple iCloud in iOS.

The Facebook-owned company, which boasts two billion users who send over 100 billion messages a day, said that the move makes WhatsApp the first global messaging service at this scale to offer end-to-end encrypted messaging and backups.

WhatsApp's introduction of end-to-end encrypted (E2EE) backups will provide users with the ability to secure their backed-up message history stored in the cloud.

While WhatsApp messages have been encrypted since 2016, the app hasn't offered end-to-end encryption of backups, which rely on iCloud or Google Drive.

This lack of encryption on the backed-up messages created a security loophole exploitable by parties ranging from law enforcement agencies to unintended malicious third parties.

But with the latest update, users will be able to opt in to end-to-end encryption for their backups before those backups hit their cloud storage service.

Users can expect the update in the coming weeks, according to the company.

For years, in order to safeguard the privacy of people's messages, WhatsApp has provided end-to-end encryption by default so messages can be seen only by the sender and recipient, and no one in between.

Now, the platform is planning to give people the option to protect their WhatsApp backups using end-to-end encryption as well.

People can already back up their WhatsApp message history via cloud-based services like Google Drive and iCloud. WhatsApp does not have access to these backups, and they are secured by the individual cloud-based storage services. But, while WhatsApp doesn't have access to those backups, Apple and Google potentially do.

But now, if people choose to enable end-to-end encrypted (E2EE) backups once available, neither WhatsApp nor the backup service provider will be able to access their backup or their backup encryption key.

WhatsApp users will have to opt in to the new feature which will soon begin rolling out.

To enable E2EE backups, WhatsApp developed an entirely new system for encryption key storage that works with both iOS and Android.

With E2EE backups enabled, backups will be encrypted with a unique, randomly generated encryption key. People can choose to secure the key manually or with a user password.

When someone opts for a password, the key is stored in a Backup Key Vault that is built based on a component called a hardware security module (HSM): specialised, secure hardware that can be used to securely store encryption keys.

When the account owner needs access to their backup, they can access it with their encryption key, or they can use their personal password to retrieve their encryption key from the HSM-based Backup Key Vault and decrypt their backup.
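The client-side half of that flow can be sketched in a few lines of Python using the cryptography package. This is illustrative only, not WhatsApp's actual code: it shows a backup being encrypted with a random key, and that key optionally being wrapped with a password-derived key; the HSM-backed vault and its attempt limiting remain server-side concerns not modelled here.

```python
# Illustrative only: random backup key plus optional password wrapping.
# Not WhatsApp code; the real key vault and rate limiting are server-side.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def encrypt_backup(backup_bytes: bytes):
    key = AESGCM.generate_key(bit_length=256)    # unique, randomly generated key
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, backup_bytes, None)
    return key, nonce, ciphertext                # ciphertext goes to cloud storage

def wrap_key_with_password(key: bytes, password: str):
    salt = os.urandom(16)
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password.encode())
    nonce = os.urandom(12)
    wrapped = AESGCM(kek).encrypt(nonce, key, None)
    return salt, nonce, wrapped                  # store these, never the raw key

def unwrap_key(password: str, salt: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password.encode())
    return AESGCM(kek).decrypt(nonce, wrapped, None)
```

The important property the article describes is preserved in this shape: whoever holds only the wrapped key and the ciphertext cannot read the backup without either the raw key or the password.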

The HSM-based Backup Key Vault will be responsible for enforcing password verification attempts and rendering the key permanently inaccessible after a limited number of unsuccessful attempts to access it. These security measures provide protection against brute-force attempts to retrieve the key. WhatsApp will know only that a key exists in the HSM. It will not know the key itself.

The move arrives as Facebook faces scrutiny over its privacy policies for the messaging service. Earlier this week, ProPublica published a report highlighting how contract workers sift through millions of private messages that have been flagged by users as potentially abusive.

The nonprofit investigative organisation subsequently made clear that WhatsApp doesn't break the end-to-end encryption.

Read more:
WhatsApp is finally allowing users to encrypt chat backups uploaded to iCloud and Google Drive - Buzz.ie


Disaster Recovery in the Cloud | TV Tech – TV Technology

Every major broadcaster acknowledges that they have to consider disaster recovery. Apart from meeting audience expectations, if a channel is off air, it cannot transmit commercials. Without commercials, it has no income. Getting the station back on airand broadcasting commercialsis clearly vital.

But, given today's very reliable technology, a large investment in replicating the primary playout center could be seen as wasted money: a lot of hardware (and real estate) that will never go to air.

The question, then, is how to ensure business continuity through a disaster recovery site that gets the channel on air in the shortest possible time, that can be operated from anywhere, and involves the least amount of engineering support to launch. And the answer that broadcasters are increasingly turning to is the cloud.

On Demand
Start-up costs aside, it can be extremely cost-effective to keep a standby system in the cloud: ready to start when you need it; dormant when you do not. For many, cloud-based disaster recovery serves as a good, practical first experience of media in the cloud.

Whichever provider you choose, what you buy from them is access to effectively infinite amounts of processing power and storage space. We have worked extensively with AWS and other cloud suppliers, but AWS also offers some media-specific services (through their acquisition of Elemental) like media processing, transcoding and live streaming.

It is important to bear in mind that moving to the cloud is not an all-or-nothing, irreversible decision. The very nature of the cloud means it is simple to flex the amount of processing you put there, so if you should decide to back away it is simple to do so.

The cloud is an element within the IP transition: you decide when and how to make that transition, and when and how much to use the cloud. For many broadcasters, disaster recovery is an excellent way to try out cloud services.

Keeping it Familiar
With today's software-defined architectures, systems should perform identically whether they are in dedicated computers in the machine room, virtualized in the corporate data center, or in the cloud. Consistent operation is especially important in disaster recovery deployments; if disaster strikes, the last thing you want is for operators to scrabble around trying to make sense of an unfamiliar system.

That does not mean that the primary system and the disaster recovery site must be identical. But with a well-designed cloud solution, you should be able to emulate the same user interfaces. This makes it easy for the operators to switch back and forth between the two different environments.

It also means you can set resilience and availability by channel. You might want your premium channels to switch over to disaster recovery in seconds, for example, while some of your secondary channels can be left for a while. That is a business decision.

Content is Still King
One of the common misconceptions about cloud playout is that synchronizing content between premises and the cloud demands a lot of bandwidth and potentially high costs. This need not be the case.

Faced with the imminent obsolescence of video tape libraries, and wary of the eternal cost of maintaining an LTO data tape library, many broadcasters are looking to archive in the cloud. You load the content once, confident that all the technology migration and maintenance will be carried out, flawlessly, by someone else.

You may have collaborative post-production by hosting content and decision lists in the cloud. Content (programs and commercials) can be delivered direct to the cloud.

Playout, archiving, post and traffic may be managed as separate departments, but if you combine them content is only delivered to the cloud once. It is then available for playout without the high egress costs, and is securely stored at significant cost savings.

Outsourcing Security
Broadcasters have traditionally sought very high availability from the technology delivering premium channels. Five nines used to be regarded as the gold standard: 99.999% uptime. Even that, though, is equivalent to about 5 minutes of dead air a year.

AWS offers its broadcast clients unimagined availability, up to maybe nine nines, effectively zero downtime. And it achieves that without any maintenance effort on your part: no disk replacement, no routine cleaning of air conditioning, no continual updates of operating systems and virus protection.
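Those availability figures are easy to sanity-check: annual downtime is simply (1 - availability) multiplied by the minutes in a year. A quick Python calculation, assuming a 365-day year, reproduces the roughly five minutes quoted above for five nines.

```python
# Downtime per year implied by an availability figure (365-day year assumed).
def downtime_minutes_per_year(availability: float) -> float:
    return (1.0 - availability) * 365 * 24 * 60

print(round(downtime_minutes_per_year(0.99999), 1))      # five nines -> ~5.3 minutes
print(round(downtime_minutes_per_year(0.999999999), 4))  # nine nines -> ~0.0005 minutes
```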

If the disaster is that your building has to be evacuated because of detected cases of a communicable disease, playout operators can work from home with exactly the same user interface and functionality as if they were sitting in the MCR.

If you want hot standby (complete parallel running in the cloud for almost instantaneous failover), then the technology allows it, if you choose to pay for the processing time. Alternatively, pick your own level of cold or warm standby, confident that, even from cold, loading and booting the channel playout instances can be accomplished in just a couple of minutes.

Cyberattacks are becoming an all-too-familiar headline. Other industries have seen crippling incursions and software systems held to ransom. Developing a business continuity strategy that protects from such attacks is paramount.

Again, the cloud is the right solution. A good cloud provider will deliver better data security than you can do yourself. AWS has thousands of staff with the word "security" on their business cards. While no organization can hope to be perfect, a good cloud provider will give you your best shot at complete protection, because that is their business. The alternative is to build your own data security team: an unnecessary overhead and a challenge to develop, recruit and manage.

AWS is even used by the U.S. Intelligence Community, which suggests that it is probably working.

Doing it Live
One comment that is often heard is that you cannot run live channels or live content from the cloud. This is simply not true. At Imagine, we have implemented primary playout systems that feature live content.

In the United States, we recently equipped a SMPTE ST 2110 operations center and cloud-hosted disaster recovery channels for Sinclair's regional sports networks (RSN, Bally Sports Regional Networks). For Sinclair's Tennis Channel, we provided core infrastructure for a large-scale ST 2110 on-premises broadcast center and a cloud-based live production center for pop-up live events.

The biggest requirement for sports television is that live should be absolutely live: no one wants to hear their neighbors cheer and wait to find out why. Minimum latency is also critical for the big money business of sports books.

Sinclair spun up live channels around the 2021 Miami Open tennis tournament in March, and again for the French Open from Roland Garros. All the playout, including the unpredictable live interventions associated with fitting commercial breaks into tennis matches, was hosted in the cloud, with operators sitting wherever was convenient and safe for them.

As consumer preferences move from broadcast to streaming, what happens after the master control switcher becomes ever more complicated in preparing the output for all the different platforms. That level of signal processing is better done in the cloud, especially with transcoding-as-a-service providing high-performance, affordable delivery.

Stepping-Stone to Next-Gen Playout
Disaster recovery is fundamentally a business issue, a strategic decision. Using the cloud can deliver the best total cost of ownership, but it can also be a valuable stepping-stone in the broadcaster's transition to IP connectivity and outsourced hosting.

The technical and operational teams gain experience and confidence in the cloud as a suitable broadcast platform. Routine rehearsals of business continuity mean that operators will learn how similar the performance of the cloud and on-premises systems is, and how the user interface seamlessly switches from one to the other.

This experience gives confidence to move on towards a completely cloud future. Pop-up channels can be created in minutes not months, so it is easy to service sports events or music festivals, while only paying for processor time when you need it.

The cloud is infinitely scalable, so you can add channels or services, support new delivery platforms, and test market 4K and HDR. The direct linkage between the cost of delivery and the revenue won makes for easier business management.

As the legacy playout network reaches life-expiration, broadcasters will know what the cloud can do, and have built up solid information on the costs of operating in the cloud. That knowledge will be invaluable in evaluating proposals for the next generation of playout.

View original post here:
Disaster Recovery in the Cloud | TV Tech - TV Technology


Grafana Labs and Alibaba Cloud Bring Pervasive Visualization and Dashboarding to Asia-Pacific Region – GlobeNewswire

NEW YORK, Sept. 14, 2021 (GLOBE NEWSWIRE) -- Grafana Labs, the company behind the open source project Grafana, the world's most ubiquitous open and composable operational dashboards, today announced a new strategic partnership with Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group. Through the partnership, the companies are introducing Grafana on Alibaba Cloud, a fully managed data visualization service that enables customers to instantly query and visualize operational metrics from various data sources.

"Our goal at Grafana Labs is to make sure Grafana's dashboarding capabilities are available however it makes the most sense for our users, whether that's on their own infrastructure or in a public cloud platform like Alibaba Cloud," said Raj Dutt, Co-founder and CEO at Grafana Labs. "Partnering with public cloud platforms like Alibaba Cloud further cements Grafana as the best-in-class solution for open source visualizations, and gives Alibaba's millions of cloud users instant access to dashboarding capabilities in a way that is uniquely integrated with Alibaba Cloud and easier to get started than self-hosting, while opening the door to a brand new market for Grafana Labs."

"We hope that our cooperation with Grafana Labs can let Alibaba Cloud users worldwide leverage Grafana products more conveniently and efficiently so that they can focus on business efficiency by reducing the need for strenuous operations and maintenance activities," said Jiangwei Jiang, Partner of Alibaba Group, Head of Alibaba Cloud Intelligence Infrastructure Products. "While putting more efforts into open source fields, Alibaba Cloud will continue to cooperate with more open source vendors to launch complete cloud native products and solutions, providing new momentum for enterprise digital innovation."

To learn more about Grafana on Alibaba Cloud, visit https://www.aliyun.com/activity/middleware/grafana

About Grafana Labs
Grafana Labs provides an open and composable monitoring and observability stack built around Grafana, the leading open source technology for dashboards and visualization. There are over 1,500 Grafana Labs customers including Bloomberg, JP Morgan Chase, eBay, PayPal, and Sony, and more than 750,000 active installations of Grafana around the globe. Grafana Labs helps companies manage their observability strategies with full-stack offerings that can be run fully managed with Grafana Cloud, or self-managed with Grafana Enterprise Stack, both featuring extensive enterprise data source plugins, dashboard management, alerting, reporting and security, scalable metrics (Prometheus & Graphite), logs (Grafana Loki) and tracing (Grafana Tempo). Grafana Labs is backed by leading investors Lightspeed Venture Partners, Lead Edge Capital, GIC, Sequoia Capital, and Coatue. Follow Grafana on Twitter at @grafana or visit www.grafana.com.

Media Contact: Dan Jensen, PR for Grafana Labs, dan.jensen@grafana.com

Link:
Grafana Labs and Alibaba Cloud Bring Pervasive Visualization and Dashboarding to Asia-Pacific Region - GlobeNewswire


Setting up and troubleshooting multiple thin client monitors – TechTarget

The use of two or more monitors is the norm for many business workstations, and users expect excellent performance when accessing virtual resources on these monitors, regardless of the endpoint.

Unlike with a traditional desktop, users can't resolve issues with their thin client monitors and display settings on their own locally, because thin clients do not host the OS. Therefore, IT administrators must deliver these resources to the end users' devices and ensure that they have the proper configuration to handle multiple monitors.

There is no universal method to configure thin clients with every virtual desktop management and delivery platform. Still, Citrix Virtual Apps and Desktops (CVAD) provides a reasonable example that IT administrators can use to learn the general process for enabling multiple monitors on a thin client endpoint.

CVAD user sessions, also known as HDX sessions, present resources to users based on a series of bitmaps on the screen, and organizations often rely on thin clients to provide access to these resources at a low hardware and management cost.

When a user launches or modifies a virtual application or desktop within an HDX session, the server or cloud hosting the virtual resources modifies the bitmaps and sends the updated info to the end-user device. Whether a user is accessing these resources via a Windows, Mac, iPad, Chromebook or thin client device, the session processes are the same. However, once multiple monitors are in the mix, IT has to take administrative action to ensure users have a quality experience. Several challenges exist with multiple monitor deployments on thin clients, and IT admins will need to troubleshoot certain issues.

If any use cases require multiple monitors or extremely high resolution, IT will have to carefully review the model specifications of any thin clients the organization deploys. Many environments that benefit from thin clients, such as call centers, hot desks and general business workers, also benefit from dual monitors with about 1920x1080 resolution.

Citrix Virtual Apps and Desktops can support up to eight monitors, but most thin client devices can only support single or dual monitors. In addition, screen resolution support varies from thin client to thin client. Thin client hardware capabilities are typically far below those of full Windows or macOS devices. To address this, thin client devices can run a dedicated OS, such as Igel OS. This stripped-down OS can run on a dedicated physical device or from a UD Pocket USB drive, and it can support eight monitors and 4K resolution. Long gone are the days of Video Graphics Array displays.

Where dual or multiple monitors are in use, it is best for IT admins to deploy monitors based on the same size and resolution. However, this is not always possible, and issues may arise because of these discrepancies.


In many cases, multiple monitors will function properly by default, but issues can still arise. Problems with session presentation on multiple monitors focus on two main areas: the CVAD setup and the thin client configuration. Most often, issues will occur on the local thin client device, but the CVAD configuration may require modification.

The most common issue is that users can only see CVAD sessions on a single monitor. If an individual user reports this issue, it is most likely related to the end-user device. This may be a hardware issue or thin client configuration issue. For example, on the HP t430 thin client device, dual displays require HDMI and DisplayPort connections; a display connected via just the HDMI cannot serve as both connections for dual display.

IT admins must allocate sufficient memory for the CVAD environment, whether based on a server or workstation. This is especially important for 4K monitors. In addition, users running GPU-enabled Virtual Delivery Agents need sufficient GPU capabilities as well.

The graphics policy settings are key items within Citrix policies. In particular, the display memory limit setting may affect screen resolution for multiple monitors. The default setting is 65,536 KB, which may not be sufficient; numerous high-resolution HDX sessions require more memory than this.
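As a rough sanity check on that default, display memory demand can be estimated as width x height x bytes per pixel, summed across monitors. The Python sketch below uses the common assumption of 4 bytes per pixel (32-bit colour); it is not an official Citrix sizing formula, but it shows why 65,536 KB is comfortable for dual 1080p monitors and only marginally sufficient for dual 4K.

```python
# Rough estimate of display memory needs, assuming 32-bit colour (4 bytes/pixel).
# Not an official Citrix formula; intended only to show the order of magnitude.
def display_memory_kb(width: int, height: int, monitors: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel * monitors / 1024

DEFAULT_LIMIT_KB = 65_536

for label, (w, h, n) in {
    "dual 1920x1080": (1920, 1080, 2),
    "dual 3840x2160": (3840, 2160, 2),
}.items():
    need = display_memory_kb(w, h, n)
    print(f"{label}: ~{need:,.0f} KB needed vs {DEFAULT_LIMIT_KB:,} KB default")
```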

HDX session presentation on a single monitor may also point to thin client configuration issues. If the thin client is not in multimonitor mode, a virtual desktop admin can remedy this with a configuration setting. For example, on Igel devices, admins can go to the window setting and ensure that the multimonitor configuration is not set to restrict full-screen sessions onto one monitor.

Where the size or resolution of the monitors differs, it is possible that the alignment of screens does not display correctly. Most users will want their CVAD session to be aligned horizontally along the top, but they may prefer center or bottom alignment. Users can adjust monitor alignment appearance on their own via the on-device display settings.

The local settings need to be adjusted if the user sees one or more screens rotated improperly -- upside down or at a 90-degree angle. Using Igel devices as an example, admins can select screen rotation on the client settings and rotate right or left.

Read this article:
Setting up and troubleshooting multiple thin client monitors - TechTarget


Taking The Long View On Open Computing – The Next Platform

COMMISSIONED Software changes like the weather, and hardware changes like the landscape; each affects the other over geologic timescales to create a new climate. And this largely explains why it has taken so long for open-source computing to spread its tentacles into the hardware world.

With software, all you need is a couple of techies and some space on GitHub and you can change the world with a few hundred million mouse clicks. Hardware, on the other hand, is capital-intensive: you have to buy parts and secure manufacturing for it. While it is easy enough to open up the design specs for any piece of hardware, it is not necessarily easy to get such hardware specs adopted by a large enough group of people for it to be manufactured at scale.

However, from slow beginnings, open computing has been steadily adopted by the hyperscalers and cloud builders. And now it is beginning to trickle down to smaller organizations.

In a world where hardware costs must be curtailed and compute, network, and storage efficiency is ever more important, it is reasonable to expect that sharing hardware designs and pooling manufacturing resources at a scale that makes economic sense but does not require hyperscale will happen. We believe, therefore, that open computing has already brought dramatic changes to the IT sector, and that these will only increase over time.

The term "open computing" is often used interchangeably with the Open Compute Project, created in 2011 by Facebook in conjunction with Intel, Rackspace Hosting and Goldman Sachs. However, OCP is just one of four open-source computing initiatives in the market today. Let's see how they all got started.

More than a decade ago, Facebook, growing by leaps and bounds, bought much of its server and storage equipment from Dell, and then eventually Dell and Facebook started to customize equipment for very specific workloads. By 2009, Facebook decided that the only way to improve IT efficiency was to design its own gear and the datacenters that house it. In January 2014, Microsoft joined the OCP, opening up its Open Cloud Server designs and creating a second track of hardware to complement the Open Rack designs from Facebook. Today, OCP has more than 250 members, with around 5,000 engineers working on projects and another 16,000 participants who are members of the community and who often are implementing its technology.

Six months after Facebook launched the OCP, the Open Data Center Committee, formerly known as Project Scorpio, was created by Baidu, Alibaba, and Tencent to come up with shared rack scale infrastructure designs. ODCC opened up its designs in 2014 in conjunction with Intel. (Baidu and Alibaba, the two hyperscalers based in China, are members of both OCP and ODCC, and significantly buy a lot of their equipment from Inspur.)

In 2013, IBM got together with Google to form what would become the OpenPower Foundation, which sought to spur innovation in Power-based servers through open hardware designs and open systems software that runs on them. (Inspur also generates a significant portion of its server revenues, which are growing by leaps and bounds, from Power-based machinery.)

And finally, there is the Open19 Foundation, started by LinkedIn, Hewlett Packard Enterprise, and VaporIO to create a version of a standard, open rack that is more like the standard 19-inch racks that large enterprises are used to in their datacenters and less like the custom racks that have been used by Facebook, Microsoft, Baidu, Alibaba, and Tencent. Starting this year, and in the wake of LinkedIn being bought by Microsoft, the Linux Foundation is now hosting the Open19 effort, and the datacenter operator Equinix and server and switch vendor Cisco Systems are now on its leadership committee.

Inspur is a member of all four of these open computing projects and is among the largest suppliers of open computing equipment in the world, with about 30 percent of its systems revenue based on open computing designs. Given this, we had a chat with Alan Chang, vice president of technical operations, who prior to joining Inspur, worked at both Wistron and Quanta selling and defining their open computing-inspired rack solutions.

"It depends on how broadly you define open computing, but I would say that somewhere between 25 percent and 30 percent of the server market today could be using at least some open computing standards. It is not in the hundreds of large customers yet, but in the tens, and that is the barrier that Inspur wants to break through with open computing," Chang tells The Next Platform. He points out that two top-tier hyperscalers consumed somewhere around two million servers last year against a total market of 11.9 million machines. "With just those two companies alone, you are at 18.5 percent, which sounds like a very large number, but it is concentrated in just two players."

Tens of customers may not seem like a lot, but the server market changes at a glacial pace and it is very hard to make big changes in hardware. For starters, customers have long-standing buying preferences, and outside of the hyperscalers and cloud builders, many large enterprises and service providers are dependent on the baseboard management controllers, or BMCs, that handle the lights-out, remote management of their server infrastructure. The BMC is a control point just like proprietary BIOS microcode inside of servers was in days gone by.

But this is going to change, says Chang. And with that change those who adopt the system management styles of the hyperscalers and cloud builders will reap the benefits as they force a new kind of management overlay onto systems and in particular, the open computing systems they install.

"The BIOS and the BMC are programmed in a kind of Assembly language, and only the big OEMs have the skills and the experience to write that code," explains Chang. "Even if a company like Facebook wants to help, they don't have the Assembly language skills. But such companies are looking for a different way to create the BIOS and the BMC, something similar to the way they create Java or Python programs, and these companies have a lot of Java and Python programmers. And this is where we see OpenBMC and Redfish all starting to evolve and come together, all based on open-source software, to replace the heart of the hardware."
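Redfish, which Chang mentions, standardises BMC management as a REST/JSON API, which is what makes it approachable to the Python and Java programmers he describes. The sketch below is a hedged example of what that looks like in practice: the BMC address and credentials are placeholders, and it relies only on the DMTF-standard /redfish/v1/Systems collection.

```python
# Minimal Redfish example: list systems managed by a BMC and show power state.
# Host and credentials are placeholders; /redfish/v1/Systems is the standard
# DMTF-defined collection, but real BMCs will also need proper TLS handling.
import requests

BMC = "https://bmc.example.internal"   # placeholder BMC address
AUTH = ("admin", "password")           # placeholder credentials

def list_systems() -> None:
    root = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in root.get("Members", []):
        system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        print(system.get("Id"), system.get("PowerState"))

if __name__ == "__main__":
    list_systems()
```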

To put it bluntly, for open computing to take off, the management of individual servers has to be as good as the BMCs on OEM machinery because in a lot of cases in the enterprise, one server runs one workload, and they are not scaled out with replication or workload balancing to avoid downtime. This is what makes those BMCs so critical in the enterprise. Enterprises have a lot of pet servers running pet applications, not interchangeable massive herds of cattle and scale-out, barn-sized applications. And even large enterprises are, at best, a hybrid of these. But if enough of them gang together their scale, then they can make a virtual hyperscaler.

That, in essence, is what all of the open computing projects have been trying to do: find that next bump of scale. Amazon Web Services and Google do a lot of their own design work and get the machines built by original design manufacturers, or ODMs. Quanta, Foxconn, Wistron, and Inventec are the big ones, of course. Microsoft and Facebook design their own and then donate to OCP and go to the ODMs for manufacturing. Baidu, Alibaba, and Tencent work together through ODCC and co-design with ODMs and OEMs, and increasingly rely on Inspur for design and manufacturing. And frankly, there are only a few companies in the world that can build at the scale and at the cost that the hyperscalers and large cloud builders need.

Trying to scale down is one issue, but so is the speed of opening up designs.

"When Facebook, for instance, has a design for a server or storage, and they open it up, they do it so late," says Chang. "Everyone wants a jump on the latest and greatest technology, and sometimes they might like 80 percent of the design and they need to change 20 percent of it. So in the interest of time, companies who want to adopt that design have to figure out if they can take the engineering change or just suck it up and use the Facebook design. And as often happens in the IT business, if they do the engineering change and go into production, then there is a chance that something better will come out by the time they get their version to market. So what people are looking for OCP and ODCC and the other open computing projects to do is to provide guidance, and get certifications for independent software vendors like SAP, Oracle, Microsoft, and VMware quickly. All of the time gaps have to close in some way."

The next wave of open computing adoption will come from smaller service providers various telcos, Uber, Apple, Dropbox, and companies of this scale. Their infrastructure is getting more expensive, and they are at the place that Facebook was at a decade ago when the social network giant launched the OCP effort to try to break the 19-inch infrastructure rack and so drive up efficiencies, drive down costs, and create a new supply chain.

The growth in open computing has been strong and steady, thanks in large part to the heavy buying by Facebook and Microsoft, but the market is larger than that and getting larger.

As part of the tenth-year anniversary celebration for the OCP, Inspur worked with market researcher Omdia to case the open computing market, and recently put out a report, which you can get a copy of here. Here are the interesting bits from the study. The first is a table showing the hardware spending by open computing project:

The OCP designs accounted for around $11 billion in server spending (presumably at the ODM level) in 2020, while the ODCC designs accounted for around $3 billion. Open19, being just the racks and a fledgling project by comparison, had relatively small revenues. Omdia did not talk about OpenPower iron in its study, but it might have been on a similar scale a few years back, and could be higher if Google or Inspur is doing some custom OpenPower machinery on their own. Rackspace had an OpenPower motherboard in an Open Compute chassis, for instance.

Add it all up over time, and open computing is a bigger and bigger portion of server spending, and it is reasonable to assume that some storage and networking will be based on open computing designs, following a curve much like the one below for server shipments:

Back in 2016, open computing platforms accounted for a mere seven percent of worldwide server shipments. But the projection by Omdia is for open computing platforms to account for 40 percent by 2025, through steady growth after a step-function increase in 2020. As we have always said, recessions don't cause technology transitions, but they do accelerate them. We would not be surprised if those magenta bars get taller faster than the Omdia projection, particularly if service providers start merging and capacity needs skyrocket in a world that stays stuck in a pandemic for an extended amount of time.

Commissioned by Inspur

Original post:
Taking The Long View On Open Computing - The Next Platform


NICE Named Global Market Share Leader for Interaction Analytics with Perfect Satisfaction Scores Across All 24 Categories – Business Wire

HOBOKEN, N.J.--(BUSINESS WIRE)--NICE (Nasdaq: NICE) today announced that it has been recognized by DMG Consulting LLC, a leading independent research and consulting firm, as the Interaction Analytics market share leader, based on a 39.4 percent share of seats. This is the tenth consecutive year in which NICE has been named the market share leader, based on seats, in DMG Consulting's Interaction Analytics Product and Market Report. Notably, NICE received the top customer satisfaction score of 5.0 in DMG's Interaction Analytics Vendor Satisfaction Analysis across all 24 vendor, product capabilities and product effectiveness categories. Click here for a complimentary excerpt from the report.

DMG Consulting's 2021-2022 report credited NICE with a lead of more than 9 percent in the number of seats over the nearest competitor in the Interaction Analytics market. NICE received a perfect 5.0 customer satisfaction score for product capabilities such as omnichannel capabilities (the ability to capture, aggregate and analyze data from all voice and digital interactions), sentiment analysis, artificial intelligence (AI) and machine learning capabilities, as well as real-time capabilities. Also noteworthy, NICE earned a perfect rating for each product effectiveness category, specifically for the solution's ability to identify and mitigate pandemic-related impacts for customers and to support work-at-home/remote agents, amongst others.

Donna Fluss, President, DMG Consulting, said, "A unique and highly beneficial aspect of Interaction Analytics is its ability to address voice and digital channels and put together a comprehensive story of CX. Looking at feedback in each channel has always been important but gaining visibility into what is happening across channels and enterprise business units is essential to understanding the overall customer journey. This is becoming even more important as activity in digital channels picks up momentum as a result of the digital transformation."

"Organizations globally are racing to engage in digital conversations as a way to deliver transformative experiences," said Barry Cooper, President, NICE Workforce & Customer Experience Group. "Having received perfect 5.0 customer satisfaction ratings in DMG's 2021 Interaction Analytics industry research, we consider it a clear vote of confidence from our customers. We believe it is a direct result of our focus on cutting-edge innovation, such as Enlighten AI, which allows organizations to meet customers at their digital doorstep."

Enlighten AI is a set of purpose-built AI technologies that are embedded across the NICE portfolio of solutions, making every CX application and process smarter in real time. It analyzes every interaction to identify the successful behaviors that drive extraordinary experiences. NICE Enlighten AI, developed based on over 30 years of industry expertise and using one of the largest syndicated interaction datasets, offers an array of out-of-the-box self-learning AI solutions including Enlighten AI for Customer Satisfaction, Enlighten AI for Complaint Management and Enlighten AI Routing.

DMG Consulting LLC's annual "Interaction Analytics Product and Market Report" comprehensively analyzes the interaction analytics market, competitive landscape, product innovation, and market, business and servicing trends and challenges. The analysis provides an in-depth review of interaction analytics solutions, including how they enhance other applications and are an essential tool for capturing the voice of the customer, understanding the customer journey and measuring and improving the customer experience.

About NICE
With NICE (Nasdaq: NICE), it's never been easier for organizations of all sizes around the globe to create extraordinary customer experiences while meeting key business metrics. Featuring the world's #1 cloud native customer experience platform, CXone, NICE is a worldwide leader in AI-powered contact center software. Over 25,000 organizations in more than 150 countries, including over 85 of the Fortune 100 companies, partner with NICE to transform - and elevate - every customer interaction. http://www.nice.com.

Trademark Note: NICE and the NICE logo are trademarks or registered trademarks of NICE Ltd. All other marks are trademarks of their respective owners. For a full list of NICE's marks, please see: http://www.nice.com/nice-trademarks.

Forward-Looking Statements
This press release contains forward-looking statements as that term is defined in the Private Securities Litigation Reform Act of 1995. Such forward-looking statements, including the statements by Mr. Cooper, are based on the current beliefs, expectations and assumptions of the management of NICE Ltd. (the "Company"). In some cases, such forward-looking statements can be identified by terms such as "believe", "expect", "seek", "may", "will", "intend", "should", "project", "anticipate", "plan", "estimate", or similar words. Forward-looking statements are subject to a number of risks and uncertainties that could cause the actual results or performance of the Company to differ materially from those described herein, including but not limited to the impact of changes in economic and business conditions, including as a result of the COVID-19 pandemic; competition; successful execution of the Company's growth strategy; success and growth of the Company's cloud Software-as-a-Service business; changes in technology and market requirements; decline in demand for the Company's products; inability to timely develop and introduce new technologies, products and applications; difficulties or delays in absorbing and integrating acquired operations, products, technologies and personnel; loss of market share; an inability to maintain certain marketing and distribution arrangements; the Company's dependency on third-party cloud computing platform providers, hosting facilities and service partners; cyber security attacks or other security breaches against the Company; the effect of newly enacted or modified laws, regulation or standards on the Company and our products; and various other factors and uncertainties discussed in our filings with the U.S. Securities and Exchange Commission (the "SEC"). For a more detailed description of the risk factors and uncertainties affecting the Company, refer to the Company's reports filed from time to time with the SEC, including the Company's Annual Report on Form 20-F. The forward-looking statements contained in this press release are made as of the date of this press release, and the Company undertakes no obligation to update or revise them, except as required by law.

Read more:
NICE Named Global Market Share Leader for Interaction Analytics with Perfect Satisfaction Scores Across All 24 Categories - Business Wire


Artificial Intelligence: A New Portal to Promote Global Cooperation Launched with 8 International Organisations – Council of Europe

On 14 September 2021, eight international organisations joined forces to launch a new portal promoting global co-operation on artificial intelligence (AI). The portal is a one-stop shop for data, research findings and good practices in AI policy.

The objective of the portal is to help policymakers and the wider public navigate the international AI governance landscape. It provides access to the necessary tools and information, such as projects, research and reports to promote trustworthy and responsible AI that is aligned with human rights at global, national and local level.

Key partners in this joint effort include the Council of Europe, the European Commission, the European Union Agency for Fundamental Rights, the Inter-American Development Bank, the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the World Bank Group.

Access the website: https://globalpolicy.ai


Continue reading here:
Artificial Intelligence A New Portal to Promote Global Cooperation Launched with 8 International Organisations - Council of Europe


Yan Cui and Team Are Innovating Artificial Intelligence Approach to Address Biomedical Data Inequality – UTHSC News

Yan Cui, PhD, associate professor in the UTHSC Department of Genetics, Genomics, and Informatics, recently received a $1.7 million grant from the National Cancer Institute for a study titled "Algorithm-based prevention and reduction of cancer health disparity arising from data inequality."

Dr. Cui's project aims to prevent and reduce health disparities caused by ethnically biased data in cancer-related genomic and clinical omics studies. His objective is to establish a new machine learning paradigm for use with multiethnic clinical omics data.

For nearly 20 years, scientists have been using genome-wide association studies, known as GWAS, and clinical omics studies to detect the molecular basis of diseases. But statistics show that over 80 percent of the data used in GWAS come from people of predominantly European descent.

As artificial intelligence (AI) is increasingly applied to biomedical research and clinical decisions, this European-centric skew is set to exacerbate long-standing disparities in health. With less than 20% of genomic samples coming from people of non-European descent, underrepresented populations are at a severe disadvantage in data-driven, algorithm-based biomedical research and health care.

"Biomedical data-disadvantage has become a significant health risk for the vast majority of the world's population," Dr. Cui said. "AI-powered precision medicine is set to be less precise for the data-disadvantaged populations, including all the ethnic minority groups in the U.S. We are committed to addressing the health disparities arising from data inequality."

The project is innovative in the type of machine learning technique it will use. Multiethnic machine learning normally uses mixture learning and independent learning schemes. Dr. Cui's project will instead be using a transfer learning process.

Transfer learning works much the same way as human learning. When faced with a new task, instead of starting the learning process from scratch, the algorithm leverages patterns learned from solving a related task. This approach greatly reduces the resources and amount of data required for developing new models.
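A toy illustration of that idea, using scikit-learn on synthetic data (none of this is Dr. Cui's code or data): a classifier is first fitted on a large "source" cohort and then continues training on a much smaller "target" cohort, rather than learning the target task from scratch.

```python
# Toy transfer-learning sketch with synthetic data (not the study's method or data):
# pre-train on a large source cohort, then continue training on a small target cohort.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Large "source" cohort and a small, slightly shifted "target" cohort.
X_source = rng.normal(size=(5000, 20))
y_source = (X_source[:, 0] + X_source[:, 1] > 0).astype(int)
X_target = rng.normal(loc=0.3, size=(200, 20))
y_target = (X_target[:, 0] + X_target[:, 1] > 0).astype(int)

# loss name is "log" on older scikit-learn versions (pre-1.1).
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_source, y_source, classes=np.array([0, 1]))  # pre-training
model.partial_fit(X_target, y_target)                            # fine-tuning on target cohort

print("target accuracy:", model.score(X_target, y_target))
```

The point of the sketch is only the training order: knowledge from the data-rich cohort is carried into the model before the data-poor cohort is seen, which is the general mechanism the article describes.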

Using large-scale cancer clinical omics data and genotype-phenotype data, Dr. Cui's lab will examine how and to what extent transfer learning improves machine learning on data-disadvantaged cohorts. In tandem with this, the team aims to create an open resource system for unbiased multiethnic machine learning to prevent or reduce new health disparities.

Neil Hayes, MD, MPH, assistant dean for Cancer Research in the UTHSC College of Medicine and director of the UTHSC Center for Cancer Research, and Athena Starlard-Davenport, PhD, associate professor in the Department of Genetics, Genomics, and Informatics, are co-investigators on the grant. Yan Gao, PhD, a postdoctoral scholar working with Dr. Cui, is a machine learning expert on the team. A pilot study for this project, funded by the UT Center for Integrative and Translational Genomics and UTHSC Office of Research, has been published in Nature Communications.


See the article here:
Yan Cui and Team Are Innovating Artificial Intelligence Approach to Address Biomedical Data Inequality - UTHSC News
