
Mission Impossible: 7 Countries Tell Facebook To Break Encryption – Forbes

The governments want to stop encrypted messaging

This article has been updated with a comment from Facebook.

The governments of seven countries are calling on Facebook and other tech firms to do the technically impossible - to weaken encryption by giving law enforcement access to messages, whilst not reducing user safety.

The governments of the U.S., U.K., Australia, New Zealand, Canada, India and Japan have issued the joint statement which pleads with Facebook specifically, as well as other tech firms, to drop end-to-end encryption policies which erode the public's safety online.

The governments once again raise the issue of child abusers and terrorists using encrypted services such as WhatsApp to send messages without fear of content being intercepted.

"We owe it to all of our citizens, especially our children, to ensure their safety by continuing to unmask sexual predators and terrorists operating online," the U.K.'s home secretary, Priti Patel, said in a statement.

"It is essential that tech companies do not turn a blind eye to this problem and hamper their, as well as law enforcement's, ability to tackle these sickening criminal acts. Our countries urge all tech companies to work with us to find a solution that puts the public's safety first."

Once again, the politicians seem unable to grasp one of the fundamental concepts of end-to-end encryption - that putting back doors into the encryption algorithms that allow security services to intercept messages effectively breaks the encryption.

According to the U.K. government's statement, the seven signatories of the international statement have made it clear that when end-to-end encryption is applied with no access to content, it severely undermines the ability of companies to take action against illegal activity on their own platforms.

Yet, end-to-end encryption with the ability for third parties to intercept content is not end-to-end encryption in any meaningful sense. Worse, introducing back doors to allow security services to access content would compromise the entire encryption system.

Nevertheless, the international intervention calls on tech companies to ensure there is no reduction in user safety when designing their encrypted services; to enable law enforcement access to content where it is necessary and proportionate; and work with governments to facilitate this.

As has been pointed out to the governments many times before, what they are asking for is technically impossible. An open letter sent to several of the signatory countries by a coalition of international civil rights groups in 2019 made this very point.

"Proponents of exceptional access have argued that it is possible to build backdoors into encrypted consumer products that somehow let good actors gain surreptitious access to encrypted communications, while simultaneously stopping bad actors from intercepting those same communications," the letter stated. "This technology does not exist."

"To the contrary, technology companies could not give governments backdoor access to encrypted communications without also weakening the security of critical infrastructure, and the devices and services upon which the national security and intelligence communities themselves rely."

"Critical infrastructure runs on consumer products and services, and is protected by the same encryption that is used in the consumer products that proponents of backdoor access seek to undermine," the letter adds.

In response to the statement from the seven nations, a Facebook spokesperson said: "We've long argued that end-to-end encryption is necessary to protect people's most private information. In all of these countries, people prefer end-to-end encrypted messaging on various apps because it keeps their messages safe from hackers, criminals, and foreign interference. Facebook has led the industry in developing new ways to prevent, detect, and respond to abuse while maintaining high security and we will continue to do so."

Read more here:
Mission Impossible: 7 Countries Tell Facebook To Break Encryption - Forbes

Read More..

Dutton pushes against encryption yet again but oversight at home is slow – ZDNet

(Image: APH)

"We, the undersigned, support strong encryption, which plays a crucial role in protecting personal data, privacy, intellectual property, trade secrets and cybersecurity," wrote a bunch of nations on the weekend -- the Five Eyes, India, and Japan.

As a statement of intent, it's right up there with "Your privacy is very important to us", "Of course I love you", and "I'm not a racist but...".

At one level, there's not a lot new in this latest "International statement: End-to-end encryption and public safety".

We like encryption, it says, but you can't have it because bad people can use it too.

"Encryption is an existential anchor of trust in the digital world and we do not support counter-productive and dangerous approaches that would materially weaken or limit security systems," the statement said.

"Particular implementations of encryption technology, however, pose significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children."

The obviously important law enforcement task of tackling child sexual abuse framed the rest of the statement's two substantive pages too.

End-to-end encryption should not come at the expense of children's safety, it said. There was only a passing mention of "terrorists and other criminals".

This statement, like all those that have come before it, tries, but of course fails, to square the circle: a system either is end-to-end encrypted, or it isn't.

According to renowned Australian cryptographer Dr Vanessa Teague, the main characteristic of this approach is "deceitfulness".

She focuses on another phrase in the statement, where it complains about "end-to-end encryption [which] is implemented in a way that precludes all access to content".

"That's what end-to-end encryption is, gentlemen," Teague tweeted.

"So either say you're trying to break it, or say you support it, but not both at once."

What's interesting about this latest statement, though, is the way it shifts the blame further onto the tech companies for implementing encryption systems that create "severe risks to public safety".

Those risks are "severely undermining a company's own ability to identify and respond to violations of their terms of service", and "precluding the ability of law enforcement agencies to access content in limited circumstances where necessary and proportionate to investigate serious crimes and protect national security, where there is lawful authority to do so".

Note the way each party's actions are described.

Law enforcement's actions are reasonable, necessary, and proportionate. Their authorisation is "lawfully issued" in "limited circumstances", and "subject to strong safeguards and oversight". They're "safeguarding the vulnerable".

Tech companies are challenged to negotiate these issues "in a way that is substantive and genuinely influences design decisions", implying that right now they're not.

"We challenge the assertion that public safety cannot be protected without compromising privacy or cybersecurity," the statement said.

The many solid arguments put forward explaining why introducing a back door for some actors introduces it for all? No, to the governments they're mere assertions.

"We strongly believe that approaches protecting each of these important values are possible and strive to work with industry to collaborate on mutually agreeable solutions."

This too is an assertion, of course, but the word "belief" sounds so much better, doesn't it?

As your correspondent has previously noted, however, the fact that encryption is either end-to-end or not may be a distraction. There are ways to access communications without breaking encryption.

One obvious way is to access the endpoint devices instead. Messages can be intercepted before they're encrypted and sent, or after they've been received and decrypted.

In Australia, for example, the controversial Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (TOLA Act) can require communications providers to install software that a law enforcement or intelligence agency has given them.

Providers can also be made to substitute a service they provide with a different service. That could well include redirecting target devices to a different update server, so they receive the spyware as a legitimate vendor update.

Doubtless there are other possibilities, all of which avoid the "war on mathematics" framing that some of the legislation's opponents have been relying on.

While Australia's Minister for Home Affairs Peter Dutton busies himself with signing onto yet another anti-encryption manifesto, progress on the oversight of his existing laws has been slow.

The review of the mandatory data retention regime, due to be completed by April 13 this year, has yet to be seen.

This is despite the Parliamentary Joint Committee on Intelligence and Security having set itself a submissions deadline of 1 July 2019, and holding its last public hearing on 28 February 2020.

The all-important review of the TOLA Act was due to report by September 30. Parliament has been in session since then, but the report didn't appear.

A charitable explanation would be that the government was busy preparing the Budget. With only three parliament sitting days, and a backlog of legislation to consider, other matters had to wait.

A more cynical explanation might be that the longer it takes to review the TOLA Act, the longer it'll be before recommended amendments can be made.

Those amendments might well include having to implement the independent oversight proposed by the Independent National Security Legislation Monitor.

Right now the law enforcement and intelligence agencies themselves can issue the TOLA Act's Technical Assistance Notices and Technical Assistance Requests. One imagines they wouldn't want to lose that power.

Meanwhile, the review of the International Production Orders legislation, a vital step on the way to Australian law being made compatible with the US CLOUD Act, doesn't seem to have a deadline of any kind.

In this context, we should also remember the much-delayed and disappointing 2020 Cyber Security Strategy. That seems to have been a minimal-effort job as well.

For years now, on both sides of Australian politics, national security laws have been passed in haste but reviewed at leisure. The question is, is it planned this way? Or is it simply incompetence?

Read the original here:
Dutton pushes against encryption yet again but oversight at home is slow - ZDNet

Read More..

Western governments double down efforts to curtail end-to-end encryption – The Daily Swig

Security community resists anti-encryption push as counter-productive

ANALYSIS Western governments have doubled down on their efforts to rein in end-to-end encryption, arguing that the technology is impeding investigations into serious crimes including terrorism and child abuse.

In a joint statement (PDF) published over the weekend, the Five Eyes (FVEY) intel alliance countries of Australia, Canada, New Zealand, the UK, and the US were joined by India and Japan in calling for tech firms to enable law enforcement access to content upon production of a warrant.

The governments also want tech firms such as Apple and Facebook to consult with them on design decisions that might help or hinder this outcome.

The statement's signatories call for tech firms to embed the safety of the public in system designs, thereby enabling companies to act against illegal content and activity effectively with no reduction to safety, while facilitating the investigation and prosecution of offences and safeguarding the vulnerable.

How might this work? GCHQ recently came up with a proposal for adding an extra party into an end-to-end encrypted chat via a "ghost" feature, a pointer to the sort of approaches intel agencies have in mind.

Security experts have pushed back against the proposals, arguing that they inevitably undermine the privacy and integrity of end-to-end encryption, the current gold standard for secure comms.
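To see why experts object, here is a rough sketch, under simplified assumptions (toy XOR key-wrap, illustrative names, not GCHQ's actual design): in fan-out encryption the per-message key is wrapped once per recipient, so a "ghost" participant is simply one more wrap that the client must add without showing it, after which the hidden key-holder reads everything.

```python
# Hypothetical sketch of the "ghost recipient" idea. Toy primitives only:
# the XOR "key wrap" below stands in for real public-key wrapping.
import hashlib
import secrets

def wrap(message_key, recipient_key):
    # Toy key wrap (illustration only): XOR with a hash of the recipient key.
    pad = hashlib.sha256(recipient_key).digest()
    return bytes(a ^ b for a, b in zip(message_key, pad))

unwrap = wrap  # XOR wrapping is its own inverse

alice, bob = secrets.token_bytes(32), secrets.token_bytes(32)
ghost = secrets.token_bytes(32)  # a key held by the agency, unseen by users

message_key = secrets.token_bytes(32)

# An honest client wraps the message key for the visible recipients only.
envelope = {"alice": wrap(message_key, alice), "bob": wrap(message_key, bob)}

# The ghost proposal requires the client to add this entry without displaying it:
envelope["ghost"] = wrap(message_key, ghost)

# Whoever holds the ghost key can now recover every message key, so the chat
# is no longer end-to-end encrypted in any meaningful sense.
assert unwrap(envelope["ghost"], ghost) == message_key
```

The sketch is deliberately minimal; its point is structural: once a hidden recipient entry exists, "end-to-end" is a property of the user interface, not the cryptography.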

In end-to-end encryption systems the cryptographic keys needed to encrypt and decrypt communications are held on the devices of users, such as smartphones, rather than by service providers or other technology providers. Users therefore don't have to trust their ISPs or service providers not to snoop.

Popular instant messaging apps WhatsApp, iMessage, and Signal have placed E2E encryption in the hands of the average smartphone user.

So if governments come knocking with requests for the keys normally necessary to decrypt encrypted communications, then there's nothing to hand over.
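That key-ownership model can be sketched in a few lines. The following is a minimal, hypothetical illustration with toy parameters (a 127-bit prime and a hash-based keystream, nothing like a real protocol such as Signal's): the endpoints derive a shared key from exchanged public values, so the relay in the middle carries only ciphertext and has no key to hand over.

```python
import hashlib
import secrets

# Toy 127-bit Mersenne prime for illustration only; real deployments use
# standardized 2048-bit groups (e.g. RFC 3526) and vetted ciphers.
P = 2**127 - 1
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1   # private key never leaves the device
    return priv, pow(G, priv, P)          # only the public half is transmitted

def shared_key(my_priv, their_pub):
    # Both endpoints derive the same secret independently (Diffie-Hellman).
    s = pow(their_pub, my_priv, P)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def xor_stream(key, data):
    # Toy keystream cipher (never use in production): SHA-256 in counter mode.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Alice and Bob each hold a private key on their own device.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)
assert k_alice == k_bob               # same key, never sent over the wire

ciphertext = xor_stream(k_alice, b"meet at noon")   # all the relay ever sees
plaintext = xor_stream(k_bob, ciphertext)           # recovered only at the endpoint
assert plaintext == b"meet at noon"
```

Real messengers use vetted primitives (X25519, AES-GCM, the double ratchet); the only point here is that the decryption key exists solely at the endpoints.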

Western governments say they support the development of encryption in general, as a means to secure e-commerce transactions and protect the communications of law-abiding businesses and individuals; it's just E2E encryption they have an issue with. Governments have long argued that E2E encryption is hampering the investigation of serious crimes, at least on a larger scale.

Malware can be used by law enforcement against individuals targeted in surveillance operations, a tactic which if successful gives access to content without needing to break encryption.

And police in countries such as the UK, for example, already have the ability to compel disclosure of encryption secrets from suspects.

As the anonymous privacy activist behind the Spy Blog Twitter account noted: "UK already has law for disclosure of plaintext material, regardless of encryption tech, but they want to do it in secret, in bulk."

The tweet referenced the Regulation of Investigatory Powers Act 2000 Part III, which deals with the investigation by law enforcement of electronic data protected by encryption.

Security experts were quick to criticize the latest government moves as a push to mandate encryption backdoors, supposedly accessible only to law enforcement. Several compared it to failed government encryption policies of the 1990s.

These included efforts to control the US export of encryption technologies and attempts to mandate key escrow.

Katie Moussouris, chief exec of Luta Security and an expert in bug bounties, tweeted: "The 1st time they did this (look up crypto wars), it weakened e-commerce and all other web transactions for over a decade, enabling crime. I wish we didn't have to repeat these facts."

Encryption of any type can be viewed as a branch of applied mathematics, but arguments that anyone can implement encryption in a few lines of code miss the point: what governments are seeking is to make encryption tools inaccessible to the broader public, according to noted cryptographer Matthew Green.

One thing that's different this time around compared to the first crypto wars is that governments have more levers to apply pressure on tech firms, including app store bans. Last month, for instance, the Trump administration threatened to ban TikTok in the US over supposed national security concerns unless owner ByteDance sold the technology to a US firm.

Green noted: "The current administration has demonstrated that app store bans can be used as a hammer to implement policy, and you can bet these folks are paying attention."

"I also think that sideloading capability is likely to be eliminated (or strongly discouraged) in a regime where encryption bans are successful," he added.

Cryptographer Alec Muffett expressed fears that the government proposals might eventually result in "non-compliant social networks [getting] banned under criminal law".

"End-to-end encryption is a key tool towards securing the privacy of everyone on the planet, as the world becomes more connected. It must not be derailed; instead, the police should be better funded for traditional investigation," Muffett said on Twitter.

RELATED Are we building surveillance into systems, or are we building in security?

Read more:
Western governments double down efforts to curtail end-to-end encryption - The Daily Swig

Read More..

Fuse Analytics integration with StrongSalt offers Enterprise Information Archiving with GDPR protections – PR Web

You don't have to let your data out of your hands unless you actually want to and make an active decision to do so.

ATLANTA (PRWEB) October 12, 2020

Fuse Analytics, based in Atlanta, GA, has been migrating and storing enterprise data since 2014. As companies migrate to the cloud, and now from one cloud service to another, migrating and storing data is becoming increasingly complex due to data privacy regulations. In addition to providing ETL services (extraction, transformation, and loading of data to the new database), Fuse offers a data warehouse SaaS that holds all the legacy data you need to keep, without the cost and complexity of storing it in your new database. With proactive legacy data management, clients see simplification and cost reduction by sunsetting legacy systems and optimizing current ones.

Until recently, Fuse Analytics used standard AES-256 encryption on all its clients' data, the same standard banks use. The issue is that by unlocking some of the data, you unlock all the data. Setting user permissions solved most issues, but with expanding data privacy laws and the potential for rogue employees, they sought a superior solution. Enter StrongSalt.

StrongSalt is the leading provider of third-party encryption management, giving the customer complete privacy control over their data. This granular encryption management allows the customer to apply encryption keys to any segment of data; rather than one master key, the client has effectively unlimited keys and a management system to lock portions of data and manage user access. Customers have full visibility into who is accessing their data and can change or modify access as needed.
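The release does not detail StrongSalt's actual design, but segment-level key management in general can be sketched as follows (hypothetical field names, and a toy XOR cipher standing in for real authenticated encryption): each segment gets its own key, so access can be granted or revoked per segment rather than all-or-nothing.

```python
# Generic sketch of per-segment encryption keys (not StrongSalt's product).
import hashlib
import secrets

def toy_encrypt(key, data):
    # Toy XOR cipher for illustration only; real systems use AES-GCM etc.
    pad = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, pad))

toy_decrypt = toy_encrypt  # XOR is its own inverse

segments = {"name": b"Jane Doe", "ssn": b"123-45-6789"}
keys = {seg: secrets.token_bytes(32) for seg in segments}        # one key per segment
store = {seg: toy_encrypt(keys[seg], data) for seg, data in segments.items()}

# Grant an external vendor access to "name" only: hand over just that one key.
vendor_keys = {"name": keys["name"]}
assert toy_decrypt(vendor_keys["name"], store["name"]) == b"Jane Doe"
assert "ssn" not in vendor_keys  # the SSN stays locked without its key
```

The design choice being illustrated: revocation and sharing become key-distribution decisions per segment, instead of one master key unlocking the entire store.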

"With most current cloud-based applications, encryption keys are held by vendors, so customers have no control over who can decrypt their data. These keys are often shared across customers. Our partnership with StrongSalt enables Fuse's customers to manage their own encryption, giving them advanced control over who they allow to decrypt and see the data," says Charles Eubanks, COO of Fuse Analytics.

In addition to increased visibility and control, keeping all PII (Personally Identifiable Information) encrypted at rest, in transit, and in use solves major data privacy concerns, which quickly become complex when sharing data with external vendors, especially across different countries.

Tony Scott, the former Federal CIO of the United States under the Obama administration, is an advisor to StrongSalt. Scott says: "You don't have to let your data out of your hands unless you actually want to and make an active decision to do so. That's really the intent of what's behind current regulations, which is a nightmare for a lot of companies. StrongSalt offers a simple solution with a single portal for encryption management across all your data."

In reference to using StrongSalt's approach, Lydia de la Torre from Squire Patton Boggs says, "...this approach is the only get-out-of-jail-free card." De la Torre provides strategic privacy compliance advice related to US and EU privacy, including data protection and cybersecurity law, GDPR, CCPA, other states' privacy and cyber laws, and US financial privacy laws.

StrongSalt, the leading privacy API, builds data protection into any application or workflow, allowing both security and usability to co-exist in a privacy-focused world. StrongSalt offers Decentralized Keyless Management, Searchable Encryption, Shareable Encryption and Immutable Auditing.


View post:
Fuse Analytics integration with StrongSalt offers Enterprise Information Archiving with GDPR protections - PR Web

Read More..

Is Signal Safe? What to Know About the New Encrypted Messaging App – Parentology

Signal is a free private messaging platform that promises security and privacy to users through end-to-end encryption. The emphasis on every conversation's security has Signal quickly rising in popularity, but it may also have many parents asking, "How safe is Signal?"

With Signal, users can message people one-on-one, create group chats, and make free voice and video calls. With the app's end-to-end encryption, only those involved with the conversation are able to view and access messages.

Users create an encrypted Signal Profile a name and picture that they set up within the app. First names are required, but people can use a nickname, single character, or an emoji as their identifier.

Message Requests give users the option to block, delete, or accept messages from somebody trying to get in touch with them. Users can see the name and photo of the person trying to message them in individual conversations. For group conversations, users can identify who is in the chat prior to joining, giving them better control over who they are talking to.

The protests surrounding George Floyd's murder earlier this year started Signal's sudden rise. Because it's easier to keep group communications private on Signal (unlike Facebook, Instagram, or TikTok, which can be monitored by law enforcement), many sought out Signal as a way to safely and securely organize protests.

Signal addressed the heightened use during this time in a blog post. They wrote, "Many of the people and groups who are organizing for that change are using Signal to communicate, and we're working hard to keep up with the increased traffic. We've also been working to figure out additional ways we can support everyone in the street right now."

The app also announced a new feature that made it easy to blur faces in photos, and an initiative to distribute face coverings to those protesting on the streets.

However, that same security and privacy many activists seek out in Signal may become a cause of concern for parents of teens using the app.

As mentioned, Signal is equipped with features to keep conversations as private as possible. Each one-on-one chat has a unique safety number that allows users to verify the security of their messages and calls with specific contacts.
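As a hedged illustration of how such a safety number can work (not Signal's exact algorithm, which derives numeric fingerprints from both users' identity keys): hashing both public keys in a fixed, order-independent way lets both phones display the same digits for out-of-band comparison.

```python
# Hypothetical sketch of a chat "safety number"; names and format are
# illustrative, not Signal's actual derivation.
import hashlib

def safety_number(pub_a: bytes, pub_b: bytes) -> str:
    combined = b"".join(sorted([pub_a, pub_b]))  # same result for either caller
    digest = hashlib.sha512(combined).digest()
    # Render 30 digest bytes as six groups of five digits for easy reading.
    digits = "".join(str(b % 10) for b in digest[:30])
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice_key = b"\x01" * 32  # stand-in public identity keys
bob_key = b"\x02" * 32

# Both users compute the identical number, regardless of who runs it.
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)
```

If either party's key changes (for example, after a device swap or an attempted impersonation), the number changes, which is what makes the comparison useful.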

For parents, it's not hard to guess why this may be an issue. If a teen is using Signal to hide the content of their conversations, they will likely be successful.

The app also has a disappearing message feature, similar to Snapchat's chat feature. Once enabled, a user's messages will come with a timer, and once the timer goes off, the message is deleted from the conversation. As with Snapchat, that doesn't stop a person from taking a screenshot (so sending adult messages or images can still come back to haunt someone), but it's still good for hiding communications.
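The timer mechanism can be sketched in a few lines (a hypothetical illustration, not Signal's actual implementation): each message carries an expiry time, and the client drops anything past it.

```python
# Toy sketch of disappearing messages: purely illustrative data shapes.
import time

def purge_expired(messages, now=None):
    """Return only the messages whose timer has not yet run out."""
    now = time.time() if now is None else now
    return [m for m in messages if m["expires_at"] > now]

conversation = [
    {"text": "old secret", "expires_at": time.time() - 1},    # timer already fired
    {"text": "fresh note", "expires_at": time.time() + 3600}, # one hour left
]

visible = purge_expired(conversation)
assert [m["text"] for m in visible] == ["fresh note"]
```

Note the limitation the article describes: deletion happens on cooperating clients, so a screenshot (or a modified client) defeats it.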

Signal requires that users must be at least 13 years of age, but there is no real age verification on the app. As long as a child has access to a phone number, they can register a profile on Signal.

Signal is currently available in the iOS App Store, Google Play, and on Chrome.


Sources: Signal Support; Signal Blog, "Encrypt Your Face"

Go here to read the rest:
Is Signal Safe? What to Know About the New Encrypted Messaging App - Parentology

Read More..

Five Eyes alliance warning: ‘Encryption creates severe risks to public safety’ – New Zealand Herald


11 Oct, 2020 06:19 PM · 3 minutes to read

Encrypted messaging services pose a number of security risks. Photo / Getty Images

The governments of seven countries including the UK and the US have publicly warned technology companies that offering unbreakable encryption to their users "creates severe risks to public safety".

Ministers from the "Five Eyes" intelligence sharing alliance of the UK, US, Canada, Australia and New Zealand published a statement on Sunday which called on technology companies to build ways to regularly hand encrypted private messages to police and governments.

"We urge industry to address our serious concerns where encryption is applied in a way that wholly precludes any legal access to content," the countries wrote.


Creating unbreakable encryption, where technology companies cannot read messages sent by their users, "poses significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children," the governments added.

The countries stopped short of condemning all forms of encryption, however, and noted that strong encryption "plays a crucial role in protecting personal data, privacy, intellectual property, trade secrets and cyber security."

The statement was signed by home secretary Priti Patel as well as representatives of India and Japan, countries which are not formally part of the Five Eyes alliance.

Defence secretary Ben Wallace said earlier this month that the UK is seeking to expand the Five Eyes alliance in order to "send a message to China".

"We would absolutely continue to explore working with new partners in Asia and deepening Five Eyes," Mr Wallace told a ConservativeHome event at the Conservative Party Conference.

Sunday's statement follows a similar warning by the Five Eyes alliance published in 2018 which called on technology companies to create "customised solutions" that would allow police to access private messages, bypassing encryption.

The governments warned that they could take action such as "technological, enforcement, legislative or other measures" in order to prevent police being denied access to material by unbreakable end-to-end encryption.

Last year, an alliance of technology companies including Apple, Google, Microsoft and WhatsApp signed an open letter condemning a proposal by the UK's GCHQ agency which suggested that law enforcement could be invisibly added as a recipient to private messages to avoid forcing companies to break their own encryption.

A Facebook spokesman said: "We've long argued that end-to-end encryption is necessary to protect people's most private information. In all of these countries, people prefer end-to-end encrypted messaging on various apps because it keeps their messages safe from hackers, criminals, and foreign interference. Facebook has led the industry in developing new ways to prevent, detect, and respond to abuse while maintaining high security and we will continue to do so."

- Telegraph

See the original post:
Five Eyes alliance warning: 'Encryption creates severe risks to public safety' - New Zealand Herald

Read More..

Privacy or child safety? 7 governments, including US & UK, argue Facebook's new encryption plan would benefit PEDOPHILES – Editorials 360

The UK and others have criticized Facebook over its plan to introduce end-to-end message encryption, arguing it would let pedophiles freely share child abuse material, as hundreds of thousands of cases are attributed to the tech giant's platforms.

The US social media giant has found itself at the center of yet another controversy, as its initiative, supposedly aimed at protecting the privacy of billions of people using Facebook, Instagram and WhatsApp, has met with vehement opposition from the UK and the US, as well as other nations. The governments argue the move would make the social networks a safe haven for pedophiles and terrorists.


On Sunday, the UK Home Secretary Priti Patel published a damning piece in the Sun, which said Facebook's plan would only help terrorists and paedophiles and accused the tech giant of failing to come up with any credible way to fight child sex abuse. The change would make it near-impossible to recover online criminal conversations, she wrote, citing some officials. Patel called Facebook's arguments overblown and said it simply wants to turn a blind eye to horrific abuse.

London didn't just limit itself to opinion pieces in the British media, though. The Five Eyes group (the UK, the US, Australia, New Zealand and Canada), joined this time by India and Japan, issued a joint statement essentially demanding Facebook provide a backdoor for the security services to use to gain access to encrypted messages.

Tech companies should include mechanisms in the design of their encrypted services whereby governments, acting with appropriate legal authority, can gain access to data in a readable and usable format, the statement reads, encouraging the tech giant to find mutually agreeable solutions to supposedly protect both privacy and public safety.


Facebook's intentions to fully encrypt communications in its messaging app and Instagram Direct, in addition to the already-encrypted WhatsApp, have been known since at least 2019. The potential change would mean that no one apart from the message sender and recipient could see or modify its content.

More fuel has been added to the fire as a US Congress-founded children's charity released data suggesting Facebook accounts for a whopping 94 percent of all cases of child sex abuse material being shared on the internet, as reported by US tech companies. Out of almost 17 million reports involving distribution of over 69 million child sex exploitation photos and videos, more than 15.8 million were filed by Facebook, the National Center for Missing & Exploited Children (NCMEC) said in its latest report.

Facebook has so far not reacted to this latest flurry of requests and demands made by the governments. Yet, earlier, the tech giant had already promised to look into improving child safety on its networks ahead of the planned change.

Solutions presented by Facebook back in autumn 2019 focused on stopping pedophiles from contacting potential victims in the first place. They also ultimately involved a whole lot of new monitoring procedures, ranging from flagging someone seeking to contact minors they do not know to analyzing age gaps between people communicating privately on its platforms.



Visit link:
Privacy or child safety? 7 governments, including US & UK, argue Facebook's new encryption plan would benefit PEDOPHILES - Editorials 360

Read More..

Optical Encryption Market Analysis And Demand With Forecast Overview To 2025 – Express Journal

The research report on Optical Encryption market consists of significant information regarding the growth drivers, opportunities, and the challenges & restraints that define the business scenario in the subsequent years.

According to the report, the Optical Encryption market is predicted to record a CAGR of XX% and generate lucrative revenues during the forecast period (2020-2025).

The coronavirus outbreak has resulted in the enforcement of temporary lockdowns to flatten the curve, which in turn has caused business and factory shutdowns, supply chain disruptions, and economic slowdown across various nations.

Request Sample Copy of this Report @ https://www.express-journal.com/request-sample/219124

Most of the businesses operating in various sectors have revised their respective budget plans to re-establish profit trajectory in the ensuing years. Thus, the research report offers crucial analysis regarding the effect of COVID-19 pandemic on the overall industry remuneration and deciphers strategies capable of drawing attractive gains.

Additionally, the study provides a comprehensive assessment of the market segmentations and evaluates their respective performance.

Major pointers of the Optical Encryption market report:

Optical Encryption Market Segmentations:

Regional spectrum: North America, Europe, Asia-Pacific, South America, Middle East & Africa, South East Asia

Product types: OTN (Layer 1), MACsec (Layer 2), and IPsec (Layer 3)

Applications scope: Banking, financial services, and insurance (BFSI), Government, Healthcare, Data center and cloud, Energy and utilities, and Others

Competitive scenario: The major players covered in Optical Encryption are: Ciena, Infinera, ECI Telecom, Adva, Microchip Technology, Nokia, Acacia Communications, Huawei, Cisco, Arista Networks, Centurylink, Broadcom, Thales E-Security, Juniper Networks and Packetlight Networks

Market segmentation

The Optical Encryption market is split by Type and by Application. For the period 2020-2025, the growth among segments provides accurate calculations and forecasts for sales by Type and by Application in terms of volume and value. This analysis can help you expand your business by targeting qualified niche markets.

Research Objective:

Why to Select This Report:

Key questions answered in the report:

MAJOR TOC OF THE REPORT:

Chapter 1 Industry Overview

Chapter 2 Production Market Analysis

Chapter 3 Sales Market Analysis

Chapter 4 Consumption Market Analysis

Chapter 5 Production, Sales and Consumption Market Comparison Analysis

Chapter 6 Major Manufacturers Production and Sales Market Comparison Analysis

Chapter 7 Major Product Analysis

Chapter 8 Major Application Analysis

Chapter 9 Industry Chain Analysis

Chapter 10 Global and Regional Market Forecast

Chapter 11 Major Manufacturers Analysis

Chapter 12 New Project Investment Feasibility Analysis

Chapter 13 Conclusions

Chapter 14 Appendix

Request Customization on This Report @ https://www.express-journal.com/request-for-customization/219124

Continued here:
Optical Encryption Market Analysis And Demand With Forecast Overview To 2025 - Express Journal

Four Practical Applications Of Artificial Intelligence And 5G – Forbes

It is no secret that artificial intelligence (AI) is often a technical marketing whitewash. Many companies claim that their algorithms and data scientists enable a differentiated approach in the networking infrastructure space. However, what are the practical applications of AI for connectivity and, in particular, 5G? From my perspective, it encapsulates four key areas. Here I will provide my insights into each and highlight what I believe is the practical functionality for operators, subscribers and equipment providers.

Smart automation

Automation is all about reducing human error and improving network performance and uptime through activities such as low- to no-touch device configuration, provisioning, orchestration, monitoring, assurance and reactive issue resolution. AI promises to deliver the "smarts" in analyzing the tasks above, steering networking toward a more closed-loop process. Pairing all of this with 5G should help mobile service providers offer simpler activations, higher performance and the rapid deployment of new services. The result should be higher average revenue per user (ARPU) for operators, along with a more reliable connection and better user experience for subscribers.

Predictive remediation

Over time, I believe AI will evolve to enable network operators to move from reactive to proactive issue resolution. They will be able to evaluate large volumes of data for anomalies and make course corrections before issues arise. 5G should enable networks to better handle the complexity of these predictive functions and support significantly more connected devices. We're beginning to see AI-powered predictive remediation applied to the enterprise networking sector with positive results, via some tier one carriers and 5G infrastructure providers such as Ericsson. In my opinion, one of the most significant impacts of AI in mobile networks will be the reduction of subscriber churn. That is a huge consideration: carriers are spending billions of dollars building fixed and mobile 5G networks, and they must be able to add and retain customers.
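At its simplest, the anomaly detection underpinning predictive remediation compares each new metric reading against recent history. The sketch below is a minimal illustration of that idea only, not any carrier's actual system: it flags latency samples that deviate from a trailing window by more than a chosen number of standard deviations.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Link-latency readings in ms, with a sudden spike at index 8
latencies = [20, 21, 19, 20, 22, 21, 20, 21, 95, 21]
print(detect_anomalies(latencies))  # [8]
```

A production system would run this kind of check continuously across millions of metrics and trigger remediation workflows automatically, but the underlying principle is the same: learn what "normal" looks like, then act before a deviation becomes an outage.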

Digital transformation acceleration

One of the pandemic's silver linings is the acceleration, out of necessity, of businesses' digital transformation. The distributed nature of work from home has put tremendous pressure on corporate and mobile networks from a scalability, reliability and security perspective. Many connectivity infrastructure providers are embracing AIOps for its potential to supercharge DevOps and SecOps. AI will also help operators better manage the lifecycle of 5G deployments from a planning, deployment, ongoing operations and maintenance perspective. For example, China Unicom leveraged AI to transform how it internally manages operations and how it interfaces with partners and customers. In 2019, the operator reported a 30% reduction in time to product delivery and a 60% increase in productivity for leased line activations.

Enhanced user experiences

The combination of AI and 5G will unlock transformative user experiences across consumer and enterprise market segments. I expanded on this topic in my Mobile World Congress 2019 analysis, which you can find here if interested. At a high level, AI has the potential to reduce the number of subscriber service choices, presenting the most relevant ones based on past behavior. I believe the result will be higher subscriber loyalty and operator monetization.

Wrapping Up

Though AI is hyped all around, there is particular synergy with 5G. Mobile networks are no longer just a "dumb pipe" for data access. AI can improve new device provisioning, deliver high application and connectivity performance, accelerate digital transformation and provide exceptional user experiences. For service providers, I also believe AI and 5G will result in operational expense savings and drive incremental investment in new service delivery. In my mind, that is a win-win for subscribers, operators, and infrastructure providers alike.

Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the industry, including Ericsson. I do not hold any equity positions with any companies cited in this column.

See the original post:
Four Practical Applications Of Artificial Intelligence And 5G - Forbes

What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? – Forbes

There's been a great deal of hype and excitement in the artificial intelligence (AI) world around a newly developed technology known as GPT-3. Put simply, it's an AI that is better at creating content that has a language structure (human or machine language) than anything that has come before it.

GPT-3 was created by OpenAI, a research business co-founded by Elon Musk, and has been described as the most important and useful advance in AI for years.

But there's some confusion over exactly what it does (and indeed doesn't do), so here I will try to break it down into simple terms for any non-techy readers interested in understanding the fundamental principles behind it. I'll also cover some of the problems it raises, as well as why some people think its significance has been somewhat overinflated by hype.

What is GPT-3?

Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3; it's the third version of the tool to be released.

In short, this means that it generates text using algorithms that are pre-trained: they've already been fed all of the data they need to carry out their task. Specifically, they've been fed around 570 GB of text information gathered by crawling the internet (a publicly available dataset known as CommonCrawl), along with other texts selected by OpenAI, including the text of Wikipedia.

If you ask it a question, you would expect the most useful response to be an answer. If you ask it to carry out a task such as creating a summary or writing a poem, you will get a summary or a poem.

More technically, it has also been described as the largest artificial neural network ever created; I will cover that further down.

What can GPT-3 do?

GPT-3 can create anything that has a language structure which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.

In fact, in one demo available online, it is shown creating an app that looks and functions similarly to the Instagram application, using a plugin for the software tool Figma, which is widely used for app design.

This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long-term, it could have huge implications for the way software and apps are developed in the future.

As the code itself isn't available to the public yet (more on that later), access is only available to selected developers through an API maintained by OpenAI. Since the API was made available in June this year, examples have emerged of poetry, prose, news reports, and creative fiction.

This article is particularly interesting: in it, you can see GPT-3 making a quite persuasive attempt at convincing us humans that it doesn't mean any harm, although its robotic honesty means it is forced to admit that "I know that I will not be able to avoid destroying humankind" if evil people make it do so!

How does GPT-3 work?

In terms of where it fits within the general categories of AI applications, GPT-3 is a language prediction model. This means that it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user.
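The idea of predicting the most likely continuation of a piece of language can be illustrated, at toy scale, with a simple bigram model. This is a drastically simplified sketch for intuition only; GPT-3 itself uses a transformer network with learned weights, not look-up counts:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows each word in the training text."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed follower of `word`."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

corpus = "the house has a red door and the barn has a red door too"
model = train_bigrams(corpus)
print(predict_next(model, "red"))  # door
```

GPT-3 performs the same kind of continuation prediction, but over long sequences rather than single words, with 175 billion trained parameters standing in for these raw counts.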

It can do this thanks to the training analysis it has carried out on the vast body of text used to pre-train it. Unlike other algorithms, which in their raw state have not been trained, GPT-3 comes ready-trained: OpenAI has already expended the huge amount of compute resources necessary for GPT-3 to understand how languages work and are structured. The compute time necessary to achieve this is said to have cost OpenAI $4.6 million.

To learn how to build language constructs, such as sentences, it employs semantic analytics: studying not just the words and their meanings, but also how the usage of words differs depending on the other words used alongside them in the text.

It's also a form of machine learning termed unsupervised learning, because the training data does not include any information on what is a "right" or "wrong" response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.

This is done by studying the usage of words and sentences, then taking them apart and attempting to rebuild them itself.

For example, during training, the algorithms may encounter the phrase "the house has a red door." It is then given the phrase again, but with a word missing, such as "the house has a red X."

It then scans all of the text in its training data (hundreds of billions of words, arranged into meaningful language) and determines what word it should use to recreate the original phrase.

To start with, it will probably get it wrong, potentially millions of times. But eventually, it will come up with the right word. By checking against its original input data, it will know when it has the correct output, and weight is assigned to the algorithmic process that provided the correct answer. This means that it gradually learns which methods are most likely to come up with the correct response in the future.
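A crude way to picture this fill-in-the-blank step is to count which words have followed a given context in the training text and pick the most common one. The snippet below is a toy illustration of that intuition, built around the example phrase above; it uses raw counts where GPT-3 uses billions of weights adjusted during training.

```python
from collections import Counter

# Tiny training corpus built around the article's example phrase.
corpus = (
    "the house has a red door . "
    "the shed has a red door . "
    "the car has a red roof ."
).split()

# Count each word by the two-word context that precedes it.
context_counts = Counter()
for i in range(2, len(corpus)):
    context = (corpus[i - 2], corpus[i - 1])
    context_counts[(context, corpus[i])] += 1

def fill_blank(prev2, prev1):
    """Pick the word most often seen after the given two-word context."""
    candidates = {w: n for (ctx, w), n in context_counts.items()
                  if ctx == (prev2, prev1)}
    return max(candidates, key=candidates.get) if candidates else None

print(fill_blank("a", "red"))  # door ("door" seen twice, "roof" once)
```

In the real model, the counts are replaced by weights that are nudged up whenever a process produces the correct answer, which is exactly the gradual weighting described above.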

The scale of this dynamic "weighting" process is what makes GPT-3 the largest artificial neural network ever created. It has been pointed out that, in some ways, what it does is nothing especially new, as transformer models of language prediction have been around for many years. However, the number of weights the algorithm dynamically holds in its memory and uses to process each query is 175 billion: ten times more than its closest rival, produced by Nvidia.

What are some of the problems with GPT-3?

GPT-3's ability to produce language has been hailed as the best that has yet been seen in AI; however, there are some important considerations.

The CEO of OpenAI himself, Sam Altman, has said, "The GPT-3 Hype is too much. AI is going to change the world, but GPT-3 is just an early glimpse."

Firstly, it is a hugely expensive tool to use right now, due to the huge amount of compute power needed to carry out its function. This means the cost of using it would be beyond the budget of smaller organizations.

Secondly, it is a closed or black-box system. OpenAI has not revealed the full details of how its algorithms work, so anyone relying on it to answer questions or create products useful to them would not, as things stand, be entirely sure how they had been created.

Thirdly, the output of the system is still not perfect. While it can handle tasks such as creating short texts or basic applications, its output becomes less useful (in fact, described as "gibberish") when it is asked to produce something longer or more complex.

These are clearly issues that we can expect to be addressed over time as compute power continues to drop in price, standardization around openness of AI platforms is established, and algorithms are fine-tuned with increasing volumes of data.

All in all, it's a fair conclusion that GPT-3 produces results that are leaps and bounds ahead of what we have seen previously. Anyone who has seen the results of AI language generation knows the results can be variable, and GPT-3's output undeniably seems like a step forward. When we see it properly in the hands of the public and available to everyone, its performance should become even more impressive.

View original post here:
What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? - Forbes
