
Oracle expands in Middle East with new cloud region in Abu Dhabi – The National

Oracle has opened a new cloud region, a complex that houses at least two data centres, in Abu Dhabi, providing storage capacity to regional enterprises amid soaring demand.

This is the Austin, Texas-based company's second cloud region in the UAE and its third in the Middle East.

The UAE and Middle East are priority regions and the company has made significant investments to enhance its infrastructure, physical presence, human resources and other support capabilities in the region, Jae Sook Evans, Oracle's chief information officer, told The National.

This long-term commitment from Oracle has translated into massive investments to help organisations of all sizes achieve their digital transformation projects

Jae Sook Evans, Oracle's chief information officer

"This long-term commitment from Oracle has translated into massive investments to help organisations of all sizes achieve their digital transformation projects," said Ms Evans, without disclosing the value of the investment.

The cloud industry is booming globally. The GCC's public cloud market is expected to more than double in value to reach $2.4 billion by 2024, up from $956 million last year, the International Data Corporation said.

For regional businesses, moving to a cloud system hosted by a specialised company such as Oracle, Amazon Web Services or SAP is more economical than creating their own infrastructure of servers, hardware and security networks, industry experts said. It also brings down the overall cost of ownership.

Businesses have realised the numerous benefits, including a higher return on investment, the ability to constantly innovate, better security and a scalable business model that can respond quickly to a changing economic environment, a key priority post-Covid, said Ms Evans.


Oracle, whose local clients include DP World, Abu Dhabi Customs, Emaar Properties, Saudi Arabia Tourism Development Fund, Saudi Railway Company, Mashreq Bank and Saudi Arabia Mining Company, reported nearly $7.4bn in global revenue from its cloud services and licence support business in the quarter that ended on August 31.

The cloud services business accounted for more than 75 per cent of its total sales of $9.7bn.

In July last year, the company opened its first Middle East cloud region, in Jeddah, followed by another in Dubai in October 2020. Last month, Oracle said it planned to open a second cloud region in Saudi Arabia's upcoming futuristic city Neom.

With the Dubai and Abu Dhabi cloud regions, Oracle has the required infrastructure to work with public as well as private organisations to accelerate their digital transformation, Richard Smith, Oracle's executive vice president for technology in Europe, Middle East and Africa, said.

Dr Thani Al Zeyoudi, Minister of State for Foreign Trade, said the UAE is committed to developing an innovative, knowledge-based economy that encourages the development and deployment of the technologies of the future, and that attracting human, financial and technological capital to the nation is central to these ambitions.

"Oracle's continued investment in the UAE will only accelerate this process," he said.

The company has 34 cloud regions globally and aims to open 10 new centres across Europe, the Middle East, Asia and Latin America over the next year.

Oracle's cloud regions will boost the country's cyber resilience, mitigate cyber crime and increase international collaboration, said Dr Mohamed Hamad Al Kuwaiti, head of cyber security for the UAE government.

"Oracle's two cloud regions in the UAE are important investments towards providing cyber resilience and secure digital infrastructure for organisations to enjoy the full benefits of cloud computing," said Mr Al Kuwaiti.

Several global players are establishing data centres in the region as the cloud market picks up.

Last year, IBM unveiled two data centres in the UAE, its first foray in the Middle East and Africa cloud storage market. In 2019, Amazon Web Services opened three data centres in Bahrain. Germany's SAP has centres in Dubai, Riyadh and Dammam, which house servers for local cloud computing clients.

Alibaba Cloud, the cloud computing arm of the Chinese e-commerce giant and a comparatively smaller player, opened its first regional data centre in Dubai in 2016.

Updated: November 9th 2021, 5:00 AM

See the original post:
Oracle expands in Middle East with new cloud region in Abu Dhabi - The National

Read More..

Apple letting third-party shops replace screens is the right thing to do, but why was it ever in question? – iMore

We received a spot of good news yesterday with the revelation that Apple plans to release a software update that will allow third-party repair shops to swap out iPhone 13 screens without breaking Face ID, but why was that ever in question?

Backing up for a moment, let's remind everyone of what was going on.

As iFixit and others noted, replacing an iPhone 13 screen is relatively easy except for one new chip attached to the glass. That chip isn't an issue for Apple and its authorized repair centers but it is kryptonite to third-party shops. See, that chip renders Face ID unusable if it isn't swapped from the old screen or re-programmed. Unfortunately, most repair shops can't do either, effectively breaking Face ID.

It's a whole thing and, frankly, not a great look for Apple.

iFixit:

This unprecedented lockdown is unique to Apple and totally new in the iPhone 13. It is likely the strongest case yet for right to repair laws. And it's all because of a chip about the size of a Tic-Tac, tucked into the bottom of a screen.

(...)

The iPhone 13 is paired to its screen using this small microcontroller, in a condition repair techs often call "serialization." Apple has not provided a way for owners or independent shops to pair a new screen. Authorized technicians with access to proprietary software, Apple Services Toolkit 2, can make new screens work by logging the repair to Apple's cloud servers and syncing the serial numbers of the phone and screen. This gives Apple the ability to approve or deny each individual repair.

However, a new report by The Verge says that Apple will issue a software update that fixes all of this, although it isn't known when that will happen.

That's good news, and it's the right thing to do. But it doesn't explain what's going on here. Is the current situation the result of a bug, for example? That seems unlikely, which means that we're dealing with expected behavior here. But why?

Apple sometimes says it does similarly odd things in the name of security: it wants to ensure that people can't swap out components as a way to get around biometric security measures like Face ID and, in the past, Touch ID. But there are two issues with that in this instance:

The current iPhone 13 is undoubtedly the best iPhone we've seen to date, but its display still breaks like any other. It's a reasonably safe assumption that people will regularly replace screens on these things, and what percentage of those replacements will be done by Apple or its partners? As things stand, I can only imagine a ton of people being left without a functional Face ID. That wasn't part of the plan, right?

I don't have the answers to any of these questions, but I've reached out to Apple to ask. Fingers crossed that this was all just a bug after all.

Follow this link:
Apple letting third-party shops replace screens is the right thing to do, but why was it ever in question? - iMore

Read More..

The future of OT security in an IT-OT converged world – The Register

Paid Feature If you thought the industrial internet of things (IIoT) was the cutting edge of industrial control systems, think again. Companies have been busy allowing external access to sensors and controllers in factories and utilities for a while now, but forward-thinking firms are now exploring a new development: operating their industrial control systems (ICS) entirely from the cloud. That raises a critical question: who's going to protect it all?

Dave Masson, Director of Enterprise Security at Darktrace, calls this new trend 'ICSaaS'. "ICS for the cloud is starting to happen now. That represents a whole new world for industrial technology and security."

This trend has been possible for the last decade or so, he explains, but the uptake has been slow. Now, Masson is hearing from clients who are actioning it.

The move to cloud-controlled ICS took this long to begin in part because of the cultural differences in the ICS world. One mistake configuring the operational technology (OT) underpinning ICS can have profound effects, Masson says. Opening this infrastructure up to access from the internet was a bold enough step on its own, and took a big cultural shift. Putting the means of control in the cloud takes a further shift in mentality.

"Although there are positives, it will still impact reliability," says Masson. "There are ramifications for ICS performance, security, and therefore safety." Many of these environments can't tolerate any downtime at all.

Operational technology admins may be nervous about allowing cloud-based control of their infrastructures, but they're attracted by the potential benefits, Masson asserts. The pandemic has been a strong driver, allowing operators to remotely control industrial systems when they haven't been able to come on-site.

Organizations could enable remote access without cloud-based systems by punching holes in on-premises firewalls, but doing so made cloud-based access more plausible, opening up the conversation.

If operators are accessing ICS remotely anyway, then it makes it easier to consider cloud-based interfaces, Masson says. These make the management infrastructure cheaper and easier to operate. He points out the arguments now familiar to IT decision makers, including the opportunity to reduce operators' own hardware investments and potentially cut their data center real estate. Companies are now seriously considering taking advantage of these operational benefits for the first time.

In this scenario, the hardware components that make up ICS stay where they are. We're not talking about virtualizing programmable logic controllers here. It's the data governing their operation that moves to the cloud. That means the applications, databases, and other services that operators rely on to keep those components running smoothly. Instead of handling planning and scheduling using on-premises data, they'll do it using cloud platforms that then tunnel communications to those legacy systems in the field - which still expect to be spoken to via specialized protocols like Modbus.

Security is just as important in these new cloud-enabled environments as it was in the old legacy walled gardens, but the challenges facing defenders are different. The cloud is eroding the gap between IT and OT, explains Masson. OT is now part of what looks increasingly like a common IT network.

"Now, anybody can access this network from anywhere, so you've got to make sure you have good controls around who's got permission," he says. "This raises questions about data security, compliance, and regulation."

Security teams grappling with this face challenges including more complexity in their infrastructures as they bring different devices and protocols into the fray, with traffic running through different gateways. The number of OT devices can be staggering, far outnumbering the number of servers or endpoints that an IT security team has dealt with before.

OT admins, used to maintaining an iron grip on their infrastructure, now risk a loss of visibility and control, warns Masson. He calls the people looking after this management data in an ICS setting data historians.

"That data is now over the horizon and you need to know what people are doing with it in the cloud," he warns, pointing to a litany of problems with misconfigured databases and storage resources. The prospect of exposing ICS management data to the general public due to a dashboard misstep would turn most data historians grey.

There are organizational worries to consider beyond the technological ones, Masson adds. Converging IT/OT infrastructures is only part of the story. You must also decide who is managing security for the expanded network. Is it the IT security team, or the OT team, or both? Do they speak the same language? Will the organization have to contend with political strife and territorial battles?

When all these challenges combine, it's easy for security problems to slip through the gaps. It takes a cohesive approach with multiple checks and balances to ensure protection that extends from the physical equipment in the field through to the infrastructure that controls it in the cloud. It takes a sharp focus on access controls and permissions at all points in the ecosystem.

This new, more complex environment demands a new approach to security, according to Masson.

Zero trust architecture is a common talking point today when discussing cloud-based security, and that will be important, he says. Its focus on identity-based access, backed by account controls like multi-factor authentication, is valuable. "But that won't tell you when you've misconfigured something providing you with access to your ICS from the cloud," he points out.

He warns that IT teams can't rely on the same protective measures they used in the past. "They'll have one product for this and another for that, all using hard-coded predefined rules and signatures that aren't really designed to adapt with sudden transformation. The rules-based firewalls that might have offered some protection in the past will no longer cut it in a converged IT/OT cloud-based environment."

Darktrace's AI technology flips this narrative, evaluating threats to complex systems not using a rigid set of rules, but instead leveraging unsupervised machine learning to constantly understand an organization's 'pattern of life'.

Instead of running every traffic pattern against a complex and often outdated series of signatures to detect malicious behaviour, Darktrace's tools look for activities that deviate from this pattern of life. If it detects communications between ICS systems that don't usually communicate, for example, or unusual access to ICS control systems in the cloud, its AI will investigate the activity in real time.
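As a rough illustration of that idea only, and emphatically not Darktrace's actual technology, the sketch below trains an off-the-shelf unsupervised model (scikit-learn's IsolationForest) on hypothetical "normal" connection features and flags a connection that deviates from the learned baseline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: bytes sent, packet count, hour of day, new-peer flag.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 40, 10, 0], scale=[50, 5, 2, 0.01], size=(1000, 4))

# Learn a baseline "pattern of life" from normal traffic only; no labelled attacks are needed.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A connection between systems that rarely talk, at an odd hour, moving far more data than usual.
suspicious = np.array([[50_000, 900, 3, 1]])
print(model.predict(suspicious))   # -1 marks the event as an outlier worth investigating
```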

If granted permission, Darktrace's Antigena product will also take its own steps to contain the threat. It uses an AI-powered Autonomous Response mechanism that takes measured steps to neutralize malicious behaviour, all while allowing normal business operations to continue to run smoothly.

This approach has the advantage of not relying on deep packet inspection for its results. That's a big plus in an environment where tunnelled communications between cloud-based management systems and ICS components are often so obscure that they're effectively encrypted.

"There are tons of these protocols, some invented by people who are now dead," Masson says. "So we stay protocol agnostic."

While the company is learning some of the protocols for clients that demand it, the AI technology doesn't need to understand what's happening in a packet. Instead, Darktrace looks at what the packet is doing within the broader infrastructure, using its self-learning AI to assess deviations from the norm.

A number of cloud-first critical infrastructure organizations use Darktrace to defend their cloud environments, one being Mainstream Renewable Power, a major player in wind and solar energy.

ICSaaS is only one part of a broader shift towards OT/IT convergence, says Masson. The advent of 5G, along with the development of edge computing, will accelerate the trend still further.

"Right now people focus on protecting the data that's in the cloud, but with 5G and edge computing that data won't always stay there; it will be on the edge where the computation is actually taking place. Masson argues that self-learning AI, built to maintain a picture of normality in volatile environments, will be well-placed to cope with the speed and complexity of edge-based scenarios.

ICS will be deeply ingrained in this new computing model, which will see local 5G-based networks supporting edge facilities and sensors with software-defined network functions including network slicing. With the world on the cusp of this change, new approaches to protecting it all from attack will be crucial.

Masson is certain that AI will be squarely in the middle of the picture, protecting the network from logic controllers in the field through to virtual servers in hyperscale cloud architectures - and everything in between.

This article is sponsored by Darktrace.

Excerpt from:
The future of OT security in an IT-OT converged world - The Register

Read More..

Still reeling from the Great Facebook Blackout of 2021? Turns out Zuck is not the worst offender – The Register

UK-based price comparison and broadband swapping service Uswitch has totted up the figures and come up with a surprising candidate for most outage incidents in 2021.

Outages are a tricky thing to quantify, and the metric used by Uswitch was a simple count of the most visited websites against a total number of incidents reported by DownDetector. The "visited website" metric ruled out the likes of Azure and AWS, although both services lurk behind the scenes. However, hotspots like Facebook, Instagram, and TikTok all qualified.

At 180 incidents, according to Uswitch, Reddit was head and shoulders ahead of the pack. A whopping 60 per cent of issues were related to the forum's app. Following, at 107 incidents, was Discord (server connection woes afflicted 73 per cent of blackouts), just snatching second place from Instagram. The food and lifestyle happy snapper accounted for 106 incidents (the app accounted for 55 per cent of problems).

Close to Instagram was its big brother Facebook's website, with 95 outages (of which its website accounted for 49 per cent).

However, a count of incidents does not really tell the whole story, and there was no indication from Uswitch which of Facebook's 95 borks was the big one that took out WhatsApp and Instagram in the blast radius and left the world bereft of angry uncle posts and shots of dinner plates for a happy few hours. The Register asked Uswitch for a breakdown by duration.

It told us: "The data we collected is the average count of outages. Unfortunately, the data for how long the outages lasted is not readily available, and trying to create stats from what we do have I believe will not be accurate enough to run with."

WhatsApp itself was in the better-behaved end of the table, with a mere 34 outages, just above TikTok's 30. The video streamers also fared better than social media, with Netflix and YouTube tied at 44 outages.

And as for that bastion of whingeing when things go wrong, Twitter, Uswitch put it quite some way behind Facebook and a gnat's whisker ahead of YouTube at 50 incidents.

While enterprises have service level agreements in place to punish providers for outages (although rarely more than a simple credit rather than the actual cost of lost business), consumers are not so fortunate. What does one get for handing over all that personal data? An outage-prone service, it appears.

Go here to read the rest:
Still reeling from the Great Facebook Blackout of 2021? Turns out Zuck is not the worst offender - The Register

Read More..

Use smarter key management to secure your future anywhere in the cloud PCR – PCR-online.biz

Marcella Arthur, Vice President Global Marketing, Unbound Security explores using smarter key management in the cloud

Cloud computing remains a dominant trend in global business, boosted by the mass shift to remote working during the pandemic. A report from Deloitte highlights how investment in cloud infrastructure increased through 2020 with the scale of mergers and acquisitions indicating significant expectations of further growth.

Yet as organisations migrate workloads to the cloud in search of greater agility and innovation and reduced costs, they are facing serious security challenges that conventional approaches fail to meet, particularly if they adopt hybrid approaches. By 2022, analysts IDC estimate more than 90 per cent of enterprises worldwide will be relying on a mix of on-premises/dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs. As companies become more distributed and more complex than ever through their entry into the hybrid cloud, they find themselves with massively extended security perimeters while constantly exchanging high volumes of data.

Combined with the imposition of stricter demands by regulators, these developments make control of encryption keys used to protect data more important than ever. For those with heavy investments in on-premise infrastructure, hardware security modules (HSMs), or apps partially in the cloud, the inability to secure and manage the cryptographic keys that protect their data across a multitude of scenarios has the potential to bring their organisations to an extremely costly standstill.

Whenever IT managers decide on a cloud shift that requires some existing hardware to remain intact, among the problems they face are the time-consuming task of maintaining multiple systems, implementing key management solutions, and the creation of multiple keys depending on the application supported and authentication path. Developers and solution architects take on the biggest migration risk, because the painstaking work that it took to develop an application once, may now have to be repeatedly refactored to ensure that keys work anywhere in the cloud, at any time.

For key management, organisations may feel they can rely on the solutions provided by the major cloud service providers (CSPs), who have made encryption simple to activate. Sadly, however, there is a basic security flaw in having the keys held by the same entity that holds the data. It is not just penetration by criminals we should worry about in this respect, it is the government warrants and subpoenas that may force CSPs to open up what they hold. Alongside this vulnerability is one of management. It becomes much harder to achieve consistency of data governance across an organisation's entire and varied infrastructure, including on-premises hardware, when keys are managed by the cloud provider. The way CSPs' solutions deliver a segmented picture of the key logs and usage reports makes it impossible for enterprises to manage their entire range of keys in one place with full visibility across all sites.

Time to market for new and existing applications suffers as they require keys to ensure the requisite security policies are met in each case. Security is potentially compromised when organisations are unable to manage keys across disparate sites because of dependencies on the applications they are looking to authenticate, each having been written to specific cloud requirements.

The way out of this tangle is to nail down security with a third-party solution that overrides the complexity of refactoring applications to ensure they work in each cloud environment. Enterprises need to write and manage their own keys on a separate, one-stop platform, using multiparty computation (MPC). MPC splits a secret key into two or more pieces and places them on different servers and devices. Because all the pieces are required to get any information about the key, but are never assembled, hackers have to breach all the servers and devices. Strong separation between these devices (different administrator credentials, environments, and so on), provides a very high level of key protection.
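As a toy illustration of the splitting idea only (simple XOR secret sharing, not Unbound's product or a full MPC protocol, which computes with the shares and never reassembles the key on any single machine):

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n XOR shares; any n-1 of them reveal nothing about the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(_xor, shares, key))   # final share makes the XOR of all shares equal the key
    return shares

def recombine(shares: list[bytes]) -> bytes:
    # Shown only to verify the scheme; a real MPC platform operates on the shares in place.
    return reduce(_xor, shares)

key = secrets.token_bytes(32)            # a 256-bit key
shares = split_key(key, 3)               # e.g. one share per server or device
assert recombine(shares) == key
```

An attacker who compromises any two of the three servers in this example still learns nothing about the key, which is the property the article describes.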

Adopting this approach gives enterprises using hybrid cloud or multi-cloud infrastructures the single-pane-of-glass visibility that is essential for security and surveillance, providing information about all keys and digital assets, how they are stored, who is using them and how they are programmed. The use of cloud crypto keys is no longer a leap of faith.

When organisations are moving into the cloud for greater innovation and efficiency, an MPC platform provides the most effective means of securing and managing encryption keys, being highly agile, adaptable, and easy to use without any compromise of safety.


See the original post:
Use smarter key management to secure your future anywhere in the cloud PCR - PCR-online.biz

Read More..

In the spirit of open government, France dumps 9,067 repos online to show off its FOSS credentials – The Register


Visit link:
In the spirit of open government, France dumps 9,067 repos online to show off its FOSS credentials - The Register

Read More..

Dutch newspaper accuses US spy agencies of orchestrating 2016 Booking.com breach – The Register

Jointly US-Dutch owned Booking.com was illegally accessed by an American attacker in 2016 and the company failed to tell anyone when it became aware of what happened, according to explosive revelations.

The alleged miscreant, named as "Andrew", is said to have stolen "details of thousands of hotel reservations in countries in the Middle East," according to a new book written by three Dutch journalists.

Their employer, Dutch title NRC Handelsblad, reported the allegations this week, claiming that Booking.com had relied on legal advice from London-based law firm Hogan Lovells saying it wasn't obliged to inform anyone of the attack.

The breach was said to have occurred after "Andrew" and associates stumbled upon a poorly secured server which gave them access to personal ID numbers (PINs), seemingly unique customer account identifier codes. From there the miscreants were able to steal copies of reservation details made by people living and staying in the Middle East. NRC Handelsblad linked this to espionage carried out by the US against foreign diplomats and other people of interest in the region.

Although the accommodation booking website reportedly asked the Dutch AIVD spy agency for help with the breach after its internal investigation identified "Andrew" as having connections to US spy agencies, it did not notify either its affected customers or data protection authorities in the Netherlands at the time, the newspaper alleged.

When we asked for comment about the allegations, a Booking.com spokesperson told us: "With the support of external subject matter experts and following the framework established by the Dutch Data Protection Act (the applicable regulation prior to GDPR), we confirmed that no sensitive or financial information was accessed.

"Leadership at the time worked to follow the principles of the DDPA, which guided companies to take further steps on notification only if there were actual adverse negative effects on the private lives of individuals, for which no evidence was detected."

The breach predated the EU's General Data Protection Regulation (GDPR), meaning data protection rules everyone's familiar with today, which (mostly) make it illegal not to disclose data leaks to state authorities, did not exist at the time.

Booking.com was fined €475,000 earlier this year by Dutch data protection authorities after 4,100 people's personal data was illegally accessed by criminals. In that case employees of hotels in the UAE were socially engineered out of their account login details for the platform.

The apparent online break-in once again raises the spectre of European countries being targeted by Anglosphere intelligence agencies. The infamous Belgacom hack, revealed by Edward Snowden in 2013 and reignited in 2018 when Belgium attributed it to the UK, was carried out by British spies trying to gain access to data on people of interest in Africa.

Almost exactly eight years ago, Snowden also revealed the existence of a British spy-on-diplomats programme codenamed Royal Concierge, which on the face of it looks remarkably similar to the Booking.com breach reported this week.

While some readers might shrug and mutter "spies spy," evidence of the theft of bulk data by third parties who may or may not be subject to whatever lax controls spy agencies choose to create for themselves will be cold comfort to anyone who made a Booking.com reservation in the Middle East at the time.

Read more from the original source:
Dutch newspaper accuses US spy agencies of orchestrating 2016 Booking.com breach - The Register

Read More..

Old Microsoft is back: If the latest Windows 11 really wants to use Edge, it will use Edge no matter what – The Register

Microsoft Windows 11 build 22494 appears to prevent links associated with the Microsoft Edge browser from being handled by third-party applications, a change one developer argues is anticompetitive.

Back in 2017, Daniel Aleksandersen created a free helper application called EdgeDeflector to counter behavioral changes Microsoft made in the way Windows handles mouse clicks on certain web links.

Typically, https:// links get handled by whatever default browser is set for the system in question. But there are ways to register a custom protocol handler, for operating systems and web browsers, that defines the scheme to access a given resource (URI).

Microsoft did just that when it created the microsoft-edge: URI scheme. By prefixing certain links as microsoft-edge:https://example.com instead of https://example.com, the company can tell Windows to use Edge to render example.com instead of the system's default browser.

Microsoft is not doing this for all web links it hasn't completely rejected browser choice. It applies the microsoft-edge:// protocol to Windows 10 services like News and Interests, Widgets in Windows 11, various help links in the Settings app, search links from the Start menu, Cortana links, and links sent from paired Android devices. Clicking on these links will normally open in Edge regardless of the default browser setting.

When the microsoft-edge:// protocol is used, EdgeDeflector intercepts the protocol mapping to force affected links to open in the user's default browser like regular https:// links. That allows users to override Microsoft and steer links to their chosen browsers.
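Conceptually the deflection step is small. The sketch below is a hypothetical Python rendering of the idea rather than EdgeDeflector's actual code (which is a Windows helper registered as the handler for the microsoft-edge: scheme): strip the vendor prefix, recover the wrapped https:// URL, and hand it to whatever the system default browser is.

```python
import sys
import webbrowser
from urllib.parse import parse_qs, unquote, urlparse

def deflect(uri: str) -> None:
    """Given a microsoft-edge: URI, open the wrapped web URL in the default browser."""
    if not uri.startswith("microsoft-edge:"):
        webbrowser.open(uri)                 # not a vendored link; pass it through unchanged
        return
    rest = uri[len("microsoft-edge:"):]
    if rest.startswith("?"):
        # Some links carry the target as a query parameter (assumed form: ?url=https%3A%2F%2F...).
        target = parse_qs(urlparse(rest).query).get("url", [""])[0]
        rest = unquote(target) or rest
    webbrowser.open(rest)                    # hand off to the system default browser

if __name__ == "__main__":
    deflect(sys.argv[1] if len(sys.argv) > 1 else "microsoft-edge:https://example.com")
```

The practical catch described below is not the rewriting logic but the registration step: under build 22494, Windows refuses to let anything other than Edge be registered for the microsoft-edge: scheme at all.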

This approach has proven to be a popular one: Brave and Firefox recently implemented their own microsoft-edge:// URI scheme interception code to counter Microsoft's efforts to force microsoft-edge:// links into its Edge browser.

But since Windows 11 build 22494, released last week, EdgeDeflector no longer works.

This is on top of Microsoft making it tedious to change the default browser on Windows 11 from Edge: in the system settings, you have to navigate to Apps, then Default apps, find your preferred installed browser, and then assign all the link and file types you need to that browser, clicking through the extra dialog boxes Windows throws at you. Your preferred browser may be able to offer a shortcut through this process when you install it or tell it to make it your default.

The Register has asked Brave and Mozilla whether their respective link interception implementations for the microsoft-edge:// URI scheme still work.

In an email to The Register, a Mozilla spokesperson confirmed the Windows change broke Firefox's Edge protocol workaround.

"People deserve choice," the spokesperson said. "They should have the ability to simply and easily set defaults and their choice of default browser should be respected. We have worked on code that launches Firefox when the microsoft-edge protocol is used for those users that have already chosen Firefox as their default browser.

"Following the recent change to Windows 11, this planned implementation will no longer be possible.

Brave CEO Brendan Eich told The Register his Windows 11 testers haven't yet provided an update, but allowed that Aleksandersen's post seems pretty dire. "[Microsoft] must figure [that the] antitrust Eye of Sauron is looking at [Google, Facebook, and Apple] only," he observed.

In an email to The Register, Aleksandersen said the change affects both Brave and Firefox.

"No program other than Microsoft Edge can handle the protocol," he said. "Ive tested Brave (stable release) and a version of Firefox with the patch to add the protocol. Theyre not allowed to support it either."

Microsoft isn't a good steward of the Windows operating system. They're prioritizing ads, bundleware, and service subscriptions over their users' productivity

"Microsoft hasnt blocked EdgeDeflector specifically. Windows is just bypassing the normal protocol handling system in Windows and always uses Edge for this specific protocol."

According to Aleksandersen, the latest Windows 11 build allows only the Edge browser to handle the microsoft-edge:// protocol.

"No third-party apps are allowed to handle the protocol," he wrote in a blog post on Thursday. "You cant change the default protocol association through registry changes, OEM partner customizations, modifications to the Microsoft Edge package, interference with OpenWith.exe, or any other hackish workarounds."

Aleksandersen says Windows will force the use of Edge even if you delete it, opening an empty UWP window and presenting an error message rather than falling back on the default browser.

The change to Windows means EdgeDeflector will not receive any further updates unless this behavior is reverted, said Aleksandersen.

"These arent the actions of an attentive company that cares about its product anymore," said Aleksandersen. "Microsoft isnt a good steward of the Windows operating system. Theyre prioritizing ads, bundleware, and service subscriptions over their users productivity."

Aleksandersen advises those opposed to the change to raise the issue with their local antitrust regulator or to switch to Linux.

Ironically, as Aleksandersen tells it, vendor-specific URI schemes took off in February 2014 after Google introduced a googlechrome:// scheme for its mobile apps as a way to counter Apple's anticompetitive insistence that Safari should handle certain links on iOS devices.

"Microsoft just turned the racket on its head and changed more and more links in its operating system and apps to use its vendor-specific URL scheme," he said in a post last month.

The Register asked the US Justice Department whether it's aware of this change and, if so, whether it's concerned, given Microsoft's prior conviction for abusing its market dominance. We've not heard back.

"Microsofts use of the microsoft-edge:// protocol instead of regular https:// links is in itself an antitrust issue," Aleksandersen told The Register. "This annoyed me so much that I created EdgeDeflector to fight back on its monopolistic and user-hostile behavior".

"I believe Microsoft clearly doesnt fear antitrust regulators.

"Theyre putting up more barriers and are being more aggressive now than they were in the past when they were hit with antitrust fines. (E.g. removing the default browser settings from Windows Setting, making it more difficult to programmatically change the default browser, prompting the user to 'choose Edge' after every system update, hiding/unpinning other browsers from your taskbar.) On top of this, theyre using these horrid microsoft-edge:// links in very prominent places in the OS to bypass the default browser setting entirely."

Microsoft did not respond to a request for comment.

Read more here:
Old Microsoft is back: If the latest Windows 11 really wants to use Edge, it will use Edge no matter what - The Register

Read More..

Quantum Turing machine – Wikipedia

Model of quantum computation

A quantum Turing machine (QTM) or universal quantum computer is an abstract machine used to model the effects of a quantum computer. It provides a simple model that captures all of the power of quantum computation; that is, any quantum algorithm can be expressed formally as a particular quantum Turing machine. However, the computationally equivalent quantum circuit is a more common model.[1][2]:2

Quantum Turing machines can be related to classical and probabilistic Turing machines in a framework based on transition matrices. That is, a matrix can be specified whose product with the matrix representing a classical or probabilistic machine provides the quantum probability matrix representing the quantum machine. This was shown by Lance Fortnow.[3]

A way of understanding the quantum Turing machine (QTM) is that it generalizes the classical Turing machine (TM) in the same way that the quantum finite automaton (QFA) generalizes the deterministic finite automaton (DFA). In essence, the internal states of a classical TM are replaced by pure or mixed states in a Hilbert space; the transition function is replaced by a collection of unitary matrices that map the Hilbert space to itself.[4]

That is, a classical Turing machine is described by a 7-tuple $M = \langle Q, \Gamma, b, \Sigma, \delta, q_0, F \rangle$.
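Restating the correspondence described above in symbols, one informal (and non-unique) way to present the quantum generalisation of each component is the following sketch:

```latex
% Informal sketch: each classical component acquires a quantum analogue.
\begin{aligned}
Q      &\longmapsto \mathcal{H}_Q      &&\text{internal states become pure or mixed states in a Hilbert space}\\
\Gamma &\longmapsto \mathcal{H}_\Gamma &&\text{tape symbols are likewise carried by a Hilbert space}\\
\delta &\longmapsto \{U_i\},\; U_i^\dagger U_i = I &&\text{the transition function becomes a collection of unitary maps}\\
q_0    &\longmapsto |q_0\rangle        &&\text{an initial pure (or mixed) state}\\
F      &\longmapsto \text{a subspace of } \mathcal{H}_Q &&\text{the accepting condition}
\end{aligned}
```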

For a three-tape quantum Turing machine (one tape holding the input, a second tape holding intermediate calculation results, and a third tape holding output):

The above is merely a sketch of a quantum Turing machine, rather than its formal definition, as it leaves vague several important details: for example, how often a measurement is performed; see for example, the difference between a measure-once and a measure-many QFA. This question of measurement affects the way in which writes to the output tape are defined.

In 1980 and 1982, physicist Paul Benioff published papers[5][6] that first described a quantum mechanical model of Turing machines. A 1985 article written by Oxford University physicist David Deutsch further developed the idea of quantum computers by suggesting quantum gates could function in a similar fashion to traditional digital computing binary logic gates.[4]

Iriyama, Ohya, and Volovich have developed a model of a linear quantum Turing machine (LQTM). This is a generalization of a classical QTM that has mixed states and that allows irreversible transition functions. These allow the representation of quantum measurements without classical outcomes.[7]

A quantum Turing machine with postselection was defined by Scott Aaronson, who showed that the class of polynomial time on such a machine (PostBQP) is equal to the classical complexity class PP.[8]

View original post here:
Quantum Turing machine - Wikipedia

Read More..

Is Quantum Computing the Future of AI? – Datanami


Quantum computing has grabbed the imagination of computer scientists as one possible future of the discipline after we've reached the limits of digital binary computers. Thanks to its capability to hold many different possible outcomes in the quantum state, quantum computing could potentially deliver a big computational upgrade for machine learning and AI problems. However, there are still a lot of unanswered questions around quantum computing, and it's unclear if the devices will help with the building wave of investment in enterprise AI.

We've done quite well with the line of binary computers that first appeared in the 1950s and have evolved into the basis of today's multi-trillion-dollar IT sector. With just two bits and three Boolean algebraic operators, we created tremendous data-crunching machines that have automated many manual tasks and had a large impact on the world around us. From basic accounting and supply chain routing to flight control computers and understanding the genome, it's tough to overstate the impact that computers have had on our modern lives.

But as we approach the limits of what classical binary computers can do, quantum computers have emerged with the (as yet unfulfilled) promise of a tremendous upgrade in computational power. Instead of being restricted to Boolean linear algebraic functions on 1s and 0s, quantum computing allows us to use linear algebra upon quantum bits, or qubits, that are composed of numbers, vectors, and matrices interacting in quantum states, including superposition, entanglement, and interference.

Quantum computing opens the door to potentially solving very large and complex computational problems that are basically impossible to solve on traditional computers. This includes things like using brute-force methods to guess the passcode used to encrypt a piece of data using a 256-bit algorithm. Data encrypted with AES-256 is considered secure precisely because it can't be cracked with a brute-force attack (it's possible, but it would take many thousands of years with current technology, which makes it practically impossible). But with quantum computers' ability to compute with multiple possible states, solving such problems will now be within practical reach.
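To put a rough number on that claim (back-of-the-envelope only: the guess rate below is an arbitrary assumption, and a quantum attack via Grover's algorithm would offer a quadratic rather than unlimited speedup over this classical search):

```python
keyspace = 2 ** 256                       # number of possible AES-256 keys
guesses_per_second = 10 ** 12             # assumed rate: a trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / guesses_per_second / seconds_per_year
print(f"{keyspace:.3e} keys -> about {years:.3e} years to exhaust at that rate")
# ~1.16e77 keys and ~3.7e57 years: effectively impossible by classical brute force.
```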

The Google Sycamore quantum processor (Image source: Google)

Another example is the traveling salesman problem. Given a number of geographic locations, figuring out the most efficient path among them is actually an extremely compute-intensive problem. UPS, which spends billions on fuel for its delivery trucks, has gone so far as to limit the number of left turns its drivers make in an attempt to cut delivery times and minimize fuel use, making it an interesting twist on the old traveling salesman problem.
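A brute-force sketch over a handful of hypothetical stops shows why the problem scales so badly: even with the depot fixed, there are (n-1)! orderings to check, a number that grows factorially with the number of stops.

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical delivery stops as (x, y) coordinates; stop 0 is the depot.
stops = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5), (4, 6)]

def route_length(order):
    """Total distance of visiting the stops in the given order."""
    return sum(dist(stops[a], stops[b]) for a, b in zip(order, order[1:]))

# Fix the depot as the first stop and enumerate every ordering of the rest.
best = min(permutations(range(1, len(stops))), key=lambda p: route_length((0,) + p))
print("best route:", (0,) + best, "length:", round(route_length((0,) + best), 2))
print("routes checked:", factorial(len(stops) - 1))        # (n-1)! orderings
print("for 20 stops that would be", factorial(19), "orderings")
```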

Which brings us to machine learning and AI. The latest incarnation of machine learning, deep learning, is pushing the limits of what traditional computers can handle. Large transformer models, such as OpenAI's GPT-3, which has 175 billion parameters, take months to train on classical computers. As future models grow into the trillions of parameters, they will take even longer to train. That is one reason why users are adopting novel microprocessor architectures that deliver better performance than what traditional CPUs and even GPUs can deliver.

But at the end of the day, CPUs and GPUs are tied to classical binary computers, and the limitations they entail. Quantum computers offer the possibility of a quantum leap in performance and capability for a range of use cases, and AI is definitely one of them.

Cem Dilmegani, an industry analyst at AIMultiple, defines quantum AI as the use of quantum computing for running machine learning algorithms. "Thanks to computational advantages of quantum computing, quantum AI can help achieve results that are not possible to achieve with classical computers," Dilmegani writes.

A quantum computer from Oxford-Quantum-Circuits (Image courtesy of the company)

One of the early quantum computer manufacturers that's making moves in this area is Google. In March 2020, Google launched TensorFlow Quantum, which brings the TensorFlow machine learning development library to the world of quantum computers. With TensorFlow Quantum, developers will be able to develop quantum neural network models that run on quantum computers.
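A minimal sketch of what that workflow looks like, following the TensorFlow Quantum hello-world pattern (the one-qubit circuit, readout choice and training target here are illustrative, and the model trains against TFQ's built-in simulator rather than real quantum hardware):

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# One qubit and one trainable rotation angle.
qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# Keras model whose "layer" is a parametrized quantum circuit (PQC);
# the readout is the expectation value of Z on the qubit.
inputs = tf.keras.Input(shape=(), dtype=tf.string)    # circuits are fed as serialized tensors
outputs = tfq.layers.PQC(circuit, cirq.Z(qubit))(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Train the angle so the measured expectation value approaches a target of -1.
data = tfq.convert_to_tensor([cirq.Circuit()])         # empty circuit as the input state
target = tf.constant([[-1.0]])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(data, target, epochs=50, verbose=0)
print(model.predict(data))    # expectation value close to -1 after training
```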

While running AI applications on quantum computers is still in its very earliest stages, there are many organizations working to develop it. NASA has been working with Google for some time, and there is also work going on in the national labs.

For instance, last month, researchers at Los Alamos National Laboratory published a paper called Absence of Barren Plateaus in Quantum Convolutional Neural Networks, which essentially shows that convolutional neural networks (the type commonly used for computer vision problems) can run on quantum computers.

"We proved the absence of barren plateaus for a special type of quantum neural network," Marco Cerezo, a LANL researcher who co-authored the paper, said in a LANL press release. "Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters."

LANL researchers are bullish on the potential for quantum AI algorithms to provide the next breakthrough in computational capability. Patrick Coles, a quantum physicist at LANL and a co-author of the paper, said this approach will yield new approaches for crunching large amounts of data.

"The field of quantum machine learning is still young," Coles said in the LANL press release. "There's a famous quote about lasers, when they were first discovered, that said they were a solution in search of a problem. Now lasers are used everywhere. Similarly, a number of us suspect that quantum data will become highly available, and then quantum machine learning will take off."

Earlier this year, IBM Research announced that it found mathematical proof of a quantum advantage for quantum machine learning. The proof came in the form of a classification algorithm that, provided access to classical data, provided a provable exponential speedup over classic ML methods. While there are plenty of caveats to go along with that statement, it provides a glimpse into one potential future where quantum AI is feasible.

IBM quantum computer (Source: IBM)

To be sure, there is plenty of doubt whenever two highly hyped technologies, AI and quantum computing, come together. In its July 2021 blog, IBM stated: "Few concepts in computer science cause as much excitement, and perhaps as much potential for hype and misinformation, as quantum machine learning."

While there appears to be potential with quantum AI, that potential is, as yet, unrealized. On the bright side, there appears to be at least cause for some optimism that a real breakthrough could be in our future.

"Sceptics are correct in that quantum computing is still a field of research and it is a long way from being applied to neural networks," Dilmegani writes. "However, in a decade, AI could run into another plateau due to insufficient computing power and quantum computing could rise to help the advance of AI."

It's still too soon to tell whether the field of quantum computing will have a major impact on the development of AI. We're still in the midst of what those in the quantum computing field call Noisy Intermediate-Scale Quantum, or NISQ. There definitely are many promising developments, but there are too many unanswered questions still.

Related Items:

Machine Learning Cuts Through the Noise of Quantum Computing

Google Launches TensorFlow Quantum

IBM Pairs Data Science Experience with Quantum Computer

Link:
Is Quantum Computing the Future of AI? - Datanami

Read More..