
Microsoft invests €3.2 billion in AI and the cloud in Germany – CIO

While 2023 was still a year of trial and error in terms of generative AI, when everyone threw all kinds of ingredients together and tried them out, 2024 is the year in which German industrial giants such as Siemens, Mercedes-Benz, Bayer, and BASF will deeply integrate AI technologies into their own services. Ultimately, the German economy could create a unique selling point with the help of AI.

According to Janik, the International Monetary Fund's AI Readiness Index also shows that the German economy has nothing to hide when it comes to AI. It puts Germany in third place worldwide, behind Singapore and the US. In addition, she said, the widespread use of AI offers Germany the opportunity to increase its gross national product by 0.6% per year.

However, this is not a sure-fire success, she warned. "We need a strong digital infrastructure and the right skills. We really need to empower everyone in Germany to be able to use these AI technologies."

After doubling the capacity of its own data centers in Germany in 2023, Microsoft intends to double it again by the end of 2025. One focus will be on expanding the cloud region around Frankfurt am Main. In addition, new infrastructure is planned in the Rhenish mining district in North Rhine-Westphalia.

When building the new data centers, Microsoft is also paying attention to the sustainability aspect and investing in renewable energies.

"We will generate more electricity from renewable energy sources than our data centers consume," Smith promised, noting that Microsoft is focusing on solar energy in Germany.

Read more:
Microsoft invests €3.2 billion in AI and the cloud in Germany - CIO


Admiral Selects Google Cloud to Accelerate Innovative Customer Experiences – PR Newswire

LONDON, Feb. 14, 2024 /PRNewswire/ -- Admiral, a leading financial services company in the UK and part of the Admiral Group, has selected Google Cloud as a strategic cloud partner. Under the terms of the agreement, Admiral's core insurance operations, including insurance policy administration and digital systems, will now be powered by Google Cloud. The company will also use Google Cloud technologies to develop new digital products and services, such as making further improvements to its customer-facing mobile app.

The collaboration will enable Admiral to accelerate time-to-market for new products and services by deploying containerised cloud applications and adopting new software development practices. Admiral will continue to improve operational efficiency using Google Cloud's data analytics capabilities, and better serve its customers with Google Cloud's AI and machine learning services.

The partnership with Google Cloud covers four core areas:

"With our customers at the heart of everything we do, Admiral is delighted to join forces with Google Cloud to help us achieve our strategic goals," said Admiral CIO Alan Patefield-Smith. "Google Cloud's cutting-edge tech and expertise allows us to accelerate our digital transformation journey and helps us to deliver forward-thinking customer experiences."

"Admiral is an innovative insurer that has delivered many firsts to the market. We are proud to support its continued commitment to giving its customers the very best products and services across its insurance portfolio," said Helen Kelisky, MD, Google Cloud, UKI. "We look forward to strengthening our existing relationship with Admiral to help it accelerate its change strategy and deliver even better experiences."

About Admiral

Admiral is a leading Financial Services company covering services such as motor, home, travel insurance, Insurtech and legal services. Admiral is part of Admiral Group, a FTSE 100 Financial Services company with businesses in the UK, Europe and America. In the UK it has over 7,500 colleagues and over 6.4 million customers. In 2023, Admiral was named the 6th best workplace in the UK by Great Place to Work, as well as the 14th best workplace for Wellbeing, and the 3rd best workplace for Women. It was also named the Best Big Company To Work For in the UK in the Best Companies To Work For list. Follow Admiral on Facebook, Twitter and Instagram at @admirallife, and on LinkedIn at Admiral Group Plc.

About Google Cloud

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

SOURCE Google Cloud

View original post here:
Admiral Selects Google Cloud to Accelerate Innovative Customer Experiences - PR Newswire


German chancellor welcomes Microsoft’s $3.5 billion AI investment in Germany – Quartz

Federal Chancellor Olaf Scholz, right, and Vice Chair and President of Microsoft Corporation Brad Smith shake hands in Berlin, Thursday, Feb. 15, 2024, after a press conference held by Microsoft Deutschland GmbH on the company's investments in the AI sector in Germany. Chancellor Olaf Scholz welcomed an announcement by Microsoft on Thursday that it would invest more than 3.2 billion euros ($3.4 billion) in Germany over the next two years to massively expand its data center capacities for applications in the field of artificial intelligence and cloud computing. (Kay Nietfeld/dpa via AP) Image: ASSOCIATED PRESS

BERLIN (AP) - German Chancellor Olaf Scholz welcomed an announcement Thursday by Microsoft that it would invest almost 3.3 billion euros ($3.5 billion) in Germany over the next two years to massively expand its data center capacities for applications in the field of artificial intelligence and cloud computing.


"This is a really good commitment to progress, to growth, to modernity and to global openness as the basis for these opportunities," he said, adding that it was also linked to the fact that Germany remains very determined to be an open economy.

"Not only are we probably the most successful export economy in the world in terms of the size of our country, but we are also a country that trades with the whole world, that invests everywhere, but also invests in our own country," he said.

Microsoft President Brad Smith made the announcement in Berlin during a presentation with Scholz. The largest single investment in Microsoft's 40-year history in Germany also includes an AI training program that aims to reach up to 1.2 million people, German news agency dpa reported.

Microsoft is looking to be close to major customers, such as the pharmaceutical company Bayer AG and energy company RWE, in order to keep data latency between data centers and applications as low as possible. The central German state of Hesse will also benefit from Microsofts investments.

More here:
German chancellor welcomes Microsoft's $3.5 billion AI investment in Germany - Quartz


Cloud Migration Services Market worth $29.2 billion by 2028 – Exclusive Report by MarketsandMarkets USA … – PR Newswire

CHICAGO, Feb. 15, 2024 /PRNewswire/ -- Trends including edge computing migration, AI-driven solutions, containerisation, and the use of hybrid and multi-cloud infrastructure will shape the cloud migration services market in the future. The evolution of services is driven by issues like regulatory compliance and the complexity of data migration; cost optimisation, DevOps integration, and ecosystem collaboration are key components of effective, safe, and compliant cloud migrations.

The Cloud Migration Services Market is projected to grow from USD 10.2 billion in 2023 to USD 29.2 billion by 2028, at a compound annual growth rate (CAGR) of 23.3% during the forecast period, according to a new report by MarketsandMarkets. The Cloud Migration Services Market is expected to grow significantly during the forecast period, owing to numerous business drivers such as rising demand for better agility and automation, and seamless integration and compatibility of enterprises with the evolving landscape of cloud technology.

Browse in-depth TOC on "Cloud Migration Services Market"

219 Tables, 52 Figures, 287 Pages

Download PDF Brochure @ https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=266815130

Scope of the Report

Market size available for years: 2018–2028
Base year considered: 2022
Forecast period: 2023–2028
Forecast units: USD (Billion)
Segments covered: Service Type, Deployment Mode, Migration Type, Application, Vertical, and Region
Geographies covered: North America, Asia Pacific, Europe, Middle East & Africa, and Latin America
Companies covered: IBM (US), AWS (US), Google (US), Microsoft (US), Oracle (US), SAP (Germany), VMWare (US), Cisco (US), NTT Data (Japan), Accenture (Ireland), Infosys (India), DXC (US), HPE (US), Veritis (US), RiverMeadow (US), Rackspace (US), Informatica (US), WSM (US), and others.

By service type, the assessment & planning services segment is expected to register the largest market size during the forecast period.

By service type, the assessment & planning services segment is expected to register the largest market size during the forecast period. These services align the cloud migration strategy with the business objectives of the organization. By understanding the specific goals and requirements, assessment services ensure that the migration plan is tailored to meet the unique needs of the business. Assessment and planning services form the foundational phase of cloud migration, providing organizations with a comprehensive understanding of their current state, aligning migration strategies with business objectives, identifying and mitigating risks, and developing a detailed roadmap for a successful transition to the cloud.

Request Sample Pages@https://www.marketsandmarkets.com/requestsampleNew.asp?id=266815130

By deployment mode, the hybrid cloud segment is expected to register the highest CAGR during the forecast period.

The hybrid cloud segment of the Cloud Migration Services Market is growing rapidly. Hybrid Cloud deployment, supported by cloud migration services, provides organizations with a balanced and adaptable approach to cloud adoption. It offers the advantages of cloud scalability and innovation while allowing organizations to retain control over certain workloads and maintain compliance with specific requirements. Cloud migration services are instrumental in designing, implementing, and optimizing Hybrid Cloud solutions tailored to the unique needs of each organization.

By region, Asia Pacific is expected to account for the highest growth rate during the forecast period.

Asia Pacific is witnessing significant growth in the Cloud Migration Services Market. Companies in the region are recognizing the cost benefits associated with cloud migration. Cloud services offer a pay-as-you-go model, eliminating the need for significant upfront investments in hardware and infrastructure. Several governments across the Asia Pacific region have been promoting cloud adoption through various initiatives and policies. These efforts aim to foster innovation, improve public services, and drive economic growth.

Top Key Companies in Cloud Migration Services Market:

Some major players in the Cloud Migration Services Market include IBM (US), AWS (US), Google (US), Microsoft (US), Oracle (US), SAP (Germany), VMWare (US), Cisco (US), NTT Data (Japan), Accenture (Ireland), Infosys (India), DXC (US), HPE (US), Veritis (US), RiverMeadow (US), Rackspace (US), Informatica (US), WSM (US), and so on.

Recent Developments:

Inquire Before Buying@ https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=266815130

Cloud Migration Services Market Advantages:

Report Objectives

Browse Adjacent Markets: Cloud Computing Market Research Reports & Consulting

Related Reports:

Cloud Storage Market - Global Forecast to 2028

Cloud Mobile Backend as a Service Market - Global Forecast to 2028

Telecom Cloud Market - Global Forecast to 2027

Cloud TV Market - Global Forecast to 2026

Biometrics as a Service Market - Global Forecast to 2025

About MarketsandMarkets

MarketsandMarkets has been recognized as one of America's best management consulting firms by Forbes, as per their recent report.

MarketsandMarkets is a blue ocean alternative in growth consulting and program management, leveraging a man-machine offering to drive supernormal growth for progressive organizations in the B2B space. We have the widest lens on emerging technologies, making us proficient in co-creating supernormal growth for clients.

Earlier this year, we made a formal transformation into one of America's best management consulting firms as per a survey conducted by Forbes.

The B2B economy is witnessing the emergence of $25 trillion of new revenue streams that are substituting existing revenue streams in this decade alone. We work with clients on growth programs, helping them monetize this $25 trillion opportunity through our service lines - TAM Expansion, Go-to-Market (GTM) Strategy to Execution, Market Share Gain, Account Enablement, and Thought Leadership Marketing.

Built on the 'GIVE Growth' principle, we work with several Forbes Global 2000 B2B companies - helping them stay relevant in a disruptive ecosystem. Our insights and strategies are molded by our industry experts, cutting-edge AI-powered Market Intelligence Cloud, and years of research. The KnowledgeStore (our Market Intelligence Cloud) integrates our research, facilitates an analysis of interconnections through a set of applications, helping clients look at the entire ecosystem and understand the revenue shifts happening in their industry.

To find out more, visit http://www.MarketsandMarkets.com or follow us on Twitter, LinkedIn and Facebook.

Contact: Mr. Aashish Mehra
MarketsandMarkets INC.
630 Dundee Road, Suite 430
Northbrook, IL 60062
USA: +1-888-600-6441
Email: [emailprotected]
Research Insight: https://www.marketsandmarkets.com/ResearchInsight/cloud-migration-service-market.asp
Visit Our Website: https://www.marketsandmarkets.com/
Content Source: https://www.marketsandmarkets.com/PressReleases/cloud-migration-service.asp

Logo: https://mma.prnewswire.com/media/2297424/MarketsandMarkets_Logo.jpg

SOURCE MarketsandMarkets

View post:
Cloud Migration Services Market worth $29.2 billion by 2028 - Exclusive Report by MarketsandMarkets USA ... - PR Newswire


Embracing shadow AI will help accelerate innovation – CIO

Emerging technologies catch IT leaders flat-footed, so it comes as no surprise that some are clenching their teeth over shadow AI, or the unsanctioned use of generative AI and associated services.

CIOs recall when cloud computing disrupted the industry more than a decade ago. They remember business line leaders using corporate credit cards for cloud applications from startups.

Yet once IT leaders realized how much employees valued these tools, they came around. They secured enterprise licenses, integrated the technologies with their existing IT systems and trained employees. IT leaders augmented productivity while preserving security.

With today's proliferation of genAI services democratizing AI, IT leaders are bracing for the growth of shadow AI.

Yet IT leaders should embrace and extend, rather than hinder, employees' genAI experiences.

This would serve two desirable goals: help employees realize greater productivity and even augment the customer experience.

GenAI could help buoy profits by as much as $4.4 trillion by boosting productivity across customer operations, sales and marketing, software engineering, and research and development, according to McKinsey Digital.1

With genAI gaining traction, IT should learn which tools employees prefer to use, educate them on how to use those tools responsibly and institute guardrails. This might help IT garner goodwill with the business, with which it may be at odds even today.

Consider that while more than half of IT decision makers want a stronger relationship with their business peers, 81% of business decision makers exclude their IT peers from strategic decision-making, according to new Dell research.2 Lack of trust is typically the key sticking point.

Yet IT leaders can take lessons from their past experiences with shadow IT to build a bridge between employees and genAI that cultivates trust and unleashes innovation. These tips can point the way.

Some IT leaders' first instinct is to create or at least command every technology solution. Instead, IT leaders should learn and understand how employees consume genAI to assist their work.

IT leaders must then work with the C-suite to build consensus around how to use genAI to accelerate the business but balance such efforts with risk mitigation, as McKinsey noted.

Leadership will communicate this messaging to the rank-and-file early and often.

When its time to engage with a vendor or build a solution, IT leaders can position themselves as innovators by executing with a product management mindset that aligns agile development practices with business goals.

"CIOs can use this approach to ensure that their technology solutions directly contribute to the organization's objectives," said CIO-turned-investor Yousuf Khan.

GenAI is new enough that most employees don't yet know how to use it and most companies aren't helping. Only 6% of companies have trained more than 25% of their people on genAI tools, according to a Boston Consulting Group survey of C-suite executives.3

Organizations must tailor education, training and tools for employees in technical and nontechnical roles.

Accenture plans to train 250,000 employees on how to use genAI services responsibly, said Accenture CEO Julie Sweet at the recent World Economic Forum. "This is basic digital literacy to run a company and to be good," Sweet said.

The stakes are high for such literacy. As much as 40% of current job roles will be redefined or eliminated across large enterprises due to genAI adoption, according to IDC research commissioned by Dell.4

From a risk perspective, genAI is like Jurassic Park: chaotic. GenAI services are black boxes; no one knows how they arrive at their conclusions. They may hallucinate and spew false information. Compromised or inaccurate content can put businesses' reputation, or worse, at risk.

This makes risk mitigation critical. Most companies will track and assess AI for risk and apply mitigation strategies as needed over the next 12 to 24 months, Accenture's Sweet predicted.

That timeline feels long. Organizations adopting genAI should institute governance and security models today, ideally taking a Zero Trust approach, to keep the organization safe and secure.

Falling back on the classic command-and-control stance is instinctive for IT leaders, but genAI is too easy to use and too readily accessible to most employees for that approach to work.

Instead, IT leaders should work with their business peers on responsible use, adopt and/or build safe and vetted technologies, and educate and communicate to the rank-and-file how to consume them.

Trusted partners can help, providing the hardware, software and services to help organizations bring AI to their data in a way that respects IT governance while democratizing employee access to genAI services.

Learn more at dell.com/ai.

See the article here:
Embracing shadow AI will help accelerate innovation - CIO


USPS scam smishing campaigns could move to cloud with SNS Sender – SC Media

USPS failed delivery scam texts could be sent through Amazon cloud services using a new phishing tool brought to light by researchers Thursday.

SNS Sender is a Python script discovered by SentinelOne researchers that is designed to enable bulk SMS delivery via the Amazon Simple Notification Service (SNS). The phishing kit automatically inserts links to attacker-controlled websites, such as fake U.S. Postal Service (USPS) websites that collect victims' personal information, including names, addresses, phone numbers, emails and credit card numbers.

SentinelLabs Threat Researcher Alex Delamotte said in a blog post detailing the script that it represents a previously unseen technique in the context of cloud attack tools.

"A common thread between businesses and threat actors is that both are moving workloads previously handled by traditional web servers to the cloud," Delamotte wrote.

SMS phishing, or smishing, campaigns may leverage bulk SMS delivery tools, such as Twilio, to boost their ability to spam victims en masse. The SNS Sender smishing kit is believed to be the first of its kind to target Amazon SNS, according to SentinelOne.

The suspected author of SNS Sender is known by the alias ARDUINO_DAS, whose handle appears in more than 150 other phishing kit files identified by SentinelOne. More than half of the kits associated with ARDUINO_DAS were related to USPS scams.

SNS Sender contains a text file for storing a list of phishing links that are randomly chosen and inserted into smishing messages by replacing occurrences of the 'linkas' placeholder string. It also includes text files for storing target phone numbers, message contents and Amazon Web Services (AWS) access keys.

There are some signs in the SNS Sender script that suggest it is more of a work-in-progress than a complete smishing kit. For example, the script includes the ability to insert a custom sender ID, which is not supported by carriers in the United States where targets of USPS scams would presumably reside.

Additionally, the manner in which the script selects AWS access key pairs to use for each message does not appear to be optimized, as it would require an impractically long list of credentials to run at scale.

While the discovery of a phishing tool dedicated to exploitation of Amazon SNS is a new development, there have been several examples of threat actors targeting cloud servers for potential subsequent phishing campaigns.

For example, an attacker who used previously exposed AWS access keys to infiltrate an AWS server in March 2023 was observed by Permiso researchers attempting a GetSMSAttributes action. The researchers realized attackers may run GetSMSAttribute, GetSMSSandboxAccountStatus and similar commands to determine if a hijacked server is configured properly to send mass SMS messages.

Attackers targeting AWS SNS may run into trouble, as the cloud service does not enable bulk SMS delivery by default. The AWS tenant must be outside of the SNS sandbox environment to take advantage of this feature.
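
For defenders and auditors, the same documented SNS APIs that attackers probe can be used to check whether an account is even capable of the bulk SMS delivery SNS Sender relies on. The snippet below is a minimal illustrative sketch using boto3, not code from SNS Sender; the region and the specific attributes queried are assumptions made for the example.

```python
import boto3

# Illustrative check of an AWS account's SMS posture (region is an assumption).
sns = boto3.client("sns", region_name="us-east-1")

# Accounts still inside the SNS SMS sandbox cannot text arbitrary phone numbers,
# which is why out-of-sandbox tenants are more valuable to smishing operators.
status = sns.get_sms_sandbox_account_status()
print("In SMS sandbox:", status["IsInSandbox"])

# Attackers were observed calling GetSMSAttributes to gauge whether a hijacked
# account is worth using; defenders can review the same settings and alert on
# unexpected callers of these APIs in CloudTrail.
attrs = sns.get_sms_attributes(attributes=["MonthlySpendLimit", "DefaultSMSType"])
print(attrs["attributes"])
```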

SentinelOne previously detailed Predator AI, a Python-based infostealer and hacking tool that leverages the ChatGPT API. Predator AI targets a wide range of cloud services, including email and SMS communication services that could be leveraged for phishing campaigns, such as AWS Simple Email Service (SES).

Earlier this year, another Python-based hack tool called FBot was revealed to be targeting AWS, Sendgrid, Twilio and other cloud and software services. Like Predator AI, FBot established initial access to accounts to be used post-compromise for email and text spamming.

Continue reading here:
USPS scam smishing campaigns could move to cloud with SNS Sender - SC Media


Benefits and challenges of managed cloud security services – TechTarget

Organizations continue to grow more distributed, virtual and complex. Over the last several years, they have sped up digital transformation projects, leaning hard into hybrid and multi-cloud deployments. This rapid movement comes with a price, however. Too many organizations lack the in-house cloud security expertise and resources needed to protect cloud assets effectively.

One option to address these challenges is managed cloud security. Outsourcing cloud security to a third party not only helps organizations with limited cloud security resources manage risks in the cloud, but it can, in some cases, save budget and free in-house security teams to focus on other pressing issues.

Let's look at the challenges of managing cloud security and the benefits and challenges of using managed cloud security services.

The cloud introduces several new security issues organizations must contend with. Security teams struggle to detect and remediate cloud security threats, with 90% of the organizations surveyed in Palo Alto Networks' "The State of Cloud-Native Security Report 2023" admitting they can't identify and mitigate cyberthreats within an hour.

In addition, too many organizations deploy cloud applications too quickly. This can equate to limited testing time and DevOps teams deploying code with gaping security holes. Developers are also tapping commercial off-the-shelf software to accelerate deployment times -- some of which don't have the best security measures. An application's security is at risk if it has any vulnerabilities in the development software.

Organizations also struggle with the number of tools needed to manage cloud security. The Palo Alto Networks survey found teams use more than 30 discrete security tools, of which six to 10 are for cloud security. Plus, 75% said the large number of separate tools makes it difficult to get an accurate view of the cloud environment. They said, in this scenario, it is challenging to gauge where the most significant risks are and how to remediate them.

Lastly, cloud providers apply a shared responsibility model to security. IaaS providers are primarily responsible for infrastructure security, while the client is on the hook for securing the workloads running in the environment. Client cloud operations teams sometimes need help understanding where their obligations begin and end.

Managed cloud security delivers many of the same benefits as outsourcing on-premises security. It can provide advanced threat intelligence and threat hunting capabilities, backed by the support of threat researchers and sophisticated tools, to expedite and improve threat identification. These services can also help organizations prioritize alerts and contain threats.

The best managed cloud security providers are trusted partners that can deliver innovative and effective technology, while alleviating the headaches associated with collating data from disparate tools. Managed cloud security services can also give organizations access to cloud-specific expert resources and partners with experience navigating evolving regulatory environments.

Outsourcing cloud security can also be more cost-effective than handling everything in-house; consolidating security operations under a third party can lower some operating expenses.

Managed cloud security isn't perfect. Suppose the service only provides cloud security for one environment. The client's IT team must integrate data from the cloud security services with its other security resources, adding complexity to security management.

There is also always a risk the external provider and its partners could expose the client's cloud environment to new risks. This fear of loss of control keeps many organizations from adopting managed cloud security services.

Finally, using third-party cloud security services -- depending on the circumstances -- could prove more expensive than managing these protections internally.

Many cloud security suppliers are available. All hyperscalers and cloud providers offer security controls as part of their IaaS and SaaS offerings, often for free. However, apart from Microsoft, which offers a full slate of managed cloud security services, most are discrete tools focusing on a single security aspect rather than providing a complete end-to-end perspective on the cloud environment. These primarily concentrate on security within their cloud, which complicates the security situation for organizations with hybrid and multi-cloud environments.

On the other hand, all major managed security service providers (MSSPs) offer cloud security services, as do many vendors that opt for a security-as-a-service model. MSSPs often provide security across cloud and hybrid environments. Most of their services are delivered via the cloud, translating to more rapid deployment. They can also mask much of the complexity associated with cloud security management, making it easier for internal security teams to tackle challenges as they arise.

One crucial aspect to consider is how cloud security fits into an organization's broader security strategy. It is essential to see the security perspective across the entire enterprise IT estate, including hybrid and multi-cloud. Tools such as extended detection and response offer protection from the customer premises to the cloud. These products amalgamate the tools that track, analyze and orchestrate responses across endpoints, infrastructure, workloads, networks and the cloud.

Cloud security services are available for organizations of all sizes, but under-resourced smaller and midsize organizations typically benefit the most. Finding the right provider comes down to trust -- and a proven track record. Cloud security providers should be able to demonstrate effectiveness in production cloud environments with customer testimonials. They must have integrations with all the hyperscalers and major cloud providers. It is also essential that cloud security services can integrate with any on-premises security infrastructure for more holistic management.

Amy Larsen DeCarlo has covered the IT industry for more than 30 years, as a journalist, editor and analyst. As a principal analyst at GlobalData, she covers managed security and cloud services.

Link:
Benefits and challenges of managed cloud security services - TechTarget


A causal perspective on dataset bias in machine learning for medical imaging – Nature.com


Read more:
A causal perspective on dataset bias in machine learning for medical imaging - Nature.com


MIT researchers remotely map crops, field by field – MIT News

Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team's method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next.

The researchers used the technique to automatically generate the first nationwide crop map of Thailand, a smallholder country where small, independent farms make up the predominant form of agriculture. The team created a border-to-border map of Thailand's four major crops (rice, cassava, sugarcane, and maize) and determined which of the four types was grown, at every 10 meters, and without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

The team is applying their mapping technique to other countries such as India, where small farms sustain most of the population but the type of crops grown from farm to farm has historically been poorly recorded.

"It's a longstanding gap in knowledge about what is grown around the world," says Sherrie Wang, the d'Arbeloff Career Development Assistant Professor in MIT's Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS). "The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown; the more granularly you can map, the more questions you can answer."

Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

Ground truth

Smallholder farms are often run by a single family or farmer, who subsists on the crops and livestock that they raise. It's estimated that smallholder farms support two-thirds of the world's rural population and produce 80 percent of the world's food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low to middle-income countries, where few resources are devoted to keeping track of individual farms' crop types and yields.

Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These ground truth labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. They then label and map wider swaths of farmland that assessors don't cover but that satellites automatically do.

"What's lacking in low- and middle-income countries is this ground label that we can associate with satellite signals," Laguarta Soler says. "Getting these ground truths to train a model in the first place has been limited in most of the world."

The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

In recent years, such services have been able to access low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

Cropped image

In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand, a country that the service has recently imaged fairly thoroughly and which consists predominantly of smallholder farms.

Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation. About 81,000 images were crop-related. They set aside 2,000 of these, which they sent to an agronomist, who determined and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, using various training methods, including iNaturalist, a web-based crowdsourced biodiversity database, and GPT-4V, a multimodal large language model that enables a user to input an image and ask the model to identify what the image is depicting. For each of the 81,000 images, the model generated a label of one of four crops that the image was likely depicting: rice, maize, sugarcane, or cassava.

The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location's greenness and its reflectivity (which can be a sign of water).

"Each type of crop has a certain signature across these different bands, which changes throughout a growing season," Laguarta Soler notes.

The team trained a second model to make associations between a location's satellite data and its corresponding crop label. They then used this model to process satellite data taken of the rest of the country, where crop labels were not generated or available. From the associations that the model learned, it then assigned crop labels across Thailand, generating a country-wide map of crop types, at a resolution of 10 square meters.
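
As a rough illustration of what such a second-stage model might look like, the sketch below pairs per-location satellite band time series with the street-view-derived crop labels and fits an off-the-shelf classifier. This is not the authors' code; the file names, feature layout, and choice of a random forest are assumptions made purely for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row per labeled location; columns are satellite measurements (e.g.,
# greenness and reflectance bands) sampled at several dates in the growing season.
# y: crop label produced by the street-view pipeline
# (0 = rice, 1 = maize, 2 = sugarcane, 3 = cassava).
X = np.load("satellite_time_series.npy")   # shape: (n_locations, n_bands * n_dates)
y = np.load("streetview_crop_labels.npy")  # shape: (n_locations,)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Validation accuracy:", model.score(X_val, y_val))

# The fitted model can then be applied to the time series of every 10 m pixel
# to produce a wall-to-wall crop map for the whole country.
```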

This first-of-its-kind crop map included locations corresponding to the 2,000 GSV images that the researchers originally set aside and had labeled by hand. These human-labeled images were used to validate the map's labels, and when the team looked to see whether the map's labels matched the expert, gold-standard labels, it did so 93 percent of the time.

"In the U.S., we're also looking at over 90 percent accuracy, whereas with previous work in India, we've only seen 75 percent because ground labels are limited," Wang says. "Now we can create these labels in a cheap and automated way."

The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

"There are over 150 million smallholder farmers in India," Wang says. "India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it's been very difficult to create maps of India because there are very sparse ground labels."

The team is working to generate crop maps in India, which could be used to inform policies having to do with assessing and bolstering yields, as global temperatures and populations rise.

"What would be interesting would be to create these maps over time," Wang says. "Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies."

Continue reading here:
MIT researchers remotely map crops, field by field - MIT News


Fairness in machine learning: Regulation or standards? | Brookings – Brookings Institution

Abstract

Machine Learning (ML) tools have become ubiquitous with the advent of Application Programming Interfaces (APIs) that make running formerly complex implementations easy with the click of a button in Business Intelligence (BI) software or one-line implementations in popular programming languages such as Python or R. However, machine learning can cause substantial socioeconomic harm through failures in fairness. We pose the question of whether fairness in machine learning should be regulated by the government, as in the case of the European Union's (EU) initiative to legislate liability for harmful Artificial Intelligence (AI) and the New York City AI Bias law, or whether an industry standard should arise, similar to the International Organization for Standardization (ISO) quality-management manufacturing standard ISO 9001 or the joint ISO and International Electrotechnical Commission (IEC) standard ISO/IEC 27032 for cybersecurity in organizations, or both. We suggest that regulators can help with establishing a baseline of mandatory security requirements, and standards-setting bodies in industry can help with promoting best practices and the latest developments in regulation and within the field.

The ease of incorporating new machine learning (ML) tools into products has resulted in their use in a wide variety of applications, including medical diagnostics, benefit fraud detection, and hiring. Common metrics for optimizing algorithm performance, such as Accuracy (the ratio of correct predictions to the total number of predictions), do not paint the complete picture regarding False Positives (the algorithm incorrectly predicted positive) and False Negatives (the algorithm incorrectly predicted negative), nor do they quantify the individual impact of being mislabeled. The literature has, in recent years, created the subfield of Machine Learning Fairness, attempting to define statistical criteria for group fairness such as Demographic Parity or Equalized Opportunity, which are explained in section II, and over twenty others, described in comprehensive review articles like the one by Mehrabi et al. (2021).1 As the field of ML fairness continues to evolve, there is currently no one standard agreed upon in the literature for how to determine whether an algorithm is fair, especially when multiple protected attributes are considered.2 The literature on which we draw includes computer science literature, standards and governance, and business ethics.

Fairness criteria are statistical in nature and simple to run for single protected attributes, that is, individual characteristics that cannot be the basis of algorithm decisions (e.g., race, national origin, and age, among others). Protected attributes in the United States are defined in U.S. federal law and began with Title VI of the Civil Rights Act of 1964. However, in cases of multiple protected attributes it is possible that no criterion is satisfied. Furthermore, oftentimes a human decision maker needs to audit the system for compliance with the fairness criteria with which it originally complied at design,3 given that a machine learning-based system often adapts through a growing training set as it interacts with more users. Moreover, no current federal law or industry standard mandates regular auditing of such systems.
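
To show how mechanically simple such a check is for a single binary protected attribute, the sketch below computes two of the group-fairness criteria named earlier, demographic parity and equalized opportunity. The data are toy values and the helper functions are our own illustration, not part of any standard library.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Toy data: binary predictions for members of two groups (0 and 1).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```

With more than one protected attribute, the same checks multiply across intersectional subgroups, which is where, as noted above, it may become impossible to satisfy every criterion at once.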

However, precedents exist in other industries for both laws and standards where risk to users exists. For example, laws in both the U.S. (the Federal Information Security Modernization Act [FISMA], the California Consumer Privacy Act [CCPA]) and the EU (the General Data Protection Regulation [GDPR]) protect user data. In addition, the industry has moved to reward those who find cybersecurity bugs and report them to companies confidentially, for example through bug reward programs like Google's Vulnerability Reward Program. In this article, we propose that a joint effort between lawmakers and industry may be the best strategy to improve the fairness of machine learning systems and maintain existing systems so that they adhere to fairness standards, with higher penalties for systems that pose greater risks to users.

Before elaborating further on existing regulations, we will briefly summarize what ML fairness is and illustrate why it is a complex problem.

ML fairness, and AI fairness more broadly, is a complex and multidimensional concept, and there are several definitions and metrics used to measure and evaluate fairness in ML-based systems. Some of the most common definitions include:456

The fairness criteria should equally apply to procedural and distributive aspects.9 Other approaches are also possible; among others, the use of balanced accuracy (and its related measures)10 should be explored.

These definitions of fairness often require trade-offs, as optimizing for one may negatively impact another.11 Chouldechova (2017)12 showed that it is not possible for three group-fairness criteria to be satisfied at once, so determining the appropriate fairness metric for a specific AI system depends on the context, the domain, and the societal values at stake. This is a decision that human designers of an ML system need to make, ideally prior to the system's release to actual users. Involving several stakeholders, including users of the ML system, in deciding the fairness metric is helpful to ensure the end-product aligns with the system's ethical principles and goals.13 There is an extensive literature in ethics and psychology revolving around principles of procedural fairness.141516 Throughout the literature, procedural fairness has been broadly understood as the perceived fairness of the methods used to make the decisions,17 and involves high-level principles such as correctability (if a decision is perceived as incorrect, the affected party has a mechanism to challenge it), representativeness, and accuracy (in the field of ML algorithms, this means the algorithms rely on valid, high-quality information18), among others (for a detailed explanation of procedural fairness applied to ML see Footnote 17). These principles are found in regulations; for example, the correctability principle is found in GDPR privacy law as a right to rectification and in the right to object found in both CCPA and GDPR.19

Given the rapid advances in machine learning, we recognize that legal frameworks by nations or local governments may be more difficult to develop and update. Thus, we propose first looking at industry standards, which can be used to incentivize companies to perform better while also collaborating on developing standards. Certification of a product to a standard can provide customers with a signal of quality and thus differentiate among ML-based solutions in the market.

Companies that are early adopters of a novel standard of ML fairness may be able to use that standard to gain market share as well as establish a competitive advantage over newcomers. For example, a company that invests early in an auditing team for its ML system may produce more transparent software, which could be apparent to the discerning customer and thus ease concerns customers might have regarding the use of their data. Industry collaboration organizations such as the Institute of Electrical and Electronics Engineers [IEEE] have developed standards for recent technologies with the help of industry leaders, and those standards have delivered benefits for customers. For instance, the IEEE 802 wireless standards provided a foundation for the now-widespread Wi-Fi technology and, in the early days, provided a badge signaling to customers purchasing computers that the manufacturer complied with the latest Wi-Fi standard. Current updates to the standard enable a new wave of innovation, including in areas such as indoor mapping. The same incentives for standardization advocated by organizations like IEEE and ISO in manufacturing quality management, electrical safety, and communications may apply to ML.20

In the following section, we include some additional background information on standards and parallels to the cybersecurity standards that could serve as a reference for developing standards for ML fairness.

Standards are guidelines or best practices developed collaboratively by industry and federal regulators to improve the quality of products and services. Although they are often voluntary, organizations choose to adopt them to signal their commitment to security and quality, among other reasons. Some of the most prolific and widely adopted standards in the technology sector are the International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC) 27000 series, the National Institute of Standards and Technology (NIST) Cybersecurity Framework, and the Center for Internet Security (CIS) Critical Security Controls.

While standards are recommended, regulations are enforced, and procedures and standards must themselves comply with applicable regulations.2122 Standards are often incorporated by reference in U.S. federal regulations,23 meaning that regulations published in the Federal Register and the Code of Federal Regulations (CFR) may lawfully refer to materials already published elsewhere, such as international standards documents, as long as those materials are accessible to the public. Since some standards from organizations like ISO run hundreds of pages and are available in electronic form, this approach is sensible. Regulations are legally binding rules imposed by governments or regulatory authorities; they are mandatory and specify penalties, fines, or other consequences for non-compliant entities. Examples of cybersecurity and privacy regulations include the GDPR in the European Union and the CCPA and FISMA in the United States.

The following are criteria for evaluating the use of standards and regulations:

Considering these factors, a combination of standards and regulations may be the most timely and effective approach in many cases. Regulations can establish a baseline of mandatory ML security and ethics requirements, while standards can provide guidance on best practices and help organizations stay current with the latest developments in the field.

In the United States, the process of developing cybersecurity standards often involves collaboration among the federal government, industry stakeholders, and other experts. This collaborative approach helps ensure that the resulting standards are practical, effective, and widely accepted.

One key example of this is the development of the NIST Cybersecurity Framework. NIST, an agency within the U.S. Department of Commerce, plays a key role in developing cybersecurity standards and guidelines, yet has no regulatory enforcement authority. NIST often solicits input from industry stakeholders, academia, and other experts to ensure that its guidance is comprehensive and current. Following the issuance of Executive Order 13636, Improving Critical Infrastructure Cybersecurity, in 2013, NIST was tasked with developing a framework to help organizations better understand, manage, and communicate cybersecurity risks.24 To do so, NIST engaged in an open, collaborative process that included the following:

These examples demonstrate the value of public-private partnerships in the development of cybersecurity standards. By involving industry stakeholders in the process, the federal government can help ensure that the resulting standards are practical, widely accepted, and effective in addressing the challenges posed by cybersecurity threats.

Regulations have also played a significant role in governing cybersecurity and privacy and have often been used to set mandatory requirements for organizations to protect sensitive information and ensure privacy.

General Data Protection Regulation (GDPR) - European Union: GDPR, which took effect in 2018, is widely recognized as one of the most comprehensive data protection regulations worldwide. It requires organizations to protect the personal data of EU citizens, ensuring privacy and security. GDPR has been effective in raising awareness about data protection and pushing organizations to improve their cybersecurity posture, and it has been seen as setting a global standard25 for user privacy. A research report on GDPR impacts published by the UK government found that GDPR compliance had resulted in more investment in cybersecurity by a majority of surveyed European businesses and that organizations generally prioritized cybersecurity following the new regulations. However, challenges include the complexity of the regulation as well as high compliance costs (further details can be found in that report).

Health Insurance Portability and Accountability Act (HIPAA) - United States: HIPAA, passed in 1996, sets standards for protecting sensitive patient data and requires health care organizations to implement cybersecurity measures for electronic protected health information (ePHI). Although HIPAA has been successful in improving the safeguarding of patient data, it has faced criticism for being overly complex for patients, who may assume it applies in contexts where it offers no protection (such as mobile health apps), and because patients and their caregivers can have difficulty accessing necessary records.26 Furthermore, when cybersecurity breaches of private health data occur, it may be difficult for consumers to know what options they have for recourse. The law as written in 1996 may require updating in the face of rapidly evolving cybersecurity threats.

California Consumer Privacy Act (CCPA) - United States: Implemented in 2020, the CCPA grants California consumers specific data privacy rights, such as the right to know what information is stored and the option to opt out of data sharing. The CCPA has been praised for empowering consumers and raising the bar for privacy protection in the United States. For companies with customers in California and other states, the CCPA has set a standard for consumer privacy rights that will likely be applied to other states and create dynamics that shape the U.S. privacy regulatory framework.27 However, the CCPA does face criticism on several fronts: its complexity (the nine exceptions to consumers' right to delete data may not give consumers the protection they expect), the burden it places on businesses, and potential conflicts with other state or federal privacy regulations.28 Some lessons may be drawn from the current complexity of privacy laws for the regulation of algorithmic fairness: consumers may not have the time to read every opt-out notice or legal disclaimer and understand on the spot what rights they may be giving up or gaining by accepting terms of service.

Federal Information Security Management Act (FISMA) - United States: Signed into law in 2002, FISMA requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and systems that support the operations and assets of the agency. Although FISMA has led to improved cybersecurity within federal agencies, it has been criticized for being overly focused on compliance rather than continuous risk management and for not keeping pace with the evolving threat landscape. FISMA does establish generally applicable principles of cybersecurity in government, such as directing NIST to issue Federal Information Processing Standards that require agencies to categorize their information and information systems according to the magnitude of harm that could result if they were compromised. Broad principles like these are sensible for businesses to adopt as well (for example, at the business-unit or organization-unit level).

Although regulations can be effective in setting mandatory requirements for cybersecurity and raising awareness, they may also face challenges such as complexity, high compliance costs, outdated provisions, and potential conflicts with other regulations. To address these issues, policymakers should consider periodically reviewing and updating regulations to ensure they remain relevant and effective in the face of rapidly evolving cybersecurity threats. Additionally, harmonizing regulations across jurisdictions and providing clear guidance to organizations can help alleviate some of the challenges associated with compliance. Existing cybersecurity regulations may be a template for ML fairness regulations, as well.

A similar approach can be applied to ML ethics, where regulations can set a legal framework and minimum requirements for ethical ML development and deployment, while standards can provide detailed guidance on best practices, allowing for flexibility and adaptation to new technological advancements.

Until recently, there have been no comprehensive AI-specific regulations in the United States. However, there have been efforts to establish guidelines, principles, and standards for AI development and deployment, both by the U.S. government and by various organizations and industry groups. Some examples include the following:

Executive Order on Maintaining American Leadership in Artificial Intelligence (2019): Issued by the White House in 2019, this order aimed to promote sustained investment in R&D and enhance the United States' global leadership in AI. Although it did not establish specific regulations, it directed federal agencies to create a national AI strategy and develop guidance for AI development and deployment.

Defense Innovation Board (DIB) AI Principles (2019): The DIB is an advisory committee to the U.S. Department of Defense (DoD). In 2019, it released a set of ethical principles for AI in defense, covering areas such as responsibility, traceability, and reliability. Although they are not legally binding, these principles provide a basis for the ethical deployment of AI within the DoD.

Organization for Economic Co-operation and Development (OECD) AI Principles (2019): In 2019, the OECD established a set of principles for the use of AI that is respectful of human rights and democratic values.

NIST AI Risk Management Framework (2023): NIST has been active in AI research and standardization efforts, focusing on topics such as trustworthy AI, AI testing and evaluation, and AI risk management. In 2023, NIST published the AI Risk Management Framework (AI RMF 1.0), which aims to provide a systematic approach to managing risks associated with AI systems. This framework could serve as a foundation for future AI standards and guidelines in the United States.

Partnership on AI (PAI) Tenets: The PAI is a multi-stakeholder organization that brings together industry, academia, and civil society to develop best practices for AI technologies. The partnership has published a set of tenets to guide AI research and development, including principles such as ensuring AI benefits all, prioritizing long-term safety, and promoting a culture of cooperation.

Industry-specific guidelines and standards: Several organizations and industry groups have developed their own guidelines and principles for AI development and deployment within specific sectors, such as health care, finance, and transportation. For example, PrivacyCon is an annual conference that brings together industry, government, and academia and supports the development of such guidelines; in 2020, the theme of the event was health data. In a recent article, Accenture's global health industry lead provided some best practices for generative AI in health care, likely just the beginning of such guidelines as generative AI grows in adoption in that industry. Bain and Company put together design principles for the use of generative AI in financial services, again likely a topic of growing interest in the coming years. In the transportation industry, a wide consortium of auto manufacturers, chipmakers, and other industry members put together guidelines for automated driving back in 2019. Although not legally binding, these guidelines can help set expectations and establish best practices for AI governance within their respective industries. The National AI Institute at the Department of Veterans Affairs has also been building on and harmonizing these frameworks for trustworthy AI and operationalizing them in the health care sector.

AI-related concerns, such as data privacy and algorithmic fairness, may be addressed by existing regulations and guidelines that are not AI-specific, such as GDPR, CCPA, and guidelines on algorithmic transparency and fairness from the Federal Trade Commission (FTC). As AI continues to evolve, it is likely that more targeted regulations and standards will be developed to address AI-specific concerns and ensure ethical and responsible AI governance. Comprehensive AI auditing processes will need to be developed in a timely manner and updated periodically. Additionally, a system of incentives may be needed to encourage companies to actively develop tools to address and solve AI fairness concerns.

Standard-setting bodies work well when accountability mechanisms are built in. For instance, external audit committees29 can provide an accountability mechanism as long as the audits are performed periodically (e.g., quarterly or annually) and the auditors are not influenced by the position of those being audited (no revolving-door scenario). To ensure accountability, such auditors may be hired by testing organizations such as the Technical Inspection Association (TÜV), Underwriters Laboratories (UL), or Intertek, among others. Alternatively, auditors may be part of a volunteer community, similar to the code maintainers in the Linux open-source community who control the quality of any changes to the codebase. We therefore suggest fairness audits performed by auditors external to the firm, with the type and frequency of the audit codified in an industry standard.

We propose an approach in which regulations complement industry standards by adding tools for enforcement, rather than acting as a one-size-fits-all tool. Unfortunately, firms, at least right now, do not have a strong commercial incentive to pursue ML fairness as a goal of product development and to incur additional liability in the absence of agreed-upon standards. In fact, it is well known that standards may not reach their potential if they are not effectively enforced.30 Consumers do not currently have widespread visibility into which products abide by fairness criteria, and the criteria themselves require specialized knowledge (e.g., data science and programming skills in addition to familiarity with the literature), so it is not feasible for a majority of consumers to test a product or service for unfairness themselves. Furthermore, most consumers do not have access to training sets or proprietary algorithms to prove they have been harmed by an ML system, which is required for damages under the newest regulations such as the EU AI Act (see the commentary in Heikkilä, 2022). The literature on ML fairness is complex and multidisciplinary, so computer scientists, lawyers, ethicists, and business scholars all need to be part of driving regulations.

Under such circumstances, it is not surprising that companies do not perceive a direct financial incentive to maintain specialized staff to audit ML fairness, or to supervise the development of ML-based products with a fairness-oriented goal, especially in a market downturn. Recently, many leading companies have unfortunately laid off specialized ML fairness engineering staff, in some cases closing entire departments, which results in a loss of company-specific knowledge and will mean much slower adoption of fairness principles across industries in the future. Regulations can provide general principles (for example, meeting at minimum an equality-of-opportunity fairness criterion and performing an annual fairness audit; see the guidelines by the Consumer Financial Protection Bureau [CFPB] on auditing compliance with the Equal Credit Opportunity Act [ECOA] in the United States, as well as expectations regarding transparency in algorithmic decision-making related to credit applications) and can provide some consumers relief in cases of egregious violations of basic fairness criteria. However, they are insufficient to give companies incentives to perform better than such minimums and do not encourage companies to innovate beyond meeting the regulatory requirements. Although companies may wish to implement fair ML at every stage of their software development processes to ensure they meet the highest ethical standards,31 and we encourage that in related work,32 we recognize that people, and thus companies at large, respond to incentives, and the current rules of the road are still in their infancy when it comes to ML.

Market forces can give companies incentives to innovate and produce better products, provided the advantages of the innovation are clear to a majority of consumers. A consumer may choose a product and pay more for it if it satisfies a fairness standard set by a leading, recognizable standard-setting body and if the benefits of that standard are apparent. For example, a consumer may prefer a lender that markets itself as ensuring no discrimination based on subgroup fairness (i.e., combinations of categories of race, age, gender, and/or other protected attributes) if the alternatives only guarantee group-level fairness. If the consumer is aware of the higher standard a product satisfies, for example through a label the product displays, the consumer may choose it over a feature-equivalent alternative even at a higher price. Thus, we call on industry and organizations such as the Information Systems Security Association (ISSA), the IEEE, the Association for Computing Machinery (ACM), and the ISO, among others, to invest in developing an ML fairness standard, communicate their rationale, and interact with policymakers over these standards as they deploy them into products over the next five years.
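As a rough illustration of why subgroup fairness is a stronger guarantee than group-level fairness, the sketch below (the toy data and function names are our own assumptions, not the article's method) shows how equal selection rates on each protected attribute taken separately can mask a stark disparity at the intersection of attributes.

import itertools
import numpy as np

def selection_rates_by_subgroup(y_pred, attrs):
    # Positive-prediction rate for every combination of protected-attribute values.
    names = list(attrs)
    rates = {}
    for values in itertools.product(*(np.unique(attrs[n]) for n in names)):
        mask = np.ones(len(y_pred), dtype=bool)
        for n, v in zip(names, values):
            mask &= (attrs[n] == v)
        if mask.any():
            rates[tuple((n, int(v)) for n, v in zip(names, values))] = float(y_pred[mask].mean())
    return rates

# Toy data: each attribute on its own shows a 0.5 selection rate for both of its values,
# yet two of the four intersections are never selected at all (rates 1.0, 0.0, 0.0, 1.0).
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1])
sex    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
age    = np.array([0, 0, 1, 1, 0, 0, 1, 1])

print(selection_rates_by_subgroup(y_pred, {"sex": sex, "age": age}))

A lender advertising subgroup fairness would have to keep every such intersection within bounds, which is the stronger claim a discerning consumer could reward in the market.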

We also suggest that firms create bug bounty programs for fairness errors, in which users themselves can file bug reports with the producer of the ML system, much like what exists in cybersecurity. For example, if an ML-based voice assistant often misunderstands a user because of a speech impediment due to a disability, the user should be able to report that experience to the company. In another example, a user of an automated job resume screening tool (as some companies have now implemented in their hiring processes) who is consistently denied and suspects the reason may be a protected attribute should be able to request a reason from the service provider. In yet another example, a mobile phone application that lets the user test for melanoma by taking a picture should allow the user to report false positives and false negatives following consultation with a physician, should such consultation prove the application misdiagnosed the user; this would allow the developers to diagnose the root cause, which may include information covered by protected attributes. A researcher or independent programmer should also be able to report bugs or potential fairness issues for any ML-based system and receive a reward if the report is found to reveal flaws in the algorithm or training set related to fairness.
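As a sketch of what such a fairness bug report might capture, the record below is purely illustrative; the field names and intake flow are our own assumptions, not an existing schema or any vendor's API.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FairnessBugReport:
    system_name: str                 # e.g., "voice-assistant" (hypothetical identifier)
    reporter_type: str               # "end_user", "researcher", or "independent_programmer"
    suspected_attribute: str         # protected attribute the reporter believes is implicated
    observed_behavior: str           # what happened, in the reporter's own words
    expected_behavior: str           # what the reporter expected instead
    supporting_evidence: list[str] = field(default_factory=list)  # e.g., transcripts or a physician's note
    submitted_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

report = FairnessBugReport(
    system_name="voice-assistant",
    reporter_type="end_user",
    suspected_attribute="disability (speech impediment)",
    observed_behavior="Commands are misrecognized far more often than for other household members.",
    expected_behavior="Comparable recognition accuracy across users.",
)
print(report)

Structured reports of this kind would give an auditing team enough context to reproduce the issue and, where warranted, pay out a bounty, mirroring the triage workflow of cybersecurity bug bounty programs.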

In this report, we shared some background information on the issues of ML fairness and on existing standards and regulations in software. Although ML systems are becoming increasingly ubiquitous, their complexity often exceeds what prior privacy and cybersecurity standards and laws contemplated. We therefore expect that determining what such norms should specify will be an ongoing conversation requiring both industry collaboration through standardization and new regulations. We recommend a complementary approach: a fairness standard, created by a leading industry standard-setting body, that includes audits and mechanisms for bug reporting by users, alongside a regulation-based approach in which generalizable tests such as group-fairness criteria are encoded and enforced by national regulations.

Link:
Fairness in machine learning: Regulation or standards? | Brookings - Brookings Institution

Read More..