
What Is Cloud Security? – CrowdStrike

Cloud security definition

Cloud security is a discipline of cybersecurity focused on the protection of cloud computing systems. It involves a collection of technologies, policies, services, and security controls that protect an organization's sensitive data, applications, and environments.

Cloud computing, commonly referred to as the cloud, is the delivery of hosted services like storage, servers, and software through the internet. Cloud computing allows businesses to reduce costs, accelerate deployments, and develop at scale.

Cloud security goals:

Fortify the security posture of your cloud platforms and respond with authority to cloud data breaches.

As companies continue to transition to a fully digital environment, the use of cloud computing has become increasingly popular. But cloud computing comes with cybersecurity challenges, which is why understanding the importance of cloud security is essential in keeping your organization safe.

Over the years, security threats have become incredibly complex, and every year, new adversaries threaten the field. In the cloud, all components can be accessed remotely 24/7, so not having a proper security strategy puts gathered data in danger all at once. According to the CrowdStrike 2024 Global Threat Report, cloud environment intrusions increased by 75% from 2022 to 2023, with a 110% year-over-year increase in cloud-conscious cases and a 60% year-over-year increase in cloud-agnostic cases. Additionally, the report revealed that the average breakout time for interactive eCrime intrusion activity in 2023 was 62 minutes, with one adversary breaking out in just 2 minutes and 7 seconds.

Cloud security should be an integral part of an organization's cybersecurity strategy, regardless of its size. Many believe that only enterprise-sized companies are victims of cyberattacks, but small and medium-sized businesses are some of the biggest targets for threat actors. Organizations that do not invest in cloud security face immense issues that include potentially suffering a data breach and failing to stay compliant when managing sensitive customer data.

Download this new report to learn about the most prevalent cloud security risks and threats from 2023 to better protect from them in 2024.

An effective cloud security strategy employs multiple policies and technologies to protect data and applications in cloud environments from every attack surface. Some of these technologies include identity and access management (IAM) tools, firewall management tools, and cloud security posture management tools, among others.

Organizations also have the option to deploy their cloud infrastructures using different models, which come with their own sets of pros and cons.

The four available cloud deployment models are:

Public cloud: This type of model is the most affordable, but it is also associated with the greatest risk because a breach in one account puts all other accounts at risk.

Private cloud: The benefit of this deployment model is the level of control it provides individual organizations. Additionally, it provides enhanced security and ensures compliance, making it the most leveraged model by organizations that handle sensitive information. However, it is expensive to use.

Hybrid cloud: The biggest benefit of this deployment model is the flexibility and performance it offers.

Most organizations use a third-party CSP such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure to host their data and applications. Strong cloud security involves shared responsibility between these CSPs and their customers.

It is important not to rely only on the security measures set by your CSP; you should also implement security measures within your organization. Though a solid CSP should have strong security to protect against attackers on its end, if there are security misconfigurations, privileged access exploitations, or some form of human error within your organization, attackers can potentially move laterally from an endpoint into your cloud workload. To avoid issues, it is essential to foster a security-first culture by implementing comprehensive security training programs that keep employees aware of cybersecurity best practices, common ways attackers exploit users, and any changes in company policy.

The shared responsibility model outlines the security responsibilities of cloud providers and customers based on each type of cloud service: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).

This table breaks down the shared responsibility by cloud service model:

SaaS: Misconfigurations, workloads, and data

PaaS: Endpoints, user and network security, and workloads

IaaS: Endpoints, user and network security, workloads, and data
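
As a rough illustration, the scope in the table above can be captured as a simple mapping. This is a sketch for discussion only: the row-to-model pairing follows the SaaS, PaaS, IaaS order used in this article, and the structure is not an official CrowdStrike artifact.

```python
# Illustrative sketch: security scope per cloud service model, following the
# table above. The row-to-model pairing is an assumption, not an official mapping.
RESPONSIBILITY_BY_MODEL = {
    "SaaS": ["misconfigurations", "workloads", "data"],
    "PaaS": ["endpoints", "user and network security", "workloads"],
    "IaaS": ["endpoints", "user and network security", "workloads", "data"],
}

def security_scope(service_model: str) -> list[str]:
    """Return the security areas associated with a given cloud service model."""
    return RESPONSIBILITY_BY_MODEL[service_model]

if __name__ == "__main__":
    for model in ("SaaS", "PaaS", "IaaS"):
        print(f"{model}: {', '.join(security_scope(model))}")
```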

The dynamic nature of cloud security opens up the market to multiple types of cloud security solutions, which are considered pillars of a cloud security strategy. These core technologies include:

It is essential to have a cloud security strategy in place. Whether your cloud provider has built-in security measures or you partner with the top cloud security providers in the industry, you can gain numerous benefits from cloud security. However, if you do not employ or maintain it correctly, it can pose challenges.

The most common benefits include:

Unlike traditional on-premises infrastructures, the public cloud has no defined perimeters. The lack of clear boundaries poses several cybersecurity challenges and risks.

Failure to properly secure each of these workloads makes the application and organization more susceptible to breaches, delays app development, compromises production and performance, and puts the brakes on the speed of business.

In addition, organizations using multi-cloud environments tend to rely on the default access controls of their cloud providers, which can become an issue in multi-cloud or hybrid cloud environments. Insider threats can do a great deal of damage with their privileged access, knowledge of where to strike, and ability to hide their tracks.

To address these cloud security risks, threats, and challenges, organizations need a comprehensive cybersecurity strategy designed around vulnerabilities specific to the cloud. Read this post to understand 12 security issues that affect the cloud.

Though cloud environments can be open to vulnerabilities, there are many cloud security best practices you can follow to secure the cloud and prevent attackers from stealing your sensitive data.

Some of the most important practices include:

Why embrace Zero Trust?

The basic premise of the Zero Trust principle in cloud security is to not trust anyone or anything inside or outside the organization's network. It ensures the protection of sensitive infrastructure and data in today's world of digital transformation. The principle requires all users to be authenticated, authorized, and validated before they get access to sensitive information, and they can easily be denied access if they don't have the proper permissions.

CrowdStrike has redefined security with the world's most complete CNAPP that secures everything from code to cloud and enables the people, processes, and technologies that drive modern enterprise.

With a 75% increase in cloud-conscious attacks in the last year, it is essential for your security teams to partner with the right security vendor to protect your cloud, prevent operational disruptions, and protect sensitive information in the cloud. CrowdStrike continuously tracks 230+ adversaries to give you industry-leading intelligence for robust threat detection and response.

The CrowdStrike Falcon platform contains a range of capabilities designed to protect the cloud. CrowdStrike Falcon Cloud Security stops cloud breaches by consolidating all the critical cloud security capabilities that you need into a single platform for complete visibility and unified protection. Falcon Cloud Security offers cloud workload protection; cloud, application, and data security posture management; CIEM; and container security across multiple environments.

Get a free, no-obligation Cloud Security Risk Review for instant and complete visibility into your entire cloud estate, provided through agentless scanning with zero impact to your business.

Read the original post:
What Is Cloud Security? - CrowdStrike

Read More..

Nvidia Blackwell GPUs to be offered via AWS, Microsoft, Google, Oracle and others – DatacenterDynamics

On the back of Nvidia announcing its latest Blackwell line of GPUs, the hyperscale cloud providers have all announced plans to offer access to them later this year.

Oracle, Amazon, Microsoft, and Google have all said they will offer access to the new GPUs through their respective cloud platforms at launch. Lambda and NexGen, both GPU-cloud providers, have said they will soon be offering access to Blackwell hardware.

The launch of the H100 Hopper GPU saw niche cloud providers including CoreWeave and Cirrascale get first access, with H100 instances coming to the big cloud platforms later.

Malaysian conglomerate YTL, which recently moved into developing data centers, is also set to host and offer access to a DGX supercomputer.

Singaporean telco Singtel is also set to launch a GPU cloud service later this year.

Applied Digital, a US company previously focused on hosting cryptomining hardware, has also announced it will host Blackwell hardware.

Oracle said it plans to offer Nvidia's Blackwell GPUs via its OCI Supercluster and OCI Compute instances. OCI Compute will adopt both the Nvidia GB200 Grace Blackwell Superchip and the Nvidia Blackwell B200 Tensor Core GPU.

Oracle also said Nvidia's Oracle-based DGX Cloud cluster will consist of 72 Blackwell GPUs (NVL72) and 36 Grace CPUs with fifth-generation NVLink. Access will be available through GB200 NVL72-based instances.

"As AI reshapes business, industry, and policy around the world, countries and organizations need to strengthen their digital sovereignty in order to protect their most valuable data," said Safra Catz, CEO of Oracle.

"Our continued collaboration with Nvidia and our unique ability to deploy cloud regions quickly and locally will ensure societies can take advantage of AI without compromising their security."

Google announced its adoption of the new Nvidia Grace Blackwell AI computing platform. The company said Google is adopting the platform for various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances.

The search and cloud company also said the Nvidia H100-powered DGX Cloud platform is now generally available on Google Cloud. The company said it will bring Nvidia GB200 NVL72 systems, which combine 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink, to its cloud infrastructure in future.

"The strength of our long-lasting partnership with Nvidia begins at the hardware level and extends across our portfolio - from state-of-the-art GPU accelerators, to the software ecosystem, to our managed Vertex AI platform," said Google Cloud CEO Thomas Kurian.

"Together with Nvidia, our team is committed to providing a highly accessible, open, and comprehensive AI platform for ML developers."

Microsoft also said it will be one of the first organizations to bring the power of Nvidia Grace Blackwell GB200 and advanced Nvidia Quantum-X800 InfiniBand networking to the cloud and will be offering them through its Azure cloud service.

Microsoft also announced the general availability of its Azure NC H100 v5 virtual machine (VM), based on the Nvidia H100 NVL platform, which is designed for midrange training and inferencing.

"Together with Nvidia, we are making the promise of AI real, helping drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, chairman and CEO, Microsoft.

"From bringing the GB200 Grace Blackwell processor to Azure to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

Blackwell hardware is also coming to Amazon Web Services (AWS). The companies said AWS will offer the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs on its cloud platform.

AWS will offer the Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. The cloud provider also plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters. GB200s will also be available on Nvidia's DGX Cloud within AWS.

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of Nvidia GPU solutions for customers," said Adam Selipsky, CEO at AWS.

"Nvidia's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion-parameter large language models faster, at massive scale, and more securely than anywhere else."

In its own announcement, GPU cloud provider Lambda Labs said it would be one of the first companies to deploy the latest Blackwell hardware.

The GB200 Grace Blackwell Superchip and B200 and B100 Tensor Core GPUs will be available through Lambda's On-Demand & Reserved Cloud, and Blackwell-based DGX SuperPODs will be deployed in Lambda's AI-Ready Data Centers.

NexGen, a GPU cloud and Infrastructure-as-a-Service provider, also announced it would be among the first cloud providers to offer access to Blackwell hardware.

The company said it will provide these services as part of its AI Supercloud, which is itself planned for Q2 2024.

"Being one of the first Elite Cloud Partners in the Nvidia Partner Network to offer Nvidia Blackwell-powered products to the market marks a major milestone for our business," said Chris Starkey, CEO of NexGen Cloud.

"Through Blackwell-powered solutions, we will be able to equip customers with the most powerful GPU offerings on the market, empowering them to drive innovation whilst achieving unprecedented efficiencies. This will help unlock new opportunities across industries and enhance the way we use AI both now and in the future."

Malaysia's YTL, which is developing data centers in Johor, is moving to become an AI cloud provider.

The company this week announced the formation of YTL AI Cloud, a specialized provider of GPU-based computing. The new unit will deploy and manage one of the world's most advanced supercomputers on Nvidia's Grace Blackwell-powered DGX Cloud.

The YTL AI Supercomputer will reportedly exceed 300 exaflops of AI compute.

The supercomputer will be located in a facility at the 1,640-acre YTL Green Data Center Campus, Johor. The site will reportedly be powered via 500MW of on-site solar capacity.

YTL Power International Managing Director, Dato' Seri Yeoh Seok Hong, said: "We are proud to be working with Nvidia and the Malaysian government to bring powerful AI cloud computing to Malaysia.

"We are excited to bring this supercomputing power to the Asia Pacific region, which has been home to many of the fastest-growing cloud regions and many of the most innovative users of AI in the world.

In the US, Applied Digital also said it would be "among the pioneering cloud service providers" offering Blackwell GPUs. Further details weren't shared.

Applied develops and operates next-generation data centers across North America to cater to high-performance computing (HPC). It was previously focused on hosting cryptomining hardware. The company also has a cloud offering through Sai Computing.

"Applied Digital demonstrates a profound commitment to driving generative AI, showcasing a deep understanding of its transformative potential. By seamlessly integrating infrastructure, Applied breathes life into generative AI, recognizing the critical role of GPUs and supporting data center infrastructure in its advancement," said Wes Cummins, CEO and chairman of Applied Digital.

Singaporean telco Singtel announced it will be launching its GPU-as-a-Service (GPUaaS) in Singapore and Southeast Asia in the third quarter of this year.

At launch, Singtel's GPUaaS will be powered by Nvidia H100 Tensor Core GPU-powered clusters that are operated in existing upgraded data centers in Singapore. In addition, Singtel - like everyone else - will be among the world's first to deploy GB200 Grace Blackwell Superchips.

Bill Chang, CEO of Singtel's Digital InfraCo unit and Nxera regional data center business, said: "We are seeing keen interest from the private and public sectors which are raring to deploy AI at scale quickly and cost-effectively.

"Our GPUaaS will run in AI-ready data centers specifically tailored for intense compute environments with purpose-built liquid-cooling technologies for maximum efficiency and lowest PUE, giving them the flexibility to deploy AI without having to invest and manage expensive data center infrastructure.

Original post:
Nvidia Blackwell GPUs to be offered via AWS, Microsoft, Google, Oracle and others - DatacenterDynamics

Read More..

VMware by Broadcom reportedly offers olive branch to some cloud providers – TechRadar

Broadcom appears to have softened its stance somewhat by giving some VMware customers a peace offering.

An exclusive report by The Register notes the shift comes several weeks after the company decided to terminate VMware's Cloud Services Provider (VCSP) program, among other fairly significant shakeups.

The news also comes just a few days after the company's CEO, Hock Tan, addressed customer unease following Broadcom's $61 billion acquisition of VMware at the end of 2023.

Previously, VCSP assisted partners in offering VMware applications as managed services; however, Broadcoms move left many partners in limbo, facing the prospect of not being able to continue offering VMware-powered services.

In response to feedback, Broadcom is now reported to have introduced a white label program that would allow cloud providers that do not meet Broadcom's core licensing requirements to continue operating by partnering with established affiliates.

This peace offering does two things: it preserves existing partnerships and allows smaller providers to continue operating their businesses, and it ensures that VMware services continue to serve their customers and end users.

The decision comes just in time, before the looming deadline for terminating VMware's CSP program, and offers a viable alternative for those who had been threatened with partner status termination.

Though the move does address some key concerns, Broadcom continues to be monitored by industry analysts globally, and it remains to be seen whether the company is indeed on track to boost VMware's profitability.

However, many are wondering whether this is too little, too late, with customers already jumping ship to other hypervisor alternatives.

Follow this link:
VMware by Broadcom reportedly offers olive branch to some cloud providers - TechRadar

Read More..

Kubernetes: the driving force behind cloud-native evolution – SiliconANGLE News

As KubeCon + CloudNativeCon Europe 2024 draws to a close, the event leaves behind a rich tapestry of insights and advancements in the world of cloud computing and Kubernetes.

This year's conference, marked by its vibrant atmosphere of nostalgia mixed with forward-looking enthusiasm, celebrated the significant milestone of Kubernetes' tenth anniversary. Attendees and speakers alike delved into retrospective discussions about Kubernetes' transformative journey over the past decade while also casting a keen eye on its future, particularly in the realms of artificial intelligence integration and the evolving landscape of cloud-native technologies.

"Absolutely packed, really good interest across the board from community developers," said Dustin Kirkland (pictured, second from left), guest analyst on theCUBE, SiliconANGLE Media's livestreaming studio. "Some maybe new to the open-source and CNCF community, but a lot of enterprise interest, a lot of EU enterprise interest in the various solutions that are surrounding us here on the show floor."

During the event, Kirkland and his co-analysts, Rob Strechay (right), Savannah Peterson (second from right), and Joep Piscaer (left), discussed the significant attendance and community engagement, the evolution and maturity of Kubernetes and cloud-native technologies, and the anticipation of their future integration with AI and data science. (* Disclosure below.)

The conference highlighted the shift toward a developer-centric approach in cloud-native technology, noting the importance of empowering developers to build meaningful solutions for businesses, according to Piscaer. This focus is indicative of the maturity of Kubernetes and the broader ecosystem, moving beyond infrastructure concerns to address more complex, business-driven requirements.

"Kubernetes is just mature, like you say; people have implemented it, and we're looking at the developer: how to empower them, how to enable them to actually build something that makes sense for the business," Piscaer said. "That's what excites me in this show, actually having those conversations about what the developers need, what the business needs. We're kind of in a phase where we can just say the infrastructure part, it's there, it's a commodity again, which I just enjoy."

The integration of new contributors and the expansion of the community also play a critical role, according to Strechay. The diversity and growth of the community contribute significantly to the evolution of cloud-native technologies. With more than half of the attendees being newcomers, there's a vibrant exchange of ideas and experiences that enriches the ecosystem.

"Backstage has more individual contributors than any of the other projects that are out there. It may not have all of the contributions, but to me, that's the interface between platform engineering and developer," Strechay said. "I think we've heard through a number of the discussions we've had this week, how do you make that? How do you make platform engineering understand what the developers need? And, by the way, now you got this guy called a data scientist who's trying to put other things and models in places that models haven't been before."

In summary, the latest KubeCon + CloudNativeCon has underscored a pivotal shift in the cloud-native ecosystem. The focus is now firmly on empowering developers and integrating AI and data science into cloud-native strategies. This evolution, backed by a diverse and growing community, sets a promising trajectory for the future of cloud-native technologies.

Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of KubeCon + CloudNativeCon Europe:

(* Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon Europe event. No sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Follow this link:
Kubernetes: the driving force behind cloud-native evolution - SiliconANGLE News

Read More..

Fluence's Cloudless Platform Goes Live as Alternative to AWS, Google Cloud – Tekedia

In an era where cloud computing has become synonymous with big tech giants like AWS and Google Cloud, a new player has emerged with a revolutionary proposition. Fluence, a decentralized platform, has officially launched its Cloudless Platform, offering a unique alternative to the centralized services that currently dominate the market.

The launch of Protocol Village marks a significant milestone for Fluence, as it represents a major leap forward in the realm of decentralized networks. This innovative platform is designed to enhance collaboration and interoperability between different protocols, fostering a more unified and efficient blockchain ecosystem.

The Cloudless Platform by Fluence is designed to operate without the need for traditional cloud infrastructure. Instead, it leverages a network of independent nodes to provide computing resources and data storage. This approach not only challenges the status quo of cloud services but also aims to address some of the key concerns associated with them, such as privacy, security, and vendor lock-in.

Fluence's platform operates on the principle of protocol interoperability, allowing various applications and services to communicate seamlessly. This is achieved through Protocol Village, a suite of protocols that ensure compatibility and smooth operation across different systems and services.

One of the standout features of Fluences platform is its commitment to open-source development. By fostering a community-driven approach, Fluence encourages innovation and collaboration among developers. This open ecosystem is expected to accelerate the development of new applications and services that can run on the Cloudless Platform.

As businesses and individuals become increasingly aware of the implications of data sovereignty and digital autonomy, Fluence's Cloudless Platform presents an attractive proposition. It offers users control over their data while providing a robust and scalable solution for their computing needs.

The launch of Protocol Village marks a significant milestone for Fluence as it sets out to redefine the landscape of cloud computing. With its decentralized model, Fluence is poised to empower users with greater freedom and flexibility in how they manage their digital resources.

The implications of Protocol Village are far-reaching. By providing a common ground for various protocols to interact, it paves the way for more seamless integration of services and applications. This not only benefits developers but also end-users who will enjoy a more cohesive experience across different blockchain platforms.

Looking ahead, the launch of Protocol Village is just the beginning. It sets the stage for a future where decentralized networks can operate more harmoniously, unlocking new possibilities for innovation and growth in the digital world.

As we move forward, it will be interesting to observe how the market responds to this alternative approach to cloud computing. Will Protocol Village and Fluence's Cloudless Platform disrupt the dominance of established players? Only time will tell.

Follow this link:
Fluences Cloudless Platform Goes Live as Alternative to AWS, Google Cloud - Tekedia

Read More..

AWS and NVIDIA Extend Collaboration to Advance Generative AI Innovation – NVIDIA Blog

GTC - Amazon Web Services (AWS), an Amazon.com company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced that the new NVIDIA Blackwell GPU platform unveiled by NVIDIA at GTC 2024 is coming to AWS. AWS will offer the NVIDIA GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, extending the companies' longstanding strategic collaboration to deliver the most secure and advanced infrastructure, software, and services to help customers unlock new generative artificial intelligence (AI) capabilities.

NVIDIA and AWS continue to bring together the best of their technologies, including NVIDIA's newest multi-node systems featuring the next-generation NVIDIA Blackwell platform and AI software, AWS's Nitro System and AWS Key Management Service (AWS KMS) advanced security, Elastic Fabric Adapter (EFA) petabit-scale networking, and Amazon Elastic Compute Cloud (Amazon EC2) UltraCluster hyper-scale clustering. Together, they deliver the infrastructure and tools that enable customers to build and run real-time inference on multi-trillion-parameter large language models (LLMs) faster, at massive scale, and at a lower cost than previous-generation NVIDIA GPUs on Amazon EC2.

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers," said Adam Selipsky, CEO at AWS. "NVIDIA's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion-parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud."

"AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries," said Jensen Huang, founder and CEO of NVIDIA. "Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible."

Latest innovations from AWS and NVIDIA accelerate training of cutting-edge LLMs that can reach beyond 1 trillion parameters

AWS will offer the NVIDIA Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVIDIA NVLink. When connected with Amazon's powerful networking (EFA), and supported by advanced virtualization (AWS Nitro System) and hyper-scale clustering (Amazon EC2 UltraClusters), customers can scale to thousands of GB200 Superchips. NVIDIA Blackwell on AWS delivers a massive leap forward in speeding up inference workloads for resource-intensive, multi-trillion-parameter language models.

Based on the success of the NVIDIA H100-powered EC2 P5 instances, which are available to customers for short durations through Amazon EC2 Capacity Blocks for ML, AWS plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters for accelerating generative AI training and inference at massive scale. GB200s will also be available on NVIDIA DGX Cloud, an AI platform co-engineered on AWS that gives enterprise developers dedicated access to the infrastructure and software needed to build and deploy advanced generative AI models. The Blackwell-powered DGX Cloud instances on AWS will accelerate development of cutting-edge generative AI and LLMs that can reach beyond 1 trillion parameters.

Elevate AI security with AWS Nitro System, AWS KMS, encrypted EFA, and Blackwell encryption

As customers move quickly to implement AI in their organizations, they need to know that their data is being handled securely throughout their training workflow. The security of model weights (the parameters that a model learns during training and that are critical for its ability to make predictions) is paramount to protecting customers' intellectual property, preventing tampering with models, and maintaining model integrity.

AWS AI infrastructure and services already have security features in place to give customers control over their data and ensure that it is not shared with third-party model providers. The combination of the AWS Nitro System and the NVIDIA GB200 takes AI security even further by preventing unauthorized individuals from accessing model weights. The GB200 allows inline encryption of the NVLink connections between GPUs and encrypts data transfers, while EFA encrypts data across servers for distributed training and inference. The GB200 will also benefit from the AWS Nitro System, which offloads I/O for functions from the host CPU/GPU to specialized AWS hardware to deliver more consistent performance, while its enhanced security protects customer code and data during processing on both the customer side and AWS side. This capability, available only on AWS, has been independently verified by NCC Group, a leading cybersecurity firm.

With the GB200 on Amazon EC2, AWS will enable customers to create a trusted execution environment alongside their EC2 instance, using AWS Nitro Enclaves and AWS KMS. Nitro Enclaves allow customers to encrypt their training data and weights with KMS, using key material under their control. The enclave can be loaded from within the GB200 instance and can communicate directly with the GB200 Superchip. This enables KMS to communicate directly with the enclave and pass key material to it in a cryptographically secure way. The enclave can then pass that material to the GB200, protected from the customer instance and preventing AWS operators from ever accessing the key or decrypting the training data or model weights, giving customers unparalleled control over their data.
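
The attested Nitro Enclaves flow described above relies on AWS's enclave tooling and is not reproduced here. As a rough, hedged illustration of the underlying idea of keeping key material under customer control, the sketch below envelope-encrypts a local file of model weights with a customer-managed KMS key using boto3 and the cryptography library. The key alias and file paths are placeholder assumptions, and this is not the enclave-based mechanism itself.

```python
# Hedged sketch: envelope encryption of model weights with a customer-managed
# AWS KMS key via boto3. It illustrates keeping key material under customer
# control; it is NOT the Nitro Enclaves attestation flow described above.
# "alias/model-weights" and the file paths are placeholders.
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

def encrypt_weights(path: str, key_id: str = "alias/model-weights") -> None:
    # Ask KMS for a fresh data key; only the KMS-wrapped copy is stored on disk.
    resp = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    plaintext_key, wrapped_key = resp["Plaintext"], resp["CiphertextBlob"]

    nonce = os.urandom(12)
    with open(path, "rb") as f:
        ciphertext = AESGCM(plaintext_key).encrypt(nonce, f.read(), None)

    with open(path + ".enc", "wb") as f:
        # Length-prefix the wrapped key, then write nonce and ciphertext.
        f.write(len(wrapped_key).to_bytes(4, "big") + wrapped_key + nonce + ciphertext)

def decrypt_weights(path: str) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    key_len = int.from_bytes(blob[:4], "big")
    wrapped_key = blob[4:4 + key_len]
    nonce = blob[4 + key_len:16 + key_len]
    ciphertext = blob[16 + key_len:]
    # Only a principal allowed by the KMS key policy can unwrap the data key.
    plaintext_key = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
    return AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)
```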

Project Ceiba taps Blackwell to propel NVIDIA's future generative AI innovation on AWS

Announced at AWS re:Invent 2023, Project Ceiba is a collaboration between NVIDIA and AWS to build one of the world's fastest AI supercomputers. Hosted exclusively on AWS, the supercomputer is available for NVIDIA's own research and development. This first-of-its-kind supercomputer with 20,736 B200 GPUs is being built using the new NVIDIA GB200 NVL72, a system featuring fifth-generation NVLink, that scales to 20,736 B200 GPUs connected to 10,368 NVIDIA Grace CPUs. The system scales out using fourth-generation EFA networking, providing up to 800 Gbps per Superchip of low-latency, high-bandwidth networking throughput, capable of processing a massive 414 exaflops of AI, a 6x performance increase over earlier plans to build Ceiba on the Hopper architecture. NVIDIA research and development teams will use Ceiba to advance AI for LLMs, graphics (image/video/3D generation) and simulation, digital biology, robotics, self-driving cars, NVIDIA Earth-2 climate prediction, and more to help NVIDIA propel future generative AI innovation.

AWS and NVIDIA collaboration accelerates development of generative AI applications and advances use cases in healthcare and life sciences

AWS and NVIDIA have joined forces to offer high-performance, low-cost inference for generative AI with Amazon SageMaker integration with NVIDIA NIM inference microservices, available with NVIDIA AI Enterprise. Customers can use this combination to quickly deploy FMs that are pre-compiled and optimized to run on NVIDIA GPUs to SageMaker, reducing the time-to-market for generative AI applications.

AWS and NVIDIA have teamed up to expand computer-aided drug discovery with new NVIDIA BioNeMo FMs for generative chemistry, protein structure prediction, and understanding how drug molecules interact with targets. These new models will soon be available on AWS HealthOmics, a purpose-built service that helps healthcare and life sciences organizations store, query, and analyze genomic, transcriptomic, and other omics data.

AWS HealthOmics and NVIDIA Healthcare teams are also working together to launch generative AI microservices to advance drug discovery, medtech, and digital health, delivering a new catalog of GPU-accelerated cloud endpoints for biology, chemistry, imaging, and healthcare data so healthcare enterprises can take advantage of the latest advances in generative AI on AWS.

See the original post here:
AWS and NVIDIA Extend Collaboration to Advance Generative AI Innovation - NVIDIA Blog

Read More..

AI generated deepfake of Kari Lake used to highlight dangers in election – The Washington Post

Hank Stephenson has a finely tuned B.S. detector. The longtime journalist has made a living sussing out lies and political spin.

But even he was fooled at first when he watched the video of one of his home state's most prominent congressional candidates.

There was Kari Lake, the Republican Senate hopeful from Arizona, on his phone screen, speaking words written by a software engineer. Stephenson was watching a deepfake: an artificial-intelligence-generated video produced by his news organization, Arizona Agenda, to underscore the dangers of AI misinformation in a pivotal election year.

"When we started doing this, I thought it was going to be so bad it wouldn't trick anyone, but I was blown away," Stephenson, who co-founded the site in 2021, said in an interview. "And we are unsophisticated. If we can do this, then anyone with a real budget can do a good enough job that it'll trick you, it'll trick me, and that is scary."

As a tight 2024 presidential election draws ever nearer, experts and officials are increasingly sounding the alarm about the potentially devastating power of AI deepfakes, which they fear could further corrode the country's sense of truth and destabilize the electorate.

There are signs that AI, and the fear surrounding it, is already having an impact on the race. Late last year, former president Donald Trump falsely accused the producers of an ad, which showed his well-documented public gaffes, of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other political figures, designed both to boost and to bruise, have gone viral again and again, sowing chaos at a crucial point in the election cycle.

Now some officials are rushing to respond. In recent months, the New Hampshire Justice Department announced it was investigating a spoofed robocall featuring an AI-generated voice of President Biden; Washington state warned its voters to be on the lookout for deepfakes; and lawmakers from Oregon to Florida passed bills restricting the use of such technology in campaign communications.

And in Arizona, a key swing state in the 2024 contest, the top elections official used deepfakes of himself in a training exercise to prepare staff for the onslaught of falsehoods to come. The exercise inspired Stephenson and his colleagues at the Arizona Agenda, whose daily newsletter seeks to explain complex political stories to an audience of some 10,000 subscribers.

They brainstormed ideas for about a week and enlisted the help of a tech-savvy friend. On Friday, Stephenson published the piece, which included three deepfake clips of Lake.

It begins with a ploy, telling readers that Lake, a hard-right candidate whom the Arizona Agenda has pilloried in the past, had decided to record a testimonial about how much she enjoys the outlet. But the video quickly pivots to the giveaway punchline.

"Subscribe to the Arizona Agenda for hard-hitting real news," the fake Lake says to the camera, before adding: "And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting."

By Saturday, the videos had generated tens of thousands of views and one very unhappy response from the real Lake, whose campaign attorneys sent the Arizona Agenda a cease-and-desist letter. The letter demanded the immediate removal of "the aforementioned deep fake videos" from all platforms where they have been shared or disseminated. If the outlet refuses to comply, the letter said, Lake's campaign would pursue "all available legal remedies."

A spokesperson for the campaign declined to comment when contacted on Saturday.

Stephenson said he was consulting with lawyers about how to respond, but as of Saturday afternoon, he was not planning to remove the videos. The deepfakes, he said, are good learning devices, and he wants to arm readers with the tools to detect such forgeries before they're bombarded with them as the election season heats up.

"Fighting this new wave of technological disinformation this election cycle is on all of us," Stephenson wrote in the article accompanying the clips. "Your best defense is knowing what's out there and using your critical thinking."

Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda videos were useful public service announcements that appeared carefully crafted to limit unintended consequences. Even so, he said, outlets should be wary of how they frame their deepfake reportage.

"I'm supportive of the PSAs, but there's a balance," Farid said. "You don't want your readers and viewers to look at everything that doesn't conform to their worldview as fake."

Deepfakes present two distinct threat vectors, Farid said. First, bad actors can generate false videos of people saying things they never actually said; and second, people can more credibly dismiss any real embarrassing or incriminating footage as fake.

This dynamic, Farid said, has been especially apparent during Russia's invasion of Ukraine, a conflict rife with misinformation. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to the Kremlin's aggression with as much urgency as they might show if the Eiffel Tower had been targeted.

It was a potent message, Farid said, but it opened the door for Russias baseless claims that subsequent videos from Ukraine, which showed evidence of Kremlin war crimes, were similarly feigned.

"I am worried that everything is becoming suspect," he said.

Stephenson, whose backyard is a political battleground that lately has become a crucible of conspiracy theories and false claims, has a similar fear.

"For many years now we've been battling over what's real," he said. "Objective facts can be written off as fake news, and now objective videos will be written off as deep fakes, and deep fakes will be treated as reality."

Researchers like Farid are feverishly working on software that would allow journalists and others to more easily detect deepfakes. Farid said the suite of tools he currently uses easily classified the Arizona Agenda video as bogus, a hopeful sign for the coming flood of fakes. However, deepfake technology is improving at a rapid rate, and future phonies could be much harder to spot.

And even Stephenson's admittedly sub-par deepfake managed to dupe a few people: After blasting out Friday's newsletter with the headline "Kari Lake does us a solid," a handful of paying readers unsubscribed. Most likely, Stephenson suspects, they thought Lake's endorsement was real.

Maegan Vazquez contributed to this report.

View post:
AI generated deepfake of Kari Lake used to highlight dangers in election - The Washington Post

Read More..

Tennessee Makes A.I. an Outlaw to Protect Its Country Music and More – The New York Times

The floor in front of the stage at Roberts Western World, a beloved lower Broadway honky-tonk in Nashville, was packed on Thursday afternoon.

But even with the country music superstar Luke Bryan and multiple other musicians on hand, the center of attention was Gov. Bill Lee and his ELVIS Act.

And Mr. Lee did not disappoint, signing into law the Ensuring Likeness, Voice and Image Security Act, a first-in-the-nation bill that aims to protect musicians from artificial intelligence by adding penalties for copying a performer's voice without permission.

"There are certainly many things that are positive about what A.I. does," Mr. Lee told the crowd. But, he added, "when fallen into the hands of bad actors, it can destroy this industry."

The use of A.I. technology and its rapid-fire improvement in mimicking public figures have led several legislatures to move to tighten regulations over A.I., particularly when it comes to election ads. The White House late last year imposed a sweeping executive order to push for more guardrails as Congress wrestles with federal regulations.

Read the original here:
Tennessee Makes A.I. an Outlaw to Protect Its Country Music and More - The New York Times

Read More..

Using novel micropore technology combined with artificial intelligence to differentiate Staphylococcus aureus and … – Nature.com

A micropore device

Micropores (3 μm in diameter) were fabricated on a 50-nm-thick silicon nitride film cast on a silicon substrate. The silicon substrate was sandwiched between a 25 mm × 25 mm, 0.5-mm-thick plastic channel. This structure is termed a micropore module or a module (Fig. 1a, b). Bacterial suspensions (18 μL) in 1× phosphate-buffered saline (PBS) and PBS (15 μL) were introduced into the cis and trans chambers, respectively. We measured ionic current-time waveforms by applying a voltage of 0.1 V between the Ag/AgCl electrodes placed in the flow channel. When the bacteria passed through the micropores, the ionic current decreased owing to obstruction of the flowing ions (Fig. 1c). S. aureus and S. epidermidis are spherical in shape with a diameter ranging from 0.6 to 1.0 μm under a low vacuum of 98 hPa. We observed no specific differences in the bacterial structures through scanning electron microscopy (SEM) under the culture conditions (Fig. 1d-g). The waveforms were collected in the data server and analysed, as described in the Methods section. The number of available waveforms for the bacterial suspensions was independent of their optical densities measured at a wavelength of 600 nm (OD600) (Fig. 1h). Therefore, sufficient bacteria were present in the suspensions regardless of their OD600 values.

(a) The structure of the micropore module. The micropore module is 25 mm × 25 mm in size and 0.5 mm thick. The bacterial suspension is introduced from the cis channel, and PBS is introduced from the trans channel. (b) An optical image of the pore of the module. Optical microscopic examination of the micropore module shows that the 3 μm diameter micropore is in the centre of the silicon substrate. (c) A schema of the micropore. When the bacteria pass through the micropore, the ionic current decreases because of obstruction of the flowing ions. The processing software obtains the change in ionic current as a waveform. (d) Scanning electron microscope (SEM) observation of S. epidermidis, ×14,000. (e) SEM observation of S. epidermidis, ×60,000. (f) SEM observation of S. aureus, ×14,000. (g) SEM observation of S. aureus, ×60,000. (h) A scatter diagram of the OD600 of the bacterial suspensions and their average pulse counts. No correlations were observed between the OD600 and pulse counts of the bacterial suspensions.

Ionic current measurements were performed on S. aureus and S. epidermidis cultures using micropore devices placed under an optical microscope. The base current was approximately 0.4 μA. Using the expressions for the access resistance (1/σD) and the micropore resistance (4L/σπD²), the base current was estimated to be 0.5 μA when the ionic conductivity (σ) of 1× PBS was 1.61 S m⁻¹, the average diameter of the micropores was D = 3 μm, and the thickness of the micropores was L = 50 nm. The ionic current obtained was consistent with the theoretically estimated ionic current. With application of a voltage of 0.1 V, a single bacterium (black spot indicated by the yellow arrow, Fig. 2a) was pulled into the micropore. Bacteria within an approximate radius of 15 μm from the micropore were pulled in at an accelerated rate as they approached the micropore (Fig. 2a⟨1⟩). When one bacterium passed through the micropore, we observed a single ionic current-time waveform, and the ionic current did not change until it entered the micropore. Bacteria within an approximate radius of 40 μm from the micropore were drawn to the micropore by Brownian motion (Fig. 2a⟨2⟩). The duration of Brownian motion was >1 min while approaching the micropores. When a bacterium reached an approximately 15 μm radius from the micropore threshold, it was rapidly pulled into the micropore, which suggested it was being pulled in under the influence of electric forces.
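
As a quick numerical check, the resistance expressions and values quoted above reproduce a base current of roughly 0.5 μA; a minimal sketch:

```python
# Numerical check of the open-pore (base) current from the access-resistance
# and micropore-resistance expressions quoted above.
import math

sigma = 1.61   # ionic conductivity of 1x PBS, S/m
D = 3e-6       # average micropore diameter, m
L = 50e-9      # micropore (membrane) thickness, m
V = 0.1        # applied voltage, V

R_access = 1 / (sigma * D)                   # ~2.1e5 ohm
R_pore = 4 * L / (sigma * math.pi * D ** 2)  # ~4.4e3 ohm
I_base = V / (R_access + R_pore)             # ~4.7e-7 A, i.e. roughly 0.5 uA

print(f"R_access = {R_access:.3g} ohm, R_pore = {R_pore:.3g} ohm")
print(f"estimated base current = {I_base * 1e6:.2f} uA")
```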

(a) Optical microscope images (scale bar = 10 μm) of S. epidermidis (small dot indicated by the yellow arrow) being drawn towards and pulled into a micropore (3 μm black circles): ⟨1⟩ the bacterium near (within a 15 μm radius of) the micropore is pulled in at a fast rate; ⟨2⟩ the trajectory of a distant bacterium being drawn towards the micropore until it is pulled in. (b) Actual display screen during the measurement. The moment at which the bacterium (yellow arrow) is being sucked into the micropore. Its waveform is denoted on the right top column of the right-side window. The waveforms are indicated at three different time scales using the waveform viewer software.

Negatively charged bacteria were subjected to diffusion and electric forces between the electrodes. The radius (r) from the centre of the micropore at which the bacteria were trapped by the electric field is denoted by the equation:

$$r = \frac{d^{2} \mu \Delta V}{8hD}$$

where d, μ, and ΔV are the diameter of the micropore, the mobility of the bacteria, and the applied voltage, respectively, and h and D are the thickness of the micropore and the diffusion constant of the bacteria, respectively. The bacterial mobility and diffusion constant were approximately 10⁻⁸ m² V⁻¹ s⁻¹ and 10⁻⁹ m² s⁻¹, respectively. The experimentally observed r = 15 μm was relatively close to the theoretically predicted r = 22.5 μm. When the bacteria did not pass through the micropores, the ionic current remained constant at the base current, which corresponded to a non-event (Supplementary Material: the captured video of the actual screen during the measurement of the ionic currents of the bacteria; the optical microscopic image is shown on the left side, and the waveforms of ionic currents are shown on the right side). When a bacterium passed through the micropore, we obtained a spike-shaped ionic current-time waveform corresponding to one bacterium (Fig. 2b). The combination of optical microscopy and ionic current measurements demonstrated a correlation between the movement of a single bacterium and the ionic current-time waveform.
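
Plugging the quoted order-of-magnitude values into the capture-radius equation above reproduces the predicted 22.5 μm; a minimal sketch:

```python
# Numerical check of the capture radius r = d^2 * mu * dV / (8 * h * D)
# using the order-of-magnitude values quoted in the text.
d = 3e-6      # micropore diameter, m
mu = 1e-8     # bacterial electrophoretic mobility, m^2 V^-1 s^-1
dV = 0.1      # applied voltage, V
h = 50e-9     # micropore (membrane) thickness, m
Dc = 1e-9     # bacterial diffusion constant, m^2 s^-1

r = d ** 2 * mu * dV / (8 * h * Dc)
print(f"predicted capture radius = {r * 1e6:.1f} um")  # ~22.5 um
```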

Spike-shaped waveforms are characterised by a maximum current value (Ip) and a current duration (td) (Fig. 1c). The histograms of Ip and td overlapped almost completely (Fig. 3a, b). The similar Ip levels reflected the small difference in size between the two bacterial species, and it was difficult to distinguish between them using these histograms. However, we observed differences in the shapes of the waveforms between the bacteria, suggesting distinguishable features (Fig. 3c). We used machine learning to identify waveform features so that the waveforms obtained could be used to identify the two species.

(a) Histograms of Ip and td of S. epidermidis. (b) Histograms of Ip and td of S. aureus. (c) Differences in the absolute value of the histograms of Ip and td between S. epidermidis and S. aureus. (d) Confusion matrix of all isolates. (e) Confusion matrix of the assembly machine learning results.

Fifty isolates of S. aureus and S. epidermidis were used for ionic current measurements. Each bacterial isolate was measured in triplicate using a micropore device, and each measurement lasted for 3min. The 15,000 waveforms obtained from the measurements were provided as inputs for the machine-learning training set. The accuracy of differentiating between the two species in a single waveform was F-value=0.59, which exceeded the F-value=0.5 for random discrimination (Fig.3d). In this single-waveform learning, it is determined which species each single waveform belongs to. The F-value is denoted by the harmonic mean of the sensitivity and precision, 2/(1/sensitivity+1/precision), as follows:

$$Sensitivity = \frac{True\ Positive}{True\ Positive + False\ Negative}$$

$$Precision = \frac{True\ Positive}{True\ Positive + False\ Positive}$$

$$F\text{-}measure = \frac{2 \times Sensitivity \times Precision}{Sensitivity + Precision}$$

For S. epidermidis, the confusion matrix yielded a sensitivity and precision of 9781/(9781+5219) = 0.65 and 9781/(9781+6968) = 0.58, respectively. This resulted in an F-value of 0.62. Similarly, the sensitivity and precision for S. aureus were 0.61 and 0.54, respectively, yielding an F-value of 0.57. The overall adopted F-value was the average of the F-values obtained for the two species. We then determined the species-level accuracy of the bacterial identification on an isolate-by-isolate basis using machine learning. Of the 50 bacterial isolates, 25 were selected randomly as the training set for each species. We determined the training set yielding the highest F-value. Using the selected training set, we performed assembly learning to develop a classifier to determine whether a bacterial isolate belonged to S. aureus or S. epidermidis. In this assembly learning, the single waveforms of one isolate/strain are treated as aggregated data; the assembly learning is therefore isolate/strain-focused. In contrast to the single-waveform learning, it determines which species the aggregated data of each isolate/strain belong to. The assembly machine learning also uses the entire distribution of waveforms in each species, in addition to the independent parameters (Ip, td, current vector, and time vector). The F-value was 0.93 (Fig. 3e).
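
For reference, the single-waveform sensitivity, precision, and F-value for S. epidermidis follow directly from the confusion-matrix counts quoted above; a short sketch reproducing them:

```python
# Reproduce the single-waveform sensitivity, precision, and F-value for
# S. epidermidis from the confusion-matrix counts quoted in the text.
def f_measure(tp: int, fn: int, fp: int) -> tuple[float, float, float]:
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_value = 2 * sensitivity * precision / (sensitivity + precision)
    return sensitivity, precision, f_value

# S. epidermidis: 9,781 true positives, 5,219 false negatives, 6,968 false positives
sens, prec, f = f_measure(tp=9781, fn=5219, fp=6968)
print(f"sensitivity = {sens:.2f}, precision = {prec:.2f}, F-value = {f:.2f}")
# -> sensitivity = 0.65, precision = 0.58, F-value = 0.62
```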

The classifiers created during the machine learning process were used to distinguish the remaining 25 bacterial isolates (Fig.4a) and the additional ATCC standard strains. Each isolate was assessed by the classifier for three measurements, and two or three correct responses from the three trials were regarded as the final correct answers for the isolate or strain.

(a) Hold-out method. We employed a hold-out method for machine learning that splits data into the following two groups: a training dataset and a testing dataset. (b) Receiver operating characteristic curve of the classifier. (c) Characteristic distribution of the waveform. The pulse data are acquired 250,000 times per min. Steps denote the number of data points acquired, Height denotes the current value of the pulse, and Peak ratio denotes the location of the peak of the pulse, when the left edge of the pulse is 0 and the right edge of the pulse is 1. (d) Zeta potential distribution of the bacteria. The dashed line denotes the median, and the dotted line denotes the quartiles. No statistically significant difference was noted between the two species.

The area under the receiver operating characteristic curve (AUROC) was 0.94 (>0.9), demonstrating that the trained classifier could distinguish S. aureus from S. epidermidis with high accuracy (Fig. 4b) [20]. The sensitivity and specificity for S. aureus detection were 96.4% and 80.8%, respectively, with an accuracy of 88.9% (Table 1). The positive agreement was 84.4%, and the negative agreement was 95.5% (Table 1).

Ionic current-time waveforms carry information on the size, shape, and surface charges of the bacteria passing through the micropores. Micropore measurements demonstrated few statistically significant differences in the size and shape of S. aureus and S. epidermidis (Fig. 4c).

The Zeta potential affects the ionic current of particles measured using the micropore device and indicates the electrical charge of the surface layer of the particles [21]. The Zeta potentials of the S. aureus and S. epidermidis isolates were measured using a Zetasizer (Malvern Instruments, Worcestershire, UK) [22]. The surface charges of the two species indicated that they were negatively charged, with Zeta potentials > 20 mV in magnitude. While the Zeta potential range of S. epidermidis was greater than that of S. aureus and the distribution pattern was not completely the same, there was no significant difference between the Zeta potentials of S. aureus and S. epidermidis (Fig. 4d). Machine learning used the surface charge differences between the two bacteria to distinguish the species by referring to the entire distribution pattern of the features, differences that are too subtle to detect statistically. The ionic current-time waveform provides information on the volume, structure, and surface charge of bacteria passing through micropores. The machine learning, which takes the shape of the ionic current-time waveform as an input feature, is considered to distinguish these differences in surface charge.

See the original post:
Using novel micropore technology combined with artificial intelligence to differentiate Staphylococcus aureus and ... - Nature.com


The Risks of Artificial Intelligence and the Response of Korean Civil Society – The Good Men Project

By Byoung-il Oh

With the launch of ChatGPT at the end of 2022, people around the world realised the arrival of the artificial intelligence (AI) era, and South Korea was no exception. At the same time, 2023 was also a year of growing global awareness of the need to control the risks of AI: in November 2023 the AI Safety Summit was held at Bletchley Park, UK, legislators in the European Union (EU) reached agreement on an AI act, and the Biden administration in the US issued an executive order to regulate AI. In the coming years, discussions on AI regulation in various countries are bound to influence each other. Korean civil society also believes it is necessary to enact a law to stem the risks of AI, but the bill currently being pushed by the Korean government and the National Assembly has been met with opposition from civil society because, in the name of fostering Korea's own AI industry, the proposed bill lacks a proper regulatory framework.

South Koreans have already embraced AI in their daily lives: from chatbots and translation to recruitment and platform algorithms, a variety of AI-powered services have been introduced into society. While AI can provide efficiency and convenience in work and life, its development and use can also pose a number of risks, including threats to safety and violations of human rights. The risks most commonly cited are invasion of privacy, discriminatory decisions, lack of accountability due to opacity, and sophisticated surveillance, which, when coupled with unequal power relations in society, can perpetuate inequities and discriminatory structures and threaten democracy.

Jinbonet recently published a research report, produced with the support of an APC subgrant, on controversial AI-related cases in South Korea. And indeed there have been several that have raised concerns. Lee Luda, a chatbot launched in December 2020, was criticised for hate speech against vulnerable groups such as women, people with disabilities, LGBTQIA+ communities and Black people, and was sanctioned by the Personal Information Protection Commission (PIPC) for violating the Personal Information Protection Act (PIPA). In addition, the use of AI during recruitment processes has increased across both public and private organisations in recent years, after corruption in recruitment at public institutions became a social issue. Yet, with remote work becoming the norm during the COVID-19 pandemic, institutions have not properly verified the risks or performance of AI recruitment systems and have no data in this regard. It also remains an open question whether private companies' AI-driven recruitment works fairly, without discrimination based on gender, region, education and so on. The Ministry of Justice and the Ministry of Science and ICT sparked a huge controversy when they provided 170 million items of facial recognition data of Koreans and foreigners to a private company without consent, under the guise of upgrading the airport immigration system. Civil society groups suspect that public authorities provided such personal data to favour private tech companies.

There is also suspicion that big tech platforms abuse their algorithms and the data they hold to gain advantage over their competitors. Kakao, which provides KakaoTalk, a messenger app used by almost all Koreans, used its dominance to take over the taxi market: it was fined by the Korea Fair Trade Commission (KFTC) in February 2023 after it was found to have manipulated its AI dispatching algorithm in favour of its own taxis. Similarly, another Korean big tech company, Naver, was fined by the KFTC in 2020 for manipulating shopping search algorithms to favour its own products. Korean civil society is also concerned about the use of AI systems for state surveillance. While the use of AI systems by intelligence and investigative agencies has not yet become controversial, the Korean government has invested in R&D for so-called smart policing, and, given that South Korea has one of the highest numbers of CCTV cameras installed globally, there are concerns that surveillance through intelligent CCTV could be introduced.

While existing regulations such as the PIPA and the Fair Trade Act can be applied to AI systems, there is no specific legislation regulating AI in South Korea as a whole. For example, as the AI recruitment systems show, there are no accountability requirements for public institutions that develop their own AI systems or adopt private-sector ones. There is also no obligation to take measures to proactively control problems with AI, such as verifying data bias, or to reactively trace the source of problems, for example through record-keeping.

The Korean government has been promoting the development of the AI industry as a national strategy for several years. The National Strategy for Artificial Intelligence (AI) was released on 17 December 2019 by all ministries, including the Ministry of Science and ICT. As the slogan "Beyond IT powerhouse to AI powerhouse" shows, the strategy expresses the government's policy of treating AI as a civilisational change and using it as an opportunity to develop the economy and solve social problems. However, the strategy is based on an "allow first, regulate later" approach to AI regulation, so the policy relies mainly on an AI ethics code intended to serve as a guide for companies' self-regulation.

Civil society organisations (CSOs) in South Korea have also been making their voices heard on AI-related policies for several years. On 24 May 2021, 120 CSOs released their manifesto, "Civil Society Declaration on Demand for an AI Policy that Ensures Human Rights, Safety, and Democracy". Calling for guarantees of human rights and legal compliance in AI and for the legislation of an AI act, the CSOs set out the provisions they believed such an act should include.

The CSOs have also worked with the National Human Rights Commission (NHRC) and urged it to play an active role in regulating AI from a human rights perspective. On 11 May 2022, the NHRC released its road map, "Human Rights Guidelines on the Development and Use of Artificial Intelligence", to prevent human rights violations and discrimination that may occur in the process of developing and using AI, and it plans to release an AI human rights impact assessment tool in 2024. Activists of Jinbonet participated in the research work to establish the human rights guidelines for the NHRC and to develop the human rights impact assessment tool.

The Korean government, particularly the Ministry of Science and ICT as the lead ministry, is also pushing for legislation on AI. In early 2023, the relevant standing committee of the National Assembly discussed an AI bill that merged bills proposed by several lawmakers and was drawn up in consultation with the Ministry of Science and ICT. However, while the bill aims to establish a basic law on AI, it mainly focuses on fostering the industry. It advocates the principle of "allow first, regulate later" but includes no obligations or penalties requiring providers to control the risks of AI, nor any remedies for those who suffer harm from AI.

Korean civil society agrees that laws are needed to regulate AI but is vehemently opposed to the AI bill currently being debated in the National Assembly. Instead, Korean CSOs spent 2023 discussing their own proposal for an AI bill. Led by digital rights groups, including Jinbonet, they developed a draft and received input from a wider panel of CSO activists and experts at the civil society forum "Artificial Intelligence and The Role of Civil Society for Human Rights and Safety", held on 22 November 2023 and funded by APC. They intend to propose this civil society version of the AI bill to the National Assembly.

The AI legislation being debated in Europe has also influenced Korean civil society, which has examined the positions of its European counterparts on the AI bill, the positions of the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), the negotiating position of the European Parliament, and so on. Although the European AI act agreed at the end of 2023 is a step backward compared with civil society's position and the European Parliament's position, it offers a number of reference points at the global level. Of course, when discussing AI legislation in Korea, it is necessary to take into account the different legal systems and social conditions of Europe and Korea.

Korean industry, pro-industry experts and the conservative media oppose AI regulation. They argue that the European Union is trying to regulate AI only because it is lagging behind, and that Korea should not rush to regulate AI if it wants to foster its own AI industry. They have used the same logic against privacy regulation and big tech regulation. South Korea's own big tech companies, such as Naver and Kakao, are also developing hyperscale AI, so public opinion in favour of the domestic big tech and AI industries is very strong.

South Korea is holding its general election in April 2024. Any bills that fail to pass in the current National Assembly will be abandoned when the new National Assembly is constituted in June 2024. It is unlikely that AI bills will be fully discussed in the current National Assembly. Korean civil society intends to ask the new National Assembly to introduce a civil society AI bill and urge it to pass legislation that will actually regulate AI. To build public opinion for the passage of the AI bill, Korean civil society, including Jinbonet, is set on identifying and publicising more instances of the dangers of AI.

Byoung-il Oh is the president of the Korean progressive network Jinbonet, a member of the Association for Progressive Communications, which advocates for human rights in the information society, especially the rights to communication, free speech and privacy. He is also a member of the Korea Internet Governance Alliance (KIGA) Steering Committee.

Previously Published on apc.org with Creative Commons License


Read more here:
The Risks of Artificial Intelligence and the Response of Korean Civil Society - The Good Men Project
