
Sessions | Ayaan Hirsi Ali [DW+ Exclusive] – The Daily Wire

The Jordan B. Peterson Podcast | Jun 20, 2024

In this DW+ exclusive Sessions interview, Dr. Peterson and Ayaan Hirsi Ali further discuss the subversion of the West, the corruption of academia which has propagated extremist activism, and her open invitation to bring forth restoration.

Ayaan Hirsi Ali is a Fellow at the Hoover Institution at Stanford University, a Senior Fellow with the Future of Diplomacy Project at the Belfer Center for Science and International Affairs at the Harvard Kennedy School, a member of the Council on Foreign Relations, and the bestselling author of Infidel (2007) and Heretic: Why Islam Needs a Reformation Now (2015). Born in Mogadishu, Somalia, Hirsi Ali strove to live as a devout Muslim during her early life, but in 1992 fled to the Netherlands to escape a forced marriage. She became a citizen and went on to study at the University of Leiden. From 2003 to 2006, Hirsi Ali served as an elected member of the Dutch parliament and then moved to the U.S., where she founded the AHA Foundation to protect and defend the rights of women from harmful traditional practices. In 2024, Hirsi Ali founded Restorationbulletin.com, where she explores the forces of subversion plaguing Western society.

- Links -

For Ayaan Hirsi Ali:

On X https://twitter.com/Ayaan?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor

Restoration Bulletin https://www.restorationbulletin.com/

Read more here:
Sessions | Ayaan Hirsi Ali [DW+ Exclusive] - The Daily Wire

Read More..

Sugar Cravings, Red Meat, and Your Health | Max Lugavere – The Daily Wire

The Jordan B. Peterson Podcast | Jun 17, 2024

Dr. Jordan B. Peterson sits down in person with filmmaker and science author Max Lugavere. They discuss his upcoming film, Little Empty Boxes, which details his mother's progression through dementia and set Max on his life's journey to understand and communicate the science behind health and nutrition. Dr. Peterson and Lugavere also discuss obesity, toxic exposure, hypertension, and elimination diets.

Max Lugavere is a health and science journalist, filmmaker, and bestselling author. He is the author of the Genius trilogy of books, including the New York Times bestseller Genius Foods and the Wall Street Journal bestseller Genius Kitchen. He hosts The Genius Life podcast, one of the top health and wellness podcasts in the U.S. His appearances on The Today Show, The Rachael Ray Show, The Doctors, and The Joe Rogan Experience have made him a respected and well-known voice in the field. His debut film Little Empty Boxes, a project 10 years in the making and the first film to document the science of dementia prevention, is available for pre-order now at LittleEmptyBoxes.com.

This episode was recorded on June 8th, 2024

- Sponsors -

Balance of Nature: Get 35% off Your Order of Fruits & Veggies + FREE Fiber & Spice Supplements. Use promo code Jordan at checkout: https://www.balanceofnature.com/

NetSuite: Make better business decisions with NetSuite https://www.NetSuite.com/JBP

Shopify: Get a $1/ month FREE trial with full access to Shopify's entire suite of features: https://shopify.com/jbp

- Links -

For Max Lugavere:

Pre-Order Little Empty Boxes (Film) http://littleemptyboxes.com

On X http://x.com/maxlugavere

On Instagram http://instagram.com/maxlugavere

The Genius Life on YouTube (podcast) http://youtube.com/maxlugavere

The Genius Life on Spotify (podcast) https://open.spotify.com/show/5WkdjiVLo1kp3alPfFZItS?si=e0189092486a4653

See the article here:
Sugar Cravings, Red Meat, and Your Health | Max Lugavere - The Daily Wire

Read More..

Data waste: According to study, a significant portion of stored server data is never accessed – TechSpot

Why it matters: Making cloud servers and data centers more efficient is a crucial step in the push to increase sustainability and reduce carbon emissions. However, one company has started drawing attention to what it calls "data wastage": the retention of large amounts of data no one accesses, and the factors making it difficult to cut back.

Up to 70 or 80 percent of the data that some companies store goes unused, according to enterprise data infrastructure provider NetApp. The company has recently started bringing attention to the issue to help clients meet sustainability goals.

NetApp Chief Technology Evangelist Matt Watts recently told The Register that storage accounts for 15 to 20 percent of data center power consumption. Furthermore, a national survey indicates that cloud and data center usage could grow from 2.5 percent to six percent of the UK's power grid consumption by the end of this decade.

NetApp provides data tools for clients like Microsoft, Amazon, and Google. Last April, it published a report on the difficulties of tackling data wastage. Watts initially reported that around 41 percent of stored data is never accessed, but he has since revised the number to as much as twice that in certain areas.

Most leading IT figures believe that cutting back on unwanted data could help reduce carbon emissions, but many companies lack the resources. Some don't have enough time or find differentiating between valuable and unwanted data too daunting.

Obviously, companies that handle clients' data also don't want to cause trouble by deleting information someone might need. Watts highlighted disagreement and confusion in some organizations about whether IT departments are the owners or simply the caretakers of the data they manage, which can present an additional roadblock.

NetApp's BlueXP classification tool, part of a service that gives client companies unified control over diverse data infrastructures, was a significant factor in revealing the scale of data wastage. It retrieved metadata from its data center clients, showing who owned their stored files and when customers last accessed their information.
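NetApp has not published the internals of BlueXP's classification scans, but the basic signal it relies on, how long ago a file was last accessed, is easy to sketch. The Python snippet below is a minimal illustration rather than NetApp's implementation: the scan root, the 365-day threshold, and the reliance on filesystem access times (which some volumes do not track) are all assumptions.

```python
# Minimal sketch: walk a file tree and flag files not accessed in N days,
# the basic signal behind "data wastage" estimates. Paths and threshold
# are illustrative; atime may be unreliable on volumes mounted noatime.
import os
import time

COLD_AFTER_DAYS = 365

def find_cold_files(root: str):
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    cold, total = [], 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable or vanished files
            total += 1
            if st.st_atime < cutoff:  # last access older than the cutoff
                cold.append((path, st.st_size))
    return cold, total

if __name__ == "__main__":
    cold, total = find_cold_files("/mnt/share")  # hypothetical mount point
    wasted_gb = sum(size for _, size in cold) / 1e9
    print(f"{len(cold)} of {total} files untouched for {COLD_AFTER_DAYS}+ days "
          f"(~{wasted_gb:.1f} GB candidates for tiering or deletion)")
```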

The issue comes amid NetApp's controversial decision to remove BlueXP's support for services like Google Cloud, Amazon S3, and OneDrive. Watts said NetApp intends to focus on its own storage systems instead so that the company can stand out.

Read the rest here:
Data waste: According to study, a significant portion of stored server data is never accessed - TechSpot

Read More..

HPE’s Neil MacDonald: Running AI In Public Cloud Will ‘Crush’ IT Budgets – CRN


Hewlett Packard Enterprise Executive Vice President Neil MacDonald says that HPE Private Cloud for AI has a big GenAI advantage over the public cloud, which he proclaims will crush the IT budgets of enterprise companies.

"The target customers for this [HPE Private Cloud for AI] are very firmly enterprise customers who are seeking to embrace generative AI and who recognize that running AI workloads at scale in the public cloud will crush their IT budgets," said MacDonald, the general manager of Compute, HPC (High Performance Compute) and AI for HPE, in a press conference ahead of HPE Discover.

"AI is the most data- and compute-intensive workload of our generation, and managing that data and the governance around it and the protection of that data and IP leaves a lot of enterprises to want to embrace generative AI but to do it in a private cloud model," said MacDonald.

Manuvir Das, vice president of enterprise computing for Nvidia, who joined the HPE press conference on the new Nvidia AI Computing By HPE portfolio, said he also sees the private cloud cost advantages for customers.

"It's the age-old debate that if you have the wherewithal to stand up your own infrastructure, you can get a far superior TCO than relying on a cloud service," he said.

Furthermore, Das said there are benefits for customers looking to keep their private data on-premises. "The AI workload is particularly interesting because it's so driven by data," he said. "And if you think about an enterprise company, you are really accessing the private data of your company that really represents the secret sauce, the IP of your company. So the question is, would you prefer sending all of that data into the cloud, or would you rather keep that all under your control?"

Finally, there are latency issues with regard to moving data from on-premises to the public cloud, said Das. "If you have petabytes of your enterprise data that you're now extracting insight from, do you want to move the data to the compute? Or do you want to move the compute to where the data is? So I think these are the reasons why a private solution is quite desirable. And of course, you know, both options exist and every customer will decide for themselves which option they prefer."
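Das's point about data gravity is straightforward to quantify. The back-of-the-envelope calculation below assumes a 1 PB dataset and a dedicated 10 Gbps uplink running at roughly 80 percent usable throughput; all of the figures are hypothetical, but they show why moving the compute to the data is usually the cheaper direction.

```python
# Back-of-the-envelope transfer time for shipping a large dataset to the cloud.
# Dataset size, link speed, and efficiency are assumed values for illustration.
PETABYTE_BYTES = 1e15
LINK_GBPS = 10        # dedicated 10 Gbps uplink
EFFICIENCY = 0.8      # assume ~80% of line rate is usable

bytes_per_second = LINK_GBPS * 1e9 / 8 * EFFICIENCY
seconds = PETABYTE_BYTES / bytes_per_second
print(f"Moving 1 PB over a {LINK_GBPS} Gbps link takes ~{seconds / 86400:.1f} days")
# Roughly 11.6 days of sustained transfer, before any egress or sync costs,
# which is why keeping the compute next to the data is often preferred.
```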

MacDonald also weighed in on a number of other issues including why HPE is going all in with Nvidia on the Nvidia AI Computing By HPE portfolio rather than AMD, and why HPE has a liquid cooling advantage over competitors.

Below is an excerpt from the press conference question-and-answer session with MacDonald.

If you think about enterprise AI success, generative AI relies not just on accelerator silicon, but also on fabrics, on system design, on models, on software tooling, on the optimizations of those models at runtime. And so we are thrilled to be working closely with Nvidia, with a very strong set of capabilities that together enable us to have our enterprise customers be able to move forward much more quickly on their enterprise [AI] journeys.

It's key to notice that this HPE private cloud AI is not a reference architecture that would place the burden on the customer to assemble their AI infrastructure by cobbling together piece parts, whether those are GPUs or pieces of software or different connectivity.

HPE and Nvidia have done the hard work for customers, co-developing a turnkey AI private cloud that is up and running in three clicks. And that goes much beyond a question simply of an accelerator.

So the target customers for this are very firmly enterprise customers who are seeking to embrace generative AI and who recognize that running AI workloads at scale in the public cloud will crush their IT budgets.

AI is the most data- and compute-intensive workload of our generation, and managing that data and the governance around it and the protection of that data and IP leaves a lot of enterprises to want to embrace generative AI but to do it in a private cloud model.

So our target customers for HPE Private Cloud AI are those enterprises around the world who are all coming to grips with how to gain the productivity benefits of generative AI in their operations and want to do that on-prem with greater efficiency and greater control.

At HPE we are currently supporting a few customers with GPU as a service that supports large language model work and other AI workloads. This spans thousands of GPUs that we're providing to customers via the cloud. That's currently an offer available to select pilot customers. We're working to refine the offering and we'll share further details later this year.

Not yet. The growth in the energy and thermal intensity of accelerator silicon and in CPU silicon continues to accelerate. As a result, within our product portfolios and across our servers, we offer a variety of systems today that encompass traditional air-cooled, 70 percent PLC systems and 100 percent PLC systems. And we continue to evolve our technologies around liquid cooling as we move forward.

But as these systems are becoming more accelerator-rich, and therefore more thermally and energy challenged, it becomes increasingly pervasive to deploy either partial direct liquid cooling or in the most [high]-performance systems 100 percent direct liquid cooling. So across the portfolio today, we have all of that and you will see increasing use of direct liquid cooling as we continue to work with Nvidia on future systems and future silicon.

So you referred to rear door heat exchanger technologies that you see on the [slide on the] extreme left. There are also advanced systems for recirculating air around air-cooled systems and leveraging water supply as the heat exchanger without requiring different server technology. That's also on the left [hand side of the slide].

In the middle [of the slide] is classic direct liquid cooling, with the fluid removing the heat from the system going to an external CDU as the point of exchange in the system and then on the right, you see the same kinds of technologies, but at greater scale, with 100 percent DLC [direct liquid cooling] and no reliance on air cooling at all. So we do all of that today in products that are shipped at scale to customers around the world and we build on decades of experience in direct liquid cooling across the spectrum.

We continue to innovate in the cooling space. That includes all forms of advanced cooling technologies, which we continue to assess and bring to the portfolio. We're very familiar and aware of what can and can't be done with immersion-based technologies, both single-phase and dual-phase, and the optimum solution for the design points across this range today remains the solutions that you see in front of you.

We continue to provide the direct liquid cooling that we've been providing in the past, which is single phase.

In essence, when you're dealing with 100 percent direct liquid cooling you are extracting all of the heat using liquid, and you design the system very, very specifically to enable you to do that by pulling the fluid through the system and all the points of rejection of heat.

So this is something that we're doing at scale that underpins the two exascale systems in the world that have been delivered. And that same technology is increasingly relevant within our broader portfolio. Today we have systems deployed, leveraging our servers for AI workloads and taking advantage of the full range of cooling that you see in front of you; 100 percent direct liquid cooling is a very strong capability of HPE.

Direct liquid cooling is not about designing a server as much as it is about designing an entire system, including the integration with all of the circulation, the CPUs, the control systems, etc.

Read the original post:
HPE's Neil MacDonald: Running AI In Public Cloud Will 'Crush' IT Budgets - CRN

Read More..

Apple is building a high-security OS to run its AI data centers – here’s what we know so far – ZDNet


During last week's introduction of Apple Intelligence, Apple software engineering head Craig Federighi announced that the company will run some generative AI models in a secure cloud computing environment.

Called Private Cloud Compute (PCC), the service will be subject to scrutiny by outside security experts. Federighi said: "Just like your iPhone, independent experts can inspect the code that runs on these servers to verify this privacy promise." The goal is to verify Apple's privacy promises, including that user data will never be stored on PCC servers and will be expunged from memory once a request is fulfilled.


Federighi did not go into detail about how security researchers will inspect or audit the PCC servers, but a subsequent Apple blog post states the PCC servers will run a distinct version of the company's operating system software that researchers will be allowed to inspect.

"When we launch Private Cloud Compute, we'll take the extraordinary step of making software images of every production build of PCC publicly available for security research," the Apple Security Engineering and Architecture and collaborating teams wrote.

The blog post goes on to say that Apple will "periodically also publish a subset of the security-critical PCC source code, [and] in a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components."

Apple emphasizes that its devices "will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software" as a means to ensure its privacy and security guarantees.
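Apple has not published code for this handshake, but the general pattern it describes, refusing to talk to a node unless its attested software measurement matches a publicly listed build, can be sketched conceptually. The snippet below is purely illustrative: the function names, hash values, and data shapes are invented for this sketch and are not Apple's API or wire format.

```python
# Conceptual sketch of an attestation allow-list check: a client only sends
# data to a node whose attested build measurement appears in a published
# list of release hashes. Names and values are invented for illustration.
import hashlib
import hmac

PUBLISHED_RELEASE_HASHES = {
    # In Apple's scheme these would come from the public log of PCC
    # software images; the entries here are placeholders.
    hashlib.sha256(b"pcc-build-2024.1").hexdigest(),
    hashlib.sha256(b"pcc-build-2024.2").hexdigest(),
}

def node_is_trusted(attested_measurement: str) -> bool:
    """Accept a node only if its attested build hash is publicly listed."""
    return any(hmac.compare_digest(attested_measurement, known)
               for known in PUBLISHED_RELEASE_HASHES)

# A request would proceed only after this check succeeds.
measurement = hashlib.sha256(b"pcc-build-2024.2").hexdigest()  # reported by the node
if node_is_trusted(measurement):
    print("Node attests to publicly listed software; request may be sent.")
else:
    print("Refusing to send data to an unverified node.")
```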

Apple makes various promises about the safety and security of using Private Cloud Compute to process some AI tasks.

Apple provided little detail about the nature of the server software, other than the fact that it is a derivation of the iOS and macOS operating systems.

The servers will run on Apple's own computer chips, as the iPhone, iPad, and Mac do, powered by "a new operating system: a hardened subset of the foundations of iOS and macOS tailored to support large language model (LLM) inference workloads while presenting an extremely narrow attack surface. This allows us to take advantage of iOS security technologies such as Code Signing and sandboxing."


Apple's iOS and macOS are based on a combination of open-source technologies, such as the Darwin operating system developed at Apple in the 1990s and FreeBSD, and closed-source software developed at Apple.

It's unclear when developers will get a look at the new software. In the blog post, Apple researchers say they will give security researchers a "first look" at the software "soon." A note on Apple's developer site says Apple Intelligence will be available "in an upcoming beta" without mentioning anything specific about PCC timing.

ZDNET's Maria Diaz speculates that iOS 18 betas will become available in July, although Apple's website states in a footnote that "Apple Intelligence will be available in beta on iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac with M1 and later, with Siri and device language set to US English, as part of iOS 18, iPadOS 18, and macOS Sequoia this fall."

More here:
Apple is building a high-security OS to run its AI data centers - here's what we know so far - ZDNet

Read More..

HPE talks up AI’s role in boosting server business – ITPro

AI presents an opportunity for partners to sell more on-premises hardware, according to HPE CEO Antonio Neri.

Speaking at the company's annual Partner Growth Summit ahead of the main HPE Discover 2024 conference in Las Vegas, Neri said:

"When you think about AI, it's not a monolithic workload, it's actually a distributed workload, which is obviously very data intensive. That actually opens up the opportunity to bring more infrastructure on premises than ever before."

"The fact that AI is a data-first business will drive customers to rethink their data strategy, where they store it, [and] how they maintain it in compliance."

By offering customers a cloud-first approach that's also based on-premises (such as that provided by HPE GreenLake), there's an opportunity to really increase revenues off the back of the trend, he suggested.

"AI has given a triple shot in the arm to the server business," he added.

It's not just AI where HPE believes the opportunity lies, though. Leaning on one of the key themes of this year's conference, sustainability, Neil MacDonald, EVP and GM of compute, HPC and AI at HPE, spoke of the need for companies to modernize their existing infrastructure, saying there's "a lot of old, dusty server equipment out there that is ripe to refresh."


MacDonald claimed that these elderly systems are "wasting our customers' money, as they're not very efficient."

"There are over 400,000 [HPE] systems out there of over four years of age," MacDonald said.

"Legacy systems are consuming about 66% of data center power, but they're only doing about 7% of useful work so there's a huge opportunity for efficiency gains. One Gen11 can replace 11 Gen8 servers and consume 90% less power."
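Taking MacDonald's consolidation figure at face value, the savings are easy to estimate. The short sketch below assumes an average draw of 500 W per legacy server, a figure chosen purely for illustration; only the 11-to-1 replacement ratio and the 90 percent power reduction come from his remarks.

```python
# Rough consolidation math behind the 11-to-1 Gen8-to-Gen11 claim.
# The 500 W per-server draw is an assumed value for illustration only.
LEGACY_SERVERS = 11        # Gen8 servers replaced by one Gen11, per HPE's figure
LEGACY_WATTS_EACH = 500    # assumed average draw of an old server
POWER_REDUCTION = 0.90     # "90% less power" for the consolidated workload

legacy_total_w = LEGACY_SERVERS * LEGACY_WATTS_EACH
new_total_w = legacy_total_w * (1 - POWER_REDUCTION)
annual_kwh_saved = (legacy_total_w - new_total_w) * 24 * 365 / 1000
print(f"{legacy_total_w} W -> {new_total_w:.0f} W per consolidated group, "
      f"saving ~{annual_kwh_saved:,.0f} kWh per year")
```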

He also pointed out that older infrastructure doesn't have all the security features of HPE's more recent products. "All of these are reasons to encourage customers to upgrade," he said.


Read more here:
HPE talks up AI's role in boosting server business - ITPro

Read More..

HPE GreenLake adds GenAI capabilities as on-premises PaaS – TechTarget

LAS VEGAS -- Hewlett Packard Enterprise's and Nvidia's latest co-developed offering for the HPE GreenLake platform aims to bring the cloud-like experience of generative AI to on-premises enterprise data centers.

Nvidia AI Computing by HPE is a new portfolio of products and services developed by the two companies and sold by HPE or its partner resellers. The catalog, releasing in full this fall, took center stage today during HPE CEO and president Antonio Neri's keynote at HPE Discover 2024.

HPE Private Cloud AI is a platform-as-a-service offering that brings HPE hardware and software along with Nvidia GPUs, AI tools and AI models as a managed private cloud service to a customer's data center through HPE's GreenLake.

HPE executives said Private Cloud AI provides a turnkey service for generative AI (GenAI) creation, enabling users to scale hardware as demands dictate with the benefit of keeping data on premises and under the customer's control.

Creating the service has led to updates and features in other parts of GreenLake, with a new GenAI copilot coming to OpsRamp, an IT ops management and visibility SaaS that HPE acquired last year, and a refresh of server hardware to accommodate more potent Nvidia GPUs.

Enterprises are still experimenting with GenAI capabilities and will continue to rely on hyperscaler services through AWS, Azure and others, according to Mike Leone, an analyst at TechTarget's Enterprise Strategy Group.

As those experiments mature and customers move them on premises and into production, however, many enterprises will lack the ability to build systems and networks capable of managing GenAI at scale, he said. HPE has an opportunity with those customers as GreenLake offers not only hardware akin to offerings from Dell Technologies and other on-premises hardware vendors but also mature data management and query software through its Ezmeral software suite.

"HPE has the technology in place if they put it together and execute well," Leone said. "Data gravity is real and is going to dictate what some organizations do."

HPE Private Cloud AI will come in several hardware configurations depending on the size of AI workloads a customer plans to deploy.

Supporting these configurations are new servers including HPE ProLiant Compute DL384 Gen12 and HPE ProLiant Compute DL380a Gen12, which target small and medium-sized workloads and support Nvidia's H200 NVL GPUs.

Customers looking to train large language models and AI service providers can use the HPE Cray XD670 line for the most demanding workloads with support for up to eight Nvidia H200 Tensor Core GPUs.

Nvidia's contributions to Private Cloud AI include GPUs, Nvidia AI Enterprise cloud-native software platform and AI models through the Nvidia NIM inference microservices.

HPE's software additions include HPE AI Essentials and HPE Data Lakehouse with HPE File Storage, recently updated to support GenAI workloads, to support infrastructure needs. All configurations will include liquid cooling installation options and HPE Sustainability Insight Center software for energy management and reporting.

Unlike a reference architecture, HPE Private Cloud AI will require no component assembly by the user, said Neil MacDonald, executive vice president and general manager of compute, HPC and AI at HPE.

The OpsRamp automation and visibility SaaS for IT ops teams now includes an operations copilot to enable customers to ask questions and get possible solutions for operations challenges.

Copilots have become an expected feature in IT ops automation tools, Leone said. But they are still immature. True innovation for copilots and other GenAI assistants will come as these services develop into predictable, analytical and self-acting agents within an IT environment. These agents should not only advise teams on how to fix a given problem but also know based upon past actions what should and could be automated, he said.

"It's a table stake now," Leone said. "When I think of IT ops, it will need to have AIOps. But I still don't think GenAI is the right approach. This has to be the first iteration as we work toward agents."

The addition of copilot into the GreenLake catalog might also send a conflicting message to buyers, said Mike Matchett, founder and analyst of Small World Big Data. GreenLake AI services promise to keep enterprise data isolated from the public cloud, but the OpsRamp copilot relies on cloud services and could affect that data.

Customers who are wary enough to keep their data on premises should question what third-party data is being used to support the copilot and what customer data could be fed back to those vendors, he said.

"Simply adding a chatbot to a tool does very little and may even be counterproductive," Matchett said. "There's some benefit to automation, but there's [a number of] questions you should ask to anyone patching GenAI into your operational tool."

Many enterprises aren't investing in GenAI infrastructure during the early experiment stage, which can quickly tally to hundreds of thousands of dollars, Matchett said.

As experiments mature and become production-ready, enterprises could be looking for ways to process workloads without running proprietary data in the cloud, he said.

"[GenAI] is going to touch a lot of our IP, and we need to keep control of it," Matchett said. "It's a security and a control need."

Bringing workloads on premises will also require a new set of skills to develop and oversee GenAI, another area where many enterprises lack expertise, according to Steve McDowell, founder and analyst of NAND Research. A PaaS approach could smooth deployment and eliminate the need to reskill IT teams or hire new employees.

"AI doesn't look like anything we've touched in IT, and the IT guys are trying to figure it out," McDowell said.

Private cloud deployments and other on-premises configurations won't be cheap, Matchett said. But the combination of saving on hiring new employees and data control might outweigh the potential sticker shock.

"Whatever the zeitgeist is, every board room has talked about AI and what they're going to do about it," he said. "There's probably a market there."

Tim McCarthy is a news writer for TechTarget Editorial covering cloud and data storage.

Original post:
HPE GreenLake adds GenAI capabilities as on-premises PaaS - TechTarget

Read More..

Use cloud computing to accelerate GenAI adoption in healthcare with Microsoft AI – CIO

Certain healthcare organizations have been slow to embrace cloud computing despite its proven benefits, largely due to security concerns. But cloud caution will also likely hinder the healthcare sector's adoption of Generative AI (GenAI) applications. Healthcare IT leaders can make a business case to optimize the organization's cloud and GenAI investments to enable GenAI applications.

An experienced systems integrator paired with a hyperscaler, such as Tata Consultancy Services (TCS) and Microsoft Azure, can help healthcare IT leaders achieve these goals. This alliance can create a tailored package of ready-to-use, scalable infrastructure; a battery of cloud services; in-depth security; healthcare-specific platforms; and AI expertise.

How Cloud capabilities can satisfy GenAI requirements

Cloud computing offers a host of capabilities that directly enable GenAI applications:

Optimizing cloud environments for GenAI

To effectively deploy GenAI, healthcare IT leaders should consider the following steps to optimize their cloud environments:

How TCS accelerates cloud-based GenAI adoption for healthcare

An experienced systems integrator can simplify and accelerate these steps. TCS has a wealth of experience in systems integration, cloud optimization, and AI development. This includes:

The bottom line

By leveraging the expertise of a systems integrator like TCS, healthcare IT leaders can ensure their cloud architecture is optimized to enable and accelerate GenAI adoption.

To learn more, visit https://www.tcs.com/what-we-do/services/cloud/microsoft.

See the original post:
Use cloud computing to accelerate GenAI adoption in healthcare with Microsoft AI - CIO

Read More..

Oracle to invest $1bn in AI and cloud computing in Spain – DatacenterDynamics

Oracle is investing more than $1 billion in artificial intelligence (AI) and cloud computing in Spain.

The investment will enable Oracle to open a third cloud computing region in the country, located in Madrid.

It will be a public cloud region and will enable users to migrate workloads from their data centers to Oracle Cloud Infrastructure (OCI) while meeting local regulations including the Digital Operational Resilience Act (DORA) and the European Outsourcing Guidelines (EBA, EIOPA, ESMA).

Oracle is developing the cloud region in partnership with Telefónica España.

"The opening of the third Oracle cloud region in Spain is excellent news for our country," said José Luis Escrivá, minister for digital transformation and public administration, Government of Spain. "The investment announced by Oracle provides a significant boost that will help Spanish enterprises and public sector organizations innovate with AI and continue advancing on the path of digital transformation."

Miguel López-Valverde, councilor for digitalization, Government of the Region of Madrid, said: "We are convinced of the importance of public-private collaboration in this area, and of facilitating the development of cloud infrastructure in our region. Over the next three years, the region of Madrid is expected to receive investments of more than six billion Euros from the cloud industry, which is critical to helping us build a digital economy capable of creating high-quality jobs, attracting investment, and retaining talent."

Oracle also offers an EU Sovereign Cloud, launched in 2023, with a region in Madrid for data and applications that are sensitive, regulated, or need to meet data sovereignty requirements. The company also has an existing Oracle Cloud Region in Madrid which launched in 2022, making this new addition the company's third.

"Spanish enterprises and public sector organizations are rapidly embracing the cloud to gain access to the latest digital technologies such as AI, and the upcoming public cloud in Madrid will help them address data residency requirements as well as regulations in key sectors such as financial services," said Albert Triola, country leader, Oracle Spain.

Customers of Oracle's Spain cloud regions include its hosting partner, Telefónica España, and others including Banco Sabadell, Naturgy, and MasOrange.

Earlier this year, Microsoft launched a Madrid cloud region - Spain Central - which was part of the company's commitment to invest $2.1 billion in Spain between 2024 and 2025. That cloud region was also developed in partnership with Telefnica. The cloud giant has since committed to investing $7.16bn in new data centers in Aragon, Spain.

Google also has a Spanish region in Madrid that launched in May 2022.

Amazon launched an Amazon Web Services (AWS) cloud region in Aragon in November 2022. In May 2024, AWS announced it would invest €15.7 billion ($17bn) in expanding its Aragon region, with support from the local government.

Go here to read the rest:
Oracle to invest $1bn in AI and cloud computing in Spain - DatacenterDynamics

Read More..

Oracle to Invest More Than $1B in AI and Cloud Computing in Spain – Datanami

MADRID and AUSTIN, Texas, June 21, 2024 To meet the rapidly growing demand for its AI and cloud services in Spain, Oracle has announced plans to invest more than US$1 billion to open a third cloud region in Madrid and drive AI skills development across the country.

Telefónica España to be the host partner for the region. Image credit: Oracle.

The new public cloud region will enable Oracle customers and partners across all industries in Spain, including its prominent financial services sector, to migrate mission-critical workloads from their data centers to Oracle Cloud Infrastructure (OCI) while helping them address regulations like the Digital Operational Resilience Act (DORA) and the European Outsourcing Guidelines (EBA, EIOPA, ESMA). Telefónica España will be the host partner for the planned cloud region.

The upcoming cloud region in Madrid will help public and private sector organizations migrate all types of workloads to the cloud, modernize their applications, and innovate with data, analytics, and AI. Oracle is the only hyperscaler capable of delivering AI and a full suite of 100+ cloud services across dedicated, public, and hybrid cloud environments, anywhere in the world. This includes Oracle Autonomous Database, MySQL HeatWave Database Service, Oracle Container Engine for Kubernetes, Oracle Cloud VMware Solution, OCI Generative AI service, and OCI AI Infrastructure.

"The opening of the third Oracle cloud region in Spain is excellent news for our country," said José Luis Escrivá, minister for digital transformation and public administration, Government of Spain. "The investment announced by Oracle provides a significant boost that will help Spanish enterprises and public sector organizations innovate with AI and continue advancing on the path of digital transformation."

"It is great news that major technology multinationals such as Oracle are announcing this level of investment in the Region of Madrid," said Miguel López-Valverde, councillor for digitalisation, Government of the Region of Madrid. "We are convinced of the importance of public-private collaboration in this area, and of facilitating the development of cloud infrastructure in our region. Over the next three years, the region of Madrid is expected to receive investments of more than six billion Euros from the cloud industry, which is critical to helping us build a digital economy capable of creating high-quality jobs, attracting investment, and retaining talent."

Oracle also offers a separate EU Sovereign Cloud with one region in Madrid to help public and private sector customers with data and applications that are sensitive, regulated, or of strategic regional importance, move to the cloud. The Oracle EU Sovereign Cloud in Madrid is operated by EU-based personnel and supports workloads that fall under EU guidelines and requirements for sovereignty and data privacy.

"Spanish enterprises and public sector organizations are rapidly embracing the cloud to gain access to the latest digital technologies such as AI, and the upcoming public cloud in Madrid will help them address data residency requirements as well as regulations in key sectors such as financial services," said Albert Triola, country leader, Oracle Spain. "With our plans to invest an additional $1B in Spain over the next 10 years, we are reaffirming our commitment to helping Spanish organizations of all sizes and industries, including those across Spain's small and medium-sized enterprises and the financial services industry, accelerate their adoption of cloud technologies to boost business performance and resilience."

OCI Provides Customers and Partners with a Resilient and Scalable Cloud Foundation

With the upcoming new region adding to the existing Oracle Cloud Region in Madrid, customers and partners can gain additional low-latency access to cloud services to help them derive better value from their data. Customers can also leverage high availability and disaster recovery capabilities to enhance business continuity and help address Spain's and the EU's regulations and requirements for data residency, in addition to industry-specific requirements and regulations such as the Digital Operational Resilience Act (DORA) in the financial services sector.

Financial services organizations face unique regulatory requirements because their workloads are inherently sensitive, and Oracle has developed capabilities and practices to help customers in the financial services sector address requirements for DORA and other key industry regulations. As part of continuing efforts to help financial services customers perform their due diligence, Oracle developed the Advisory: Oracle Cloud Services and the European Outsourcing Guidelines (EBA, EIOPA, ESMA) to enable customers to evaluate OCI in the context of their regulatory outsourcing guidance. In addition, Oracle Cloud Regions, including the existing and upcoming regions in Madrid as well as the EU Sovereign Cloud Region in Madrid, offer backup and disaster recovery solutions to help address customers' need for data availability and resilience.

In addition, OCI's sovereign AI capabilities provide customers with increased control over where they locate their data and computing infrastructure and how they manage it. As a result, customers can achieve AI sovereignty by gaining the assurance that their use of AI is aligned with data sovereignty frameworks.

OCI's unique cloud architecture enables Oracle to deploy dedicated cloud regions with hyperscale cloud services inside customer data centers and deploy more public cloud regions faster by starting with an optimal footprint and scaling as needed. This approach helps meet the needs of all countries and markets without compromising cloud capabilities, while also providing the consistent performance, SLAs, and global pricing for which OCI has become known.

Leading Organizations in Spain Welcome the Upcoming Public Cloud Region

"Our partnership with Oracle has been paying off for decades. We have a stronger position in the market, combining the best capabilities of both companies," said Sergio Sánchez, operations, network and IT director, Telefónica España. "On the one hand, we continue to migrate part of our mission-critical infrastructure to Oracle's cloud to be more agile in delivering innovative products and services to our customers around the world. On the other hand, this is the third cloud region for which we have been trusted as a host partner, again positioning us as a leader in the data center industry."

"We rely on Oracle to migrate our critical workloads to Oracle Cloud Infrastructure while complying with local regulatory requirements. OCI provides us with a robust and secure cloud infrastructure," said Enrique Rodríguez, director of technology services, Banco Sabadell. "The arrival of the third Oracle Cloud Region in Spain will open significant opportunities for us in terms of utilizing new technologies such as AI across different areas of our business."

"Naturgy is migrating a large part of its critical data infrastructure to OCI, which we selected for its scalability, performance, and price competitiveness," said Rafael Blesa, general manager of information systems, Naturgy. "The upcoming arrival of Oracle's third cloud region in Spain is great news as it will allow us to migrate low-latency and critical infrastructure support systems to the cloud while improving their security and resilience."

"The world's digital transformation is now unstoppable, and organizations everywhere need to adapt to remain relevant and competitive," said Ravin Dhalani, CIO, MASORANGE. "Cloud technologies play a critical role in the telecommunications industry, enabling vendors to meet demand for high-speed services while enhancing operational efficiency and productivity. We are excited about Oracle's upcoming third cloud region in Spain, which will help us advance our innovation on OCI."

About Oracle Distributed Cloud

Oracle's distributed cloud delivers the benefits of cloud with greater control and flexibility. Oracle's distributed cloud lineup includes:

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://www.oracle.com.

Source: Oracle

Original post:
Oracle to Invest More Than $1B in AI and Cloud Computing in Spain - Datanami

Read More..