
Google Invests $750 Million in a New Data Center – Database Trends and Applications

Google is unveiling a new $750 million data center in Nebraska, on the way to fulfilling its promise of spending $9.5 billion on new Google data centers and offices in 2022.

The massive new Google campus in Omaha will consist of four buildings totaling more than 1.4 million square feet as the demand for Google Cloud services and infrastructure rises. In Google Cloud's recent fourth quarter, the company reported sales growth of 45 percent year over year, to $5.5 billion.

The new Google data center in Nebraska is part of the Mountain View, Calif.-based search and cloud giant's plan to invest a total of $9.5 billion in data centers and U.S.-based offices by the end of 2022.

Google is one of the largest spenders on building new data centers across the globe, according to Synergy Research Group, investing billions each year on constructing and equipping hyperscale data centers to meet its growing cloud customer demands.

Google, Amazon Web Services and Microsoft have the broadest data center footprints in the world, with each hosting at least 60 data center locations.

"Data centers are the vital anchors to customers and local communities," said Google CEO Sundar Pichai in a blog post this month.

"Our investments in data centers will continue to power the digital tools and services that help people and businesses thrive," said Pichai.

In addition to the new data center in Nebraska, Google plans to spend billions this year on data centers in Georgia, Iowa, Oklahoma, Nevada, Tennessee, Virginia and Texas.

"In the U.S. over the past five years, we've invested more than $37 billion in our offices and data centers in 26 states, creating over 40,000 full-time jobs. That's in addition to the more than $40 billion in research and development we invested in the U.S. in 2020 and 2021," said Pichai.

Data centers enable Google Cloud services and infrastructure, including its flagship Google Cloud Platform (GCP) offering.

For more information about this news, visit https://blog.google/inside-google/company-announcements/investing-america-2022/.

Read the original:
Google Invests $750 Million in a New Data Center - Database Trends and Applications


What is Server Virtualization? Benefits and advantages discussed – TheWindowsClub

Server virtualization, have you ever heard of it? You'd be surprised how important it is and how much it is used around the world. Now, since not a lot of people have knowledge of server virtualization, we aim to explain all the important bits.

Understanding server virtualization is very important to many people, which is why we've decided to explain what it is all about.

Virtualization is the creation of a virtual variant of something physical. A virtual server has no dedicated physical hardware of its own; it shares the underlying physical hardware with the operating system that acts as the host, and works with virtual devices instead.

Server virtualization, then, is the process of creating virtual servers that act like real servers. To make this possible, virtualization software is installed on a host computer that delivers the necessary computing power and hardware.

The problem with a traditional server setup is that each machine is usually dedicated to a single application, forcing the server to run a single workload. This effectively wastes resources, and no one wants that.

Virtual servers are better because they allow companies to cut down on the cost of deploying multiple physical servers, which take up additional space and use more electrical power.

Virtual servers are made possible by a layer of software known as a hypervisor. What is this? Well, it is the layer that abstracts the underlying hardware from all the software that runs above it.

In layman's terms, a hypervisor is similar to an emulator, a piece of virtualization software, if you will. It is designed to run several virtual machines on a single physical computer, and it is responsible for allocating the physical server's resources (CPU, memory, storage and networking) to the different virtual machine instances.

There are two types of hypervisors that can be used for a virtual server: Type 1 and Type 2. Type 1 (bare-metal) hypervisors, such as VMware ESXi, Microsoft Hyper-V, KVM and Xen, run directly on the host hardware, while Type 2 (hosted) hypervisors, such as VirtualBox and VMware Workstation, run on top of a conventional operating system.
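To make the hypervisor's role more concrete, here is a minimal sketch in Python using the libvirt bindings to define and start a guest on a local KVM/QEMU host. The connection URI, domain XML and resource sizes are illustrative assumptions, not details from the article, and a real domain definition would also need disk and device elements.

```python
# Minimal sketch: defining and starting a VM through a hypervisor's management API.
# Assumes a local KVM/QEMU host reachable via libvirt; all values are illustrative.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""  # trimmed for brevity; a production definition also needs disks, NICs, etc.

conn = libvirt.open("qemu:///system")   # connect to the hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the guest definition
dom.create()                            # boot the virtual machine

# The hypervisor now schedules this guest alongside any others on the same host.
for d in conn.listAllDomains():
    print(d.name(), "active" if d.isActive() else "inactive")

conn.close()
```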

Cost: A virtual server is cheaper because the user will not have to worry about hardware maintenance. This is a huge boon for companies because their IT department won't have to invest in on-site resources or a separate space to house massive physical servers.


Server virtualization is all about separating the physical server from the guest operating system, which provides additional benefits and capabilities.

As for network virtualization, this is where network functions are abstracted from physical network hardware and delivered in software, which provides more capabilities and benefits as well.

A server is a physical computer that runs services designed to serve the needs of other computers on a network.

Read more here:
What is Server Virtualization? Benefits and advantages discussed - TheWindowsClub


Partner ecosystem strategy: Co-innovation and other variants – TechTarget

IT services companies, customers and technology providers are reinventing how they work together in a partner ecosystem strategy.

A growing number of engagements feature closer relationships, a focus on business outcomes and a willingness to create entirely new offerings. Such collaborative approaches operate under several labels: co-innovation, generative partnering, co-creation, service creation and strategic partnerships.

The terms carry different shades of meaning, but stem from the same forces. Technical complexity, time-to-market demands and IT skills shortages encourage alliances rather than DIY approaches. The impetus to rethink partnerships is particularly strong among enterprise customers that rely on technology to power their core business models. Such companies seek relationships that offer innovation rather than off-the-shelf technology offerings. Service providers, meanwhile, believe the newer collaborative approaches foster long-term customer relationships, zero in on customers' specific needs and accelerate delivery schedules.

"The speed of technology change is getting faster and faster," said Brendan Walsh, senior vice president of partner relations at 1901 Group, an MSP in Reston, Va., and a wholly owned subsidiary of Leidos. The pace of development favors partnering over building technology in-house or buying it through an acquisition, he noted.

"Partnering is going to become bigger versus creating everything on your own," Walsh said.

Partnering in the IT sector has been around for years. But arrangements typically resulted in one-off product sales or discrete projects. The emerging set of alliances falls into several categories.

Co-innovation. This term describes relationships where customers and partners, such as consulting firms and other service providers, develop new offerings that address a particular business outcome.

"We look at today's tech execs, CIOs and CTOs, and they need to deliver business outcomes," said Matt Guarini, vice president and senior research director at Forrester's CIO Practice, in a co-innovation podcast. "How do they do that when you are limited by how much tech talent and how much capability and how much money you have to spend within your organization? You can't do everything yourself."

Co-innovation extends the technical capabilities of resource-constrained IT executives. A partner's contribution, however, goes beyond technology to include the methods of invention -- ways to innovate quickly and at scale, noted Ted Schadler, vice president and principal analyst at Forrester.

"You are looking for partners to bring new ideas, sure, but you are also looking for partners to help you get it done in ways you want to get it done," he added.

Co-innovation can also occur between service providers and technology vendors or involve service providers, technology vendors and customers. The task of building an industry cloud, for example, could bring together the customer, a consulting firm and a public cloud provider.

Generative partnering. Market research firm Gartner describes this type of partnership as one in which a customer and a technology partner collaborate to build something that doesn't currently exist. Such efforts aim to achieve a particular business outcome.

Generative partnering is especially prevalent among digital businesses that lead with technology. Those companies view technology as a source of competitive advantage, but can't gain that edge with traditional, market-defined offerings available to any business, said Mark McDonald, a vice president at Gartner and the company's lead analyst on generative relationships.

In this approach, the customer has an outcome in mind, but doesn't specify the technology or combination of technologies needed to reach its goal. The enterprise and its partner work together to figure that out. Generative partnering stands out for its open-ended nature in contrast to tightly scoped projects or deployments based on predefined solutions. The method provides an "unbounded view of technology," McDonald said.

Co-innovation shares the fluidity of generative partnering in that the parties typically don't start with a preconceived notion of what the solution ought to be.

Co-creation. This type of partnering has some of the characteristics of co-innovation in that the participants build something new together. Such arrangements often involve a service provider and a technology vendor, which work together to build an offering that meets customers' needs.

Co-creation arrangements often focus on building an asset -- an app, for example. The partners typically look to commercialize their co-created intellectual property beyond the initial customer or customers, so the asset becomes a saleable product.

A generative partnership, in comparison, would not start with an expectation of commercializing a jointly developed offering, McDonald noted. The partners could tweak an offering for a broader market, but only after the initial customer's business outcome has been achieved, he added.

Service creation. Cisco devised this multistep process to co-create offerings with mid-sized to large channel partners.

Service creation begins with developing the offer and building the service. Next, the parties craft a business plan and pilot the service with customers. Subsequent steps in the process address sales readiness and revenue forecasting, culminating with the launch of the new service.

The process is modular so providers can integrate it within their own service delivery frameworks, according to a Cisco spokesperson.

Strategic partnership. This approach brings together a services provider and a technology vendor or vendors that co-develop technology and also pursue a joint marketing strategy.

Walsh said cooperatively built technologies could potentially just "sit in a lab" without a plan to address the buyer's journey. "The strategic partnership is that one-two punch of innovation and go to market, together," he noted.

Companies entering such partnerships must focus on the operational details, particularly when it comes to defining who does what in a relationship. To that end, the RACI (responsible, accountable, consulted, informed) matrix provides a mechanism for assigning roles, Walsh said.

Those emerging partner ecosystem approaches lend themselves to the boldest business and technology initiatives.

Co-innovation, for instance, "is appropriate for the most risky, uncertain ventures," said Alexei Miller, co-founder and managing director of DataArt, a software development services company with headquarters in New York. He cited unproven, experimental technology and untested business models as areas suitable for co-innovation.

The risk of journeying into the unknown means the parties involved should be prepared to accept a total loss, Miller said. He suggested partners consider creating a separate company, with independent management, to control costs and manage the rules of a co-innovation effort.

In addition to tackling new ventures, the engagement models can also help cultivate repeatable offerings.

A partnership between Chicago-based service providers Asperitas Consulting and Villa-Tech provides a case in point. The companies created a virtualized network test lab for two clients that needed a faster way to test on-premises networks. An enterprise's networking team typically struggles to keep up with its cloud counterparts, according to Derek Ashmore, application transformation principal at Asperitas. Cloud personnel can quickly make changes in code, but networking staff must deal with physical devices, he said.

The companies' virtual lab, however, creates a digital twin of a customer's network or a subset of its network for testing. The digital twin links to a customer's cloud providers and services, so customers can evaluate a hybrid environment. An IT group can quickly spin up virtualized networking devices for testing versus maintaining an array of gear in a physical lab.

Asperitas and Villa-Tech now plan to take the virtual lab, which is delivered as a managed service, to a wider audience. "We're coming up with a feature set and consumption model that makes sense for customers," Ashmore said.

The companies will share the revenue and may also include colocation providers as additional partners. Colocation companies have expressed interest in hosting the virtual testing lab, Ashmore said. Their involvement would give customers the option of an externally hosted lab in addition to having Asperitas and Villa-Tech manage an on-premises deployment.

The result of co-innovation or co-creation, however, isn't always an individual product or service. Amol Ajgaonkar, distinguished engineer at Insight, a solutions integrator based in Chandler, Ariz., said the company's collaboration with ISVs and cloud providers results in "offerings" that include a mix of products, services and processes.

Customers gain cost and speed benefits when they adopt an industry-specific offering that Insight has deployed on previous occasions. "Since we have done it before, we understand what the cost is and what the real timeline is," Ajgaonkar said. "From a customer's point of view, it gives them peace of mind."

AI has emerged as one technology in which the newer partnering methods come into play, Ajgaonkar noted. Customers may have data to exploit but don't know how to build an AI model. Or, they know how to build a model, but don't know how to scale it from pilot to production. Insight teams up with an inference engine ISV to create or scale an AI model. Working together, the companies can offer an AI pipeline and process that makes those goals easier for customers to achieve, he added.

Results stem from partner discussions focused on strategic problem solving rather than transactional sales.

"You have a part of the solution; we have a part of the solution -- so, how can we make it better?" Ajgaonkar said. "Having those conversations up front has really driven a lot more innovation."

Continue reading here:
Partner ecosystem strategy: Co-innovation and other variants - TechTarget


Has the cloud industry solved a big problem for digital pathology? – Digital Health

Pathology produces immense amounts of imaging data compared to other disciplines, but could a different approach to cloud storage prevent a potential cost crisis? Sectra's sales director, Chris Scarisbrick, explores a sustainable strategy some healthcare providers are now taking.

Digitisation in pathology is taking place at an unprecedented pace. Healthcare providers almost everywhere are now progressing their plans for the biggest transformational change that the centuries-old discipline has ever seen.

Such progress is exciting and important, with significant implications for clinical collaboration and enhanced patient care. The UK government has placed such importance on modernising diagnostics that it is currently investing hundreds of millions of pounds into digitising diagnostics within the space of a single year. Gone are the days when we can continue to expect pathologists to stand over microscopes, working in relative isolation from each other.

But as necessary as digital pathology is, an inevitable challenge to the longer-term sustainability of initiatives has continued to trouble some people: the cost of storage.

How big is the problem?

It has been a big challenge, from a data generation point of view at least. Pathology is by far the largest consumer of digital storage when compared to other diagnostic disciplines. In radiology, a typical x-ray might consume about 35 megabytes of data. A more complex examination, like a CT scan, might produce images in the region of 300 megabytes. But in pathology, digital images created from the scanned biopsy slides associated with just a single average patient examination generate as much as five gigabytes of data.

Putting the challenge into context, one of the world's most advanced digital diagnostic initiatives recently reported that it had produced half a petabyte of radiology data over a 10-year period. Having also now digitised pathology, the programme soon expects to produce around three petabytes of data every single year from scanned slides. That's 3,000 terabytes of data every year, for a relatively modest regional population, and just from digital pathology.
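To put those figures side by side, here is a small back-of-the-envelope sketch in Python. The per-examination sizes are the rough numbers quoted above, while the annual examination count is an illustrative assumption rather than a figure from the article.

```python
# Back-of-the-envelope storage estimate using the rough sizes quoted above.
# The number of examinations per year is an illustrative assumption.
SIZE_GB = {"x-ray": 0.035, "ct_scan": 0.3, "pathology_exam": 5.0}

EXAMS_PER_YEAR = 600_000  # hypothetical regional caseload

for modality, gb in SIZE_GB.items():
    total_tb = gb * EXAMS_PER_YEAR / 1_000  # terabytes per year
    print(f"{modality:15s} ~{gb:6.3f} GB/exam -> ~{total_tb:,.0f} TB/year")

# At ~5 GB per examination, roughly 600,000 examinations are enough
# to generate 3 PB (3,000 TB) of pathology images in a single year.
```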

For healthcare organisations with ready access to expansive storage options, this is less of a challenge. But for many others, who might produce several times the data in the above example, alternative solutions are being sought to ensure the cost of digital pathology storage remains sustainable.

Solving the storage problem

Despite its immense storage footprint, pathology has one very significant advantage. Once digital slides have been reported and the clinical diagnostic cycle is complete, images are relatively unlikely to be needed again.

This differs from other diagnostic arenas. In radiology, for example, access to historical imaging is clinically important, allowing healthcare professionals to quickly see what might be historically normal for a patient, or to monitor progression of areas of interest over time. A single x-ray might be looked at many times as a point of reference during a person's life, especially if it highlights potential areas of concern.

But in the vast majority of pathology cases this isn't a requirement. Any valuable information is typically extracted at the point of reporting. Once a clinical decision has been made and the patient is on a pathway, biopsies are not usually revisited for ongoing patient care.

Some recent regional digital pathology initiatives I have spoken to are now taking strategic advantage of this situation, coupled with emerging developments in cloud computing. In particular, they are opting to utilise archive storage capabilities that started to emerge a few years ago and which have now become common solutions from major cloud providers.

Retrieving data from such deep layers of archive storage can come with a cost, but overall, it means that vast quantities of data can be stored at scale whilst remaining affordable and sustainable.
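As one concrete illustration of the tiering the author describes, here is a minimal sketch using an AWS S3 lifecycle rule to push reported slide images into a deep archive storage class; the bucket name, prefix and day thresholds are illustrative assumptions, and the other major cloud providers offer equivalent archive tiers.

```python
# Minimal sketch: transition reported slide images into deep-archive storage.
# Bucket name, prefix and day thresholds are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="pathology-archive",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-reported-slides",
                "Filter": {"Prefix": "slides/"},
                "Status": "Enabled",
                "Transitions": [
                    # Keep recent images in cheaper infrequent-access storage...
                    {"Days": 90, "StorageClass": "STANDARD_IA"},
                    # ...then move them to deep archive once reporting is long complete.
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```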

Ending the storage of glass slides altogether?

If such images are so infrequently needed, you might legitimately question why they need to be stored in the first place.

Some initiatives have decided to try to manage without storing images in the longer term. They have chosen to purge imaging data from servers, instead opting to spend time retrieving the original physical slide that is kept in storage and to then re-scan that slide at the point the image is needed.

When slides are revisited, it is often for medical-legal reasons. For example, if a cancer has been missed, an inquiry may want to understand if a cancer should have been detected, and to see what was visible to the pathologist at the time of reporting.

One potential challenge with this approach is that the quality of physical slides can degrade over time, meaning that what is visible when a slide is rescanned might differ from the original image at the time the diagnostic report was made. A high-quality digital image, on the other hand, will remain the same indefinitely, providing a highly reliable record that might also provide significant value for research or for the training of AI, for example.

Novel cloud archiving options being put into practice now are likely to defeat the case for data purging strategies. Indeed, they might even raise questions as to whether physical slides should be retained. Current guidance from organisations like the Royal College of Pathologists does, for the time being, require tissues to be retained and stored. But is the storage of slides an unnecessary cost in itself if a reliable digital image is all that is needed?

Cloud is the way forward

Nearly every digital pathology initiative I have encountered recently is reliant on the cloud, for many reasons. It is a more secure option when it comes to cyber security. Cloud providers invest vast resources into their cyber resilience, whereas an on-premise solution managed by an already busy hospital IT team can only defend against so much.

Cloud also offers flexibility of scale, and the ability to pay as you go rather than investing large amounts of capital in hardware, capital that does not exist for many healthcare providers.

Cloud helps to drive forward consolidation and regional multi-organisation pathology programmes. It can help to standardise and simplify digital pathology deployments. And it can help to reduce the time to deployment, with projects no longer dependent on sourcing increasingly scarce hardware that would otherwise dictate timescales.

For that and other reasons, cloud is the way forward. But storing petabytes upon petabytes of data in traditional online environments would likely become too expensive, too quickly for most initiatives. Archives might now be the answer many have been searching for.

Read more here:
Has the cloud industry solved a big problem for digital pathology? - Digital Health


Varjo Reality Cloud: Ultra-Reality Experience The Easy Way – Ubergizmo

Varjo has announced the Varjo Reality Cloud, a secure SaaS platform that lets customers stream XR content from Autodesk VRED rendered on powerful enterprise-class hardware to consumer-level clients such as XR headsets, laptops, and mobile devices.

Varjo is well-known for its world-class VR headsets that feature eye-tracking and impressive human-resolution displays. The level of detail is so high that things like cockpit flight instruments are completely readable: an absolute necessity for the most immersive XR apps.

Previously, such a level of detail required connecting the XR headsets to a powerful local workstation to drive the rendering with a beefy (and very expensive) GPU and CPU configuration.

There are many situations where having physical possession of such an expensive computer might not be convenient. For instance, you might want to set up demo rooms in different locations, and physically deploying these computers is costly and might require on-site engineering expertise.

Varjo's Reality Cloud is a remote rendering solution that solves this elegantly and efficiently. The rendering is done in a data center (AWS in this case) and streamed at high speed and low latency to a cheap thin client. For the end user, it's no more complicated than launching a regular app.

There are other cloud solutions like this, but Varjo is the only one I've seen that takes full advantage of the company's Human Resolution rendering and displays. That's a massive advantage in the enterprise XR business and an excellent reason to pay attention to this service.

Although you need a good Internet connection, it works on typical consumer-level Internet. I tested it when Varjo rented a photo studio in San Francisco with run-of-the-mill internet connectivity. The service requires only 35 Mbps (megabits per second).

Varjo uses little bandwidth thanks to a proprietary foveated transport algorithm. "Foveated" refers to tracking the user's eye gaze in order to prioritize image quality where the user is actually looking.

Varjo supports foveated rendering during the construction of each frame. Foveated transport, however, happens in the compression and transmission of the rendered frame over the network. The company claims it can achieve a visually lossless compression ratio of 1,000:1.
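To make the general idea concrete, here is a generic sketch of foveated bit allocation, not Varjo's proprietary algorithm: tiles of the frame near the tracked gaze point keep high quality, while peripheral tiles are compressed much harder. The tile grid, quality range and falloff constant are illustrative assumptions.

```python
# Generic illustration of foveated bit allocation (not Varjo's actual algorithm).
# Tiles near the gaze point keep high quality; peripheral tiles are compressed harder.
import math

TILES_X, TILES_Y = 16, 16                # illustrative tile grid over one eye's frame
Q_MAX, Q_MIN, FALLOFF = 0.95, 0.05, 4.0  # illustrative quality range and falloff

def tile_quality(tx, ty, gaze_x, gaze_y):
    """Quality weight in [Q_MIN, Q_MAX] based on distance from the gaze point."""
    dx = (tx + 0.5) / TILES_X - gaze_x
    dy = (ty + 0.5) / TILES_Y - gaze_y
    eccentricity = math.hypot(dx, dy)    # 0 when the tile sits under the gaze point
    return Q_MIN + (Q_MAX - Q_MIN) * math.exp(-FALLOFF * eccentricity)

# Example: user looking slightly left of centre.
gaze = (0.4, 0.5)
qualities = [[tile_quality(x, y, *gaze) for x in range(TILES_X)] for y in range(TILES_Y)]
print(f"centre tile: {qualities[8][6]:.2f}, corner tile: {qualities[0][15]:.2f}")
```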

Me (middle) with Varjo's CTO Urho Konttori (right) and CBO Jussi Mäkinen (left)

Overall, the experience was great and comparable to an offline Varjo XR experience. There were no noticeable compression artifacts, and the framerate was very acceptable for applications such as architectural previews, where you don't always need 90+ FPS.

The first demo was a virtual car showroom with excellent integration of the car into the real-world studio. Varjo did a great job capturing the local light probes to render the 3D car as if it was in the physical room. I could even peek inside the vehicle, and every instrument was sharp and readable.

The meta-human demo. Photo from Varjo's website, not from my actual session.

The second demo was a meta-human (virtual character) that needed to be rendered realistically. Again, the extremely high resolution of Varjo headsets makes a world of difference when it comes to fine details such as hair or clothes texture (jeans, etc.). A lot of small things aggregate into very perceptible improvements. The meta-human is real enough that it felt weird to enter their personal space.

I haven't created any Varjo Reality Cloud servers myself, but I'm well familiar with the concept, and there's little doubt that Varjo Reality Cloud is attractive simply because it makes life much easier.

Additionally, renting virtual workstation instances for a short period instead of buying them makes it possible to rapidly scale and shut down utilization for special events or even weekly executive content reviews. That's potentially a massive increase in usage, associated with a modest cost increase. The added value is straightforward to measure.


See more here:
Varjo Reality Cloud: Ultra-Reality Experience The Easy Way - Ubergizmo


IBM outlines first major update to i OS for Power servers in three years – The Register

IBM has outlined a major update to the "i" operating system it offers for its Power servers.

i 7.5, which will debut on May 10, supersedes version 7.4, which appeared in April 2019. If that feels like a long time between updates, remember that servers packing IBM's POWER CPUs can also run IBM's own AIX or Linux, a variant of which IBM also packages thanks to its ownership of Red Hat and its Linux distros.

The i OS update, which should not be confused with Apple's iOS or Cisco's IOS, runs only on Power 10 or Power 9 hardware. IBM will happily talk to users of earlier Power servers about an upgrade; proprietary hardware and associated software are massive contributors to the company's revenue and profit.

The new release improves scalability to a maximum of 48 processors per partition in SMT8 mode. That change lets servers packing Power 10 or 9 chips run up to 384 threads per partition (48 processors × 8 threads each).

Other additions include:

IBM's announcement of the update also mentions a couple of odd-seeming changes. One allows clients to change the scope of two-digit year date ranges, so that base years can be moved from 1940 to 1970. If you've been hanging out for that feature, huzzah. Another allows the operating system's FTP client to accept a server certificate that is not signed by a trusted certificate authority but sensibly leaves that turned off by default.

Another change to the Power ecosystem revealed today is the introduction of a module that allows the use of U.2 15mm NVMe solid state disks. Power 9 and 10 boxes can now run such disks in capacities of 800GB, 1.6TB, 3.2TB, or 6.4TB.

Curiously, IBM's announcement of that feature includes verbose cautions about the lifecycle of such drives, and notes that IBM considers three full disk writes a day for five years to be the devices' expected working life.
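For a sense of what that endurance figure implies, here is a rough conversion of three full drive writes per day over five years into total data written for each listed capacity; the calculation is ours, for illustration, not an IBM-published endurance rating.

```python
# Rough endurance arithmetic for "3 full drive writes per day over 5 years".
# This conversion is our own illustration, not an IBM-published TBW rating.
DRIVE_WRITES_PER_DAY = 3
YEARS = 5
CAPACITIES_TB = [0.8, 1.6, 3.2, 6.4]

for cap in CAPACITIES_TB:
    total_written_pb = cap * DRIVE_WRITES_PER_DAY * 365 * YEARS / 1_000
    print(f"{cap:>4} TB drive -> ~{total_written_pb:4.1f} PB written over {YEARS} years")

# The 6.4 TB module, for example, works out to roughly 35 PB of total writes.
```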

Big Blue has also teased an update to the Enterprise Edition of AIX but it's mainly a change to the bundle offered for one cut of that OS, rather than the more significant update of features offered in i 7.5.

Read this article:
IBM outlines first major update to i OS for Power servers in three years - The Register


VPN services in India to store user-data for 5 years: All you need to know – The Indian Express

The Indian IT Ministry has ordered VPN companies to collect and store users' data for a period of at least five years, as per a new report published last week. CERT-In, or the Computer Emergency Response Team, has also asked data centers and crypto exchanges to collect and store user data for the same period to coordinate response activities and emergency measures related to cyber security in the country.

Failing to meet the Ministry of Electronics and IT's demands could lead to imprisonment of up to a year, as per the new governing law. Companies are also required to keep track of and maintain user records even after a user has canceled his/her subscription to the service.

Many resort to VPN services in India to maintain a layer of privacy. VPNs, or virtual private networks, allow users to stay free of website trackers that can log data like a user's location. Paid VPN services, and even some good free ones, often offer a no-logging policy. This allows users to have full privacy, as the services themselves operate on RAM-only servers, preventing any storage of user data beyond a standard temporary scale.

If the new change is implemented, companies will be forced to switch to storage servers, which will allow them to log user data and store it for the set term of at least five years. Switching to storage servers will also mean higher costs for the companies.

For the end user, this translates to less privacy and, perhaps, higher costs. With data being logged, it would be possible to track your browsing and download history. Meanwhile, paid VPN services may increase the cost of subscription plans to cover the expense of the new storage servers that they must now use.

The new laws are expected to come into effect 60 days after being issued, which means they could kick in from July 27, 2022.

CERT-In will reportedly require companies to report a total of twenty types of cyber security incidents, including unauthorised access of social media accounts and IT systems, attacks on servers, and more. Check the full list of the twenty incident types below.

1. Targeted scanning/probing of critical networks/systems.

2. Compromise of critical systems/information.

3. Unauthorised access of IT systems/data.

4. Defacement of website or intrusion into a website and unauthorised changes such as inserting malicious code, links to external websites etc.

5. Malicious code attacks such as spreading of virus/worm/Trojan/Bots/Spyware/Ransomware/Cryptominers.

6. Attack on servers such as Database, Mail and DNS and network devices such as Routers.

7. Identity Theft, spoofing and phishing attacks.

8. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks.

9. Attacks on Critical infrastructure, SCADA and operational technology systems and Wireless networks.

10. Attacks on Application such as E-Governance, E-Commerce etc.

11. Data Breach.

12. Data Leak.

13. Attacks on Internet of Things (IoT) devices and associated systems, networks, software, servers.

14. Attacks or incident affecting Digital Payment systems.

15. Attacks through Malicious mobile Apps.

16. Fake mobile Apps.

17. Unauthorised access to social media accounts.

18. Attacks or malicious/ suspicious activities affecting Cloud computing systems/servers/software/applications.

19. Attacks or malicious/suspicious activities affecting systems/ servers/ networks/ software/ applications related to Big Data, Block chain, virtual assets, virtual asset exchanges, custodian wallets, Robotics, 3D and 4D Printing, additive manufacturing, Drones.

20. Attacks or malicious/ suspicious activities affecting systems/ servers/software/ applications related to Artificial Intelligence and Machine Learning.

See the rest here:
VPN services in India to store user-data for 5 years: All you need to know - The Indian Express


HFactory and OVHcloud leverage their partnership to advance Data Science & AI skilling – PR Web

"With HFactory leveraging the capabilities of OVHcloud AI Training, we are together in a position to offer a secure, fully integrated solution for the organisation of data challenges and AI hackathons."

PARIS (PRWEB) May 03, 2022

HFactory, the EdTech startup behind the first all-in-one platform for creating hands-on learning activities in Data Science and AI, enters a product and marketing partnership with OVHcloud. As part of the alliance, HFactory will now be included in the OVHcloud marketplace, thus joining an ecosystem of sales partners offering the best Cloud solutions built on OVHcloud infrastructures.

HFactory was born out of the Hi! PARIS Research Center in Artificial Intelligence, where it was most recently used for the annual hackathon gathering students from Institut Polytechnique de Paris, HEC Paris and select partner institutions. The SaaS platform combines advanced features for organising such events - from the registration and formation of diverse groups to a built-in chat service - with seamless access to OVHcloud AI Training capabilities.

"Our deep integration with OVHcloud AI Training is central to our vision of delivering an end-to-end, perfectly integrated solution. HFactory provides the sort of instant, frictionless access to Cloud object storage and GPU computing resources that hackathon participants and AI students just love. details Ghislain Mazars, Founder & CEO at HFactory. "Users' feedback has been outstanding, and we are now moving on to build advanced new features around traceability through the OVHcloud bastion.

"We share common values around digital and technology sovereignty with the HFactory team, and have been deeply impressed by the ease of use and extensive functionalities of their solution. says Alexis Gendronneau, Head of Data Products at OVHcloud. "With HFactory leveraging the capabilities of OVHcloud AI Training, we are together in a position to offer a secure, fully integrated solution for the organization of data challenges and AI hackathons. As expectations rise on that matter, we are thus especially pleased to thereby contribute to European sovereignty in Data and AI education.

In that spirit, HFactory, already a member of the OVHcloud Startup Program, is also joining the Open Trusted Cloud initiative, which aims to unite companies willing to actively defend trusted solutions and see them evolve within the same ecosystem. As a result of the newly announced partnership, a license for HFactory can now be directly subscribed to on the OVHcloud marketplace at https://marketplace.ovhcloud.com/p/plateforme-challenges-data-ia.

About HFactory

HFactory helps educators create engaging active learning experiences in Data Science & AI. With its SaaS application to run data innovation challenges, machine learning courses and AI research projects, the company is the natural partner of higher education institutions and enterprise customers willing to step up their Data & AI training and pedagogy.

About OVHcloud

OVHcloud is a global player and Europe's leading cloud provider operating over 400,000 servers within 33 data centers across four continents. For 22 years, the Group has relied on an integrated model that provides complete control of its value chain from the design of its servers to the construction and management of its data centres, including the orchestration of its fiber-optic network. This unique approach allows it to independently cover all the uses of its 1.6 million customers in more than 130 countries. OVHcloud now offers its customers latest-generation solutions combining performance, price predictability and total sovereignty over their data to support their growth in complete freedom.


See the original post here:
HFactory and OVHcloud leverage their partnership to advance Data Science & AI skilling - PR Web


Top 10 Python Jobs Developers Should Apply for in FAANG Companies – Analytics Insight

Explore your enthusiasm for Python with these top 10 jobs at the biggest tech giants, the FAANG companies

In finance, FAANG is an acronym that refers to the stocks of five prominent American technology companies: Meta (FB) (formerly known as Facebook), Amazon (AMZN), Apple (AAPL), Netflix (NFLX), and Alphabet (GOOG) (formerly known as Google). Tech enthusiasts, and Python developers in particular, are interested in jobs at FAANG companies, which are also popular for their wonderful work environments and high-quality teams. Here are the top 10 Python jobs at FAANG companies that you can apply for in 2022.

Apple

Responsibilities:

Requirements:

Click here to apply

Apple

Responsibilities:

Requirements:

Click here to apply

Google

Responsibilities:

Requirements:

Click here to apply

Google

Responsibilities:

Requirements:

Click here to apply

Amazon

Responsibilities:

Requirements:

Click here to apply

Amazon Fuse

Responsibilities:

Requirements:

Click here to apply

Meta

Responsibilities:

Requirements:

Click here to apply

Meta

Responsibilities:

Requirements:

Click here to apply

Netflix

Responsibilities:

Requirements:

Click here to apply

Netflix

Responsibilities:

Requirements:

Click here to apply



Read the original:
Top 10 Python Jobs Developers Should Apply for in FAANG Companies - Analytics Insight


Why hybrid intelligence is the future of artificial intelligence at McKinsey – McKinsey

April 29, 2022

In 2015, McKinsey acquired QuantumBlack, a sophisticated analytics start-up of more than 30 data scientists, data engineers, and designers based in London. They had made their name in Formula 1 racing, applying data science to help teams gain every possible advantage in performance. Healthcare, transportation, energy, and other industry clients soon followed.

Many times, acquisitions melt quietly into the parent company. This isn't the case for QuantumBlack; it has been an accelerating force for our work in analytics. Today, it enters a new chapter by officially becoming the unified AI arm of McKinsey. "When we talk about helping our clients achieve sustainable and inclusive growth, AI is naturally part of the conversation. It's transforming all businesses, including the way we, as McKinsey, serve organizations," explains Alexander Sukharevsky, who along with Alex Singla leads QuantumBlack.

Our QuantumBlack community through the years

Over the past seven years, the QuantumBlack community has helped McKinsey achieve a number of feats: building and then donating Kedro, an industry-leading developer tool, to the open-source community; being named a Leader in AI; and supporting women in technology through community efforts and mentorship. The team grew quickly, to 400 in 2020, and now has more than 1,000 technical practitioners across the globe.

Along the way, QuantumBlack has been a critical part of many digital and AI transformations across industries. "We have now brought together all of our analytics colleagues under one umbrella called QuantumBlack, AI by McKinsey," says Alex Singla, "sharing a single culture and strongly defined career pathways, and using common methods and tools."

Team members range from deeply experienced data scientists and engineers to AI-fluent business consultants. The firm has also undertaken intensive training and certification in all aspects of AI and machine learning, including digital and analytics risk.

"One thing that hasn't changed: our original principle of combining the brilliance of the human mind and domain expertise with innovative technology to solve the most difficult problems," explains Alex Sukharevsky. "We call it hybrid intelligence, and it starts from day one on every project."

AI initiatives are known to be challenging; only one in ten pilots moves into production with significant results. "Adoption and scaling aren't things you add at the tail end of a project; they're where you need to start," points out Alex Singla. "We bring our technical leaders together with industry and subject-matter experts so they are part of one process, co-creating solutions and iterating models. They come to the table with the day-to-day insights of running the business that you'll never just pick up from the data alone."

Our end-to-end and transformative approach is what sets McKinsey apart. Clients are taking notice: two years ago, most of our AI work was single use cases, and now roughly half is transformational.

Another differentiating factor is the assets created by QuantumBlack Labs. "We capture the insights we have learned over the years with industries and fuse them with the best technologies to stay at the forefront," explains Matt Fitzpatrick, a senior partner who leads QuantumBlack Labs with Jeremy Palmer. These tech assets can solve up to 70 percent of the work that used to be done on a bespoke basis.

"We already know how to tie the analytic model into the client's data pipelines. Now we have industry models that are plug-and-play with security, scalability, and risk management already baked in," says Paul Beaumont, a senior principal data scientist based in Singapore. For example, the CustomerOne toolkit for telecommunications companies can reduce time to market for analytics campaigns by 75 percent.

QuantumBlack Labs will expand significantly over the next year: "We want to become a magnet for the best technologists in the world and create assets that bring together all of our knowledge, so we can take this to our clients," says Matt.

Today, our experts work in major cities around the globe, but one tradition from QuantumBlack's early days remains. "There has always been beautiful art on the walls, a deep commitment to high-quality design, and a fantastic community and culture," recalls Kat Shenton, who has been with QuantumBlack since 2017.

Video

The team recently engaged Sougwen Chung, an AI artist and researcher, to create a painting that brings to life the concept of hybrid intelligence and will form the basis for QuantumBlacks new visual identity.

As a first step in the artistic process, QuantumBlack data scientists processed data from a river to train its CausalNex machine learning model. "It was the ideal tool for this project because it intrinsically requires humans and tech to work together," explains Paul. Sougwen further developed the model, adding her own biofeedback.

The model guides the movement of two robotic arms that paint alongside her to create a beautiful swirling visual. The result, as you can see in the film on this page, is a meld of artistry and technology that expresses the cutting-edge work we do for our clients.
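For readers unfamiliar with CausalNex, QuantumBlack's open-source library for causal structure learning and Bayesian networks, here is a minimal sketch of the human-plus-machine workflow alluded to above: an algorithm proposes a causal structure from data, and a domain expert then edits it by hand. The synthetic river-like variables and the expert edits are illustrative assumptions, not details of the artwork's actual model.

```python
# Minimal CausalNex sketch: learn a causal structure from data, then let a
# domain expert adjust it by hand. Variables and edits are illustrative only.
import numpy as np
import pandas as pd
from causalnex.structure.notears import from_pandas

rng = np.random.default_rng(0)
rainfall = rng.normal(size=500)
flow = 2.0 * rainfall + rng.normal(scale=0.5, size=500)
turbidity = 1.5 * flow + rng.normal(scale=0.5, size=500)
df = pd.DataFrame({"rainfall": rainfall, "flow": flow, "turbidity": turbidity})

# Machine step: the NOTEARS algorithm proposes a weighted directed graph from the data.
sm = from_pandas(df)
sm.remove_edges_below_threshold(0.3)

# Human step: a domain expert overrides the learned structure with prior knowledge.
sm.add_edge("rainfall", "flow")              # known physical relationship
if sm.has_edge("turbidity", "rainfall"):
    sm.remove_edge("turbidity", "rainfall")  # rivers don't cause rain

print(sm.edges)
```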

See the rest here:
Why hybrid intelligence is the future of artificial intelligence at McKinsey - McKinsey
