
Provdotnet Acquires USA Assets from CloudSigma, Launches Alpha 3 Cloud Services, and Joins the HPE Partner Ready Service Provider Program – Yahoo…

Alpha 3 Cloud Brings Benefits of Confidential Computing, IaaS, and PaaS Solutions to Customers with HPE GreenLake.

PROVIDENCE, R.I., June 29, 2022 /PRNewswire/ -- Alpha 3 Cloud, a high-performance cloud service provider, today announced that it has acquired the USA Cloud Operations and Infrastructure Assets of CloudSigma, a premier cloud services provider headquartered in Zug, Switzerland. The acquired USA cloud operations are located in Northern Virginia and Silicon Valley. The company also announced it has joined the HPE Partner Ready Service Provider (PRSP) program as a Confidential Computing IaaS and PaaS cloud provider for thousands of HPE partners and customers. Alpha 3 Cloud becomes the first PRSP member to offer the Confidential Computing Cloud delivered as a service.

"Up until now customers could protect their data at rest and data in-motion, but not their data in-use.. they can now."

As organizations increasingly seek to utilize a public cloud for emerging blockchain, machine learning/AI, and multi-party compute workloads, public cloud hosting concerns continue to grow, notably over data privacy, cost predictability, vendor lock-in, regulatory compliance, network performance, and lack of control over sensitive data. Enterprises and public sector organizations are looking for cloud services that use Secure Enclave Technology and Infrastructure to assure data privacy and security by default.

Designed to address these challenges, Alpha 3 Cloud's Confidential Computing Cloud and Infrastructure solutions complement on-premises HPE hardware deployments to enable seamless hybrid cloud capabilities and secure, compliant data operations. Built exclusively on HPE Gen10+ hardware, Alpha 3 Cloud's enterprise-grade Confidential Computing Cloud is designed to support sensitive data and workloads that require superior performance, predictability, compliance, and control. Alpha 3 Cloud and HPE bring to market a suite of hybrid cloud solutions as a service, enabled by HPE GreenLake. These solutions include cloud hosting and infrastructure, cloud storage, and containers, all of which can be scaled to meet growing customer needs with the flexibility of HPE GreenLake. Alpha 3 Cloud joined the HPE PRSP program with its unique confidential computing cloud infrastructure and storage solutions that provide the highest level of security and privacy assurance available today.


"With the intense focus on data privacy and security, how an organization addresses the many challenges of security, high performance and compliance on-premises, in the cloud and at the edge has become critical to their success," said Xavier Poisson Gouyou Beauchamps, Vice President, Service Providers and Cloud28+, Hewlett Packard Enterprise. "Alpha 3 Cloud is an asset to the HPE PRSP program bringing comprehensive Confidential Computing solutions and industry leading Secure Enclave Cloud Services to the ecosystem of offerings for customers."

"Customers can now leverage the economics of the cloud for applications and workloads that contain highly sensitive data. Up until now customers could protect their data at rest and data in-motion, but not their data in-use. Nor could they share data outside their organization to accelerate the value of Machine Learning/Artificial Intelligence and Federated Learning. They can now," said Ron Sacks, CEO at Alpha 3 Cloud. "We look forward to partnering with customers to provide them with cloud-based Confidential Computing solutions that will protect their most sensitive data."

Alpha 3 Cloud's powerful cloud solutions, combined with HPE's world-class hardware and other joint solutions delivered "as a service," feature world-class security controls and predictable cost to help organizations protect and share data, lower costs, and avoid vendor lock-in.

About Alpha 3 Cloud

Alpha 3 Cloud is a division of Provdotnet, LLC that delivers hybrid cloud, IaaS, and PaaS solutions designed for secure, compliant data operations. Alpha 3 Cloud helps leading organizations comply with strict data privacy regulations, protect their most sensitive data, control costs and minimize vendor lock-in while enabling a range of emerging use cases like blockchain, machine learning/artificial intelligence and multi-party computing initiatives. Alpha 3 Cloud's confidential computing cloud and enterprise-grade infrastructure solutions feature the latest HPE Gen10+ hardware with Intel SGX secure hardware and an OPEX billing model. These solutions support hybrid, private and multi-cloud capabilities while providing superior security, performance, predictability, and control. Learn more about Alpha 3 Cloud by visiting http://www.alpha3cloud.com or, for general inquiries, contact info@alpha3cloud.com.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: http://www.hpe.com.

About CloudSigma

https://www.cloudsigma.com/about/

Contact: Ron Sacks, 401-441-5213, ron@Alpha3Cloud.com


View original content: https://www.prnewswire.com/news-releases/provdotnet-acquires-usa-assets-from-cloudsigma-launches-alpha-3-cloud-services-and-joins-the-hpe-partner-ready-service-provider-program-301576984.html

SOURCE Provdotnet, LLC


5 Best outdoor security cameras of 2022 available in the US market – Gadgets Now

Wireless security cameras are now widely available, making it easier to watch your property whether you are at home or away.

There are a few things to consider when choosing the best outdoor security camera for your needs, such as its features and its price.

Some outdoor security cameras have features that make them more than just a simple surveillance tool. These can include two-way audio, so you can hear and speak to someone on your property, or motion-activated alerts, which can notify you of activity near your camera.

Regarding price, outdoor security cameras can range from around $100 to $500. But, of course, the price will depend on factors like the quality of the camera, the number of features it has, and whether it's wireless or wired.

No matter your budget or needs, there's an outdoor security camera that's perfect for you, and you may find one discounted during the 2022 4th of July Independence Day sales, where some of the market's best deals are expected.

Top 5 Best Outdoor Security Cameras

1. Blink XT2
2. Arlo Ultra
3. Ring Stick-Up Cam Battery
4. Nest Cam Outdoor
5. Netgear Arlo Pro 2

1. Blink XT2

If you're looking for a good quality, affordable outdoor security camera, the Blink XT2 is a great option. It's weatherproof and comes with two-way audio, motion detection, and free cloud storage. It also has a long battery life, so you won't have to worry about recharging it frequently. The only downside is that it doesn't have as many features as some of the more expensive options on this list.

2. Arlo Ultra

The Arlo Ultra is a great choice for something a little more high-end. It has 4K HDR video quality, color night vision, and a 180-degree field of view. Unfortunately, it's also one of the more expensive options, but it's packed with features that make it worth the price tag.

3. Ring Stick-Up Cam Battery

The Ring Stick Up Cam Battery is a great option if you want an outdoor security camera that's easy to install. It comes with all the necessary hardware and can be up and running in just a few minutes. It also has two-way audio, motion detection, and live viewing. The battery life isn't as long as other options on this list, but it's still a good choice for an affordable, easy-to-use camera.

4. Nest Cam Outdoor

The Nest Cam Outdoor is a great choice if you want an outdoor security camera with high-quality video. It has 1080p HD video quality and a wide field of view. It also comes with two-way audio and motion detection. The only downside is that it doesn't have free cloud storage like other cameras on this list.

5. Netgear Arlo Pro 2

The Netgear Arlo Pro 2 is our top pick for the best outdoor security camera. It has 1080p HD video quality, night vision, two-way audio, and seven days of free cloud storage. It's also weatherproof and easy to install. The only downside is that it doesn't have as many features as some of the other cameras on this list.

Conclusion

No matter what your budget or needs are, there's an outdoor security camera that's perfect for you. The five cameras on this list are some of the best on the market and will help you keep an eye on your property, whether at home or away.



Fireworks are only possible because of quantum physics – Big Think

This Monday, July 4, 2022, is remarkable for a number of reasons. It happens to be aphelion: the day when the Earth is at its most distant from the Sun as it revolves through the Solar System in its elliptical orbit. It's the 246th anniversary of when the United States officially declared independence from, and war on, Britain. And it marks the annual date when the wealthiest nation in the world sets off more explosives, in the form of fireworks, than any other.

Whether you're an amateur hobbyist, a professional installer, or simply a spectator, fireworks shows are driven by the same laws of physics that govern all of nature. Individual fireworks all contain the same four component stages: launch, fuse, burst charges, and stars. Without quantum physics, not a single one of them would be possible. Here's the science behind how every component of these spectacular shows works.

The anatomy of a firework consists of a large variety of elements and stages. However, the same four basic elements appear across all types and styles of fireworks: the lift charge, the main fuse, a burst charge, and stars. Variations in the diameter of the launch tube, the length of the time-delay fuse, and the height of the fireworks are all necessary to ignite the stars with the proper conditions during the break.

The start of any firework is the launch aspect: the initial explosion that causes the lift. Ever since fireworks were first invented more than a millennium ago, the same three simple ingredients have been at the heart of them: sulfur, charcoal, and a source of potassium nitrate. Sulfur is a yellow solid that occurs naturally in volcanically active locations, while potassium nitrate is abundant in natural sources like bird droppings or bat guano.

Charcoal, on the other hand, isn't the briquettes we commonly use for grilling, but the carbon residue left over from charring (or pyrolyzing) organic matter, such as wood. Once all the water has been removed from the charcoal, all three ingredients can be mixed together with a mortar and pestle. The fine, black powder that emerges is gunpowder, already oxygen-rich from the potassium nitrate.

The three main ingredients in black powder (gunpowder) are charcoal (activated carbon, at left), sulfur (bottom right) and potassium nitrate (top right). The nitrate portion of the potassium nitrate contains its own oxygen, which means that fireworks can be successfully launched and ignited even in the absence of external oxygen; they would work just as well on the Moon as they do on Earth.

With all those ingredients mixed together, there's a lot of stored energy in the molecular bonds holding the different components together. But there's a more stable configuration that these atoms and molecules could be rearranged into. The raw ingredients, potassium nitrate, carbon, and sulfur, will combust (in the presence of high-enough temperatures) to form solids such as potassium carbonate, potassium sulfate, and potassium sulfide, along with gases such as carbon dioxide, nitrogen, and carbon monoxide.
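
For orientation, one commonly quoted simplified equation for black powder combustion (a textbook idealization, not from the article; the real reaction also yields the potassium carbonate, potassium sulfate, and carbon monoxide mentioned above) is:

$$2\,\mathrm{KNO_3} + \mathrm{S} + 3\,\mathrm{C} \rightarrow \mathrm{K_2S} + \mathrm{N_2} + 3\,\mathrm{CO_2}$$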

All it takes to reach these high temperatures is a small heat source, like a match. The reaction is a quick-burning deflagration, rather than an explosion, which is incredibly useful in a propulsion device. The rearrangement of these atoms (and the fact that the fuel contains its own oxygen) allows the nuclei and electrons to rearrange their configuration, releasing energy and sustaining the reaction. Without the quantum physics of these rearranged bonds, there would be no way to release this stored energy.

The Macy's Fourth of July fireworks celebration that takes place annually in New York City displays some of the largest and highest fireworks you can find in the United States of America and the world. This iconic celebration, along with all the associated lights and colors, is only possible because of the inescapable rules of quantum mechanics.

When that first energy release occurs, conventionally known as the lift charge, it has two important effects.

The upward acceleration needs to give your firework the right upward velocity to get it to a safe height for explosion, and the fuse needs to be timed appropriately to detonate at the peak launch height. A small fireworks show might have shells as small as 2 inches (5 cm) in diameter, which require a height of 200 feet (60 m), while the largest shows (like the one by the Statue of Liberty in New York) have shells as large as 3 feet (90 cm) in diameter, requiring altitudes exceeding 1000 feet (300 m).
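
As a back-of-the-envelope check (an idealized estimate ignoring air drag, not a figure from the article), energy conservation gives the launch speed needed to coast up to a break height $h$:

$$v = \sqrt{2gh} \approx \sqrt{2 \times 9.8\,\mathrm{m/s^2} \times 60\,\mathrm{m}} \approx 34\,\mathrm{m/s}$$

for a 200-foot (60 m) break height, and roughly 77 m/s for a 1,000-foot (300 m) one; real shells must leave the tube somewhat faster to overcome drag.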

Different diameter shells can produce different sized bursts, which require being launched to progressively higher altitudes for safety and visibility reasons. In general, larger fireworks must be launched to higher altitudes, and therefore require larger lift charges and longer fuse times to get there. The largest fireworks shells exceed even the most grandiose of the illustrations in this diagram.

The fuse, on the other hand, is the second stage, and will be lit by the ignition stage of the launch. Most fuses rely on a black powder reaction similar to the one used in a lift charge, except that the burning black-powder core is surrounded by a wrapped textile coated with either wax or lacquer. The inner core functions via the same quantum rearrangement of atoms and electron bonds as any black powder reaction, but the remaining fuse components serve a different purpose: to delay ignition.


The textile material is typically made of multiple woven and coated strings. The coatings make the device water resistant, so they can work regardless of weather. The woven strings control the rate of burning, dependent on what they're made out of, the number and diameter of each woven string, and the diameter of the powder core. Slow-burning fuses might take 30 seconds to burn a single foot, while fast-burning fuses can burn hundreds of feet in a single second.

The three main configurations of fireworks, with lift charges, fuses, burst charges and stars all visible. In all cases, a lift charge launches the firework upward from within a tube, igniting the fuse, which then burns until it ignites the burst charge, which heats and distributes the stars over a large volume of space.

The third stage, then, is the burst charge stage, which controls the size and spatial distribution of the stars inside. In general, the higher you launch your fireworks and the larger-diameter your shells are, the larger your burst charge will need to be to propel the insides of the shell outward. Typically, the interior of the firework will have a fuse connected to the burst charge, which is surrounded by the color-producing stars.

The burst charge can be as simple as another collection of black powder, such as gunpowder. But it could be far more complex, such as the much louder and more impressive flash powder, or a multi-stage explosive that sends stars in multiple directions. By utilizing different chemical compounds that offer different quantum rearrangements of their bonds, you can tune your energy release, the size of the burst, and the distribution and ignition times of the stars.

Differently shaped patterns and flight paths are highly dependent on the configuration and compositions of the stars inside the fireworks themselves. This final stage is what produces the light and color of fireworks, and is where the most important quantum physics comes into play.

But the most interesting part is that final stage: where the stars ignite. The burst is what takes the interior temperatures to sufficient levels to create the light and color that we associate with these spectacular shows. The coarse explanation is that you can take different chemical compounds, place them inside the stars, and when they reach a sufficient temperature, they emit light of different colors.

This explanation, though, glosses over the most important component: the mechanism of how these colors are emitted. When you apply enough energy to an atom or molecule, you can excite or even ionize the electrons that conventionally keep it electrically neutral. When those excited electrons then naturally cascade downward in the atom, molecule or ion, they emit photons, producing emission lines of a characteristic frequency. If they fall in the visible portion of the spectrum, the human eye is even capable of seeing them.

Whether in an atom, molecule, or ion, the transitions of electrons from a higher energy level to a lower energy level will result in the emission of radiation at a very particular wavelength. This produces the phenomenon we see as emission lines, and is responsible for the variety of colors we see in a fireworks display.

What determines which emission lines an element or compound possesses? It's simply the quantum mechanics of the spacing between the different energy levels inherent to the substance itself. For example, heated sodium emits a characteristic yellow glow, as it has two very narrow emission lines at 588 and 589 nanometers. You're likely familiar with these if you live in a city, as most of those yellow-colored street lamps you see are powered by elemental sodium.
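
To put a number on this (a standard calculation, not from the article), a photon's energy is fixed by its wavelength:

$$E = \frac{hc}{\lambda} \approx \frac{1240\,\mathrm{eV \cdot nm}}{589\,\mathrm{nm}} \approx 2.1\,\mathrm{eV}$$

so each photon in sodium's yellow doublet carries about 2.1 eV, precisely the spacing between the two energy levels involved in the transition.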

As applied to fireworks, there are a great variety of elements and compounds that can be utilized to emit a wide variety of colors. Different compounds of barium, sodium, copper and strontium can produce colors covering a huge range of the visible spectrum, and the different compounds inserted in the fireworks' stars are responsible for everything we see. In fact, the full spectrum of colors can be achieved with just a handful of conventional compounds.

The interior of this curve shows the relationship between color, wavelength, and temperature in chromaticity space. Along the edges, where the colors are most saturated, a variety of elements, ions, and compounds can be shown, with their various emission lines marked out. Note that many elements/compounds have multiple emission lines associated with them, and all of these are used in various fireworks. Because of how easy it is to create barium oxide in a combustion reaction, certain firework colors, such as forest green and ocean green, remain elusive.

What's perhaps most impressive about all of this is that the color we see with the human eye is not necessarily the same as the color emitted by the fireworks themselves. For example, if you were to analyze the light emitted by a violet laser, you'd find that the photons emerging from it were of a specific wavelength that corresponded to the violet part of the spectrum.

The quantum transitions that power a laser always result in photons of exactly the same wavelength, and our eyes see them precisely as they are, with the multiple types of cones we possess responding to that signal in such a way that our brain constructs a perception commensurate with the light possessing a violet color.

A set of Q-line laser pointers showcase the diverse colors and compact size that now are commonplace for lasers. By pumping electrons into an excited state and stimulating them with a photon of the desired wavelength, you can cause the emission of another photon of exactly the same energy and wavelength. This action is how the light for a laser is first created: by the stimulated emission of radiation.

But if you look at that same color that appears as violet not from a monochromatic source like a laser, but from your phone or computer screen, you'll find that there are no intrinsically violet photons striking your eyes at all! Instead, as Chad Orzel has noted in the past:

Our eyes construct what we perceive as color from the response of three types of cells in our retina, each sensitive to light of a particular range of colors. One is most sensitive to blue-ish light (short wavelength), one is most sensitive to red light (long wavelength), and the third to a sort of yellow-green. Based on how strongly each of these cells responds to incoming light, our brains construct our perception of color.

In other words, the key to producing the fireworks display you want isn't necessarily to create light of a specific color that corresponds to a specific wavelength, but rather to create light that excites the right molecules in our body to cause our brain to perceive a particular color.

A violet laser emits photons of a very particular, narrow wavelength, as every photon carries the same amount of energy. This curve, shown in blue, emits violet photons only. The green curve shows how a computer screen approximates the same exact violet color by using a mix of different wavelengths of light. Both appear to be the same color to human eyes, but only one truly produces photons of the same color that our eyes perceive.

Fireworks might appear to be relatively simple explosive devices. Pack a charge into the bottom of a tube to lift the fireworks to the desired height, ignite a fuse of the proper length to reach the burst charge at the peak of its trajectory, explode the burst charge to distribute the stars at a high temperature, and then watch and listen to the show as the sound, light, and color washes over you.

Yet if we look a little deeper, we can understand how quantum physics underlies every single one of these reactions. Add a little bit extra, such as propulsion or fuel inside each star, and your colored lights can spin, rise, or thrust in a random direction. Make sure you enjoy your Fourth of July safely, but also armed with the knowledge that empowers you to understand how the most spectacular human-made light show of the year truly works!


North American Solar Power Equipment Market Report and Growth Forecasts to 2027: United States is Set to Continue Market Dominance, Achieving Market…

DUBLIN--(BUSINESS WIRE)--The "North America Solar Power Equipment Market Size, Share & Industry Trends Analysis Report By Equipment, By Application, By Country and Growth Forecast, 2021-2027" report has been added to ResearchAndMarkets.com's offering.

The North America Solar Power Equipment Market is expected to witness market growth of 10.4% CAGR during the forecast period (2021-2027).
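
As a refresher (a standard definition, not specific to this report), a CAGR $r$ over $n$ years relates the start and end market values by

$$V_{\mathrm{end}} = V_{\mathrm{start}}\,(1+r)^{n}, \qquad r = \left(\frac{V_{\mathrm{end}}}{V_{\mathrm{start}}}\right)^{1/n} - 1$$

so a 10.4% CAGR sustained over the six years from 2021 to 2027 would imply roughly 81% total growth ($1.104^{6} \approx 1.81$).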

The solar power industry's technology is continuously improving, and this trend is expected to continue in the future. Quantum physics and nanotechnology advancements have the potential to improve the efficiency of solar panels. The solar panel market has enormous development potential, and solar energy is predicted to become the most dominant energy source in the future. Solar energy system installations in commercial as well as residential end-use applications are on the rise, creating lucrative growth prospects in the market. Another important factor is the emphasis on CO2 emission reduction. Solar energy can boost green energy output while preserving natural gas and coal sources that are rapidly decreasing. As a result, governments all over the world are promoting renewable energy sources and encouraging the widespread deployment of solar systems, propelling the solar panel market growth.

Solar power can also be used during a drought or a heat wave. Natural gas, coal, and nuclear power use a lot of water to cool themselves. Power generation is at risk during heat waves and severe droughts, as witnessed in recent years. Solar power systems, on the other hand, do not need water to produce electricity. Solar power also produces jobs in the renewable energy sector.

In the United States, solar electricity is more economical, accessible, and widespread than it has ever been. As per the Office of Energy Efficiency & Renewable Energy, solar power capacity in the United States has increased from 0.34 GW in 2008 to an estimated 97.2 GW today. This is enough energy to power 18 million American homes. Solar energy, in the form of solar photovoltaics and concentrating solar-thermal power, provides over 3% of all electricity in the United States. Notably, the cost of solar photovoltaic panels has decreased by about 70% since 2014. Solar energy markets are quickly growing across the country, as solar electricity is already cost-competitive with traditional energy sources in several states. Hopefully, despite government funding cuts to the EPA and DOE, this trend will continue as innovative and forward-thinking businesses embrace the changing landscape of energy generation and transition to renewables.

The US market dominated the North American Solar Power Equipment Market by Country in 2020 and is expected to remain the dominant market through 2027, thereby achieving a market value of $33,167.7 million by 2027. The Canadian market is poised to grow at a CAGR of 12.9% during 2021-2027. Additionally, the Mexican market is expected to exhibit a CAGR of 11.9% during 2021-2027.

Market Segments Covered:

By Equipment

By Application

By Country

Key Topics Covered:

Chapter 1. Market Scope & Methodology

Chapter 2. Market Overview

2.1 Introduction

2.1.1 Overview

2.1.1.1 Market Composition and Scenario

2.2 Key Factors Impacting the Market

Chapter 3. Competition Analysis - Global

3.1 Publisher Cardinal Matrix

3.2 Recent Industry Wide Strategic Developments

3.2.1 Partnerships, Collaborations and Agreements

3.2.2 Product Launches and Product Expansions

3.2.3 Acquisition and Mergers

3.2.4 Business Expansions

3.2.5 Geographical Expansions

3.3 Top Winning Strategies

3.3.1 Key Leading Strategies: Percentage Distribution (2017-2021)

3.3.2 Key Strategic Move: (Product Launches and Product Expansions: 2020, Feb - 2022, Feb) Leading Players

Chapter 4. North America Solar Power Equipment Market by Equipment

4.1 North America Solar Panels Market by Country

4.2 North America Mounting, Racking, & Tracking System Market by Country

4.3 North America Storage System Market by Country

4.4 North America Others Market by Country

Chapter 5. North America Solar Power Equipment Market by Application

5.1 North America Utility Market by Country

5.2 North America Residential Market by Country

5.3 North America Non-Residential Market by Country

Chapter 6. North America Solar Power Equipment Market by Country

6.1 US Solar Power Equipment Market

6.2 Canada Solar Power Equipment Market

6.3 Mexico Solar Power Equipment Market

6.4 Rest of North America Solar Power Equipment Market

Chapter 7. Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/opq6aw

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.


AdS/CFT: 25 Years of the ‘Bridge’ to an Unknowable Universe – The Wire Science

An artist's impression of a black hole and its accretion disk. Illustration: XMM-Newton, ESA, NASA

Twenty-five years ago, in 1997, an Argentine physicist named Juan Martin Maldacena published what would become the most highly cited physics paper in history (more than 20,000 citations to date). In the paper, Maldacena described a bridge between two theories that describe how our world works, but separately, without meeting each other. These are the field theories that describe the behaviour of energy fields (like the electromagnetic field) and subatomic particles, and the theory of general relativity, which deals with gravity and the universe at the largest scales.

Field theories have many types and properties. One of them is a conformal field theory: a field theory that doesn't change when it undergoes a conformal transformation, i.e. one which preserves angles but not lengths pertaining to the field. As such, conformal field theories are said to be mathematically well-behaved.

In relativity, space and time are unified into the spacetime continuum. This continuum can exist in many possible spaces. Some of these spaces have the same curvature everywhere, and come in three forms (roughly, universes of certain shapes): de Sitter space, Minkowski space and anti-de Sitter space. de Sitter space has positive curvature everywhere like a sphere (but is empty of any matter). Minkowski space has zero curvature everywhere i.e. a flat surface. Anti-de Sitter space has negative curvature everywhere like a hyperbola.

Because these shapes are related to the way our universe looks and works, cosmologists have their own way to understand them. If the spacetime continuum exists in de Sitter space, the universe is said to have a positive cosmological constant. Similarly, Minkowski space implies a zero cosmological constant and anti-de Sitter space a negative cosmological constant. Studies by various space telescopes have found that our universe has a positive cosmological constant, meaning it is approximately a de Sitter space (but not exactly since our universe does have matter).

In 1997, Maldacena found evidence to suggest that a description of quantum gravity in anti-de Sitter space in N dimensions is the same as a conformal field theory in N − 1 dimensions. This AdS/CFT correspondence was an unexpected but monumental discovery that connected two kinds of theories that had thus far refused to cooperate.

The Wire Science had a chance to interview Maldacena about his past and current work in 2018, in which he provided more insights on AdS/CFT as well.

In his paper, Maldacena showed that in a very specific case, quantum gravity in anti-de Sitter space in five dimensions was the same as a specific conformal field theory in four dimensions. He conjectured that this equivalence would hold not just for the limiting case but for the full theories. So the correspondence is also called the AdS/CFT conjecture. Physicists have not proven this to be the case so far, but circumstantial evidence from many results indicates that the conjecture is true.

Nonetheless, the finding was hailed as a major mathematical victory for string theory as well. This theory is a leading contender for one that can unify quantum mechanics and general relativity. However, we have found no experimental evidence of string theory's many claims.

Nonetheless, thanks to the correspondence, (mathematical) physicists have found that some problems that are hard on the AdS side are much easier to crack on the CFT side, and vice versa; all they had to do was cross Maldacena's bridge! This was another sign that the AdS/CFT correspondence wasn't just a mathematical trick but could be a legitimate description of reality.

So how could it be real?

The holographic principle

In 1997, Maldacena proved that a string theory in five dimensions was the same as a conformal field theory in four dimensions. However, gravity in our universe exists in four dimensions not five. So the correspondence came close to providing a unified description of gravity and quantum mechanics, but not close enough. Nonetheless, it gave rise to the possibility that an entity that exists in some number of dimensions could be described by another entity that exists in one fewer number of dimensions.

In fact, the AdS/CFT correspondence didn't give rise to this possibility so much as realise it mathematically. Awareness of the possibility had existed for many years before then, as the holographic principle. The Dutch physicist Gerardus 't Hooft first proposed it, and the American physicist Leonard Susskind brought it firmly into the realm of string theory in the 1990s. One way to state the holographic principle, in the words of physicist Matthew Headrick, is thus:

The universe around us, which we are used to thinking of as being three dimensional, is actually at a more fundamental level two-dimensional and that everything we see that's going on around us in three dimensions is actually happening in a two-dimensional space.

This two-dimensional space is the surface of the universe, located at an infinite distance from us, where information is encoded that describes everything happening within the universe. It's a mind-boggling idea. Information here refers to physical information, such as, to use one of Headrick's examples, the positions and velocities of physical objects. In beholding this information from the infinitely faraway surface, we apparently behold a three-dimensional reality.

It bears repeating that this is a mind-boggling idea. We have no proof so far that the holographic principle is a real description of our universe; we only know that it could describe our reality, thanks to the AdS/CFT correspondence. This said, physicists have used the holographic principle to study and understand black holes.

In 1915, Albert Einstein's general theory of relativity provided a set of complicated equations to understand how mass, the spacetime continuum and the gravitational force are related. Within a few months, physicists Karl Schwarzschild and Johannes Droste, followed in subsequent years by Georges Lemaître, Subrahmanyan Chandrasekhar, Robert Oppenheimer and David Finkelstein, among others, began to realise that one of the equations' exact (i.e. non-approximate) solutions indicated the existence of a point mass around which space was wrapped completely, preventing even light from escaping from inside this space to outside. This was the black hole.

Because black holes were exact solutions, physicists assumed that they didn't have any entropy, i.e. that their insides didn't have any disorder. If there had been such disorder, it would have appeared in Einstein's equations. It didn't, so QED. But in the early 1970s, the Israeli-American physicist Jacob Bekenstein noticed a problem: if a system with entropy, like a container of hot gas, was thrown into the black hole, and the black hole doesn't have entropy, where does the entropy go? It had to go somewhere; otherwise, the black hole would violate the second law of thermodynamics, which says that the entropy of an isolated system, like our universe, can't decrease.

Bekenstein postulated that black holes must also have entropy, and that the amount of entropy is proportional to the black hole's surface area, i.e. the area of the event horizon. Bekenstein also worked out that there is a limit to the amount of entropy a given volume of space can contain, as well as that all black holes could be described by just three observable attributes: their mass, electric charge and angular momentum. So if a black hole's entropy increases because it has swallowed some hot gas, this change ought to manifest as a change in one, some or all of these three attributes.
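
The article doesn't spell out the formula, but the standard Bekenstein-Hawking result makes this area scaling explicit: for a black hole whose event horizon has area $A$,

$$S_{\mathrm{BH}} = \frac{k_B\,c^3\,A}{4\,G\,\hbar}$$

an entropy that depends only on the horizon's area, not on the volume it encloses.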

Taken together: when some hot gas is tossed into a black hole, the gas would fall into the event horizon, but the information about its entropy might appear to be encoded on the black hole's surface, from the point of view of an observer located outside and away from the event horizon. Note here that the black hole, a sphere, is a three-dimensional object, whereas its surface is a curved sheet and therefore two-dimensional. That is, all the information required to describe a 3D black hole could in fact be encoded on its 2D surface.

Doesn't this remind you of the AdS/CFT correspondence? For example, consider a five-dimensional anti-de Sitter space inside which there is a black hole. We can use the correspondence to show that the entropy of the theory that describes the boundary of this space matches exactly with the entropy of the black hole itself. This would realise the conjecture of 't Hooft and others, except here the information is encoded not on the event horizon but on the boundary of the five-dimensional space itself.

This is just one example of the wider context that the AdS/CFT correspondence inhabits. For more examples and other insights, do read Maldacena's interview with The Wire Science.

The author is grateful to Nirmalya Kajuri for discussion and feedback on this article.


Information Can Escape a Black Hole Both On the Outside and Possibly to Another Universe (Stephen Hawkings – The Daily Galaxy –Great Discoveries…

Posted on Jun 26, 2022 in Black Holes, Physics, Science

It has been said that Newton gave us answers; Stephen Hawking gave us questions. A trio of physicists appear one step closer to resolving the black-hole information paradox, one of the most intriguing physics mysteries of our time.

Spacetime seems to fall apart at a black hole, implying that spacetime is not the root level of reality but emerges from something deeper, observes George Musser, author of Spooky Action at a Distance, writing for Quanta about the famous paradox that Stephen Hawking first described five decades ago. Hawking's seminal theory, a fiery marriage of relativity and quantum physics, says that when a black hole forms and then subsequently evaporates away completely by emitting radiation, the information that went into the black hole cannot come back out and is inevitably lost, violating the laws of physics, which insist unequivocally that information can never get totally lost.

Enter Einstein: The Dissolution of Spacetime

In 2003, Hawking found a way that information might escape during the hole's evaporation, but he did not prove that the information escapes, so the paradox continued, until now. They are not the eternal prisons they were once thought of, Hawking said. Things can get out of a black hole both on the outside and possibly to another universe.

Although Einstein conceived of gravity as the curved geometry of space-time, his theory also entails the dissolution of space-time, which is ultimately why information can escape its gravitational prison, adds Musser, summarizing a landmark series of calculations by three physicists showing that information does escape a black hole through the workings of ordinary gravity, with a single layer of quantum effects, something that seems impossible by definition, yet follows from new gravitational calculations that Einstein's theory permits but that Hawking did not include.

The Most Exciting Thing Since Hawking

That is the most exciting thing that has happened in this subject, I think, since Hawking, said one of the co-authors, Donald Marolf of the University of California, Santa Barbara.

It's from that mysterious area where relativity and quantum mechanics don't quite mesh that the question of what happens to information in a black hole emerges, says researcher Henry Maxfield at the University of California, Santa Barbara, who works on calculating the quantum information content of a black hole and its radiation.

The Big Question

Maxfield was co-author of a 2019 paper, written with physicists Ahmed Almheiri at the Institute for Advanced Study, MIT's Netta Engelhardt, and UC Santa Barbara's Marolf, that takes us one step closer, says Maxfield, to resolving the black hole information paradox. The hope was, if we could answer this question, if we could see the information coming out, in order to do that we would have had to learn about the microscopic theory, said Geoff Penington of the University of California, Berkeley, alluding to a fully quantum theory of gravity.

Black Holes Gently Glow and Radiate

It goes back to this problem in the 1970s that Stephen Hawking discovered, Maxfield explained. Black holes, those extremely dense, high-gravity voids in space-time, aren't completely black. They gently glow and radiate, he said. And as they do that, the black holes evaporate. But one element of Hawking's calculations, Maxfield continued, is that this state of Hawking radiation destroys information about the original quantum state of the material drawn into the hole.
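
For reference (a standard result not quoted in the article), the temperature of that glow is the Hawking temperature,

$$T_H = \frac{\hbar\,c^3}{8\pi\,G\,M\,k_B} \approx 6 \times 10^{-8}\,\mathrm{K} \times \frac{M_\odot}{M}$$

so a solar-mass black hole glows at only about 60 nanokelvin, which is why the evaporation of astrophysical black holes is extraordinarily slow.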

This is very different from what quantum mechanics does, Maxfield said. In principle, the laws of physics are completely reversible. In other words, information about the materials original quantum state should exist in some form. So there was this conflict that quantum mechanics behaves one way and gravity seems to behave another way.

Tip of the Iceberg

We were interested in something closely related, which was trying to identify where the information is located, Maxfield said of the indirect path to their result: a modification of Hawking's calculation that broadens it to include a method for quantifying the information.

So there's that early radiation when the black hole is still young that doesn't really carry any information, Maxfield said of their calculation of how much information is stored in a black hole as it evaporates, and the finding that the amount of information in the hole indeed decreases over time. But once the black hole has shrunk away to half its size, which takes a very long time, the quantum information starts coming out. This is what you'd expect from quantum mechanics.
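
A toy calculation can make this turnover, often called the Page curve, concrete. The sketch below is a schematic illustration under simplifying assumptions, not the authors' gravitational calculation: it just takes the minimum of the naively growing radiation entropy and the shrinking black-hole entropy.

```python
# Toy "Page curve": early radiation carries little information (entropy
# rises), but once the hole is half evaporated the entanglement entropy
# turns over and falls back to zero. Units and curves are schematic.
import numpy as np

S_BH0 = 100.0                      # initial black-hole entropy (arbitrary units)
t = np.linspace(0.0, 1.0, 11)      # fraction of the evaporation completed
S_rad = S_BH0 * t                  # naive Hawking growth of radiation entropy
S_bh = S_BH0 * (1.0 - t)           # remaining black-hole entropy
S_page = np.minimum(S_rad, S_bh)   # information starts to emerge at the crossover

for ti, si in zip(t, S_page):
    print(f"t = {ti:.1f}  S = {si:5.1f}")
```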

The calculation that Maxfield, Engelhardt, Almheiri and Geoff Penington (who was concurrently doing very similar work at Stanford) made, reports UC Santa Barbara, is but the tip of the iceberg.

The Biggest Clue Weve Had

It doesn't mean that we've completely understood everything, Maxfield said. But it is the biggest clue we've had for a really long time as to how this tension gets resolved.

They found that the information is coming out, even if they didn't have all the reasons why it comes out, Marolf commented. But the idea is that this is a first step. If you have a way of performing that calculation, you should be able to open that calculation up and figure out what the physical mechanism is. This calculation is something we expect is going to give us insight into quantum processes in black holes and how information comes out of them.

I'm very resistant to people who come in and say, I've got a solution in just quantum mechanics and gravity, said a skeptical Nick Warner of the University of Southern California. Because it's taken us around in circles before.

Max Goldberg via UC Santa Barbara and Quanta


Importance of Machine Learning Algorithms in Predicting Early Revision Surgery – Physician’s Weekly

When compared to primary THA, revision total hip arthroplasty (THA) is associated with greater morbidity, mortality, and healthcare expenditures due to a technically more difficult surgical process. As a result, a better knowledge of the risk factors for early revision THA is required to develop techniques that reduce the probability of patients needing early revision. For a study, researchers sought to create and verify new machine learning (ML) models for predicting early revision after primary THA.

A total of 7,397 patients who underwent primary THA were assessed, of whom 566 patients (6.6%) had confirmed early revision THA (<2 years after index THA). Electronic patient records were carefully evaluated for medical demographics, implant characteristics, and surgical factors related to early revision THA. Six machine learning methods were constructed to predict early revision THA, and their performance was evaluated using discrimination, calibration, and decision curve analysis.

The Charlson Comorbidity Index, a body mass index >35 kg/m², and depression were the best predictors of early revision after initial THA. In addition, all six ML models performed well in discrimination (area under the curve >0.80), calibration, and decision curve analysis. The study used ML models to predict early revision surgery for individuals with primary THA. The findings revealed that all six candidate models perform well in discrimination, calibration, and decision curve analysis, underlining the potential of these models to aid patient-specific preoperative estimation of the risk of early revision THA in clinical practice.
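
To make the workflow concrete, here is a minimal sketch of the kind of pipeline the study describes: train several candidate classifiers on a heavily imbalanced cohort and compare them by AUC. It uses synthetic stand-in data and generic scikit-learn models, not the authors' dataset or exact algorithms.

```python
# Hedged sketch: predict early revision THA with several candidate models
# and compare their discrimination (AUC). Synthetic data mimics the study's
# cohort size (~7,400 patients, ~6.6% early revisions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=7397, n_features=20, weights=[0.934],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]   # predicted revision risk
    print(f"{name}: AUC = {roc_auc_score(y_test, prob):.3f}")
```

A full replication would also assess calibration (e.g. with sklearn.calibration.calibration_curve) and decision curve analysis, as the study did.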

Reference: journals.lww.com/jaaos/Abstract/2022/06010/The_Utility_of_Machine_Learning_Algorithms_for_the.4.aspx


How AI and Machine Learning Are Ready To Change the Game for Data Center Operations – Data Center Knowledge

Today's data centers face a challenge that, at first, looks almost impossible to resolve. While operations have never been busier, teams are pressured to reduce their facilities' energy consumption as part of corporate carbon reduction goals. And, as if that wasn't difficult enough, dramatically rising electricity prices are placing real stress on data center budgets.

With data centers focused on supporting the essential technology services that people increasingly demand to support their personal and professional lives, it's not surprising that data center operations have never been busier. Driven by trends that show no sign of slowing down, we're seeing massively increased data usage associated with video, storage, compute demands, smart IoT integrations, as well as 5G connectivity rollouts. However, despite these escalating workloads, the unfortunate reality is that many of today's critical facilities simply aren't running efficiently enough.

Given that the average data center operates for over 20 years, this shouldn't really be a surprise. Efficiency is invariably tied to a facility's original design, based on expected IT loads that have long since been overtaken. At the same time, change is a constant factor, with platforms, equipment design, topologies, power density requirements and cooling demands all evolving with the continued drive for new applications. The result is a global data center infrastructure that regularly finds it hard to match current and planned IT loads to its critical infrastructure. This will only be exacerbated as data center demands increase, with analyst projections suggesting that workload volumes are set to continue growing at around 20% a year between now and 2025.

Traditional data center approaches are struggling to meet these escalating requirements. Prioritizing availability is largely achieved at efficiency's expense, with too much reliance still placed on operator experience and trusting that assumptions are correct. Unfortunately, the evidence suggests that this model is no longer realistic. EkkoSense research reveals that an average of 15% of IT racks in data centers operate outside of ASHRAE's temperature and humidity guidelines, and that customers strand up to 60% of their cooling capacity due to inefficiencies. And that's a problem, with Uptime Institute estimating that the global value attributed to inefficient cooling and airflow management is around $18bn. That's equivalent to some 150bn wasted kilowatt hours.

With 35% of the energy used in a data center utilized to support the cooling infrastructure, it's clear that traditional performance optimization approaches are missing a huge opportunity to unlock efficiency improvements. EkkoSense data indicates that a third of unplanned data center outages are triggered by thermal issues. Finding a different way to manage this problem can provide operations teams with a great way to secure both availability and efficiency improvements.

Limitations of traditional monitoring

Unfortunately, only around 5% of M&E teams currently monitor and report their data center equipment temperatures on a rack-by-rack basis. Additionally, DCIM and traditional monitoring solutions can provide trend data and be set up to provide alerts when breaches occur, but that is where they stop. They lack the analytics to provide deeper insight into the cause of the issues and how both to resolve them and avoid them in the future.

Operations teams recognize that this kind of traditional monitoring has its limitations, but they also know that they simply don't have the resources and time to take the data they have and convert it from background noise into meaningful actions. The good news is that technology solutions are now available to help data centers tackle this problem.

It's time for data centers to go granular with machine learning and AI

The application of machine learning and AI creates a new paradigm in terms of how to approach data center operations. Instead of being swamped by too much performance data, operations teams can now take advantage of machine learning to gather data at a much more granular level, meaning they can start to assess how their data center is performing in real time. The key is to make this accessible, and using smart 3D visualizations is a great way of making it easy for data center teams to interpret performance data at a deeper level: for example, by showing changes and highlighting anomalies.

The next stage is to apply machine learning and AI analytics to provide actionable insights. By augmenting measured datasets with machine learning algorithms, data center teams can immediately benefit from easy-to-understand insights to help support their real-time optimization decisions. The combination of real-time granular data collection every five minutes and AI/machine learning analytics allows operations not just to see what is happening across their critical facilities but also find out why and what exactly they should do about it.

AI and machine learning powered analytics can also uncover the insights required to recommend actionable changes across key areas such as optimum set points, floor grille layouts, cooling unit operation and fan speed adjustments. Thermal analysis will also indicate optimum rack locations. And because AI enables real-time visualizations, data center teams can quickly gain immediate performance feedback on any actioned changes.
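
As a concrete illustration of the granular-monitoring idea (an illustrative sketch under assumed thresholds and data layout, not EkkoSense's product or API), the snippet below takes rack-inlet temperatures sampled every five minutes, flags racks outside the ASHRAE recommended envelope, and applies a simple statistical anomaly rule as a stand-in for the ML analytics described above.

```python
# Flag racks whose inlet temperature is outside the ASHRAE recommended
# envelope (18-27 C, an assumed threshold) or anomalous vs. recent history.
import pandas as pd

ASHRAE_LOW_C, ASHRAE_HIGH_C = 18.0, 27.0

def check_racks(samples: pd.DataFrame) -> pd.DataFrame:
    """samples: one column per rack, one row per 5-minute reading."""
    latest = samples.iloc[-1]
    mean = samples.mean()
    std = samples.std().replace(0, 1e-9)       # avoid divide-by-zero
    z = (latest - mean) / std                  # z-score vs. recent history
    return pd.DataFrame({
        "latest_c": latest,
        "outside_ashrae": (latest < ASHRAE_LOW_C) | (latest > ASHRAE_HIGH_C),
        "anomalous": z.abs() > 3.0,            # simple stand-in for ML scoring
    })

# Example: 24 hours of 5-minute readings for three (hypothetical) racks.
idx = pd.date_range("2022-06-29", periods=288, freq="5min")
data = pd.DataFrame({"rack_A1": 22.0, "rack_B2": 24.5, "rack_C3": 28.1}, index=idx)
print(check_racks(data))
```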

Helping data center operations to make an immediate difference

Given pressure to reduce carbon consumption and minimize the impact of electricity price increases, data center teams need new levels of optimization support if they are to deliver against their reliability and efficiency goals.

Taking advantage of the latest machine learning and AI-powered data center optimization approaches can certainly make a difference by cutting cooling energy and usage, with results achievable within weeks. By bringing granular data to the forefront of their optimization plans, data center teams have already been able not only to remove thermal and power risk, but also to cut cooling energy consumption costs and carbon emissions by an average of 30%. It's hard to ignore the impact these kinds of savings can have, particularly during a period of rapid electricity price increases. The days of trading off risk and availability for optimization are a thing of the past, with the power of AI and machine learning at the forefront of operating your data center.


Want to know more? Register for Wednesday's AFCOM webinar on the subject here.

About the author

Tracy Collins is Vice President of EkkoSense Americas, the company that enables true M&E capacity planning for power, cooling and space. He was previously CEO at Simple Helix, a leading Alabama-based Tier III data center operator.

Tracy has over 25 years of in-depth data center industry experience, having previously served as Vice President of IT Solutions for Vertiv and, before that, with Emerson Network Power. In his role, Tracy is committed to challenging traditional approaches to data center management, particularly in terms of solving the optimization challenge of balancing increased data center workloads while also delivering against corporate energy saving targets.


The Global Machine learning as a Service Market size is expected to reach $36.2 billion by 2028, rising at a market growth of 31.6% CAGR during the…

New York, June 29, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Machine learning as a Service Market Size, Share & Industry Trends Analysis Report By End User, By Offering, By Organization Size, By Application, By Regional Outlook and Forecast, 2022-2028" - https://www.reportlinker.com/p06289268/?utm_source=GNW Machine learning as a service (MLaaS) refers to a group of cloud computing services that provide machine learning technologies, designed to include artificial intelligence (AI) and cognitive computing functionalities.

Increased demand for cloud computing, as well as growth connected with artificial intelligence and cognitive computing, are major machine learning as a service industry growth drivers. Growth in demand for cloud-based solutions such as cloud computing, a rise in the adoption of analytical solutions, growth of the artificial intelligence and cognitive computing market, expanding application areas, and a scarcity of trained professionals are all influencing the machine learning as a service market.

As more businesses migrate their data from on-premise storage to cloud storage, the necessity for efficient data organization grows. Since MLaaS platforms are essentially cloud providers, they enable solutions to appropriately manage data for machine learning experiments and data pipelines, making it easier for data engineers to access and process the data.

For organizations, MLaaS providers offer capabilities like data visualization and predictive analytics. They also provide APIs for sentiment analysis, facial recognition, creditworthiness evaluations, corporate intelligence, and healthcare, among other things. The actual computations of these processes are abstracted by MLaaS providers, so data scientists don't have to worry about them. For machine learning experimentation and model construction, some MLaaS providers even feature a drag-and-drop interface.
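
In practice, consuming such an MLaaS capability often looks like a single authenticated HTTP call. The sketch below is purely illustrative: the endpoint, key, and response schema are invented for the example, since each real provider defines its own API.

```python
# Hypothetical MLaaS sentiment-analysis call; URL and schema are assumptions.
import requests

API_URL = "https://api.example-mlaas.com/v1/sentiment"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def score_sentiment(text: str) -> dict:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"label": "positive", "score": 0.97}

print(score_sentiment("The new cloud service exceeded our expectations."))
```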

COVID-19 Impact

The COVID-19 pandemic has had a substantial impact on numerous countries' health, economic, and social systems. It has resulted in millions of fatalities across the globe and has left economic and financial systems in tatters. Knowledge about individual-level susceptibility variables can help people better understand and cope with their psychological, emotional, and social well-being.

Artificial intelligence technology is likely to aid in the fight against the COVID-19 pandemic. COVID-19 cases are being tracked and traced in several countries utilizing population monitoring approaches enabled by machine learning and artificial intelligence. Researchers in South Korea, for example, track coronavirus cases using surveillance camera footage and geo-location data.

Market Growth Factors

Increased Demand for Cloud Computing and a Boom in Big Data

The industry is growing due to the increased acceptance of cloud computing technologies and the use of social media platforms. Cloud computing is now widely used by all companies that supply enterprise storage solutions. Data analysis is performed online using cloud storage, giving the advantage of evaluating real-time data collected on the cloud. Cloud computing enables data analysis from any location and at any time. Moreover, using the cloud to deploy machine learning allows businesses to get useful data, such as consumer behavior and purchasing trends, virtually from linked data warehouses, lowering infrastructure and storage costs. As a result, the machine learning as a service business is growing as cloud computing technology becomes more widely adopted.

Use of Machine Learning to Fuel Artificial Intelligence Systems

Machine learning fuels reasoning, learning, and self-correction in artificial intelligence (AI) systems. Expert systems, speech recognition, and machine vision are examples of AI applications. The rise in the popularity of AI stems from enabling developments such as big data infrastructure and cloud computing. Top companies across industries, including Google, Microsoft, and Amazon (software and IT); Bloomberg and American Express (financial services); and Tesla and Ford (automotive), have identified AI and cognitive computing as key strategic drivers and have begun investing in machine learning to develop more advanced systems. These firms have also provided financial backing to young start-ups producing new, creative technology.

Market Restraining Factors

Technical Restraints and Inaccuracies of ML

The ML platform provides a plethora of advantages that aid market expansion. However, several factors are projected to impede it. The presence of inaccuracies in these algorithms, which are sometimes immature and underdeveloped, is one of the market's primary constraints. In big data and machine learning for manufacturing, precision is crucial: a minor flaw in an algorithm could result in incorrect items being produced, sharply increasing, rather than decreasing, operational costs for the owner of the manufacturing unit.

End User Outlook

Based on End User, the market is segmented into IT & Telecom, BFSI, Manufacturing, Retail, Healthcare, Energy & Utilities, Public Sector, Aerospace & Defense, and Others. The retail segment garnered a substantial revenue share in the machine learning as a service market in 2021. E-commerce has proven to be a key force in the retail trade industry. Retailers use machine intelligence to collect data, evaluate it, and use it to provide customers with individualized shopping experiences. These are some of the factors driving the retail industry's demand for this technology.

Offering Outlook

Based on Offering, the market is segmented into Services Only and Solution (Software Tools). The services only segment acquired the largest revenue share in the machine learning as a service market in 2021. The market for machine learning services is expected to grow due to widening application areas and growth in end-use industries in developing economies. To expand the usage of machine learning services, industry participants are focusing on implementing technologically advanced solutions. The use of machine learning services in the healthcare business for cancer detection, as well as for analyzing ECG and MRI scans, is expanding the market. The benefits of machine learning services, such as cost reduction, demand forecasting, real-time data analysis, and increased cloud use, are projected to open up considerable prospects for the market.

Organization Size Outlook

Based on Organization Size, the market is segmented into Large Enterprises and Small & Medium Enterprises. The small and medium enterprises segment procured a substantial revenue share in the machine learning as a service market in 2021. This is because implementing machine learning lets SMEs optimize their processes on a tight budget. AI and machine learning are projected to be the major technologies that allow SMEs to save money on ICT and gain access to digital resources in the near future.

Application Outlook

Based on Application, the market is segmented into Marketing & Advertising, Fraud Detection & Risk Management, Computer Vision, Security & Surveillance, Predictive Analytics, Natural Language Processing, Augmented & Virtual Reality, and Others. The marketing and advertising segment acquired the largest revenue share in the machine learning as a service market in 2021. The goal of a recommendation system is to present customers with products they are currently interested in. A typical marketing workflow runs as follows: marketers develop hypotheses, then test, evaluate, and analyze them. Because the underlying information changes every second, this work is time-consuming and labor-intensive, and the findings are occasionally wrong. Machine learning lets marketers make quick decisions based on large amounts of data, and lets businesses respond more quickly to changes in the quality of traffic generated by advertising campaigns. As a result, the business can spend more time developing hypotheses rather than performing routine tasks.
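
To make the recommendation idea concrete, here is a toy, self-contained sketch of item-to-item scoring from a user-item interaction matrix. The data and the cosine-similarity approach are illustrative choices, not a method attributed to any vendor in this report.

```python
import numpy as np

# Toy user-item interaction matrix (made-up data): rows = users, columns = items,
# 1 = purchased/clicked, 0 = no interaction.
interactions = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
])

# Cosine similarity between item columns.
norms = np.linalg.norm(interactions, axis=0)
norms[norms == 0] = 1.0  # guard against division by zero for untouched items
item_sim = (interactions.T @ interactions) / np.outer(norms, norms)

def recommend(user_idx: int, top_k: int = 2) -> list:
    """Rank items the user has not yet interacted with by similarity score."""
    seen = interactions[user_idx]
    scores = item_sim @ seen      # aggregate similarity to the user's history
    scores[seen > 0] = -np.inf    # exclude items already seen
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend(0))  # item indices most similar to user 0's history
```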

Regional Outlook

Based on Region, the market is segmented into North America, Europe, Asia Pacific, and LAMEA (Latin America, Middle East & Africa). The Asia Pacific region garnered a significant revenue share in the machine learning as a service market in 2021. Leading companies are concentrating their efforts in Asia Pacific to expand their operations, as the region is likely to see rapid growth in the deployment of security services, particularly in the banking, financial services, and insurance (BFSI) sector. To provide better customer service, industry participants recognize the importance of offering multi-modal platforms. Rising adoption of AI applications is likely to be the primary trend driving market growth in this region. Furthermore, government organizations have taken important steps to accelerate the adoption of machine learning and related technologies in the region.

The major strategies followed by market participants are product launches and partnerships. Based on the analysis presented in the Cardinal matrix, Microsoft Corporation and Google LLC are the forerunners in the machine learning as a service market. Companies such as Amazon Web Services, Inc., SAS Institute, Inc., and IBM Corporation are some of the key innovators in the market.

The market research report covers the analysis of key stakeholders of the market. Key companies profiled in the report include Hewlett-Packard Enterprise Company, Oracle Corporation, Google LLC, Amazon Web Services, Inc. (Amazon.com, Inc.), IBM Corporation, Microsoft Corporation, Fair Isaac Corporation (FICO), SAS Institute, Inc., Yottamine Analytics, LLC, and BigML.

Recent Strategies deployed in Machine learning as a Service Market

Partnerships, Collaborations and Agreements:

Mar-2022: Google entered into a partnership with BT, a British telecommunications company. Under the partnership, BT utilized a suite of Google Cloud products and services, including cloud infrastructure, machine learning (ML) and artificial intelligence (AI), data analytics, security, and API management, to offer excellent customer experiences, decrease costs and risks, and create new revenue streams. Google aimed to give BT access to hundreds of new business use cases to solidify its goals around digital offerings and hyper-personalized customer engagement.

Feb-2022: SAS entered into a partnership with TecCentric, a company providing customized IT solutions. SAS aimed to accelerate TecCentric's journey toward discovery with artificial intelligence (AI), machine learning (ML), and advanced analytics. Under the partnership, TecCentric planned to work with SAS to customize services and solutions for a broad range of verticals, from the public sector to banking, education, healthcare, and more, gaining access to the complete analytics cycle through SAS's enhanced AI solution offering as well as its leading fraud and financial-crimes analytics and reporting.

Feb-2022: Microsoft entered into a partnership with Tata Consultancy Services, an Indian information technology services and consulting company. Under the partnership, Tata Consultancy Services leveraged its software, TCS Intelligent Urban Exchange (IUX) and TCS Customer Intelligence & Insights (CI&I), to enable businesses to provide hyper-personalized customer experiences. CI&I and IUX are powered by artificial intelligence (AI) and machine learning and assist in real-time data analytics. The CI&I software empowered retailers, banks, insurers, and other businesses to gather insights, predictions, and recommended actions in real time to enhance customer satisfaction.

Jun-2021: Amazon Web Services entered into a partnership with Salesforce, a cloud-based software company. The partnership enabled customers to utilize the complete set of Salesforce and AWS capabilities simultaneously to rapidly develop and deploy new business applications that facilitate digital transformation. Salesforce also embedded AWS services for voice, video, artificial intelligence (AI), and machine learning (ML) directly in new applications for sales, service, and industry-vertical use cases.

Apr-2021: Amazon formed a partnership with Basler, a company known for its product line of area scan, line scan, and network cameras. The partnership began as Amazon launched a series of services for industrial machine learning, including its Lookout for Vision cloud AI service for factory inspection. Customers can integrate the AWS Panorama SDK within their platforms and thus utilize a common architecture to perform multiple tasks and accommodate a broad range of performance and cost requirements. The integration of AWS Panorama empowered customers to adopt and run machine learning applications on edge devices, with additional support for device management and accuracy tracking.

Dec-2020: IBM teamed up with Mila, the Quebec Artificial Intelligence Institute. Under the collaboration, both organizations aimed to accelerate machine learning using Oríon, an open-source technology. Following the integration of Mila's open-source Oríon software with IBM's Watson Machine Learning Accelerator, IBM also enhanced the deployment of state-of-the-art algorithms, along with improved machine learning and deep learning capabilities for AI researchers and data scientists. IBM's Spectrum Computing team, based out of its Canada Lab, contributes substantially to Oríon's code base.

Oct-2020: SAS entered into a partnership with TMA Solutions, a software outsourcing company based in Vietnam. Under the partnership, SAS and TMA Solutions aimed to accelerate the growth of businesses in Vietnam through artificial intelligence (AI) and data analytics. SAS and TMA helped clients in Vietnam speed the deployment and growth of advanced analytics and seek new methods to propel innovation in AI, especially in machine learning, computer vision, natural language processing (NLP), and other technologies.

Product Launches and Product Expansions:

Apr-2022: Hewlett Packard Enterprise launched HPE Swarm Learning and the HPE Machine Learning Development System (MLDS), two AI- and ML-based solutions aimed at simplifying the burdens of AI development in environments that increasingly involve large amounts of protected data and specialized hardware. The solutions improve model accuracy, ease AI infrastructure burdens, and strengthen data privacy, fast-tracking insights at the edge for use cases ranging from identifying card fraud to diagnosing diseases. The MLDS provides a full software and services stack, including a training platform (the HPE Machine Learning Development Environment), container management (Docker), cluster management (HPE Cluster Manager), and Red Hat Enterprise Linux.

May-2021: Google released Vertex AI, a managed machine learning platform that lets developers more easily deploy and maintain their AI models. Engineers can use Vertex AI to manage video, image, text, and tabular datasets and to build machine learning pipelines that train and evaluate models using Google Cloud algorithms or custom training code. They can then deploy models for online or batch use cases, all on scalable managed infrastructure.
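
For readers who want to see what that workflow looks like in code, here is a minimal sketch using the google-cloud-aiplatform Python SDK; the project ID, bucket path, column name, and machine type are placeholder assumptions, not details from the announcement.

```python
# Minimal Vertex AI sketch: dataset -> AutoML training -> deployed endpoint.
# Requires `pip install google-cloud-aiplatform` and GCP credentials.
from google.cloud import aiplatform

# Placeholder project/region values for illustration only.
aiplatform.init(project="my-project", location="us-central1")

# Register a tabular dataset stored in Cloud Storage (placeholder bucket).
dataset = aiplatform.TabularDataset.create(
    display_name="churn-data",
    gcs_source="gs://my-bucket/churn.csv",
)

# Train a classifier with AutoML; custom training code is the alternative
# path mentioned above.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-automl",
    optimization_prediction_type="classification",
)
model = job.run(dataset=dataset, target_column="churned")  # placeholder column

# Deploy to a managed endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[{"tenure": "12", "plan": "basic"}]))
```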

Mar-2021: Microsoft released updates to Azure Arc, its service that brings Azure products and management to multiple clouds, edge devices, and data centers with auditing, compliance, and role-based access. Microsoft also made Azure Arc-enabled Kubernetes generally available. Azure Arc-enabled Machine Learning and Azure Arc-enabled Kubernetes are designed to help companies balance the advantages of the cloud against keeping apps and workloads on-premises for regulatory and operational reasons. The new services enable companies to deploy Kubernetes clusters and build machine learning models where the data lives, as well as manage applications and models from a single dashboard.

Jul-2020: Hewlett Packard Enterprise released HPE Ezmeral, a new brand and software portfolio developed to help enterprises accelerate digital transformation across their organizations, from edge to cloud. The HPE Ezmeral portfolio spans container orchestration and management, AI/ML, and data analytics, as well as cost control, IT automation, AI-driven operations, and security.

Acquisitions and Mergers:

Jun-2021: Hewlett Packard Enterprise completed the acquisition of Determined AI, a San Francisco-based startup offering a robust software stack for training AI models faster, at any scale, using its open-source machine learning (ML) platform. Hewlett Packard Enterprise integrated Determined AI's software with its world-leading AI and high-performance computing (HPC) products to let ML engineers conveniently deploy and train machine learning models and deliver faster, more precise analysis of their data in almost every industry.

Scope of the Study

Market Segments covered in the Report:

By End User

IT & Telecom

BFSI

Manufacturing

Retail

Healthcare

Energy & Utilities

Public Sector

Aerospace & Defense

Others

By Offering

Services Only

Solution (Software Tools)

By Organization Size

Large Enterprises

Small & Medium Enterprises

By Application

Marketing & Advertising

Fraud Detection & Risk Management

Computer Vision

Security & Surveillance

Predictive Analytics

Natural Language Processing

Augmented & Virtual Reality

Others

By Geography

North America

o US

o Canada

o Mexico

o Rest of North America

Europe

o Germany

o UK

o France

o Russia

o Spain

o Italy

o Rest of Europe

Asia Pacific

o China

o Japan

o India

o South Korea

o Singapore

o Malaysia

o Rest of Asia Pacific

LAMEA

o Brazil

o Argentina

o UAE

o Saudi Arabia

o South Africa

o Nigeria

Read the original post:
The Global Machine learning as a Service Market size is expected to reach $36.2 billion by 2028, rising at a market growth of 31.6% CAGR during the...


Abacus.AI Named to 2022 CB Insights AI 100 & the Forbes AI 50 – PR Newswire

Abacus.AI has been recognized for its achievements and developments in artificial intelligence

SAN FRANCISCO, June 30, 2022 /PRNewswire/ -- Abacus.AI, the first end-to-end Artificial Intelligence (AI)/Machine Learning (ML) platform, announced that it has been named to the 2022 CB Insights AI 100 List of Most Promising AI Startups, an annual list of the 100 most promising AI companies in the world.

Utilizing the CB Insights platform, the CB Insights team picked 100 private market vendors from a pool of over 7,000 companies, including applicants and nominees. Vendors were chosen based on factors including R&D activity, proprietary Mosaic Scores, market potential, business relationships, investor profiles, news sentiment analysis, competitive landscape, team strength, and tech novelty. They also reviewed thousands of analyst briefings submitted by applicants.

Abacus.AI has been specifically recognized within the ML platforms category, featuring companies that are developing tools to support AI development.

This recognition comes at an exciting time for Abacus.AI, which was recognized the week prior in the Forbes AI 50, a list of the top 50 private companies in North America using AI to transform the future.

Forbes' fourth annual AI 50 list, produced in partnership with Sequoia Capital, features the most compelling companies based on their use of AI technologies. Forbes assessed hundreds of submitted entries from the U.S. and Canada. From these entries, its venture capital (VC) partners applied an algorithm that identified more than 120 companies with the highest quantitative scores. The top 50 companies were then hand-picked by a panel of expert AI judges.

"Over the course of the decade, there has been a significant paradigm shift in AI. Our interactions with AI have exponentially increased and companies have begun to see its efficiency in common enterprise use-cases. The challenge, of course, is looking for ways to seamlessly integrate AI within their products in a swift and cost-effective manner," said Bindu Reddy, Co-founder and CEO of Abacus.AI. "We still have a long way to go but I'm exceptionally proud of the progress of the Abacus.AI team in building a unified end to end platform that enables organizations to fully realize their AI, machine learning, and deep learning needs. It is a true honor and privilege to earn recognition from Forbes and CB Insights and stand alongside some of my industry peers."

About Abacus.AI

Abacus.AI is the world's first autonomous cloud AI platform that handles all aspects of machine learning and deep learning at enterprise scale. It provides customizable, end-to-end autonomous AI services that can be used to set up data pipelines, specify custom machine learning transformations, and train, deploy, and monitor models.

Abacus.AI specializes in several use-case-specific workflows, including churn prediction, personalization, forecasting, NLP, and anomaly detection. The company features a world-class research team that has invented several neural architecture search methods capable of creating custom neural networks from datasets for a specific use case. Abacus.AI has been adopted by world-class organizations, several of which are Fortune 500 companies.

About CB Insights

CB Insights builds software that enables the world's best companies to discover, understand, and make technology decisions with confidence. By marrying data, expert insights, and work management tools, clients can manage their end-to-end technology decision-making process with CB Insights. To learn more, please visit www.cbinsights.com.

SOURCE Abacus.AI

