
Submit nominations for the most important people building the cloud – Business Insider

No one needs to be told that Amazon CEO Andy Jassy or Microsoft CEO Satya Nadella are among the most powerful people in cloud computing.

Yet for mere mortals looking to build and grow their careers in this all-important technology, knowing that is hardly helpful; titans like Jassy or Nadella are all but impossible to connect with and learn from.

But there are armies of important people doing the hands-on work of building key cloud technologies who could have a direct and meaningful impact on an individual's career. They are the managers running key projects at cloud computing firms, the CIOs and IT pros building cutting-edge projects, the leaders at startups and newly public cloud vendors, the developers running key open source projects, and the power-broker recruiters who match-make between them all.

Insider is seeking nominations of such people for a new project publishing later this year intended to highlight and honor them.

Do you know of some? Please fill out the form below by August 24 to tell us about the person and how they are impacting their corner of the cloud computing universe.

See the original post:
Submit nominations for the most important people building the cloud - Business Insider


Cloud IaaS revenues to hit $156bn by 2023 – New Telegraph Newspaper

Public cloud Infrastructure as a service (IaaS) revenues are expected to rise significantly over the coming years, from around $50 billion in 2020 to over $156 billion by 2023. IaaS, one of the features of cloud computing, accounted for over a quarter of the overall cloud computing market in 2020.

The global cloud market was valued at $70.19 billion in 2021. That number is expected to grow to $83.41 billion in 2022. The cloud market is expected to have an annual growth rate of 19.1 per cent for the entire industry and, by 2029, the cloud market industry is forecast to be worth $376.37 billion. However, the cloud infrastructure market share is expected to decrease in favour of the promising development of the platform as a service (PaaS) market. Nonetheless, the largest segment in the overall cloud computing market is and will continue to be SaaS, with over $208 billion in annual revenue. In the IaaS market, the largest companies by revenue are Amazon (Web Services), Microsoft (Azure), and Google (Compute Engine), as well as the Chinese multinational technology company Alibaba. AWS market share is about 32 per cent of the total cloud service market, the biggest chunk among the top contenders, ahead of Google and Microsoft's Azure.

As of Q4 2021, the Google Cloud market share was nine per cent worldwide, with revenue growth consistently up to 45 per cent over the past several years. Huawei Cloud IaaS revenue surged by a stunning 202.8 per cent, placing it in the top five list of cloud IaaS providers in the world, according to a report from market research firm Gartner Inc. According to the research firm, this is the second consecutive year of over 200 per cent growth for Huawei Cloud in the IaaS market, propelling Huawei Cloud into the top five IaaS vendors with a 4.2 per cent global market share. Despite Amazon being the largest vendor of cloud infrastructure by some margin, its hold over the market may lessen as Google and Microsoft make headway, with surveys suggesting cloud services companies are fighting for a piece of the cloud IaaS market. The major hardware IT infrastructure providers for the IaaS cloud market are Dell, HPE, Inspur, Cisco and Lenovo, representing around 45 per cent of the market combined. Original design manufacturers also play a significant role, making up around one-third of the total market. Alongside software as a service (SaaS) and platform as a service (PaaS), infrastructure as a service (IaaS) is one of the core service models of cloud computing. Under the IaaS model, customers (often enterprises rather than individuals) can provision and access virtualised hardware and resources such as servers, networks, storage, or virtual machines.

In other words, the customer is not responsible for maintaining or developing these resources, but instead is free to focus on managing higher-level resources, such as the platform, the operating system, or the necessary software. In this way, customers pay only for what they consume. Providers, meanwhile, are free to sell unused resources, leading to a substantial opportunity for cost savings and efficiency gains on both sides.

Despite the challenges faced in a developing country like Nigeria, deployment of cloud computing has been thriving in the country's finance, business and oil sectors. Suggestions have been proffered on expanding participation in cloud computing to include small and medium-scale enterprises, the health sector and others. Speaking on the policy guiding cloud computing in Nigeria, the Minister of Communications and Digital Economy, Mallam Isa Pantami, said: "The need to make these computing resources available and accessible is critical to the country's continuous growth and sustainable development. The country's Economic Recovery and Growth Plan (ERGP) recognises information technologies as an enabler for promoting digital-led growth. Digital-led growth cannot happen unless the country has a policy direction, suited to its own environment, for supporting the government and SMEs to acquire and deploy computing resources in the most efficient manner."

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Adoption of a cloud computing policy by the Nigerian government would reduce capital costs, improve responsiveness to citizens' and customers' needs, increase transparency and enhance public service delivery. Cloud adoption would also help the small and medium enterprises (SMEs) that provide IT-enabled services to the government overcome the barrier of initial IT capital investment, and would ensure there is suitable provision for cloud procurement in the country's procurement requirements. This would facilitate the creation of new jobs and add value to the economy. Therefore, the Nigeria Cloud Computing Policy (NCCP) is promoting Cloud First as a proposition to federal public institutions (FPIs) and SMEs as an efficient way of acquiring and deploying computing resources for better and improved digital services.


Continue reading here:
Cloud IaaS revenues to hit $156bn by 2023 - New Telegraph Newspaper


IBM Stock Offers a Winning Combination of Growth and Value – InvestorPlace

When it comes to growth opportunities in tech, International Business Machines (NYSE:IBM) and IBM stock may not be what first comes to mind. In fact, many investors may see the company as old news.

But take a closer look and it's clear this company differs greatly from that common perception. It may be on course to see its growth reaccelerate from here.

How? In the past year, IBM has continued to focus on faster-growing areas of tech like hybrid cloud computing. Success in this area could, in time, move the needle. Combine that with its low valuation and high dividend yield, and there's a lot to like here.

In recent years, IBM has seen little in the way of revenue and earnings growth. As a result, not only would most describe it as a value stock, many would describe it as a value trap: a cheap stock that becomes even cheaper due to the lackluster performance of its operating business.

Yet in recent quarters, it appears the company has started to turn a corner. With the spinoff of Kyndryl (NYSE:KD) last November, IBM was able to shed a major slow-growing, low-margin operating segment. Now, International Business Machines consists largely of faster-growing, higher-margin segments like hybrid cloud software, hybrid cloud infrastructure and consulting.

As a result, the company has reported solid numbers in recent quarters. For example, in its most recently completed quarter, it reported overall revenue growth of 9%. On a constant currency basis, top-line growth came in at 16%. According to Barron's, IBM stock investors haven't seen that type of revenue growth in over two decades.

Its a work in progress, but IBM may be well on the path toward becoming a growth stock once again.

Despite the prospect of higher growth ahead, IBM stock continues to trade at a low earnings multiple. At today's prices, its forward multiple stands at around 14.7x. If its growth reacceleration continues, that forward valuation could easily expand.

Coupled with increased earnings, this may result in significant upside for shares. Nearly a decade ago, the stock traded for between $175 and $200 per share. A return to such levels could happen.

In the meantime, IBM will pay you while you wait for its growth transformation to finish playing out. Right now, the stock pays out around $6.60 per share in dividends annually. That gives shares a forward dividend yield of around 4.7%. Even in today's rising-rate environment, that still counts as a high yield.
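The yield math is easy to check. The quick sanity check below uses the article's own figures; the implied share price of roughly $140 is an inference from them, not a number quoted in the piece.

    # Dividend yield = annual dividend per share / share price, so the
    # article's figures imply a share price of roughly $140 (an inference,
    # not a quote from the article).
    annual_dividend = 6.60   # dollars per share, per the article
    forward_yield = 0.047    # ~4.7% forward yield, per the article

    implied_price = annual_dividend / forward_yield
    print(f"Implied share price: ${implied_price:.2f}")  # ~$140.43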

Furthermore, this stock has a track record of dividend growth. The company has raised its dividend 28 years in a row. Over a long timeframe, these steadily increasing payouts can boost your total return.

IBM stock earns a B rating in my Portfolio Grader. Among stocks in the tech sector, there are many names with impressive growth prospects that trade at premium valuations and have little to offer when it comes to dividends.

There are also many tech stocks that trade at low valuations and have high dividend yields but offer little in the way of upside potential. With this tech stock, though, you may be getting the best of all three: growth, value and income.

That said, don't be surprised if its performance in the immediate term is underwhelming. The market is still digesting macroeconomic uncertainties like inflation, rising interest rates and a possible recession.

Nevertheless, if you have a long time horizon, IBM stock is a great opportunity. Collect a steady dividend yield all while its transformation takes shape.

On the date of publication, neither Louis Navellier nor the InvestorPlace Research Staff member primarily responsible for this article held (either directly or indirectly) any positions in the securities mentioned in this article.

See the original post here:
IBM Stock Offers a Winning Combination of Growth and Value - InvestorPlace


Infrastructure as code and your security team: 5 critical investment areas – VentureBeat


The promises of Infrastructure as Code (IaC) are higher velocity and more consistent deployments, two key benefits that boost productivity across the software development lifecycle.

Velocity is great, but only if security teams are positioned to keep up with the pace of modern development. Historically, outdated practices and processes have held security back, while innovation in software development has grown quickly, creating an imbalance that needs leveling.

IaC is not just a boon for developers; it is a foundational technology that enables security teams to leapfrog forward in maturity. Yet many security teams are still figuring out how to leverage this modern approach to developing cloud applications. As IaC adoption continues to rise, security teams must keep up with the fast and frequent changes to cloud architectures; otherwise, IaC can be a risky business.

If your organization is adopting IaC, here are five critical areas to invest in.


Constantly putting out fires from one project to the next has left security teams struggling to find the time and resources to build foundational security design patterns for cloud and hybrid architectures.

Security design patterns are a required foundation for security teams to keep pace with modern development. They help solution architects and developers accelerate independently while having clear guardrails that define the best practices security wants them to follow. Security teams also get autonomy and can focus on strategic needs.

IaC provides new opportunities to build and codify these patterns. Templatizing is a common approach that many organizations invest in. For common technology use cases, security teams establish standards by building out IaC templates that meet the organization's security requirements. By engaging early with project teams to identify security requirements up front, security teams help incorporate security and compliance needs to give developers a better starting point to build their IaC.
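To make this concrete, here is a minimal sketch, in Python since the article shows no code of its own, of what a codified secure baseline might look like: a CloudFormation-style template with encryption and public-access blocking pre-set, which an application team would then extend. The resource names are hypothetical.

    import json

    # Minimal sketch of a "secure by default" starting template, expressed
    # as a CloudFormation-style document built in Python. Resource names
    # are hypothetical; application teams extend this for their own needs.
    secure_bucket_template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppDataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    # Security baseline: encryption on, public access blocked.
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                        ]
                    },
                    "PublicAccessBlockConfiguration": {
                        "BlockPublicAcls": True,
                        "BlockPublicPolicy": True,
                        "IgnorePublicAcls": True,
                        "RestrictPublicBuckets": True,
                    },
                },
            }
        },
    }

    print(json.dumps(secure_bucket_template, indent=2))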

However, templatization is not a silver bullet. It can add value for select commonly used cloud resources, but requires an investment in security automation to scale.

As your organization matures in its use of IaC, your cloud architectures become more complex and grow in size. Your developers are able to rapidly adopt new cloud architectures and capabilities, and youll find that static IaC templates do not scale to address the dynamic needs of modern cloud-native applications.

Every application has different needs, and each application development team will inevitably alter the IaC template to fit the unique needs of that application. Cloud service provider capabilities change daily, making your IaC security template a depreciating asset that becomes stale quickly. Scaling governance requires a large investment from security teams, and managing exceptions creates significant work for your SMEs.

Automation that relies on security as code offers a solution and enables your resource-constrained security teams to scale. In fact, it may be the only viable approach to address cloud-native security. It allows you to codify your design patterns and apply security dynamically, tailored to your application's use case.
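As a rough illustration of what security as code can mean in practice, the hypothetical Python check below codifies one design pattern, that storage buckets must have default encryption, as a function that can run against any CloudFormation-style template before deployment:

    # Hypothetical "security as code" check: one codified design pattern
    # (buckets must have default encryption) run against any template.
    def check_bucket_encryption(template: dict) -> list:
        """Flag S3 buckets in a CloudFormation-style template that lack
        default encryption."""
        findings = []
        for name, resource in template.get("Resources", {}).items():
            if resource.get("Type") != "AWS::S3::Bucket":
                continue
            if "BucketEncryption" not in resource.get("Properties", {}):
                findings.append(name + ": no default encryption configured")
        return findings

    template = {"Resources": {"LogsBucket": {"Type": "AWS::S3::Bucket", "Properties": {}}}}
    print(check_bucket_encryption(template))
    # ['LogsBucket: no default encryption configured']

Checks like this can run automatically on every proposed change, so the pattern scales without a human reviewer in the loop.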

Managing your security design patterns as code has several benefits, chief among them scale.

The ratio of developers to ops to security resources is often on the order of 100:10:1. I recently talked to an organization that has 10,000 developers and three AppSec engineers. The only viable way for a team like that to scale and prioritize its time efficiently is to rely on automation to force-multiply its security expertise.

Once you reach sufficient maturity in your IaC adoption, you'll want all changes to be made through code. This allows you to lock down other channels of change (the cloud console, CLIs) and build on good software development governance processes to ensure that every code change gets reviewed.

Security automation that is seamlessly integrated into your development pipeline can now assess every change to your cloud-native apps and provide visibility into any potential inherent risks, avoiding time-consuming manual reviews. This lets you build mature governance processes that ensure security issues are remediated and compliance requirements are met.

Along your journey to IaC maturity, changes will be made to your cloud environment through IaC, as well as traditional channels such as the CSP console or command-line tools. When developers make direct changes to deployed environments, you lose visibility, and this can lead to significant risk. Additionally, your IaC will no longer represent your source of truth, so assessing your IaC can give you an incomplete picture.

Investing in drift detection capabilities that validate your deployed environments against your IaC can ensure that any drift is immediately detected and remediated by pushing a code change to your IaC.
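A minimal sketch of the idea, with hypothetical resource attributes: drift detection boils down to diffing the state your IaC declares against the state actually deployed.

    # Drift detection in miniature: diff the state the IaC declares against
    # the state actually deployed. Attribute names here are hypothetical.
    declared = {"instance_type": "t3.micro", "encrypted": True, "public_ip": False}
    deployed = {"instance_type": "t3.micro", "encrypted": True, "public_ip": True}

    drift = {
        key: (declared[key], deployed.get(key))
        for key in declared
        if declared[key] != deployed.get(key)
    }

    for key, (want, have) in drift.items():
        # Per the article, remediation means pushing a code change to the IaC
        # (or reverting the out-of-band change), not hand-patching the cloud.
        print(f"DRIFT {key}: IaC declares {want!r}, deployment has {have!r}")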

Security teams should put emphasis on the developer workflow and experience and seek to continuously reduce the friction of implementing security. Having developer champions within security who understand the challenges developers face can help ensure that security automation serves the needs of the developer. Similarly, security champions within development teams can help generate awareness around security and create a positive feedback loop that improves the design patterns.

IaC can be a risky business, but it doesn't have to be. Higher velocity and more consistent deployments are in sight, as long as you're able to invest in the right places. By being strategic and intentional and investing in the necessary areas, the security team at your organization will be best positioned to keep up with the fast and frequent changes during IaC adoption.

Are you ready to take advantage of what IaC has to offer? There's no better time than now.

Aakash Shah is CTO and cofounder of oak9


Here is the original post:
Infrastructure as code and your security team: 5 critical investment areas - VentureBeat


Chess Played Quick – Variants (Bingo Edition): All The Information – Chess.com

Chess Played Quick is Chess.com's series of events where top chess streamers complete bounties for prizes.

Chess Played Quick - Variants (Bingo Edition) starts on August 25 at 9 a.m. PT/18:00 CEST. Streamers will play multiple chess variants to complete bingos and collect as many bounties as possible. The event features 2,000 Twitch subs as the prize pool, with each bingo worth 20 subs.

Streamers will have two hours to play numerous chess variants and complete as many bingos as possible to claim their bounties. The variants for this event are 4 Player Chess, Doubles (Bughouse), Crazyhouse, Giveaway, Atomic, Horde, Chaturaji, Fog of War, and more. Note that for this event there won't be different rating categories.

Bingo Format

Bounty Rules And Details

We will publish the bingo card with a detailed explanation of each bounty when the event starts.

The total prize fund for this event is 2,000 Twitch subs, with 20 subs going for each valid bingo. Note that only Twitch streamers can participate in the event.

Chess Played Quick - Variants (Bingo Edition) starts on August 25 at 9 a.m. PT/18:00 CEST, with the event lasting two hours.

Fill out the official application form below if you're a Twitch streamer and would like to participate in the event. Applications close 48 hours before the event starts. Important: players must create a new account to participate in this event.

Participants must submit their bounty completion proof forms within 48 hours after the event is over. Important: You need to fill out the form for each bingo you want to claim (do not attempt to claim them all at once).

The official proof submission form is below:

Read this article:
Chess Played Quick - Variants (Bingo Edition): All The Information - Chess.com


D. Gukesh breaks into top 20 in chess rankings, is World No. 18 in live ratings – Sportstar

D. Gukesh continued his relentless climb in the world rankings after scoring a hat-trick of wins in the ongoing Turkish Isbank Chess Super League in Ankara on Thursday.

Playing on the top board for third-round leader Turkish Airlines Sports Club, Gukesh defeated Grandmasters Aryan Gholami (2507), Andrey Esipenko (2682) and Vahap Sanal (2574) to reach World No. 18 in the live rankings with a live rating of 2735.9.

With R. Praggnanandhaa making waves in the cash-rich FTX Crypto Cup rapid event and Arjun Erigaisi leading the Abu Dhabi Masters after three rounds, Gukesh added to the joy by gaining another 10 rating points and six places since finishing the Chess Olympiad with twin medals.

As things stand, the country's youngest Grandmaster trails 12th ranked Viswanathan Anand (2756) by around 20 points and leads third-placed P. Harikrishna by 19 points.

Like never before, India has five players in the top 35 of the world rankings and eight in the top 90.

Praggnanandhaa (2675.5) and Nihal Sarin (2671.5), ranked 66 and 69 on the live list, are closing in on the coveted 2700 rating barrier.

See the rest here:
D. Gukesh breaks into top 20 in chess rankings, is World No. 18 in live ratings - Sportstar


Cloud and datacenters start to feel the slowdown amid spiking energy costs – The Register

The datacenter industry may be starting to feel the effects of the economic slowdown, leading to further impacts on IT vendors and other suppliers, according to reports, while operators in the UK in particular are feeling the pain from rising energy costs.

Cloud and hyperscale companies may have seemed less vulnerable to swings in the wider economy, thanks to the growing adoption of cloud services over the past decade. This was especially so during the pandemic, when many businesses were forced to upscale their use of cloud services to ensure staff could continue to work remotely.

But the signs are starting to point to a possible slowdown, with Reuters reporting that Google Cloud, Microsoft's Azure, and Amazon's AWS all showed slower growth in their recent results.

For example, Google Cloud reported $6.3 billion of revenue for Q2 2022, a 35 percent year-on-year increase, but a slower increase than the 44 percent jump it reported for its Q1 results.

In June, analyst outfit TrendForce was predicting that the global server market will grow more slowly in 2022 than in the past, with China's cloud companies Baidu, Alibaba, and Tencent all lowering their procurement this year. It warned that this could also spread to cloud and hyperscale companies in the US, leading to overall server shipments falling.

All the big cloud players have recently extended the life of their servers in order to save on procurement costs. Microsoft announced this month it was extending the life of its machines by two years, which it expected to save it $3.7 billion next year. Google announced in February an extension of its server lifecycle from three years to four, while Amazon said it expected to save a billion dollars in this quarter by running its servers for six years instead of five.
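The accounting behind those savings is straightforward: with straight-line depreciation, the same purchase cost is spread over more years, so the annual expense falls. The sketch below uses a hypothetical fleet cost; only the five-to-six-year change mirrors the Amazon example above.

    # Why longer server life cuts reported costs: straight-line depreciation
    # spreads the same purchase cost over more years. The $10B fleet cost is
    # hypothetical; only the 5 -> 6 year change mirrors the Amazon example.
    fleet_cost = 10_000_000_000

    for useful_life_years in (5, 6):
        print(f"{useful_life_years} years: ${fleet_cost / useful_life_years:,.0f} per year")

    # 5 years: $2,000,000,000 per year
    # 6 years: $1,666,666,667 per year -- roughly $333M less annual expense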

This could be bad news for the IT industry, which had already seen demand falling away on the consumer device side and may now face weakening demand for servers and lucrative components such as memory chips that servers have so far kept buoyant.

However, as The Register reported this month, enterprises are spending more on cloud infrastructure services than ever before, and falling prices for components may leave the hyperscalers well positioned to take advantage of this and add more capacity at a lower cost.

But supply chain issues may still be a complicating factor, with Microsoft's Azure cloud reported to be having difficulty providing enough capacity to meet customer demand last month.

Meanwhile, another factor affecting datacenter operators is the spiraling cost of energy. According to a recent report, datacenter operators in the UK and Ireland have seen their energy bills increase by as much as 50 percent.

In the UK, 57 percent of operators indicated that they are currently spending between 10 and 30 percent of their entire operating costs on electricity, with some paying more and many expecting this figure to hit 40 percent or higher.

The report, from power generation supplier Aggreko, says that this is causing problems because the all-in pricing models adopted by co-location providers mean they are forced to absorb additional costs and price increases.

It was based on a survey of 253 datacenter professionals, with 58 percent of those in the UK reporting that energy bills have had a significant impact on their company's margins.

The report concludes that operators are struggling to remain competitive, particularly as confidence in government support is "tepid" in both the UK and Ireland, and suggests some remedies.

However, it would seem there is still a danger that customers may be asked to pay more to offset some of the rising energy costs. For example, customers of cloud and network provider M247 were hit with a 161 percent hike in charges late last year, with rising energy prices blamed for the increase even then.

Read the original:
Cloud and datacenters start to feel the slowdown amid spiking energy costs - The Register


Google: Here’s how we blocked the largest web DDoS attack ever – ZDNet


Google Cloud has revealed it blocked the largest distributed denial-of-service (DDoS) attack on record, which peaked at 46 million requests per second (rps).

The June 1 attack targeted one Google Cloud customer using the Google Cloud Armor DDoS protection service.

Over the course of 69 minutes beginning at 9:45 am PT, the attackers bombarded the customer's HTTP/S Load Balancer with HTTPS requests, starting at 10,000 rps and within minutes scaling up to 100,000 rps before peaking at a whopping 46 million rps.

Google says it is the largest ever attack at Layer 7, referring to the application layer, the top layer in the OSI model of the Internet.

The attack on Google's customer was almost twice the size of an HTTPS DDoS attack on a Cloudflare customer in June that peaked at 26 million rps. That attack relied on a relatively small botnet consisting of 5,067 devices spread over 127 countries.

The attack on Google's customer was also conducted over HTTPS but used "HTTP Pipelining", a technique to scale up rps. Google says the attack came from 5,256 source IP addresses across 132 countries.

"The attack leveraged encrypted requests (HTTPS) which would have taken added computing resources to generate," Google said.

"Although terminating the encryption was necessary to inspect the traffic and effectively mitigate the attack, the use of HTTP Pipelining required Google to complete relatively few TLS handshakes."

Google says the geographic distribution and types of unsecured services used to generate the attack match the Mēris family of botnets. Mēris is an IoT botnet that emerged in 2021 and consisted mostly of compromised MikroTik routers.

Researchers at Qrator who previously analyzed Mēris' use of HTTP pipelining explained that the technique involves sending trash HTTP requests in batches to a targeted server, forcing it to respond to those batches. Pipelining scales up rps, but as Google noted, it also meant Google had to complete relatively few TLS handshakes.
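For readers unfamiliar with the mechanism, pipelining is an ordinary HTTP/1.1 feature: a client writes several requests to one connection before reading any response, so a single TCP (and, over HTTPS, a single TLS) handshake serves many requests. A benign two-request illustration in Python:

    import socket

    # Benign two-request illustration of HTTP/1.1 pipelining: both requests
    # are written before any response is read, so one connection (and one
    # handshake) carries them all. Scaled up, this is how request volume
    # rises while "relatively few TLS handshakes" are completed.
    HOST = "example.com"
    batch = b"".join(
        f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode() for _ in range(2)
    )

    with socket.create_connection((HOST, 80), timeout=5) as sock:
        sock.sendall(batch)            # both requests leave before any reply
        reply = sock.recv(65536)       # responses return in order

    print(reply.split(b"\r\n", 1)[0])  # first status line, e.g. b'HTTP/1.1 200 OK'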

Cloudflare attributed the 26 million rps attack to what it called the Mantis botnet, which it considered an evolution of Mēris. Mantis was powered by hijacked virtual machines and servers hosted by cloud companies rather than low-bandwidth IoT devices, according to Cloudflare.


Google noted that this Mēris-related botnet abused unsecured proxies to obfuscate the true origin of the attacks.

It also noted that around 22%, or 1,169, of the source IPs corresponded to Tor exit nodes, but the request volume coming from those nodes amounted to just 3% of the attack traffic.

"While we believe Tor participation in the attack was incidental due to the nature of the vulnerable services, even at 3% of the peak (greater than 1.3 million rps) our analysis shows that Tor exit nodes can send a significant amount of unwelcome traffic to web applications and services."

Go here to read the rest:
Google: Here's how we blocked the largest web DDoS attack ever - ZDNet


Highly-Efficient New Neuromorphic Chip for AI on the Edge – SciTechDaily

A team of international researchers designed, manufactured, and tested the NeuRRAM chip. Credit: David Baillot/University of California San Diego

The NeuRRAM chip is the first compute-in-memory chip to demonstrate a wide range of AI applications while using just a small percentage of the energy consumed by other platforms while maintaining equivalent accuracy.

NeuRRAM, a new chip that runs computations directly in memory and can run a wide variety of AI applications, has been designed and built by an international team of researchers. What sets it apart is that it does all this at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud. This means they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications for this device abound in every corner of the globe and every facet of our lives. They range from smartwatches to VR headsets, smart earbuds, smart sensors in factories, and rovers for space exploration.

Not only is the NeuRRAM chip twice as energy efficient as the state-of-the-art compute-in-memory chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are much bulkier and typically are constrained to using large data servers operating in the cloud.

A close-up of the NeuRRAM chip. Credit: David Baillot/University of California San Diego

Additionally, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

"The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said Weier Wan, the paper's first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering.

The research team, co-led by bioengineers at the University of California San Diego (UCSD), presented their results in the August 17 issue of Nature.

The NeuRRAM chip uses an innovative architecture that has been co-optimized across the stack. Credit: David Baillot/University of California San Diego

Currently, AI computing is both power-hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are transferred back to the device. This is necessary because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing.

By reducing the power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter, and accessible edge devices and smarter manufacturing. It could also lead to better data privacy, because the transfer of data from devices to the cloud comes with increased security risks.

On AI chips, moving data from memory to computing units is one major bottleneck.

"It's the equivalent of doing an eight-hour commute for a two-hour work day," Wan said.

To solve this data transfer issue, researchers used what is known as resistive random-access memory (RRAM). This type of non-volatile memory allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan's advisor at Stanford and one of the main contributors to this work. Although computation with RRAM chips is not necessarily new, it generally leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip's architecture.
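The underlying idea can be sketched with the textbook idealization of a crossbar array: weights stored as conductances, inputs applied as voltages, and column currents summing the products by Ohm's and Kirchhoff's laws. The NumPy model below is that abstraction only, not the NeuRRAM circuit:

    import numpy as np

    # Textbook idealization of an RRAM crossbar doing a matrix-vector
    # multiply in place: weights live as conductances G, inputs arrive as
    # row voltages V, and each column wire sums currents
    # I_j = sum_i V_i * G_ij. This is the abstraction only, not NeuRRAM.
    rng = np.random.default_rng(0)
    G = rng.uniform(0.0, 1e-4, size=(64, 32))  # conductances (siemens)
    V = rng.uniform(0.0, 0.5, size=64)         # input voltages on 64 rows

    I = V @ G        # all 32 column currents accumulate in one "cycle"
    print(I.shape)   # (32,)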

"Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago," Cauwenberghs said. "What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms."

A carefully crafted methodology was key to the work with multiple levels of co-optimization across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. Additionally, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture.

"This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms," said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs' lab at UCSD.

Researchers measured the chip's energy efficiency by a measure known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips.
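In other words, EDP is simply the energy an operation consumes multiplied by the time it takes, and lower is better. The figures in the toy comparison below are invented for illustration, not measurements from the paper.

    # Energy-delay product: energy per operation times the time it takes.
    # Lower is better. These numbers are invented purely for illustration;
    # they are not measurements from the paper.
    def edp(energy_joules, delay_seconds):
        return energy_joules * delay_seconds

    conventional = edp(energy_joules=2e-12, delay_seconds=1e-9)
    neurram = edp(energy_joules=1e-12, delay_seconds=1e-9)

    print(f"conventional / NeuRRAM EDP: {conventional / neurram:.1f}x")  # 2.0x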

Engineers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy.

One key contribution of the paper, the researchers point out, is that all the results featured are obtained directly on the hardware. In many previous works of compute-in-memory chips, AI benchmark results were often obtained partially by software simulation.

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Engineers also plan to tackle other applications, such as spiking neural networks.

"We can do better at the device level, improve circuit design to implement additional features, and address diverse applications with our dynamic NeuRRAM platform," said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs' research group at UCSD.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. "As a researcher and an engineer, my ambition is to bring research innovations from labs into practical use," Wan said.

The key to NeuRRAM's energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result, but this leads to the need for more complex and more power-hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy-efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism.

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. This differs from conventional designs, where CMOS circuits are typically on the periphery of the RRAM weights. The neurons' connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption, which in turn makes the architecture easier to reconfigure.

To make sure that the accuracy of the AI computations can be preserved across various neural network architectures, engineers developed a set of hardware-algorithm co-optimization techniques. The techniques were verified on various neural networks, including convolutional neural networks, long short-term memory, and restricted Boltzmann machines.

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.

An international research team

The work is the result of an international team of researchers.

The UCSD team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip's architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design, characterized the chip, trained the AI models, and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip.

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University.

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University.

The team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation-funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA JUMP program, and Western Digital Corporation.

Reference: "A compute-in-memory chip based on resistive random-access memory" by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong and Gert Cauwenberghs, 17 August 2022, Nature. DOI: 10.1038/s41586-022-04992-8

Published open-access in Nature, August 17, 2022.

Weier Wan, Rajkumar Kubendran, Stephen Deiss, Siddharth Joshi, Gert Cauwenberghs, University of California San Diego

Weier Wan, S. Burc Eryilmaz, Priyanka Raina, H-S Philip Wong, Stanford University

Clemens Schaefer, Siddharth Joshi, University of Notre Dame

Rajkumar Kubendran, University of Pittsburgh

Wenqiang Zhang, Dabin Wu, He Qian, Bin Gao, Huaqiang Wu, Tsinghua University

Corresponding authors: Wan, Gao, Joshi, Wu, Wong and Cauwenberghs

Excerpt from:
Highly-Efficient New Neuromorphic Chip for AI on the Edge - SciTechDaily


Global Infrastructure as Code (IaC) Market Report 2022: Advent of Modern Cloud Architecture & Demand for Better Optimization of Business…


Dublin, Aug. 18, 2022 (GLOBE NEWSWIRE) -- The "Global Infrastructure as Code (IaC) Market by Tool (Configuration Orchestration, Configuration Management), Service, Type (Declarative & Imperative), Infrastructure Type (Mutable & Immutable), Deployment Mode, Vertical and Region - Forecast to 2027" report has been added to ResearchAndMarkets.com's offering.

The infrastructure as code market is projected to grow from USD 0.8 billion in 2022 to USD 2.3 billion by 2027, at a compound annual growth rate (CAGR) of 24.0% during the forecast period. By optimizing and refactoring infrastructure builds, IaC technologies can free system administrators from laboring over manual procedures and allow application developers to concentrate on what they do best.
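As a quick check, the projected figures are internally consistent: USD 0.8 billion compounding at 24.0% for the five years from 2022 to 2027 lands almost exactly on USD 2.3 billion.

    # Compound-growth check of the report's own figures:
    # USD 0.8B growing at 24.0% per year over 2022 -> 2027 (five years).
    base_usd_billion = 0.8
    cagr = 0.240
    years = 5

    projected = base_usd_billion * (1 + cagr) ** years
    print(f"USD {projected:.2f} billion")  # ~USD 2.35B, matching the ~2.3B headline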

The infrastructure as code (IaC) approach to deployment and management uses programs, configuration data, and automation tools. It can be applied to cloud services as well as to hardware, including web servers, routers, databases, load balancers, and personal computers.

It is distinct from conventional infrastructure management, which depends on manual or interactive device configuration. IaC refers to a high-level approach rather than a particular method, device, or protocol. Infrastructure as code borrows from the software development process, applying automated testing and quality control: instead of manually altering the infrastructure, modifications to the configuration are made by changing the code.

Based on Component, tool segment to register for the largest market size during the forecast period

Based on component, the infrastructure as code market is segmented into tools and services. The market size of the tools segment is estimated to be the largest during the forecast period. Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files rather than physical hardware configuration or interactive configuration tools. Many tools provide infrastructure automation capabilities through IaC; any framework or tool that performs changes or configures infrastructure declaratively or imperatively through a programmatic approach can be considered IaC. Traditionally, server (lifecycle) automation and configuration management tools were used to accomplish IaC. Now, enterprises are also using continuous configuration automation tools or stand-alone IaC frameworks, such as Microsoft's PowerShell DSC or AWS CloudFormation.


The Imperative segment to account for the highest CAGR during the forecast period

Based on type, the infrastructure as code market is segmented into declarative and imperative. The imperative segment is expected to grow at a higher CAGR during the forecast period. The imperative approach helps teams prepare automation scripts that provision the client's infrastructure one specific step at a time. While this can be more work to manage as it scales, it can be easier for existing administrative staff to understand, and it can leverage configuration scripts that already exist. With an imperative approach, a developer writes code specifying the steps the computer must take to accomplish the goal. This is referred to as algorithmic programming. In contrast, a functional approach involves composing the problem as a set of functions to be executed.
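The contrast between the imperative and declarative styles is easiest to see side by side. In the hypothetical Python sketch below, the provisioning API and its methods are invented for illustration; the point is only the difference in style.

    # Hypothetical sketch contrasting the two styles. The provisioning API
    # and its methods are invented for illustration.

    # Imperative: spell out each step, in order.
    def provision_imperatively(api):
        server = api.create_server(size="small")
        api.attach_disk(server, size_gb=100)
        api.open_port(server, 443)
        return server

    # Declarative: state the desired end result; the tool computes the steps.
    desired_state = {
        "server": {"size": "small", "disk_gb": 100, "open_ports": [443]},
    }

    def provision_declaratively(api):
        api.apply(desired_state)  # tool diffs reality against desired_state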

Asia Pacific to hold highest CAGR during the forecast period

The Asia Pacific infrastructure as code market is expected to grow at the highest CAGR, of 27.5%, from 2022 to 2027, due to growing industrialization in the region, where the adoption of new and emerging technologies has gained momentum in recent years. Public cloud is gaining huge adoption due to its low costs, on-demand availability, and improved security.

The availability of skilled labor and the keen focus of SMEs and large enterprises on entering and growing in this region are a few factors driving adoption in the IaC market. Asia Pacific is expected to witness significant growth during the forecast period. The region has always been cautious about investment plans in terms of funding. Major players, such as Microsoft, AWS, Google, and IBM, are expanding their cloud and IaC offerings rapidly in the region due to the increasing number of customers and an improving economic outlook. The increasing adoption of emerging technologies, such as big data, IoT, and analytics, is expected to drive the growth of the IaC market in the Asia Pacific region.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

5 Market Overview and Industry Trends

6 Infrastructure as Code Market, by Component

7 Infrastructure as Code, by Type

8 Infrastructure as Code, by Infrastructure Type

9 Infrastructure as Code Market, by Organization Size

10 Infrastructure as Code, by Deployment Mode

11 Infrastructure as Code Market, by Vertical

12 Infrastructure as Code Market, by Region

13 Competitive Landscape

14 Company Profiles

15 Adjacent Markets

16 Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/19hrj0


Read this article:
Global Infrastructure as Code (IaC) Market Report 2022: Advent of Modern Cloud Architecture & Demand for Better Optimization of Business...
