
How SOAR Helps to Hold Up Your Part of the Cloud Security Shared Responsibility Model – Security Boulevard

The allure of the cloud is indisputable. Flexibility, reliability, efficiency, scalability and cost savings are tantalizing traits for a business at any time, never mind when most have been catapulted into a colossal work-from-home experiment.

According to O'Reilly's annual cloud adoption survey, nine out of 10 businesses now use cloud computing, with nearly half planning to migrate more than 50 percent of their applications into the cloud in the upcoming year. Amazon Web Services (AWS) is leading the pack, with a recent Vectra AI study reporting that 78% of organizations are running AWS across multiple regions, including 40% in at least three.

But the benefits of the cloud make it easy to leap headfirst without adequately acknowledging and prioritizing its dangers, especially within multi-cloud and hybrid cloud environments. Indeed, as cloud adoption increases, so will the magnitude of both malicious attacks and user errors. For example, a study by Ermetic found that 90% of AWS S3 buckets are prone to identity management and configuration errors that could permit admin-level ransomware attacks.


Thankfully, public cloud services like AWS, Google Cloud Platform (GCP), and Microsoft Azure offer numerous controls for managing these threats and making compromise more difficult. However, these tools deliver their full value only when organizations accept a shared burden for security, something Amazon refers to as the Shared Responsibility Model. This is where a security orchestration, automation and response (SOAR) platform can step in, helping to bridge the gap between alert overload and analyst capacity and pave the way for successful case investigations and remediation.


At Siemplify, AWS cloud-native controls, including GuardDuty, CloudWatch, and Security Hub, conveniently integrate with the Siemplify Security Operations Platform, allowing threat responders to slash investigation times, extract valuable context-rich insights into incidents and immediately investigate and take action, such as disabling rogue instances and correcting misconfigurations.
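
To make the "take action" step concrete, here is a minimal, hypothetical sketch in Python with boto3 (not Siemplify's actual playbook) of the kind of automated response described above: pulling high-severity GuardDuty findings and stopping any EC2 instance they implicate. The detector ID and severity threshold are illustrative assumptions.

```python
# Illustrative sketch (not Siemplify's playbook): pull high-severity GuardDuty
# findings and stop any EC2 instance they implicate. Assumes boto3 credentials
# and an existing GuardDuty detector; the detector ID below is a placeholder.
import boto3

guardduty = boto3.client("guardduty")
ec2 = boto3.client("ec2")

def quarantine_rogue_instances(detector_id: str) -> None:
    # Filter for findings with severity >= 7 (GuardDuty "High").
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]
    if not finding_ids:
        return
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        details = finding.get("Resource", {}).get("InstanceDetails", {})
        instance_id = details.get("InstanceId")
        if instance_id:
            # "Disabling" the rogue instance: stop it pending investigation.
            ec2.stop_instances(InstanceIds=[instance_id])
            print(f"Stopped {instance_id} for finding {finding['Id']}")

# Example call with a hypothetical detector ID:
# quarantine_rogue_instances("12abc34d567e8fa901bc2d34e56789f0")
```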

The Siemplify platform combines security orchestration, automation and response with end-to-end security operations management to make analysts more productive, engineers more effective and managers more informed. The SOAR experience is brought to life inside the rich Siemplify Marketplace, where security professionals can access a vast array of integrations, including AWS, and ready-to-deploy use cases.

The Siemplify platform seamlessly connects to cloud threat detection technologies, as well as any on-premises tools, effectively delivering unified incident response at the speed of cloud. Additionally, Siemplify leverages AWS capabilities for monitoring and securing the environment in best-in-class solutions.

Siemplify customers, as well as users of the free Siemplify Community Edition, can integrate AWS within Siemplify by downloading the marketplace connector and entering AWS credentials. For more information, visit siemplify.co/marketplace. The Siemplify platform is also available on the AWS Marketplace for existing AWS customers. You can find it here.

Dan Kaplan is director of content at Siemplify.

The post How SOAR Helps to Hold Up Your Part of the Cloud Security Shared Responsibility Model appeared first on Siemplify.

*** This is a Security Bloggers Network syndicated blog from Siemplify authored by Dan Kaplan. Read the original post at: https://www.siemplify.co/blog/how-soar-helps-to-hold-up-your-part-of-the-cloud-security-shared-responsibility-model/

See the rest here:
How SOAR Helps to Hold Up Your Part of the Cloud Security Shared Responsibility Model - Security Boulevard

Read More..

Global Virtualization Security Market Expected to Generate a Revenue of $6,986.3 Million, Growing at a Healthy CAGR of 13.6% During the Forecast…

The global virtualization security market is predicted to witness striking growth during the forecast period owing to the increasing adoption of virtual applications across small and medium businesses and large corporations worldwide. By component, the solution sub-segment is expected to be the most lucrative. Regionally, North America is predicted to hold the largest share of the market throughout the forecast timeframe.

New York, USA, Nov. 23, 2021 (GLOBE NEWSWIRE) -- According to a report published by Research Dive, the global virtualization security market is anticipated to garner $6,986.3 million, growing at a CAGR of 13.6%, over the estimated period from 2021-2028.

Download FREE Sample Report of the Global Virtualization Security Market: https://www.researchdive.com/download-sample/5363

Covid-19 Impact on the Global Virtualization Security Market

Though the outbreak of the Covid-19 pandemic has devastated several industries, it has had a positive impact on the virtualization security market. Due to stringent lockdowns and strict government guidelines, many IT companies have adopted a work-from-home culture, which has increased reliance on virtualized platforms and cloud-based environments. This has surged the demand for virtualization security to protect network perimeter access. Moreover, the increasing demand for cloud computing technology, especially in the healthcare industry to analyze patients' data, has further propelled the growth of the market during the period of crisis.

Check out How COVID-19 impacts the Global Virtualization Security Market: https://www.researchdive.com/connect-to-analyst/5363

As per our analysts, the rapid adoption of virtual applications across small and medium businesses and large corporations globally is expected to bolster the growth of the market during the forecast period. Moreover, the utilization of cloud computing to manage a remote workforce, eliminate hardware requirements, and reduce maintenance and operational costs is further expected to upsurge the growth of the market throughout the estimated timeframe. Besides, the rising demand for virtualization security solutions across small and large organizations is expected to fortify the growth of the virtualization security market throughout the analysis period. However, a lack of skilled IT experts in virtualization security may impede the growth of the market during the forecast timeframe.


Segments of the Global Virtualization Security Market

The report has divided the market into segments namely, component, deployment, enterprise size, end-user, and region.

Component: Solution Sub-Segment to be Most Lucrative

The solution sub-segment is expected to garner a revenue of $4,955.9 million and is predicted to continue steady growth during the analysis period. This is mainly due to the rising threat of cyber-attacks all across the globe. In addition, the rapid growth of cloud computing and the expanding use of virtualization technology are predicted to upsurge the growth of the virtualization security market sub-segment during the analysis period.

Check out all Information and communication technology & media Industry Reports: https://www.researchdive.com/information-and-communication-technology-and-media

Deployment: Cloud Sub-Segment to be Most Profitable

The cloud sub-segment is predicted to generate a revenue of $4,332.0 million during the forecast period. This is mainly because of the improved efficiency and flexibility of using cloud computing across businesses. Moreover, the emerging practice of improving system security through cloud computing and the increasing utilization of cloud computing across businesses to avoid platform vulnerabilities are expected to fortify the growth of the market sub-segment over the estimated timeframe.

Access Varied Market Reports Bearing Extensive Analysis of the Market Situation, Updated With The Impact of COVID-19: https://www.researchdive.com/covid-19-insights

Enterprise Size: Large Enterprises Sub-Segment to be Most Beneficial

The large enterprises sub-segment is predicted to generate a revenue of $4,384.2 million over the analysis period. This is largely because of the increased flexibility, robustness, and network security of enterprise cloud computing. In addition, organizations can access security tools such as access management and cloud security monitoring and can implement network-wide identity with enterprise cloud. This factor is expected to boost the growth of the virtualization security market sub-segment over the analysis period.

End-User: IT & Telecommunication Sub-Segment to be Most Productive

The IT & telecommunication sub-segment is anticipated to generate a revenue of $1,291.5 million over the forecast period. This is due to the significant impact of cloud computing on the IT, technology, and business sectors. Furthermore, the unexpected jump in data traffic due to the global pandemic, the rise of cloud-native 5G technology, rising usage of broadband services, and increasing customer demands for security solutions are the factors expected to fuel the growth of the virtualization security market sub-segment throughout the analysis timeframe.

Region: North America Region Expected to Have the Maximum Market Share

The North America region is expected to generate a revenue of $2,430.5 million and is predicted to dominate the market during the forecast period. This is largely because of the strong presence of technical professionals and substantial IT firms in this region. Moreover, the growing transformation of traditional network and security workloads into virtualized computation, aided by the virtualization of security and network activities, is predicted to amplify the growth of the market sub-segment during the analysis period.

Key Players of the Global Virtualization Security Market

1. IBM
2. Fortinet Inc.
3. Cisco Systems, Inc.
4. Citrix Systems, Inc.
5. Trend Micro
6. VMware
7. Sophos Ltd
8. Juniper Networks, Inc.
9. Broadcom Corporation
10. Check Point Software Technologies, Ltd

These players are widely working on the development of new business strategies, such as mergers and acquisitions and product development, to acquire leading positions in the global industry.

For instance, in August 2020, Intel, a leading American multinational corporation and technology company, announced its collaboration with VMware, a renowned cloud computing and virtualization technology company, on an integrated software platform for virtualized Radio Access Networks (RAN). With this collaboration, the companies aimed to accelerate the rollout of LTE and future 5G networks.

Further, the report also presents important aspects including SWOT analysis, product portfolio, latest strategic developments, and the financial performance of the key players. Click Here to Get Absolute Top Companies Development Strategies Summary Report.

TRENDING REPORTS WITH COVID-19 IMPACT ANALYSIS

Point of Sale Software Market: https://www.researchdive.com/8423/point-of-sale-software-market

Digital Vault Market: https://www.researchdive.com/5497/digital-vault-market

Control Tower Market: https://www.researchdive.com/8491/control-towers-market

Read more here:
Global Virtualization Security Market Expected to Generate a Revenue of $6,986.3 Million, Growing at a Healthy CAGR of 13.6% During the Forecast...

Read More..

5 Ways to Improve Data Management in the Cloud – ITPro Today

Managing data can be challenging in any environment. But data management in the cloud is especially difficult, given the unique security, cost and performance issues at play. With that reality in mind, here are some tips to help IT teams optimize cloud data management and strike the right balance among the various competing priorities that shape data in public, private or hybrid cloud environments.

Before delving into best practices for cloud data management, let's briefly discuss why managing data in the cloud can be particularly challenging. The main reasons include:

Those are the problems. Now, let's look at five ways to tackle them.

A basic best practice for striking the right balance between cloud storage costs and performance is to use data storage tiers. Most public cloud providers offer different storage tiers (or classes, as they are called on some clouds) for at least their object storage services.

The higher-cost tiers offer instant access to data. With lower-cost tiers, you may have to wait some amount of time--which could range from minutes to hours--to access your data. Data that doesn't require frequent or quick access, then, can be stored much more cheaply using lower-cost tiers.
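
As an illustration of tiering in practice, here is a minimal boto3 sketch that adds a lifecycle rule to an S3 bucket so colder objects migrate to cheaper storage classes automatically. The bucket name, prefix and day thresholds are assumptions for illustration, not recommendations.

```python
# Minimal sketch of automated tiering on AWS S3 with boto3: objects under the
# "logs/" prefix move to Infrequent Access after 30 days and Glacier after 90.
# The bucket name and prefix are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```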

For many teams, object storage services like AWS S3 or Azure Blob Storage are the default solution for storing data in the cloud. These services let you upload data in any form and retrieve it quickly. You don't have to worry about structuring the data in a particular way or configuring a database.

The downside of cloud object storage is that you usually have to pay fees to interact with the data. For instance, if you want to list the contents of your storage bucket or copy a file, you'll pay a fee for each request. The request fees are very small--fractions of a penny--but they can add up if you are constantly accessing or modifying object storage data.

You don't typically have to pay special request fees to perform data operations on other types of cloud storage services, like block storage or cloud databases. Thus, from a cost optimization perspective, it may be worth forgoing the convenience of object storage in order to save money.

One of the key security challenges that teams face when managing cloud data is the risk that they don't actually know where all of their sensitive data is within cloud environments. It can be easy to upload files containing personally identifiable information or other types of private data into the cloud and lose track of it (especially if your cloud environment is shared by a number of users within your organization, each doing their own thing with few governance policies to manage operations).

Cloud data loss prevention (DLP) tools address this problem by automatically scanning cloud storage for sensitive data. Public cloud vendors offer such tools, including Google Cloud DLP and AWS Macie. There are also third-party DLP tools, like Open Raven, that can work within public cloud environments.

Cloud DLP won't guarantee that your cloud data is stored securely--DLP tools can overlook sensitive information--but it goes a long way toward helping you find data that is stored in an insecure way before the bad guys discover it.
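
For example, a one-time Amazon Macie classification job can be started with a few lines of boto3. This is a hedged sketch: the account ID and bucket name below are placeholders, and Macie must already be enabled in the account.

```python
# Sketch: kick off a one-time Amazon Macie classification job against a bucket
# suspected of holding unreviewed uploads. Account ID and bucket name are
# placeholders; Macie must already be enabled in the account.
import boto3

macie = boto3.client("macie2")

response = macie.create_classification_job(
    jobType="ONE_TIME",
    name="find-sensitive-uploads",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["example-shared-uploads"]}
        ]
    },
)
print("Started Macie job:", response["jobId"])
```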

Data egress--which means the movement of data out of a public cloud environment--is the bane of cloud data cost and performance optimization. The more egress you have, the more you'll pay, because cloud providers bill for every gigabyte of data that moves out of their clouds. Egress also leads to poorer performance due to the time it takes to move data out of the cloud via the Internet.

To mitigate these issues, make data egress mitigation a key priority when designing your cloud architecture. Don't treat egress costs and performance degradations as inevitable; instead, figure out how to store data as close as possible to the applications that process it or the users who consume it.

In addition to allowing you to store data, all of the major clouds now also let you process it using a variety of managed data analytics services, such as AWS OpenSearch and Azure Data Lake Analytics.

If you want to analyze your data without having to move it out of the cloud (and pay those nasty egress fees), these services may come in handy. However, you'll typically have to pay for the services themselves, which can cost a lot depending on how much data you process. There may also be data privacy issues to consider when analyzing sensitive cloud data using a third-party service.

As an alternative, you can consider installing your own, self-managed data analytics platform in a public cloud, using open source tools like the ELK Stack. That way, you can avoid egress by keeping data in the cloud, without having to pay for a third-party managed service. (You'll pay for the cloud infrastructure that hosts the service, but that is likely to cost much less than a managed data analytics service.)
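
As a rough sketch of that approach, the query below runs against a self-hosted Elasticsearch node (the search component of the ELK Stack) using the official Python client. The endpoint, index name and field names are assumptions for illustration.

```python
# Sketch of querying a self-hosted Elasticsearch node with the official Python
# client (elasticsearch-py 8.x keyword style), so analysis stays inside your
# cloud account. Assumes a node at http://localhost:9200 and an "app-logs" index.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Count error-level log events from the last 24 hours.
result = es.search(
    index="app-logs",
    query={
        "bool": {
            "must": [{"match": {"level": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-24h"}}}],
        }
    },
    size=0,
)
print("Errors in the last 24h:", result["hits"]["total"]["value"])
```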

The bottom line here: Managed cloud data analytics services may be useful, but deploy them wisely.

Like many other things, data management is just harder when you have to do it in the cloud. The good news is that, by being strategic about which cloud storage services you use, how you manage data in the cloud and how you factor data management into your cloud architecture, you can avoid the cost, performance and security pitfalls of cloud data.

Read more from the original source:
5 Ways to Improve Data Management in the Cloud - ITPro Today

Read More..

Cohere partners with Google Cloud to train large language models using dedicated hardware – VentureBeat


Google Cloud, Google's cloud computing services platform, today announced a multi-year collaboration with startup Cohere to accelerate the adoption of natural language processing (NLP) by businesses by making it more cost-effective. Under the partnership, Google Cloud says it'll help Cohere establish computing infrastructure to power Cohere's API, enabling Cohere to train large language models on dedicated hardware.

The news comes a day after Cohere announced the general availability of its API, which lets customers access models that are fine-tuned for a range of natural language applications, in some cases at a fraction of the cost of rival offerings. "Leading companies around the world are using AI to fundamentally transform their business processes and deliver more helpful customer experiences," Google Cloud CEO Thomas Kurian said in a statement. "Our work with Cohere will make it easier and more cost-effective for any organization to realize the possibilities of AI with powerful NLP services powered by Google's custom-designed [hardware]."

Headquartered in Toronto, Canada, Cohere was founded in 2019 by a pedigreed team including Aidan Gomez, Ivan Zhang, and Nick Frosst. Gomez, a former intern at Google Brain, coauthored the academic paper "Attention Is All You Need," which introduced the world to a fundamental AI model architecture called the Transformer. (Among other high-profile systems, OpenAI's GPT-3 and Codex are based on the Transformer architecture.) Zhang, alongside Gomez, is a contributor at FOR.ai, an open AI research collective involving data scientists and engineers. As for Frosst, he, like Gomez, worked at Google Brain, publishing research on machine learning alongside Turing Award winner Geoffrey Hinton.

In a vote of confidence, even before launching its commercial service, Cohere raised $40 million from institutional venture capitalists as well as Hinton, Google Cloud AI chief scientist Fei-Fei Li, UC Berkeley AI lab co-director Pieter Abbeel, and former Uber autonomous driving head Raquel Urtasun.

Unlike some of its competitors, Cohere offers two types of English NLP models, generation and representation, in Large, Medium, and Small sizes. The generation models can complete tasks involving generating text--for example, writing product descriptions or extracting document metadata. By contrast, the representational models are about understanding language, driving apps like semantic search, chatbots, and sentiment analysis.

To keep its technology relatively affordable, Cohere charges for access on a per-character basis, based on the size of the model and the number of characters apps use (ranging from $0.0025 to $0.12 per 10,000 characters for generation and $0.019 per 10,000 characters for representation). Only the generation models charge on both input and output characters, while the other models charge on output characters. All fine-tuned models, meanwhile--i.e., models tailored to particular domains, industries, or scenarios--are charged at two times the baseline model rate.
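
To put those published rates in perspective, here is a toy cost estimator. It simply applies the per-10,000-character prices quoted above; it is not Cohere's billing logic, and the model labels are illustrative.

```python
# Toy estimator using the per-10,000-character rates quoted above -- not
# Cohere's billing code. Fine-tuned models are billed at 2x the base rate.
RATE_PER_10K = {
    "generation_small": 0.0025,   # low end of the quoted generation range
    "generation_large": 0.12,     # high end of the quoted generation range
    "representation": 0.019,
}

def estimated_cost(characters: int, model: str, fine_tuned: bool = False) -> float:
    rate = RATE_PER_10K[model] * (2 if fine_tuned else 1)
    return characters / 10_000 * rate

# Example: 5 million characters through a fine-tuned large generation model.
print(f"${estimated_cost(5_000_000, 'generation_large', fine_tuned=True):.2f}")
```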

The partnership with Google Cloud will grant Cohere access to dedicated fourth-generation tensor processing units (TPUs) running in Google Cloud instances. TPUs are custom chips developed specifically to accelerate AI training, powering products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs.

The partnership will run until the end of 2024, with options to extend into 2025 and 2026. "Google Cloud and Cohere have plans to partner on a go-to-market strategy," Gomez told VentureBeat via email. "We met with a number of cloud providers and felt that Google Cloud was best positioned to meet our needs."

Cohere's decision to partner with Google Cloud reflects the logistical challenges of developing large language models. For example, Nvidia's recently released Megatron 530B model was originally trained across 560 Nvidia DGX A100 servers, each hosting 8 Nvidia A100 80GB GPUs. Microsoft and Nvidia say that they observed between 113 and 126 teraflops per second per GPU while training Megatron 530B, which would put the training cost in the millions of dollars. (A teraflop rating measures the performance of hardware, including GPUs.)

Inference--actually running the trained model--is another challenge. On two of its costly DGX SuperPod systems, Nvidia claims that inference (e.g., autocompleting a sentence) with Megatron 530B takes only half a second. But it can take over a minute on a CPU-based on-premises server. While cloud alternatives might be cheaper, they're not dramatically so--one estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year.

Cohere rival OpenAI trains its large language models on an AI supercomputer hosted by Microsoft, which invested over $1 billion in the company in 2020, roughly $500 million of which came in the form of Azure compute credits.

In Cohere, Google Cloud--which already offered a range of NLP services--gains a customer in a market that's growing rapidly during the pandemic. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third (33%) said that their spending climbed by more than 30%.

"We're dedicated to supporting companies, such as Cohere, through our advanced infrastructure offering in order to drive innovation in NLP," Google Cloud AI director of product management Craig Wiley told VentureBeat via email. "Our goal is always to provide the best pipeline tools for developers of NLP models. By bringing together the NLP expertise from both Cohere and Google Cloud, we are going to be able to provide customers with some pretty extraordinary outcomes."

The global NLP market is projected to be worth $2.53 billion by 2027, up from $703 million in 2020. And if the current trend holds, a substantial portion of that spending will be put toward cloud infrastructure benefiting Google Cloud.

See more here:
Cohere partners with Google Cloud to train large language models using dedicated hardware - VentureBeat

Read More..

Scalable Modular Data Centers and the Race to ROI – Data Center Frontier

To ensure your data center design is modular and scalable, it is essential to select scalable equipment. Switchgear, uninterruptible power supplies (UPS), power distribution units (PDU), and remote power panels (RPP) are all examples of scalable equipment. (Source: ABB)

Last week, we continued our special report series on how physical infrastructure, the data center, and the cloud are keeping up with new modular solutions delivery and streamlined operational support. In our final article in the series, we'll examine some solution architectures for scalable, modular data center designs.

With the modular market developing in the industry, some tremendous innovation and engineering design efforts have been put into solutions. The modular market is maturing, with even more large enterprises actively deploying the modular data center platform.

To that extent, there is already quite a bit of industry adoption as it relates to modular solutions:

With all of this in mind, there are still some hesitations related to modular adoption. These modular myths date back to the first generation of modular deployments. Let's examine some of these myths and where today's modular modernization and the race to data center ROI impact digital infrastructure.

MODULAR FACT

Modular solutions can be seen as intelligently applying capital to the data center in line with changing technology and IT requirements. Instead of a $50 million project on day one, ten $5 million modules can be built as they are needed. This enables capacity to be added to the data center incrementally.

MODULAR FACT

Here's another critical point: you don't have to worry about a lack of sub-contractors and trade professionals. Due to the nature of the design and standardized module architecture, you can have your equipment and facility up and running with minimal requirements for contractor support. The reason for this is that your equipment comes delivered as factory-built units. These modular units are pre-assembled, tested in a controlled factory environment, and delivered directly to the construction site. These efforts minimize the need for additional onsite construction and additional personnel.

As the modular data center market matures and new technologies are introduced, data center administrators will need a new way to manage their infrastructure. There will be an immediate need to transform complex data center operations into simplified plug & play delivery models. This means lights-out automation, rapid infrastructure assembly, and even further simplified management. The next iteration of DCIM aims to work more closely with modular ecosystems to remove the many challenges which face administrators when it comes to creating a road map and building around efficiencies. In working with the future of DCIM, expect the following:

MODULAR FACT

Another critical consideration is working with a modular partner that can support a healthy supply chain. When working with modular designs, make sure you have a partner that can think locally and deliver globally.


Much like anything in the technology market, solutions continue to change and evolve. Many of the legacy perspectives on modular solutions revolve around an older generation of modular design. Today, modular data centers are more efficient, denser, and a lot easier to deploy. Let's examine some solution architectures for scalable, modular data center designs.

To ensure your data center design is modular and scalable, it is essential to select scalable equipment. Switchgear, uninterruptible power supplies (UPS), power distribution units (PDU), and remote power panels (RPP) are all examples of scalable equipment. Get this right and specifying future expansions will be time and cost-efficient.

With this in mind, let's look at some emerging Gen 2 modular design considerations.

Digitalization within the modular industry is a significant design consideration for Gen 2 modular designs. Systems of this nature are much more scalable because changes to the configuration can be done remotely using software, as opposed to changing out hardware or reassembling wiring.

IEC 61850 is a well-established communications standard for substation automation. The high reliability, integrated diagnostics, fine selectivity, shorter fault reaction times, and better fault tolerance delivered by IEC 61850 make it ideal for data center power infrastructure.

IEC 61850 AND MODULAR DATA CENTERS

The world is experiencing a data explosion. Not only is the quantity of data increasing at a dizzying rate, but the extent to which society relies on that data is also growing by the day. These trends elevate the data center to the status of critical infrastructure in many countries. If a data center fails, chaos ensues, which makes a reliable power supply indispensable. Generally, data centers have well-thought-out power backup provisions such as uninterruptible power supplies (UPSs), diesel generators, etc. By employing IEC 61850-enabled devices and IEC 61850-based GOOSE (generic object-oriented substation event) communication to automate the data center power infrastructure, significant improvements can be made: better power supply reliability, greater operational control, and reduced cost, for example.

GEN 2 MODULAR CONCEPTS AND AUTOMATION

Working with the next iteration of modular data center design means eliminating wasteful processes and operations. In many cases, this means adopting new solutions around infrastructure automation.

IEC 61850 is eminently suited to data center power infrastructure automation. Using just one protocol can form the bedrock of a complete electrical design concept that includes the full protection, control and supervision system, and cybersecurity. By using optical fiber instead of copper wire, wiring costs are lowered, space requirements are substantially reduced, and safety is increased. IEC 61850 also delivers the capability to monitor and control IEDs remotely. The convenience is that devices supplied by different manufacturers can communicate with each other without custom-designed gateways or other engineering-intensive complications.

Taking a broader perspective, the IEC 61850 standard allows digitalization of the data center power system in a way that opens it to collaboration with other digital entities in the data center, such as a building management system (BMS), power management system (PMS), data center infrastructure management (DCIM) or ABB Ability Data Center Automation.

These are all essential parts of the final goal: the single pane of glass that orchestrates the entire data center. Decathlon for Data Centers, for instance, gives power and cooling visibility, and IEC 61850's open protocols allow integration of existing equipment and systems. With IEC 61850 peer-to-peer communication capabilities in components like ABB's Relion relays and Emax circuit breakers, one can go from the DCIM system controlling or supervising software to having real-time interaction with the subsystem (such as a UPS breaker) itself.

The IEC 61850 architecture is the ideal standard for data centers, as it delivers increased reliability, finer selectivity, shorter fault reaction times, and the possibility to implement fault tolerance and integrated diagnostics, as well as a host of other advantages.

Download the full report, Cloud and the Data Center: How Digital Modernization is Impacting Physical Modular Infrastructure, courtesy of ABB for two exclusive case studies and tips for getting started on the modular journey.

Follow this link:
Scalable Modular Data Centers and the Race to ROI - Data Center Frontier

Read More..

Stocks making the biggest moves after hours: Nordstrom, Gap, VMware, HP and more – CNBC

Shoppers leave a Nordstrom store on May 26, 2021 in Chicago, Illinois.

Scott Olson | Getty Images News | Getty Images

Check out the companies making headlines after the bell:

Nordstrom -- Shares of the department store chain tumbled roughly 20% following its quarterly results. Nordstrom reported earnings of 39 cents per share, well below the 56 cents expected by analysts. Labor costs ate into profits, and sales at Nordstrom Rack, its off-price division, have struggled to return to pre-pandemic levels, the company reported.

Gap -- The apparel retailer saw its shares drop more than 15% after missing profit and revenue expectations for its fiscal third quarter. It also slashed its full-year revenue outlook from a 30% increase to a 20% increase, compared with analysts' expectations of a 28.4% year-over-year gain, according to Refinitiv.

HP -- The computer hardware company saw shares jump about 6% following its quarterly results. HP reported earnings of 94 cents per share on revenue of $16.68 billion, beating analysts' estimates of 88 cents per share on revenue of $15.4 billion, according to Refinitiv. It also raised its first-quarter guidance to a range of 99 cents to $1.05 per share, compared with the 94 cents per share expected by analysts.

VMware -- Shares of cloud computing company VMware got a 1% lift after the company reported a quarterly beat on the top and bottom lines. VMware recorded $1.72 per share in earnings, beating expectations by 18 cents, and $3.19 billion in revenue, topping estimates of $3.12 billion.

Autodesk -- The software company's shares fell more than 13% despite reporting a beat on the top and bottom lines for its most recent quarter. Autodesk issued fourth-quarter earnings and revenue guidance that were largely below estimates.

View post:
Stocks making the biggest moves after hours: Nordstrom, Gap, VMware, HP and more - CNBC

Read More..

Why Dream11 wants to float in the cloud – ETCIO.com

Dream Sports and its brands, such as Dream11, FanCode and Dream Capital, have a collective user base of over 140 million. To cater to such a humongous user base with peak concurrency reaching up to 5.5+ million, the cloud became a very obvious choice for the organisation from the beginning. With so much traffic data coming through, every decision is data-driven, and cloud adoption ensures that this objective is achieved in a seamless manner.

"With the hyper-growth that we were (and are) seeing on a year-on-year basis, we needed a highly scaled solution that can be elastic as per our traffic patterns and data volumes. We were also looking to get reliable out-of-the-box software/infrastructure as a service so that we could focus on our core product. Cloud technologies fitted the bill perfectly," said Abhishek Ravi, CIO, Dream Sports.

According to Ravi, some key aspects that are considered for going with the cloud are scalability, elasticity, performance, reliability, resilience, security and cost.

"Cloud has really helped us to plan, develop and scale our product without worrying about the performance, availability and cost of ownership. We could quickly test out our features, scale tests in load/stress environments and ship them to our users to give them the best experience. With managed services available on the cloud, our teams could and continue to focus on core products and thus, derive maximum efficiency," Ravi added.

The company intends to remain cloud-native in the future too. "With our data volumes increasing day by day and newer solutions evolving in cloud space, we want to use the cloud at the optimum."

The company's strategy is multi-cloud, as it selects the right solution for different use cases as per the requirements.

Ravi believes that a multi-cloud strategy should be well thought through so that the best cloud technology for the right use case can be provisioned. It also helps to optimise the infrastructure and thus, the cost.

In the coming months, Dream Sports aims to upscale its tech advancements and expand tech infrastructure.

"We leverage Big Data, Analytics, Artificial Intelligence and Machine Learning to focus on every aspect that makes sports better. We are heavily experimenting with push architecture to serve information to the users in real-time. We are also very advanced on containerization, which has reduced our infrastructure requirements drastically," Ravi maintained.

The company is also working on several tech initiatives, such as a concurrency prediction model to predict hourly concurrency on the Dream11 platform, and a fraud detection system to identify and mitigate users creating multiple/duplicate accounts on the platform to abuse referral or promotional cash bonus schemes.

To ensure a quality user experience during peak traffic, Dream Sports also stress tests every feature that is released for a smooth experience at scale. "We have a testing framework that simulates any kind of traffic load with real life-like patterns. This gives a high degree of assurance that the backend would behave exactly as expected," Ravi added.
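
The article does not describe Dream Sports' framework in detail, but a bare-bones traffic-ramp simulation along these lines can be sketched in Python with asyncio and aiohttp. The target URL and concurrency steps below are placeholders, not anything specific to Dream11.

```python
# Generic load-ramp sketch with asyncio + aiohttp -- not Dream Sports' tooling.
# The target URL and concurrency steps are placeholders for illustration.
import asyncio
import aiohttp

TARGET = "https://staging.example.com/health"

async def hit(session: aiohttp.ClientSession) -> int:
    # Fire one request and report its HTTP status.
    async with session.get(TARGET) as resp:
        return resp.status

async def ramp(steps=(100, 500, 1000)) -> None:
    async with aiohttp.ClientSession() as session:
        for concurrency in steps:
            # Launch a burst of concurrent requests and tally the successes.
            statuses = await asyncio.gather(
                *(hit(session) for _ in range(concurrency)), return_exceptions=True
            )
            ok = sum(1 for s in statuses if s == 200)
            print(f"{concurrency} concurrent requests -> {ok} OK")

if __name__ == "__main__":
    asyncio.run(ramp())
```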

Read more here:
Why Dream11 wants to float in the cloud - ETCIO.com

Read More..

How data skills can amplify corporate action to save the planet | Greenbiz – GreenBiz

The historic COP26 Summit in Glasgow sparked big, bold statements by political and business leaders about their commitment to taking action, but the real challenge will be the doing.

How will companies keep their promises and actually achieve them? With all the different measures that can reduce carbon emissions, it may not seem obvious, but data skills can play a key role in accelerating ambitious sustainability targets and helping to save the planet.

Many businesses nowadays have incredibly ambitious sustainability targets. Take Burberry, for example, which has pledged to cut emissions across its extended supply chain by 46 percent by 2030 and committed to being carbon positive by 2040. Meanwhile, Unilever has set out to eliminate direct greenhouse gas emissions from its operations and is aiming to achieve net-zero emissions from its products up to the point of sale by 2039. But big businesses such as these won't sacrifice profit, so it's vital that sustainable business practices offer both commercial and environmental benefits.

From my role working at Decoded, which trains business leaders and corporate executives on their data and digital skills, I have seen countless examples of how data, artificial intelligence and machine learning can make a company not only more efficient but also more sustainable.

Take Google DeepMind's almost folkloric story. Back in 2016, DeepMind set out to reduce the amount of energy required to cool Google's data centers. For context, the technology company uses around 12.4 terawatt-hours of electricity per year, the equivalent of powering over 3.3 million homes in the U.K. for a year.

DeepMind's all-purpose algorithm subsequently devised a real-time, adaptive system that cut the cost of cooling by 40 percent and the overall energy consumption by 15 percent. This both saved Google a significant amount of money and reduced its environmental impact. (Google has set out to operate on carbon-free energy 24/7 by 2030.)


It isn't just the likes of tech titans such as Google that are ramping up their employees' data skills. For shipping and logistics businesses such as UPS, more efficient delivery routes save drivers time, reduce fuel use and ultimately increase customer satisfaction. This is especially the case at a time when more packages are being delivered than ever before due to an increase in online shopping as a result of the pandemic.

In the second quarter of 2020, for example, UPS delivered over 21 million packages on average every day, a 22.8 percent increase on the year before. Shaving off just one mile for each of its drivers per day could save the company up to $50 million a year, and this is where machine learning steps in. By using a proprietary tool called Orion, which uses advanced algorithms to create optimal routes for delivery drivers from the data supplied by customers, drivers can alter their routes on the fly based on changing weather conditions or accidents. UPS estimates this insight could reduce delivery miles by 100 million per year, the equivalent reduction in carbon emissions as taking 21,000 passenger cars off the road for a year.

Data skills can transform manual processes into automated ones, leading to huge efficiencies, ensuring that employees can focus their time on truly impactful work.

A relevant case study from our work at Decoded was with a retailer, whereby an employee set out to automate a process previously carried out in Excel that took 20 hours weekly to complete. By creating an automated process using database and programming tools, the business was able to reduce a process from 20 minutes to just 10 minutes as well as create something that could be used business-wide.

Using data skills enabled the retailer to optimize the process, meaning more goods could be shipped through the distribution center. This led to more efficient shipping, which has the ability to create a CO2 saving of over 1.2 million metric tons per season -- the equivalent of more than 2 million people flying from London to New York.

The reality is that that employee didn't set out with the goal of reducing CO2 emissions with their new data skills. They simply wanted to automate a very manual process and save their team time; the environmental benefit was a very positive side effect of their original intention.

Using data skills and machine learning can uncover inefficiencies in processes that humans would never spot or would take a significant amount of time to find. Data analytics and AI can mitigate human error, meaning tasks don't need to be redone, saving energy in the process. They can also spot opportunities to use fewer raw materials. It's important to acknowledge it can take a significant amount of energy to train machine learning algorithms, but over time the energy savings should outweigh this.

It's important to acknowledge how far the business world has come in moving environmental responsibility to the mainstream. But the urgency of this moment means corporate leaders must back their commitments with genuine action and ensure their people are given the tools and skills to achieve these ambitious goals. As corporations move to accelerate action and implementation after the pivotal COP26 talks, it's time to realize the commercial and environmental potential of machine learning and data skills.

Originally posted here:
How data skills can amplify corporate action to save the planet | Greenbiz - GreenBiz

Read More..

How to deep fry a turkey without burning your house down and more Thanksgiving food tips – Alton Telegraph

On Tuesday, the Illinois Office of the State Fire Marshal issued guidance for cooking on Thanksgiving, including tips for preparing food not just during the holiday season, but all year.

Thanksgiving is the leading day for home fires involving cooking equipment, OSFM said, with four times the average number occurring. Ranges and cook-tops account for almost three out of every five home fires reported involving cooking, with ovens accounting for 13% of those fires.

According to the U.S. Fire Administration, each year from 2017 to 2019, an estimated average of 2,300 residential building fires were reported to fire departments in the U.S. on Thanksgiving Day. These fires caused an estimated annual average of five deaths, 25 injuries and $26 million in property loss. U.S. fire departments respond to an average of 166,100 home fires per year involving cooking equipment.

"Thanksgiving has arrived and that means many people will be working overtime in their kitchens. I encourage everyone to check to make sure your cooking equipment is working properly and call a professional to fix them if needed," State Fire Marshal Matt Perez said in a statement. "By following a few simple fire safety tips, your holiday will be enjoyable and free from a fire-related incident."

"Anytime food and flames are involved, we must always remember that fire safety is important. While deep-frying a turkey may add irresistible flavor and juiciness to your Thanksgiving menu, there is also the potential of fire and serious injury when doing so," Chicago Fire Commissioner Annette Nance-Holt said in a statement.

When frying a turkey, it's important not to overfill the fryer with oil. To gauge the right amount, fill the pot you plan to use with water, place the turkey in it, and note how much water it takes without spilling. This will tell you how much oil is needed without causing oil to spill out when you are ready to fry, which could lead to a fire.

Additionally, use the turkey fryer outdoors only, make sure the turkey is completely thawed before frying and use long cooking gloves to protect hands and arms when handling the pot.

RELATED: How to thaw your turkey safely and efficiently

Tips to keep in mind when preparing food year-round include:
- Never leave food that you are frying, boiling, grilling or broiling unattended! If you leave the kitchen, even for a short amount of time, turn off the stove.
- Create a "Kid-Free Zone" of at least three feet around the stove or anywhere you are preparing hot food or drinks.
- Keep the area around the stove clear of towels, papers, potholders or anything that can burn.
- If you are simmering, baking, or roasting food, check it regularly, remain in the home while food is cooking, and use a timer to remind you when food is ready.
- If there is a fire in the oven, keep the door shut and turn off the heat.
- Smother small flames in a pan by sliding a lid over the pan. Turn off the burner and leave the lid over the pan while it cools.
- If you have any doubt fighting a small fire, just get out! Call 9-1-1 or your emergency number from outside the home.

Continued here:
How to deep fry a turkey without burning your house down and more Thanksgiving food tips - Alton Telegraph

Read More..

When Denver Lost Its Mind Over Youth Crime – The New Republic

The next day's front-page photo of a skinny first-grader on life support, his swollen head obscured in a nest of bandages, became the iconic image for the Summer of Violence, the only Black victim among the high-profile crimes to come. The press chronicled every moment of Broderick's recovery, something Phason remembers even today with gratitude.

Denver's top officials flocked to the mother's side in a show of political unity. Looking uncharacteristically rattled, Mayor Webb held an emotional press conference in front of Phason's home. He declared a war on gangs, asking Black and Latino neighborhood residents to suspend their normal distrust of aggressive policing to end the senselessness. The editor of The Denver Post, the late Gil Spencer, was moved to write a personal column linking the torment he'd suffered from almost losing his own eight-year-old child to an illness to that of Broderick's mother. "It would make some kind of sense to force-march these gang creatures into the hospital room," Spencer wrote, "with his head still bloody, his body a forest of tubes."

A week after the shooting, the governor, the mayor, the police chief, and district attorney linked arms with Phason for a well-publicized Juneteenth march titled "Save Our Children" and organized by the Black community. More than 1,000 supporters dressed in purple to signify the blending of the red and blue gang colors and marched to Five Points, Denver's historic "Harlem of the West." Phason remembered the ugly racist taunts hurled from bystanders along the way: "Kill each other, n-----s!" "I had never seen anything like it in all my life." At one point, a young Crip approached her with an apology and a bouquet. "I was trembling, about to cry. I thought there could be a gun up in there," she remembered. "It was only flowers."

Like Ignacio in the zoo shooting, Broderick recovered, though he is still slightly impaired on his left side. Again, no arrests were made. Law enforcement's failure compounded public anxiety, given the intensity of the media attention. By summer's end, Broderick Bell's name had appeared in 96 Denver newspaper stories, three times on the front page. By the end of 2009, Broderick's story was mentioned 182 times and featured in eight more front-page stories. Nationally, his case was highlighted by New York Times columnist Bob Herbert in an August 1993 op-ed about the Summer of Violence titled "KILLING JUST FOR WHATEVER." Denver also made CBS News' list of the nine most violent cities in its 1995 documentary In the Killing Fields of America, produced by Dan Rather, Mike Wallace, and Ed Bradley.

Read the original post:
When Denver Lost Its Mind Over Youth Crime - The New Republic

Read More..