
Lacework lands $1.3B to expand its cloud cybersecurity platform – VentureBeat


Lacework, a developer of automated containerized workload defense and compliance solutions, today announced that it closed a $1.3 billion funding round, valuing the company at over $8.3 billion post-money. Sutter Hill Ventures, Altimeter Capital, D1 Capital Partners, and Tiger Global Management led the round with participation from Franklin Templeton, Counterpoint Global (Morgan Stanley), Durable Capital, General Catalyst, XN, Coatue, Dragoneer, Liberty Global, and Snowflake Ventures. Co-CEOs David Hatfield and Jay Parikh said the funding will support Lacework's product development efforts as the company expands its engineering and R&D initiatives.

As the pandemic prompts companies to move their operations online, many if not most face increasing cybersecurity challenges. According to an IDC report, 98% of companies surveyed in H1 2021 experienced at least one cloud data breach in the past 18 months. At the same time, 31% of respondents said they're spending more than $50 million per year on cloud infrastructure, opening them up to additional attacks if their cloud environments aren't configured correctly. In its 2021 Cloud Security Study, Thales found that only 17% of companies encrypt more than 50% of the sensitive data they host in cloud environments, despite the surge in ransomware attacks.

Lacework, which was founded in 2015 by Mike Speiser, Sanjay Kalra, and Vikram Kapoor, aims to close security gaps across DevOps and cloud environments by identifying threats targeting cloud servers, containers, and accounts. Its agent provides visibility into running processes and apps, using AI to detect anomalous behavior. Concurrently, the agent monitors for suspicious activities like unauthorized API calls and the use of management consoles and admin accounts, limiting access to vulnerable ports and enforcing least-privilege access.

Kalra previously worked at Cisco as a senior product manager. Kapoor spent six years in various roles at Oracle, overseeing work on the data layer and storage side of the business. Speiser, a managing director at Sutter Hill Ventures, was a founding investor in Lacework and remains an active member of the board.

Kapoor and Kalra founded Lacework with the goal of taking a data-driven approach to cloud security. "We view security as a data problem, and our platform is uniquely suited to solve that problem," Parikh told VentureBeat via email. "Traditional security solutions force companies to amass a patchwork of point solutions and then manually tell them what to watch for, resulting in an inefficient and ineffective security process. At Lacework, we use data to uncover security risks and threats."

Parikh describes Lacework's platform, which is built on top of Snowflake, as data-driven. By collecting and correlating data across an organization's public and private clouds, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform instances, Lacework attempts to identify the security events that matter, logging incidents to create a baseline against which future events can be measured.

"We believe our solution is uniquely suited to treating security as a data problem. Our approach to security as a data problem is unique," Parikh said. "[Lacework] uses unsupervised or autonomous machine learning, behavioral analytics, and anomaly detection to uncover unknown threats, misconfigurations, known bads, and outliers across [environments]. The platform automatically learns activities and behaviors that are unique to each of our customers' environments, creates a baseline, and surfaces unexpected changes so they can uncover potential issues and threats before they become significant problems."

Lacework offers continuous host vulnerability monitoring, preflight checks, and continuous integration and deployment automation workflows designed to expedite threat investigation. More recently, the company made available tools from Soluble, a platform it acquired that finds and fixes misconfigurations in infrastructure as code to automate testing and policy management. (Infrastructure as code, often shortened to IaC, enables developers to write code to deploy and manage hardware infrastructure.)

In a boon for Lacework, the cybersecurity industry shows no sign of slowing. Cybersecurity Ventures, which noted that the cybersecurity market grew by roughly 35 times from 2004 to 2017, recently predicted that global spending on cybersecurity products and services will exceed $1 trillion cumulatively over the five-year period from 2017 to 2021. During roughly the first half of 2021 alone, venture capitalists (VCs) poured $11.5 billion into cybersecurity startups as supply chain attacks and breaches ramped up. That easily surpassed the $7.8 billion total VCs pledged in all of 2020.

Over the past few months, Lacework, which claims to have hundreds of customers, has substantially expanded geographically as it makes a concerted push on marketing and customer acquisition. In October, it entered the Australian and New Zealand market, establishing an office in Sydney as a launchpad for further growth across Asia Pacific. And earlier in the year, Lacework announced it would make significant investments in building out Europe, Middle East, and Asia operations, including a European headquarters in Dublin, Ireland; regional offices in the U.K., France, and Germany; and an Amazon Web Services datacenter in Frankfurt, Germany.

"We are experiencing tremendous growth with no signs of slowing down. Our revenue continues to grow along with our customer base and employee base," Parikh added. "We plan to use this funding to extend our lead in the cloud security market by fueling product innovation that expands the company's total addressable market and pursuing additional strategic acquisitions, like the recently announced Soluble transaction. We'll also scale go-to-market strategies, growing our workforce and presence globally to better serve our customers."

To date, Lacework, which has more than 700 employees, has raised more than $1.85 billion in total capital. The company claims its latest funding round is the largest in security industry history.

Read the original post:
Lacework lands $1.3B to expand its cloud cybersecurity platform - VentureBeat


Blue Hill moves municipal computer service to the Cloud – The Weekly Packet

Blue Hill: Originally published in The Weekly Packet, November 18, 2021

by Jeffrey B. Roth

After weeks of dealing with issues related to updating TRIO software, Blue Hill town officials decided to move the town's municipal services platform to the Cloud, Town Administrator Shawna Ambrose told the select board at its November 15 meeting.

Several weeks ago, the town's IT techs and representatives of Harris Local Government, the company that created and markets the TRIO software, updated the town's computer servers. For a brief period, the upgrade appeared to be successful, but that changed a few days later, Ambrose said.

"We're moving to the Cloud this evening, after another terrible week of technology here at the town hall," Ambrose said. "That update should start around six o'clock, and the TRIO team will work for a few hours to get all the data on our server and then pushed into the Cloud."

The town relied on TRIO as the platform to register vehicles, collect taxes and perform many other local government services, Ambrose said. The purpose of the software update is to provide more functionality in the system.

Funds for first responders

In other business, Ambrose noted that she completed a survey more than a month ago that was issued to local municipalities by the Hancock County Commissioners. The purpose of the survey was to collect a head count of local EMS, firefighters, emergency dispatchers and other first responders as a preliminary step to apply for a matching funds grant through the federal American Rescue Plan Act. She said the matching funds would be used to provide hazard pay to first responders who worked throughout the COVID-19 pandemic.

"We participated in the survey and submitted data from the fire department, as well as for a potential match of funds for EMS workers. The towns are not being forced or even asked to do this; hopefully, there will be a match available," Ambrose said.

View post:
Blue Hill moves municipal computer service to the Cloud - The Weekly Packet


Check Out the Top Cloud-Based Manufacturing Tools for 2021 – Analytics Insight

The main factor driving the transition of traditional manufacturing towards cloud-based manufacturing is data visualization.

The manufacturing industry is modernizing its operations using technologies such as cloud computing, the Internet of Things (IoT), and virtualization. This requires extensive changes to production hardware and software, which is not feasible for all manufacturers. Moreover, apart from the cost, the domain expertise required to integrate manufacturing 4.0 technologies acts as a barrier for manufacturers. The main factor driving the transition towards cloud-based manufacturing is data visualization. This article lists the top cloud monitoring tools for manufacturers in 2021.

BMC helps in boosting multi-cloud operations performance and cost management. It helps measure end-user experience, monitor infrastructure resources, and detect problems proactively. It gives manufacturers the chance to develop an all-around cloud operations management solution. With BMC, you can plan, run, and optimize multiple cloud platforms, including Azure and AWS, among others. BMC also enables you to track and manage cloud costs, eliminate waste by optimizing resource usage, and deploy the right resources at the right price. You can also use it to break down cloud costs and align cloud expenses with business needs.

Sematext Cloud is a unified performance monitoring and logging solution available in the cloud and on-premises. It provides full-stack visibility through a single pane of glass by bringing together application and infrastructure monitoring, log management, tracing, real user, and synthetic monitoring. Sematext enables users to easily diagnose and solve performance issues and spot trends and patterns to deliver a better user experience.

New Relic aims at intelligently managing complex and ever-changing cloud applications and infrastructure. It can help you know precisely how your cloud applications and cloud servers are running in real time. It can also give you useful insights into your stack, let you isolate and resolve issues quickly, and allow you to scale your operations with usage. The system's algorithm takes into account many processes and optimization factors for all apps, whether mobile, web, or server-based. New Relic places all your data in one network monitoring dashboard so that you can get a clear picture of every part of your cloud. Some of the influential companies using New Relic include GitHub, Comcast, and EA.

Indian startup Elitia Tech provides a cloud-based manufacturing execution system. Their MES model is subscription-based, using infrastructure hosted and managed on the cloud, thereby eliminating on-premise hardware and consequent capital expenditure (CAPEX). The solution allows for fast implementation in small and large-scale deployments as well as real-time scalability. The MES solution helps owners and operators to reduce waste, inventory levels, and cycle times while protecting critical data to improve efficiency, quality, and customer satisfaction.

As the name suggests, Site24x7 is a cloud monitoring tool that offers round-the-clock services for monitoring cloud infrastructure. It provides a unified platform for monitoring hybrid cloud infrastructure and complex IT setups through an interactive dashboard. The monitoring tool integrates IT automation for real-time troubleshooting and reporting. Site24x7 monitors usage and performance metrics for virtual machine workloads.

Auvik is cloud-based network monitoring and management software that gives you true visibility and control. It offers functionality for automating network visibility and IT asset management, simplifies network performance monitoring and troubleshooting, and lets you automate configuration backup and recovery.

Italian start-up iProd creates an IoT tablet that connects to any machine and provides insights into its status. The iProd manufacturing optimization platform (MOP) collects, manages, and optimizes four operational areas for manufacturers: production technology, production planning and monitoring, preventive and extraordinary maintenance, and management of materials and tools. Additionally, iProd allows operators and managers to monitor production and control efficiency by using reporting, advanced tags, social collaboration, and smart widgets.


Read more:
Check Out the Top Cloud-Based Manufacturing Tools for 2021 - Analytics Insight


What Is Edge Computing, and How Can It Be Leveraged for Higher Ed? – EdTech Magazine: Focus on Higher Education

Edge Computing vs. Cloud Computing: Whats the Difference?

There's a common misconception that cloud and edge computing are synonymous because many cloud providers, such as Dell, Amazon Web Services and Google, also offer edge-based services. For example, an edge cloud architecture can decentralize processing power to a network's edge.

But there are key differences between cloud and edge computing. "You can use cloud for some of the edge computing journey," Gallego says. "But can you put edge computing in the cloud? Not really. If you put it back in the cloud, it's not closer to the data."

Gallego notes that while cloud services have been around for more than a decade, edge computing is still considered an emerging technology. As a result, colleges and universities often lack the in-house skills and capabilities to make use of this technology. If that's the case, an institution may want to work with a partner to help it get started.


The most common use case for edge computing is supporting IoT capabilities. By bringing servers closer to connected sensors and devices, institutions can leverage Big Data to gain actionable insights more quickly.

By placing clouds in edge environments, institutions can also cut costs by reducing the distance that data must travel. For an increasingly connected campus, edge computing can also help reduce bandwidth requirements.

As campuses prepare to support the next generation of students (the children of millennials), edge computing will play a key role in bolstering campus networks. Sometimes called Generation AI, this cohort will be using AI technologies in almost every aspect of their lives. To support an exponential amount of AI-enabled IoT technologies connecting to campus networks, universities and colleges will need 5G networks and mobile edge computing.


Edge solutions make it possible for post-secondary campuses to adopt what Gallego describes as a three-tiered computing model: on-premises, at the edge and in the cloud, with each fulfilling a specific purpose.

Onsite servers might be used to securely store confidential financial or research data, while the cloud underpins hybrid and remote learning frameworks. Edge computing, meanwhile, offers benefits for data-driven research, especially time-sensitive research projects that require immediate data processing.

View original post here:
What Is Edge Computing, and How Can It Be Leveraged for Higher Ed? - EdTech Magazine: Focus on Higher Education


Top Cloud Computing Jobs in India to Apply This November – Analytics Insight

You can apply for these cloud computing jobs

Cloud computing is the delivery of different services through the Internet. These resources include tools and applications like data storage, servers, databases, networking, and software. As long as an electronic device has access to the web, it has access to the data and the software programs to run it.

Skill Sets: Should have good communication skills, knowledge in cloud computing concepts and basic knowledge in Amazon web services, web developments in cloud.

Qualifications: Any UG or PG Degree

Skill Sets: Should have good communication skills, knowledge in cloud computing concepts and basic knowledge in Amazon Web Services, web developments in Cloud.

Qualifications: Any UG or PG Degree

Industry Type: IT Services & Consulting

Functional Area: Engineering Software

Employment Type: Full Time, Permanent

Role Category: Software Development

Education

UG: B.Tech/B.E. in Computers

PG: M.Tech in Computers

Job Description

Industry Type: Management Consulting

Functional Area: Engineering Software

Employment Type: Full Time, Permanent

Role Category: Quality Assurance and Testing

Education

UG: Any Graduate

PG: Post Graduation Not Required

This is a home-based or part-time job. You will have to support our projects in your free time or after your office hours. We are looking for experience in software development, AWS, cloud computing, Google Cloud, and Microsoft Azure.

Role: Cloud Consultant

Industry Type: IT Services & Consulting

Functional Area: Consulting

Employment Type: Part-Time, Freelance/Homebased

Role Category: IT Consulting

Education

UG: Any Graduate

Role: System Administrator / Engineer

Industry Type: Recruitment / Staffing

Functional Area: Engineering Hardware & Networks

Employment Type: Full Time, Permanent

Role Category: IT Network

Education

UG: Any Graduate

PG: Any Postgraduate

Cloud Engineer Requirements:

Role: Cloud System Administration

Industry Type: IT Services & Consulting

Functional Area: IT and Information Security

Employment Type: Full Time, Permanent

Role Category: IT Infrastructure Services


Continue reading here:
Top Cloud Computing Jobs in India to Apply This November - Analytics Insight


Protecting financial institutions from downtime and data loss – BAI Banking Strategies

In today's digital economy, a few minutes of downtime for critical applications and databases needed for online banking can be devastating: lost customer satisfaction, negative press and social media, drained IT resources, reduced end-user productivity, and more.

Be aware of four key threats to financial services organizations when evaluating business continuity plans: cyberattacks, systems failures, natural disasters and cloud outages. In the face of these threats, which applications and databases would incur the greatest cost to your organization were they to go offline?

Review your applications with other questions in mind: Would losing this system reduce employee productivity or disrupt operations? Would losing this system increase the workload of your IT team? Added work for your IT team could add to labor costs and costly delays to planned projects.

Other questions can reveal costs that may be harder to quantify, but are impossible to ignore. What would losing a customer-facing application cost in terms of customer satisfaction and reputation? Negative publicity or social-media standing? If this application or database is locked by ransomware, what will that cost in terms of public confidence? Similarly, what if downtime draws regulatory scrutiny?

Having used these questions to identify your most critical applications, consider the main threats they face and how best to protect them.

Banks and credit unions may face the challenge of protecting vital applications and data without dedicated cybersecurity experts on staff. There are some important steps every organization can take to improve cybersecurity, regardless of size or IT resources.

The cost of ransomware and other cyber threats justifies the investment in an expert audit of regulated data and any means of accessing it (including firewall weaknesses, routers, network access points, and servers), with recommended countermeasures specific to each weakness.

Document and communicate policies about the acceptable use of the company's computer equipment and network, both in the office and at home. Include clear restrictions for accessing and downloading sensitive data to local laptops and PCs, use of network access points, wireless security and best practices to avoid email-borne threats.

Finally, apply software solutions for protection: this includes workstation and laptop antispam software, as well as automated security systems that hunt, detect and manage defenses against threats throughout the system.

Component failure within your IT infrastructure (servers, storage, network routers, and so on) is inevitable. To mitigate the cost of failure, answer these three questions:

Your most critical applications, those that require a recovery point objective (RPO) of zero, a recovery time objective (RTO) of just 1-2 minutes, and true high availability (HA) of at least 99.99% annual application uptime, can be protected against hardware failure through failover clustering. For less critical applications and data, a simple backup or archiving plan may suffice.

Failover clustering provides redundancy for potential sources of system failure. Clustering software monitors application availability; if a threat is detected, the software moves application operations to a standby server, where operation continues with minimal downtime and near-zero data loss.
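To make that monitor-then-failover pattern concrete, here is a minimal, hypothetical Python sketch; it is not any vendor's clustering product, and the health-check URL, promotion script, and thresholds are assumptions for illustration only.

```python
# Minimal failover-monitor sketch (hypothetical; not any vendor's clustering product).
# It polls a primary node's health endpoint and promotes a standby after repeated
# failures, mirroring the monitor-then-failover pattern described above.
import subprocess
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://primary.internal:8080/health"   # assumed endpoint
PROMOTE_STANDBY_CMD = ["/usr/local/bin/promote-standby.sh"]  # assumed script
FAILURE_THRESHOLD = 3          # consecutive failed checks before failover
CHECK_INTERVAL_SECONDS = 10


def primary_is_healthy() -> bool:
    """Return True if the primary's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def main() -> None:
    failures = 0
    while True:
        if primary_is_healthy():
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                # Promote the standby node; real clustering software would also
                # fence the failed primary and re-point clients or a virtual IP.
                subprocess.run(PROMOTE_STANDBY_CMD, check=True)
                break
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```

Production clustering software adds fencing of the failed node, replication health checks, and client redirection (for example via a virtual IP), all of which this sketch omits.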

Some applications may need protection from disasters that damage the local IT infrastructure. For applications needing HA, the primary and standby cluster nodes should be geographically separated, but connected by efficient replication that can synchronize storage between locations.

Cloud infrastructure does not automatically provide application-level HA or disaster recovery protection. Cloud availability service-level agreements apply only to the hardware, which may not ensure that an application or database remains accessible.

Like any computing system, clouds are vulnerable to human error, disasters and other downtime threats. HA clustering for applications in the cloud should be capable of failing over across both cloud regions and availability zones. Traditional shared storage clustering in the cloud is costly and complex to configure, and is sometimes not available. Use block-level replication to ensure the synchronization of local storage among each cluster node. This enables a standby node to access an identical copy of the primary node storage and an RPO of zero.

By assessing the criticality of the applications, databases and systems required to operate efficiently, and calculating the real cost of downtime for these systems, banks and credit unions can invest time and resources wisely to mitigate those threats cost-effectively.

Ian Allton is solutions architect at SIOS Technology Corp.

View post:
Protecting financial institutions from downtime and data loss - BAI Banking Strategies


OVHcloud to share its OpenStack automation for use in on-prem clouds – The Register

Cloudy contender OVHcloud will share the automation tools it developed to run its own OpenStack-based cloud, as part of a plan to grow its managed cloud business.

In Europe, the recently floated French company has offered to operate and manage a private cloud using its tech on customers' premises. Now OVH plans to let others do the same. The plan is that managed services providers or end-user organisations could choose to use OVH's tools to run their own OpenStack rigs, or take up OVH's offer of managed on-prem cloud.

OVH will happily deploy those on-prem clouds at scales ranging from a couple of cabinets to hundreds of racks, with the latter scale directed at carriers and other large users.

The company has also detailed the expansion plans that were among the reasons for its IPO, naming the USA, Canada, India, and Asia-Pacific as targets.

The Register has learned that in the latter two expansion targets OVH will, for the first time, use its home-grown water-cooling tech. The company's Asian efforts have, to date, co-located Open Compute Project hardware in third-party datacentres, a contrast to its presence elsewhere in the world, which utilises datacentres OVH controls that use the company's own server designs.

Lionel Legros, OVH's veep and general manager for Asia Pacific, told The Register that consulting with co-lo providers as they design new datacentres means the French company can influence designs so they're friendly to water cooling. This means the company expects the Mumbai datacentre it will bring online in the first half of 2022 won't be using air conditioning after eighteen months of operations.

In Singapore, OVH will also expand its presence and bring in its water-cooling tech.

Legros declined to name the other Asia-Pacific nations OVH is targeting, but indicated that nations which model their privacy laws on the EU's GDPR are natural landing pads.

Follow this link:
OVHcloud to share its OpenStack automation for use in on-prem clouds - The Register


What You Need to Know About Cloud Automation | ENP – EnterpriseNetworkingPlanet

For most enterprises, migrating to the cloud is a prerequisite for digital transformation and a means to outperform their competitors in a deeply competitive landscape. As businesses are becoming comfortable with the cloud, they are increasingly moving advanced workloads to the cloud.

But advanced workloads mean more complicated and intricate cloud environments. As a result, IT has the task of potentially managing thousands of VMs and diverse workloads spread across the globe. Cloud automation offers an efficient way to deal with these challenges.

Cloud automation simplifies and optimizes the management of complex cloud infrastructures and enables teams to work efficiently at scale. It also makes sound business sense to invest in automation. In a survey by Capgemini, 80% of Fast Movers reported that their organization's agility had improved by implementing automation. Another 75% of Fast Movers saw an increase in profitability, exhibiting the economic benefits of adopting cloud automation.

With the global cloud automation market poised to reach $149.9 billion by 2027 at a CAGR of 22.8%, it seems to be the right time to learn about cloud automation and its role in improving operational efficiency in the cloud environment.

So, what exactly is cloud automation, and how does it benefit your business?

Cloud automation refers to the methods and processes enterprises use to minimize the manual effort IT teams spend deploying and managing workloads. With automation, organizations reduce an IT team's need to micromanage things, freeing up time and enabling them to focus on higher-value projects that drive significant ROI.

Having to manage heterogeneous systems in the cloud is no small task. Cloud management is a complicated process that requires proper orchestration between the people, processes, and technologies operating in a cloud environment. With a cloud automation solution, you can minimize errors, reduce operational costs, and optimize business value. Whether your IT team needs to provision/deprovision servers, configure VMs, move applications between systems, or adjust workloads, automation can step in to expedite the process(es).
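As a concrete flavor of that kind of automation, the sketch below provisions and later deprovisions a tagged worker VM with AWS's boto3 SDK; the AMI ID, instance type, region, and tags are placeholders, not recommendations.

```python
# Hedged sketch of automated provisioning/deprovisioning with boto3.
# The AMI ID, instance type, region, and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")


def provision_worker() -> str:
    """Launch a tagged worker instance and return its ID."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "batch-worker"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]


def deprovision_worker(instance_id: str) -> None:
    """Terminate the instance once its workload is finished."""
    ec2.terminate_instances(InstanceIds=[instance_id])
```

A scheduler or pipeline step can call these two functions around a batch job, which is exactly the kind of repetitive provision/deprovision work the article describes handing to automation.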

Besides the benefits of reducing manual work, cloud automation provides added advantages like:

When repetitive tasks are automated, the workflow speed increases as tasks that used to take weeks or days are done in minutes. With a drastic reduction in development and production time, the operational efficiency of an organization naturally improves. The productivity of employees also increases as they get to focus more on the rewarding aspects of their work instead of doing IT heavy lifting.

Provisioning servers manually can expose sensitive data to unauthorized users and increase the attack surface. In contrast, an automated solution creates an orderly environment that is far easier to protect. Automation reduces the possibility of misconfiguration and security posture drifts, thus amplifying the security stance of the enterprise.

Humans are prone to making mistakes, and mistakes are costly. Automated systems can handle routine, monotonous work much better than humans, at far less cost. Moreover, automated solutions let you identify under-provisioned and unnecessary resources in your cloud system. By acting on these money sinkholes, you can reduce your organization's overall expenses and save money.

When you work with manually configured clusters, you're going to run into misconfigurations. Without complete visibility into the system, it becomes difficult for IT staff to pinpoint irregularities and rectify them. Cloud automation allows you to set up resources in a standardized manner, which means you have better control over the infrastructure, leading to improved governance.

The most common use case of cloud automation is infrastructure provisioning. IaC (Infrastructure as Code) is the process of managing infrastructure through code. Before adopting IaC, teams had to maintain multiple clusters manually, which over time led to configuration drift and created snowflake servers. Snowflake servers are servers whose configuration has changed so much that they can no longer be integrated with the system.

IaC streamlines the management of environment drift and removes discrepancies that lead to deployment issues. Further, manually configuring servers is time-consuming. Infrastructure automation tools such as Terraform, Pulumi, and AWS CloudFormation automate recurring tasks, like building, deploying, decommissioning, or scaling servers, and bring the deployment time down from days to minutes.
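To illustrate the declarative style these tools share, here is a minimal sketch using Pulumi's Python SDK (one of the tools named above); the resource name and tags are purely illustrative.

```python
# Minimal infrastructure-as-code sketch with Pulumi's Python SDK.
# Running `pulumi up` against this program creates (or reconciles) the bucket;
# the resource name and tags are illustrative only.
import pulumi
import pulumi_aws as aws

# Declare the desired state; Pulumi computes the create/update/delete plan.
logs_bucket = aws.s3.Bucket(
    "app-logs",
    acl="private",
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("logs_bucket_name", logs_bucket.id)
```

Because the definition lives in version control and is re-applied on every run, each environment is rebuilt from the same description, which is what keeps configuration drift and snowflake servers at bay.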

In today's fast-moving and agile IT environment, manual deployment of applications doesn't hold much value for some organizations. Agile organizations believe in continuous delivery and often push out a dozen releases in a week. That is not possible with manual deployment, where failing to execute even a single deployment script leads to inconsistencies that affect the software release cycle.

By automating application deployment, the probability of error is reduced to a minimum, and firms deliver projects in a much shorter time frame with less effort.

As enterprises move from legacy systems to expansive cloud environments, it can become challenging to supervise hundreds of end users who need various levels of access to cloud services. Manually allocating access rights to individual users is cumbersome and leads to delayed action. Plus, there is the risk of granting access to the wrong person(s), which can threaten the organization's cloud security posture.

With cloud automation, identity and access management (IAM) becomes a lot more structured and secure. By automating IAM policies, you can reduce the chances of errors by restricting access to only specific people.
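As a hedged illustration of codified IAM with AWS's boto3 SDK, the sketch below creates a narrowly scoped read-only policy and attaches it to a single group; the bucket, policy, and group names are hypothetical.

```python
# Sketch of codified IAM: a narrowly scoped read-only policy attached to a group.
# Bucket name, policy name, and group name are placeholders.
import json
import boto3

iam = boto3.client("iam")

READ_ONLY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports",
            "arn:aws:s3:::example-reports/*",
        ],
    }],
}

# Create the policy once, then attach it to the group that needs read access.
policy = iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(READ_ONLY_POLICY),
)

iam.attach_group_policy(
    GroupName="analysts",
    PolicyArn=policy["Policy"]["Arn"],
)
```

Keeping such scripts (or their IaC equivalents) in version control means every access grant is reviewable and repeatable, rather than a one-off console click.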


Here are some examples of automation tools that you can use to manage your cloud resources effectively.

In the market since 2005, Puppet is an open-source deployment tool that automates server configuration by eliminating the manual use of shell scripts. Puppet uses its own domain-specific language, called Puppet Code, to automate infrastructure deployment across a range of devices and operating systems. Mostly preferred for complex deployments, Puppet codifies applications into declarative files and stores configurations in version control for teams to compare against a centralized source of truth.

CloudFormation is an IaC tool on the AWS platform that provides a quick and efficient way to automate and provision AWS deployments. CloudFormation enables users to define their infrastructure in YAML or JSON. Using the template language of their choice, users can code the required infrastructure and use CloudFormation to model and provision the stacks. In addition, they can make use of rollback triggers to restore infrastructure stacks to a previously deployed state if errors are detected.
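A minimal sketch of that workflow driven through boto3 is shown below: the template body is an ordinary YAML string, and OnFailure='ROLLBACK' asks CloudFormation to roll the stack back if creation fails (rollback triggers, configured via RollbackConfiguration, extend this to CloudWatch-alarm-based rollback). Stack and resource names are placeholders.

```python
# Hedged sketch: creating a CloudFormation stack from a YAML template via boto3.
# Stack name and resource names are placeholders.
import boto3

TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ReportsBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
"""

cloudformation = boto3.client("cloudformation")

cloudformation.create_stack(
    StackName="reports-storage",
    TemplateBody=TEMPLATE_BODY,
    OnFailure="ROLLBACK",   # roll the stack back automatically if creation fails
)

# Block until the stack reaches CREATE_COMPLETE (or raise if it fails).
cloudformation.get_waiter("stack_create_complete").wait(StackName="reports-storage")
```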

Ansible is an open-source deployment and network automation tool that is simple to set up and operate. Unlike Puppet, which installs agents on clients' servers, Ansible uses an agentless architecture, with all functions carried out over SSH. Not having to install individual agents on servers saves time and simplifies the deployment process. Ansible also uses YAML, which is much easier to read than data formats like JSON or XML.

Simply migrating to the cloud is not enough. Organizations have to shed their legacy methods of operation; otherwise, the move will not be worth it. To fully leverage the limitless possibilities of the cloud, organizations need to adopt cloud automation. In fact, automation should no longer be treated as an option but recognized as a vital cloud capability that organizations need to adopt for reduced complexity and greater agility.


Continue reading here:
What You Need to Know About Cloud Automation | ENP - EnterpriseNetworkingPlanet


5 Ways That Restaurant Systems in the Cloud Unite Operations and Strategy – Hospitality Net

Recently, the capacity of restaurants and food services organizations to adapt quickly to drastic changes was sorely tested. Moving as a whole organization, in one accord, from one paradigm to another in order to keep serving guests and thriving was key to success. It remains so. Achieving that mission centers on efficiently managing restaurant systems in a more centralized manner.

What are some essential areas where restaurant systems in the cloud help unite your locations and brands? In what ways does cloud-based restaurant software increase your capacity to adjust and adapt as the industry landscape shifts and changes? How does it impact the way your organization creates strategy and competitive momentum? Here's a selected list of five to consider.

Cloud technology enables restaurants and food services organizations to get a better sense of what the winners are when it comes to menu items. Being able to strategically roll out menus based on popularity and profitability based on organization-wide data is a decided advantage. This is in comparison to on-prem solutions that tend to silo locations and make comparative reporting more involved and less agile.

What's needed today is a better way for restaurant and food services location management, franchisees, and head offices to connect the dots and make changes as quickly and accurately as possible according to above-store analysis. Doing that helps promote greater profitability across the spectrum, as quickly as is necessary, in the areas of pricing, combos, specials, new items, and other essentials. The cloud enables that with timely, accurate, and simultaneous updates to help you optimize all the factors that affect profitability.

Mobile-based ordering was a vital source of revenue for a spectrum of restaurant organizations over the past year, arguably more so than ever before. Even as we come out of a drastic disruption to how restaurants and food services engage with guests, the central concept of mobility being a primary revenue driver remains, with the market expected to reach $192 billion in 2025 compared to $126.91 billion today.

This is a part of a movement toward access to menus and ordering that grants guests with more control over the process to get what they want more easily from wherever they happen to be. All locations are brought into this equation by way of a central platform in the cloud. This empowers them, and strengthens the whole organization, too.

In the world of food services, a single item may appear under several menu items depending on the brand or concept. Yet, tracking those items is more efficient if it can be done in a common and centralized environment that assigns a single SKU to multiple public-facing item names. In doing that, tracking those items, related ingredients, and other vital factors becomes far easier to manage.

The cloud is designed to enable your teams to do that by its very nature. When you're able to see trends across a spectrum of brands and concepts in a single environment, item management is more efficient and more informative to cost-effectiveness for your whole organization.

By extension, analyzing business data of all kinds organization-wide is more straightforward in the cloud. A single technology platform that unifies solutions helps teams access the information they need in real time wherever they are to best determine performance and related strategy from a higher vantage point.

From there, your organization can better establish the most realistic benchmarks and standards by which to judge growth and performance. Above-store reporting on a single platform adds to how adaptive your organization can be overall, and in a crucial way, when it's time to switch gears as needed.

Managing a siloed technology structure based on on-prem servers can be a distraction to delivering a great guest experience. As such, many leading restaurant organizations are continuing digital transformation processes by partnering with SaaS providers to ensure the stability and effectiveness of technology solutions on their behalf.

The immediate advantage to this is a reduction in overhead and maintenance. Cloud and SaaS support has other implications, like deployment of new integrations that in-house IT staff no longer have to worry about. That allows your in-house technology teams more time on innovation in collaboration with technology partners to be more proactive in envisioning what the future will be for the industry and for your business.

Connecting with the right technology partners, especially in this era of transition, is top of mind for many organizations. The next step is making sure that your business finds a partner to help you transition into that new era who will be supportive for the long term. What's the best way to approach that?

We've authored a resource that touches on that subject, helping you to outline a framework by which to judge current partners as well as engage with new ones.

You can easily get your copy of that resource right here.

Infor is a global leader in business cloud software specialized by industry. Providing mission-critical enterprise applications to 65,000 customers in more than 175 countries, Infor software is designed to deliver more value and less risk, with more sustainable operational advantages. We empower our 17,000 employees to leverage their deep industry expertise and use data-driven insights to create, learn and adapt quickly to solve emerging business and industry challenges. Infor is committed to providing our customers with modern tools to transform their business and accelerate their own path to innovation. To learn more, please visit http://www.infor.com.

Read more from the original source:
5 Ways That Restaurant Systems in the Cloud Unite Operations and Strategy - Hospitality Net


The rise of the cloud data warehouse – The Register

Paid Feature: The cloud has a habit of transforming on-premises technologies that have existed for decades.

It absorbed business applications that used to run exclusively on local servers. It embraced the databases they relied on, presenting an alternative to costly proprietary implementations. And it has also driven new efficiencies into one of the most venerable on-premises data analytics technologies of all: the data warehouse.

Data warehousing is a huge market. Allied Market Research put it at $21.18bn in 2019, and estimates that it will more than double to $51.18bn in 2028. The projected 10.7 percent CAGR between 2020 and 2028 comes from a raw hunger for data-driven insights that we've never seen before.

It isn't as though data warehousing is a new concept. It has been around since the late eighties, when researchers began building systems that funneled operational data through to decision-making systems. They wanted that data to help strategists understand the subtle currents that made a business tick.

This product category initially targeted on-premises installations, with big iron servers capable of handling large computing workloads. Many of these systems were designed to scale up, adding more processors connected by proprietary backplanes. They were expensive to buy, complex to operate, and difficult to maintain. The upshot, AWS claims, was that companies found themselves spending a lot on these implementations and not getting enough value in return.

As companies produced more data, it became harder for these implementations to keep up. Data volumes exploded, driven not just by the increase in structured records but also by an expansion in data types. Unstructured data, ranging from social media posts to streaming IoT data, has sent storage and processing requirements soaring.

Cloud computing evolved around the same time, and AWS argues that it changed data warehousing for the better. Data Warehousing has been popular with customers in sectors like financial services and healthcare, which have been heavy analytics users.


But the cloud has opened up the concept to far more companies thanks to lower prices and better performance, according to AWS. Applications previously restricted to multinational banks and academic labs are now open to smaller businesses. For example, you're able to perform data analytics in the cloud with benefits like scale, elasticity, time to value, cost efficiency and readily available applications.

The numbers bear this out. According to Research and Markets, the global market for data warehouse as a service (DWaaS) products will enjoy a 21.7 percent CAGR between 2021 and 2026, growing from $1.7bn to $4.5bn.

The largest cloud players have leaped on this trend, with Microsoft offering its Synapse service and Google running BigQuery. AWS announced Redshift as the first cloud data warehouse to address the market in 2012. "The idea was pretty simple," AWS told us. The company wanted to give customers a scalable solution, where they could use the flexibility of the cloud to manage data at any scale and velocity while remaining cost effective.

Unlike online transaction processing databases like Amazon Aurora, Redshift targets online analytics processing (OLAP), offering support for fast queries thanks to scalable nodes with massive parallel processing (MPP) in a cluster. The cloud-based data warehouse follows the AWS managed database ethos. Rather than relying on a customer's administrators to take care of maintenance tasks, the company handles it behind the scenes in the cloud.

Aside from standing up hardware, this includes patching the software and handling backups and recovery. That means developers can focus on building applications ranging from modernizing existing data warehouse strategies through to accelerating analytics workloads, which it does using back-end parallel processing to spread queries over up to 128 nodes. Companies can use it for everything from analyzing global sales data to crunching through advertising impression metrics.

AWS also highlights other applications that can draw on cloud-based data warehouse technology, including predictive analytics, which enable companies to mine historical data for insights that could help to chart future events. Redshift also helps customers with applications that are often time critical, AWS says. These include recommendation and personalization, and fraud detection.

Performance at the right price is key, asserts AWS, which reports that customers' latency requirements for processing and analyzing their data are shortening, with many wanting to make things almost real time.

AWS benchmarked Redshift against other market players and found price performance up to three times better than the alternatives. The system's ability to dynamically scale the number of nodes in a cluster helps here, as does its ability to access data in place from various sources across a data lake.

Data sharing is traditionally a cumbersome process in which files are uploaded manually from one system and copied to another. This approach, AWS says, does not provide complete and up-to-date views of the data, as the manual processes introduce delays, human error and data inconsistencies, resulting in stale data and poor decisions.

In response to feedback from customers who wanted to share data at many levels to enable broad and deep insights but also minimize complexity and cost, AWS has introduced a capability that overcomes this issue.

Announced late last year, Amazon Redshift data sharing enables you to avoid copies. The new capability enables customers to query live data at their convenience, and get up to date views across organizations, customers and partners as the data is updated. In addition, Redshift integrates with AWS Data Exchange, enabling customers to easily find and subscribe to third-party data in AWS Data Exchange without extracting, transforming and loading it.

Amazon Redshift data sharing is already proving a hit with AWS customers, who are finding new use cases such as data marketplaces and workload isolation.

Data lakes have evolved as companies draw in data of different types from multiple sources. When unstructured data comes in such as machine logs, sensor data, or clickstream data from websites, you don't know about its quality or what insights you're going to find from it.

AWS told us many customers have asked for data stores where they can break free of data silos and land all of this data quickly, process it, and move it to more SLA-intensive systems for query and reporting like data warehouses and databases.

The cloud is the perfect place to put this data thanks to commodity storage. Storing data in the cloud is cheap thanks to a mixture of economies of scale on the cloud service provider side, and tiered storage that lets you put data in lower-cost tiers such as S3.

Data gravity is the other driver. A lot of data today begins in the cloud whether it comes from social media, machine logs, or cloud-based business software. It makes little sense to move that data from the cloud to on-premises applications for processing. Instead, why not just shorten the time it takes to get insights from it, AWS says.

The company designed the data warehouse to share information in the cloud, folding in API support for direct access. Redshift can pull in data from S3's cheap storage layer if necessary for fast, repeated processing, or it can access it in place. It also features different types of nodes optimized for storage or compute. It can interact with data in Amazon's Aurora cloud-native relational database, and other relational databases via Amazon Relational Database Services (RDS).

It also includes support for other interface types. Developers can import and export data from other data warehousing systems using open data formats like Parquet and optimized row columnar (ORC). Client applications also access the system via standard SQL, ODBC, or JDBC interfaces, making it easy to connect with business intelligence and analytics tools.
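For instance, a client can submit SQL to Redshift from Python through the Redshift Data API without managing ODBC or JDBC drivers; in the hedged sketch below, the cluster identifier, database, user, and table names are assumptions.

```python
# Hedged sketch: querying Redshift with the Data API via boto3.
# Cluster identifier, database, user, and table names are placeholders.
import time
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

submitted = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="sales",
    DbUser="analyst",
    Sql="SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region;",
)

# The Data API is asynchronous, so poll until the statement finishes.
while True:
    status = redshift_data.describe_statement(Id=submitted["Id"])
    if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status["Status"] == "FINISHED":
    result = redshift_data.get_statement_result(Id=submitted["Id"])
    for row in result["Records"]:
        print(row)
```

The same statement could equally be run over JDBC or ODBC from a BI tool; the Data API route simply avoids driver and connection management for lightweight clients.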

The ability to scale the storage layer separately to the compute nodes makes the system more flexible and eliminates network bottlenecks, the cloud service provider says.

Cloud databases also provide application developers with other services that they can use to enhance those insights. One of the most notable for AWS is its machine learning capability. ML algorithms are good at spotting probabilistic patterns in data, making them useful for analytics applications, but inference - the application of statistical models when processing new data - takes a lot of computing power. Scalable cloud computing power makes that easier, AWS says.

Cloud-based machine learning services are also easy for companies to consume because they are pluggable with data warehouses via application programming interfaces (APIs). AWS makes these available to anyone who knows SQL. Customers can use SQL statements to create and use machine learning models from data warehouse data using Redshift ML, a capability of Redshift that provides integration with Amazon SageMaker, a fully managed machine learning service.
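To give a flavor of that workflow, the sketch below shows the documented CREATE MODEL form that Redshift ML accepts, expressed as SQL strings a client would submit (for example via the Data API shown earlier); the table, columns, IAM role, and S3 bucket are hypothetical.

```python
# Hedged sketch of Redshift ML usage: SQL a client would submit to the cluster.
# Table, column, function, role ARN, and S3 bucket names are placeholders.
CREATE_MODEL_SQL = """
CREATE MODEL churn_model
FROM (SELECT age, tenure_months, monthly_spend, churned FROM customer_activity)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'example-ml-artifacts');
"""

# Once SageMaker finishes training behind the scenes, the generated function
# is callable from ordinary SQL like any other scalar function.
PREDICTION_SQL = """
SELECT customer_id,
       predict_churn(age, tenure_months, monthly_spend) AS will_churn
FROM customer_activity;
"""
```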

In 2019, Amazon Redshift also introduced support for geospatial data by adding a new data type to Redshift: geometry. That supports coordinate data in table columns, making it possible to handle geospatial polygons for mapping purposes. This makes it possible to combine location information with other data types when making conventional data warehousing queries and building machine learning models for Redshift.

As data warehousing continues its move to the cloud, it shows no sign of slowing down. Customers can choose offerings from the largest cloud service providers or from third-party software vendors alike. Evaluation criteria will depend on each customer's individual strategy, but the need to scale compute and storage capabilities is sure to factor highly in any decision. One thing's for sure: the cloud will help customers as their big data gets bigger still.

This article is sponsored by AWS.

Continue reading here:
The rise of the cloud data warehouse - The Register
