
The Best Ways to Elevate State and Local Government Resilience in 2021 – StateTech Magazine

Focus on Cybersecurity Recovery and Protection

As evidenced in Florida, Washington, D.C., and at agencies around the country, municipalities are ripe for cyberattacks, and bad actors are increasingly taking advantage of the shift to remote work. Cybersecurity and data protection must be at the forefront of any resilience strategy.

With state and local employees accessing government information and systems from anywhere, it becomes harder to trust the incoming network traffic. Network-based security alone isn't enough to stop new threats. A zero-trust approach, strong backup to combat ransomware and built-in security features in all servers and storage are critical elements for cyber resilience.

The zero-trust approach is becoming a standard to harden government networks. The National Institute of Standards and Technology describes zero-trust security as a set of paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. To get started with zero-trust security, state and local governments can tailor their strategies using NIST's August 2020 guidance for implementing a zero-trust architecture.

With ransomware attacks increasing in state and local governments, protecting data is also essential. CISOs should always have an immutable and encrypted backup of sensitive data. An offline backup is a critical tool to ensure governments always have an accurate copy of important data and never have to pay hackers' ransom demands.
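As a minimal illustration only (the paths, tooling and schedule here are assumptions rather than any agency's actual setup), an encrypted archive destined for offline media can be produced with standard tools:

# Compress and symmetrically encrypt a sensitive data set (gpg prompts for a passphrase),
# writing the result to mounted offline/immutable media
tar -czf - /srv/records | gpg --symmetric --cipher-algo AES256 -o /mnt/offline-backup/records-$(date +%F).tar.gz.gpg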

Cyber resilience is mission-critical to governments that are connected across platforms, devices and geographies. IT and security professionals should evaluate and select servers and storage with integrated security features that make extensive use of intelligence and automation to help organizations stay ahead of the threat curve.

RELATED: New cybersecurity tools can protect utilities.

Cloud computing has been the key to helping state and local governments deliver services both to internal employees and constituents. From safer and smarter cities to more mobile government employees, cloud innovation is helping state and local governments deliver 21st-century citizen and employee experiences around the clock.

In some cases, departments have learned a lot about how to manage cloud efficiently. Many juggled two, three or even four different cloud solutions and providers, with little visibility into their overall cloud resource usage.

Without clear visibility into public cloud use, some departments experience sticker shock when their invoices arrive at the end of the month. Decreased visibility means cloud dollars and human resources may not be spent as effectively.

As state and local governments look to improve their resilience, the flexibility and agility of a cloud operating model allows them to remain mission focused. Hybrid and multi-cloud options provide the right balance of control, flexibility and security. A consistent hybrid cloud approach is not only cost effective but offers the critical business continuity needed by government organizations during both natural and cyber disasters.

EXPLORE: Learn about the technology and approaches needed to quickly enable digital government.

The past year also illuminated equity issues: According to the Census Bureau's Household Pulse Survey, 3.7 million households lacked regular internet access in fall 2020. Municipal leaders know that inequalities will continue to grow should they not solve the problem now.

As governments worldwide look to rebuild economies and invest in technology infrastructure, enhanced broadband deployment allows communities to close the digital divide by shrinking the gap between constituents who have access to high-speed connectivity and those who don't.

In the short term, government leaders need to focus on providing Wi-Fi hotspots and broadband to alleviate immediate issues and increase connectivity for activities like telework and remote learning. Still, it's important to remember these solutions don't get at the root cause of the digital divide. A plan for long-term resilience that focuses on digital equity will prioritize residential access to high-speed internet to ensure every community is connected for the future.

When we approach the digital divide, we also need to ensure multiuse solutions. As governments focus on building out data connectivity, their strategy needs to address safety and security, but it should also deliver better experiences for residents and employees. Don't focus on only one or the other; doing both will lead to a higher ROI for communities throughout the country.

This past year, state and local governments navigated major issues, including budget shortfalls, ransomware attacks, public health emergencies and natural disasters. It's been a challenging time, but state and local governments have proved to be more resilient than ever.

As we turn the page and look ahead, leaders can continue the momentum of resilient government with strong cyber recovery and protection, integrated cloud strategies and connected communities. This will keep governments ready for the next unpredictable event, whether it's a cyberattack, natural disaster or public health emergency.

MORE FROM STATETECH: How are cities and counties helping school districts get students online?

View post:
The Best Ways to Elevate State and Local Government Resilience in 2021 - StateTech Magazine

Read More..

How to easily join an AlmaLinux server to an Active Directory Domain with Cockpit – TechRepublic

Jack Wallen shows you just how easy it is to join an existing AlmaLinux server to an Active Directory domain via a web-based GUI.

If you've begun deploying AlmaLinux into your data center or your cloud-hosted services, you might have a reason to join those servers to your existing Active Directory domain. At first blush, you might think that process is a drawn-out exercise in command-line marathons. It's not. Believe it or not, thanks to the Cockpit web-based GUI, the process is incredibly simple.

And I'm going to show you how it's done.

SEE: Security incident response policy (TechRepublic Premium)

To make this work, you'll need an instance of AlmaLinux, a running Active Directory Domain Controller, and a user with sudo privileges (or the root user itself).

The first thing you must do is enable Cockpit since it's not enabled out of the box. To do this, log in to your AlmaLinux server and issue the command:
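A minimal sketch, assuming the stock cockpit package on AlmaLinux (the socket unit below is the standard Cockpit systemd unit):

# Install the package first if it is not already present
sudo dnf install -y cockpit
# Enable and start the web console on demand
sudo systemctl enable --now cockpit.socket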

That's all there is to enabling Cockpit. You can now point a web browser to https://SERVER:9090 (where SERVER is either the IP address or domain of the AlmaLinux server).

Before you can join the domain, you must first set the computer's hostname. Let's say, for example, the domain you'll be joining is example.lan. You might want to set your hostname to almalinux.example.lan. For that, you could either use the command line or do it through Cockpit. From the terminal, that command would be:
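A sketch using hostnamectl, the standard tool on systemd-based distributions such as AlmaLinux (the hostname follows the article's example):

sudo hostnamectl set-hostname almalinux.example.lan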

If you'd rather do this through Cockpit, click Edit next to the hostname and then, when prompted (Figure A), type a Pretty Host Name (such as almalinux) and the full hostname (such as almalinux.example.lan). Click Change when finished.

Figure A

Setting the hostname for AlmaLinux through Cockpit.

You're now ready to connect AlmaLinux to the domain. Click Join Domain, in the Configuration section (Figure B).

Figure B

The Cockpit main page makes it easy to connect to a domain.

In the resulting window, you'll first type the address of your Domain Controller. As soon as AlmaLinux finds the controller (Figure C), you can then select the authentication type, add the Administrator Name and Administrator password.

Figure C

Cockpit was able to see the Domain Controller and is ready for authentication.

At this point, AlmaLinux is now connected to the domain (Figure D).

Figure D

Our AlmaLinux server is connected to my monkeypantz.lan Active Directory domain.

Congratulations, you've just joined your AlmaLinux server to an Active Directory domain, via the web-based GUI, Cockpit.

Subscribe to TechRepublic's How To Make Tech Work on YouTube for all the latest tech advice for business pros from Jack Wallen.

Visit link:
How to easily join an AlmaLinux server to an Active Directory Domain with Cockpit - TechRepublic

Read More..

C3 AI Stock Is a Buy on Partnerships and a Likely Sales Pop – InvestorPlace

Growth stocks seem to have stopped their descent as risk appetite has slowly returned to the market. C3 AI (NYSE:AI) stock was one of those growth plays severely punished by Wall Street for perceived lackluster results.

AI stock is down more than 60% from its highs; however, it seems to have found support at the $60 price level. It should open today at about $61.60.

I remain bullish on the stock and believe that the company is properly laying the foundation for future growth.

Wall Street analysts were disappointed with C3 AI's latest earnings release. This is despite the company reporting a decrease in net loss and a 26% increase in revenues.

Revenues for fiscal Q4 were $52.3 million, which is an increase from $41.6 million at the same time last year.

As a result of these so-called mixed results, Wedbush reduced the price target of AI stock from $175 to $100.

Canaccord Genuity and Deutsche Bank both reduced their price targets as well, from $120 to $75 and from $98 to $63, respectively.

These downgrades reflect Wall Street's short-term thinking as C3 AI actually had a few solid wins for the quarter. The company extended its contract with Shell (NYSE:RDS) thus proving the value-add of the company's AI systems.

C3 AI also grew its total number of enterprise customers to 89, an increase of 82% compared to the same time last year.

Despite the earnings miss, C3 AI is continuing to build out its partnership pipeline.

The company recently announced a partnership with data cloud company Snowflake (NYSE:SNOW). This collaboration has the potential to create a value-added service for enterprise clients that is more than the sum of its individual parts.

Snowflake's cloud data platform generates a ton of data as it runs concurrently across multiple clouds and regions. Enterprise clients can analyze the data generated from those cloud servers using C3 AI's suite of AI products to generate insight that may not be readily apparent.

Remember that C3 AI actually has a suite of pre-built AI applications right out of the gate. This partnership will allow Snowflake customers access to these AI tools without replicating the data thus reducing workload and improving turnaround times.

There are many use cases already for C3 AI's product suite, as its tools are being used in industries ranging from banking to national defense.

Apart from Snowflake, C3 AI also has a strategic partnership with Infor, a cloud-based ERP service provider.

Infor plans to integrate C3 AI's product offering with its own solutions. The partnership will focus initially on Internet of Things systems but will eventually move to other verticals.

"By augmenting our existing Infor product portfolio with prebuilt AI applications from C3 AI that run natively in the cloud, we can expand the Infor portfolio of use cases and further position Infor to capitalize upon the cognitive era," said Infor CEO Kevin Samuelson.

I like that C3 AI is continuing to gather strategic partners in order to sell its AI products. It is also smart of the company to partner with enterprise platforms such as Snowflake and Infor as it will reduce the friction of selling their product.

Enterprise sales typically take a long time and vendors need to go through the corporate bureaucracy before being able to land a sale.

These partnerships effectively ensure that C3 AI has one foot through the door when selling to enterprise clients. It will also ensure that the C3 AI suite is properly integrated with existing clients' technology systems, thus ensuring a smooth deployment and immediate value-add.

I believe that C3 AI is one of the leaders in the AI space, which has massive potential. I fully expect the pace of adoption of AI technology to be slow at the start.

However, I believe adoption could easily hockey-stick in the immediate future. I continue to like C3 AI.

On the date of publication, Joseph Nograles held a LONG position in AI. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Read more here:
C3 AI Stock Is a Buy on Partnerships and a Likely Sales Pop - InvestorPlace

Read More..

Distributed cloud offers the best of both worlds – ITProPortal

The next big thing in cloud computing offers numerous advantages to the enterprise IT user, says Neo4j's Jim Webber.

Cloud users benefit from shifting the responsibility of running hardware and software infrastructure to providers. They can leverage the economics of cloud elasticity and benefit from a fast pace of innovation from the cloud vendors.

Yet for a variety of reasons such as data security, cost, and infrastructure, contemporary CIOs in enterprises have tended to use a combination of their own data centers, private cloud, and (multiple) public cloud provision. While debates over the cost-effectiveness of a multi-cloud strategy continue, the notion of distributed cloud has recently entered the lexicon, stretching the discussion along an interesting new axis.

Distributed cloud is a mix of on-premise and cloud computing, but where the abstraction layer for the on-premise part of the system matches that of the cloud. So the abstraction your data center provides looks very similar to your cloud vendor's abstraction to a software developer.

This is an interesting notion: workloads that can be run more cost-effectively on your own servers are run that way, while other, cloud-friendly workloads run in the cloud. But because both sets of workloads target the same APIs, there is a degree of flexibility over time in exactly where workloads will be run. In a sense, you can move the dial between your own infrastructure and the cloud to suit your current needs.

For cloud providers like Google, Amazon AWS, Microsoft, Alibaba and others, there is an interesting investment at play. By releasing (some of) the code that runs their clouds, they are improving the quality of non-cloud data centers so that they run more like efficient cloud data centers. At first glance, this is an own-goal. But dig deeper and you'll see this is a gateway drug to give CIOs the freedom to move systems with relative ease into the cloud and rationalize their cloud versus on-premise spend.

From an ISV point of view, you could be forgiven for ignoring the relevance of distributed cloud. But that would be naive, because not all cloud services can be neatly packaged and moved from the cloud providers into your data center, regardless of API uniformity. For example, your data center does not have the specialized hardware like GPS atomic clocks or FPGAs to support advanced transaction processing and data analysis - that remains in the cloud, even if you want it closer to your other systems for security, compliance, or latency reasons.

It's clear that cloud providers have some amazing innovations to offer. But they are not the sole source of useful information technology. Most ISVs (including the one I work for) have offerings for your on-premise needs as well as existing in the marketplaces of all the biggest public clouds.

This means you can choose an ISV system such as a database and have complete freedom to deploy to on-prem or cloud-provisioned hardware, or indeed simply to consume the existing SaaS version of that software from your chosen cloud. Want to move from on-prem to cloud or back? Simple. Want to move clouds? Also straightforward.

This means you can use the application on a distributed cloud but not be tied into any particular public cloud provider. You can decide between going with an on-premise-only solution or a fully managed cloud solution.

Consider how, for example, you might find great value in Amazon Neptune, but the organization says for cost or uptime reasons it needs to move to Google. In this situation, an Amazon Neptune proprietary solution leaves the CIO exposed, as AWS Neptune is not an option on Google. But a distributed cloud-aware ISV can easily run on either Google or any other public cloud. In this new distributed cloud world, not being coupled to any cloud provider is a definite liberation.

Running ISV apps in a distributed cloud also allows us to manage latency. Picture a core business application partly run by Google in California, 5,000 miles away from your London data center. This can be a mission-critical issue if the data is latency-sensitive. A financial trader can make a local call in London that takes an incredibly small fraction of a second to transmit to a server. Still, if all of the data has to travel to the US and come back, that 10,000-mile round trip takes a surprisingly large number of milliseconds. If I'm running that part of my system on the local on-prem Google cloud, that same transaction can be completed in sub-milliseconds, which is a huge win.
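As a rough back-of-the-envelope check (assuming light in fibre covers roughly 200,000 km per second and treating the 10,000-mile round trip as about 16,000 km):

\[
t_{\text{round trip}} \approx \frac{16{,}000\ \text{km}}{200{,}000\ \text{km/s}} = 0.08\ \text{s} = 80\ \text{ms}
\]

That is propagation delay alone, before any routing, queueing or processing time is added.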

Even in a more conventional business context, however, every millisecond now matters. The time it takes to render a web page or a screen in an app is critical. It takes only half a second to a second before users start to get twitchy. There's an acceptance criterion of approximately 200 milliseconds, and probably a lot less in the case of Gen Z. As a result, you don't want any app to wait while the database query travels from London to California and back again. The query needs to be answered as near to the user as possible.

There is a similar distributed cloud business advantage in the case of addressing regulatory requirements such as GDPR. In the public cloud, you never know where your data resides at any given moment.

If you have your own data center, you know the data is physically there. One of the benefits of a distributed cloud is that it gives you the ability to host some data in the cloud and some data physically in your servers. The data is still run on the same cloud operating system, but the pieces of data are physically located in the building next door. When a GDPR inspector comes and wants to know where customer X's data is, you can say it's in this building, on that computer, and in that rack.

The fact that data can have a physical location is a great message for systems that process sensitive information. If there is any concern that the data would be open to hacking if stored in the public cloud, the four walls and 24x7 security offered by a secure data center might be reassuring.

Multiple distributed cloud use cases are opening up, from healthcare to media to financial services. Any type of customer likely to have a large set of proprietary platforms would benefit.

All in all, the move to distributed cloud will give the market the clear computing and compliance benefits associated with proximity while also accessing the flexibility and cost benefits of a distributed arrangement.

It also offers the ideal on-ramp to public cloud computing. It encourages any business to run its data center in such a way that it would be straightforward to package workloads up and pass them across to a public cloud provider when it makes business sense.

Buying into a distributed cloud is a sound way of keeping your options open about where you should move your workloads in the future, while working with an ISV gives you extra flexibility and benefit. And now is definitely the time to start seeing how.

Jim Webber, Chief scientist, Neo4j

The rest is here:
Distributed cloud offers the best of both worlds - ITProPortal

Read More..

How will the semiconductor chip shortage affect enterprise IT? – IT PRO

It's not quite the dystopian headlines dreamed up by George Orwell or Margaret Atwood, but the current attention given to the semiconductor chip shortage is the biggest indicator of our reliance on technology.

The predicted losses from the automotive and IT industries, along with the effects lasting anywhere from six months to two years, paint a picture of over-reliance on computing to push out products. The automotive industry alone is set to lose $110 billion this year, as the chips that power nearly all of the modern car's functionality continue to be unavailable.

Such a loss has naturally gained the attention of the technology industry and has prompted a lot of chief executives to take a guess at how long the shortage will have an effect on manufacturing. However, CCS Insight senior director of research Wayne Lam says the shortage mainly applies to smaller components that are not on the bleeding edge of technological innovation but are just as necessary to produce a device.

"The shortage is not so much the high ticket chips, the processors, or even memories, it's more benign like power management, or some sort of analogue chipset that isn't necessarily on the bleeding edge of semiconductor technology," says Lam.

"Those parts are in a bit of a crunch because they are so necessary. It's like buying a brand new car without spark plugs, you just need that one small piece to make it work."

While the chip shortage is being painted as a tech industry apocalypse in some quarters, not all parts of the sector have been affected equally. Ben Stanton, research manager at Canalys, says the mobile industry has experienced a first quarter largely unaffected by the chip shortage, with global analysis firm Counterpoint also reporting a 6% year-on-year increase in European smartphone sales.

Data from Counterpoint found that Samsung and Apple shipments grew 13% and 31% year-on-year respectively, with Xiaomi, Oppo, OnePlus and RealMe all seeing between 73% and 183% growth on the year.

"The chip shortage is certainly having a wide impact across many industries," adds Stanton. "The smartphone market, though, has actually been fairly resilient, particularly in Q1 when many vendors blew past their quarterly target."

"The bottlenecks are there and are having an impact, but they have been slightly overblown and are exacerbated by many vendors now overbooking on component orders to try and secure supply. But if you look at some of the biggest markets, like China, actually the channel was stuffed with inventory by the end of Q1."

Although mobile may be sitting pretty for now, IBM president Jim Whitehurst told the BBC the computer chip shortage could last up to two years and that the industry may need to look into recycling or extending the life of certain computing technologies.

For those looking to upgrade a suite of laptops, IDC manufacturing analyst Maggie Slowik says purchasing decisions will be influenced by availability rather than price.

"We've already seen price increases and the forecast is that these products that contain semiconductors will also see prices increase going forward," says Slowik. "But we certainly saw with consumer goods that people are willing to switch very quickly to a different brand if the particular brand they previously relied on wasn't available."

"Businesses have a lot of choice when it comes to office hardware and they can quickly switch brands if a particular product is priced too high, or they have to wait for too long. I think that raises this concept of businesses being more brand agnostic, which makes it hard for manufacturers to retain any loyalty. This chip shortage situation certainly doesn't help at all."

Although the shortage could affect the hardware employees use on a daily basis, Lam and Stanton both say the mass shift to remote working as a result of the pandemic will have alleviated some of the pressure that may have otherwise been felt by businesses.

As organisations move more of their workforce to collaboration platforms, such as Teams, Slack or Webex, an employee's ability to do their job is less reliant on the hardware they are using and more affected by whether they have a stable connection to a cloud environment.

"For the enterprise market, how you're affected by this shortage is going to depend on where your business is placed. If you're building more cloud servers, that business is going to keep going," said Lam. "The pandemic has proved that we can productively work virtually and that transition to cloud computing is only going to accelerate; the demand is indisputable."

"Digital transformation has accelerated, if anything," said Stanton. "Remote working has forced companies to shift workloads to cloud, and invest in tools to deploy and manage an estate of devices which is decentralised, and no longer exists in a fixed office location."

"Going forward, there will be constraints that definitely last until the end of 2021. One important point to note, though, is that lucrative developed regions are being prioritised for device allocation, so the impact of shortages may actually affect areas like Africa, Southeast Asia and Latin America more."

"Another interesting nugget I am hearing, particularly in relation to Q2 2021, is that suppressed demand for smartphones in India due to the terrible outbreak of COVID-19 is actually freeing up devices and components to fulfil orders in other markets. So I think the important point is that there are many more regional nuances than people realise. The chip shortage is often painted as a global issue, but the impacts may well be local."

Although it can be seen as an isolated issue, this chip shortage, along with the global pandemic, tremors from changing relationships relating to Brexit and the US-China trade war, as well as global incidents such as the Suez Canal blockage, have had huge effects on the supply of products.

Slowik says the main lesson for businesses to learn from the semiconductor chip shortage is not about our reliance on technology. Instead, she implored enterprises to assess their supply chains, making sure they are diverse with distributors they trust, minimising the effect of global events on their business.

"We've been talking about supply chain risk for a long time, but this particular shortage and the pandemic have raised the issue that companies now need to really start looking at further diversifying their supply chains."

"One of the hard lessons businesses should have learnt over the past year is that they cannot just assume that there's going to be unlimited supply. They have to build agility into their procurement and supply chain strategies so that they have that additional buffer against freak events, such as the Suez Canal blockage, without having to hoard products."

Read more here:
How will the semiconductor chip shortage affect enterprise IT? - IT PRO

Read More..

Mysterious security update to Google Drive cloud storage locker will break links to some files – The Register

Google has advised administrators of its Workspace productivity suite that it's set to improve security of its Drive cloud storage locker, but that the fix will break links to some files.

The ad giant's advisory to Workspace admins doesn't mention the reason for the update, other than saying it's an enhancement.

The little detail offered states that the update changes the URLs for some Google Drive files and folders. The new links include a resource key in a file's URL.
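For illustration only (the identifiers below are placeholders and the exact format is Google's to define), a shared-file link carrying a resource key takes roughly this shape:

https://drive.google.com/file/d/FILE_ID/view?resourcekey=RESOURCE_KEY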

Access to impacted files won't change for people who have already viewed them or who have direct access, but others might need to request access.

And as Drive is used to share files far and wide, that could mean pain for users.

Admins have been given until July 23 to choose from three options:

Brace for some inquiries, dear readers, as Google advises that if you choose to apply the update, users will be notified starting July 26, 2021. If you pick the default option of applying the change, users will have until August 25, 2021, to remove the update from impacted files.

If the result of that is grumpy users, you can still change your choice after July 23. However, Drive wont notify your users of your changes.

Which sounds like fun waiting to happen.

Google says the update will be rolled out by September 13th, 2021 (81 days downstream from its notification to admins). Which is at least more than the 31 days Google gave Workspace admins between the announcement and the deadline to act.

Follow this link:
Mysterious security update to Google Drive cloud storage locker will break links to some files - The Register

Read More..

Zerto keeps Texas bank from being left in the cold – TechTarget

When Winter Storm Uri cut off its access to grid power, Woodforest National Bank had to enact an emergency data center failover with Zerto.

The massive snowstorm in mid-February created rolling blackouts that shorted out the Houston-based community bank's automatic transfer switch, preventing it from switching over to generator power. This left Woodforest's data center with only UPS battery power, which amounted to about 30 minutes when the issue was discovered.

Luckily, Woodforest has a secondary data center at a colocation facility in Austin, Texas, and immediately began migrating servers over to it. However, at around the 20-minute mark, the Houston building lost power completely. At that point, Woodforest enacted its disaster recovery (DR) plan and used Zerto to fail over the remaining servers to the Austin site.

"We had limited people, limited internet, limited power," said Marcus Lohr, Woodforest National Bank's IT infrastructure manager.

Even under the circumstances, all customer-facing functions, including credit card processing and the website portal, were available within 90 minutes. Within four hours, Woodforest was back to running at full capacity as applications and ancillary business functions were brought back online.

Woodforest's infrastructure was fully tested and prepared for natural disasters, Lohr said. Using Zerto's replication technology, the company had been performing planned migrations between its Houston and Austin sites for the past 12 years.

This two-site setup was initially in response to hurricane season, Lohr said. From July to January, Woodforest would run its servers out of the hardened colocation facility in Austin, and for the rest of the year, the Houston data center would be primary. The two locations alternated between primary and secondary roles every six months, and both locations ran Cisco Unified Computing System on the same hardware.

Zerto serves as Woodforest's primary data protection product for rapid and granular recovery, but it can typically only recover data from the past three to seven days. The bank uses a different backup product for retaining and recovering data that's older than seven days.

Woodforest had refined its DR plan to the point that migrations and failovers have gone from taking a day down to about an hour, Lohr said. Outside of the technological capabilities provided by Zerto, Woodforest had also worked out a pecking order of who to contact, and all its DR procedures are hosted in a wiki that is accessible even if Woodforest's main site is down.

Additionally, the February outage took place just two weeks after a planned migration, so migration was still relatively fresh on staff members' minds, Lohr said. Despite the fact that only about 25% of the people responsible for DR were available during the winter storm incident, everyone remained calm and knew how to carry out the transition, according to Lohr.

"Twelve years of doing this and trying to make it better, the business has done such a good job with it and nobody lost their heads during this," he said.

In order to prepare for the next big outage, Lohr has started investigating another colocation facility. He said he plans to move all data center activity out of Woodforest's main building, such that both primary and secondary data centers will be in hardened facilities with redundant power, cooling and security year-round.

About 90% of Woodforest's infrastructure is virtualized, but the next step will be to involve cloud more, Lohr said. Woodforest currently has very little cloud infrastructure, with its largest investment being email in the cloud through Office 365.

There are many benefits to the cloud, such as being able to expand Woodforest's DR options, but it's something he needs to approach carefully to ensure he's optimizing costs, Lohr said.

"We're dipping our toes in the cloud right now, but we've been moving towards it. We see it as a tertiary data center in the future," Lohr said.

Read more:
Zerto keeps Texas bank from being left in the cold - TechTarget

Read More..

Server Security Solution Market to Witness Huge Growth by Imperva, Sophos, Nibusinessinfo – The Manomet Current

A server security solution is an anti-malware suite designed for servers, offering protection for file servers. It provides real-time protection against viruses, Trojans, spyware, rootkits, and other malware. In other words, server security is the protection of information assets that can be accessed from a web server. It is important for small as well as large enterprises with a physical or virtual web server, and it concerns the integrity, confidentiality, and availability of information. Because a security breach incurs a cost to the organization, server security is highly critical.

The latest research study released on the Global Server Security Solution Market offers a detailed overview of the factors influencing the global business scope. The Server Security Solution Market research report shows the latest market insights, a current situation analysis with upcoming trends, and a breakdown of products and services. The report provides key statistics on the market status, size, share and growth factors of the Server Security Solution market. The study covers emerging players' data, including the competitive landscape, sales, revenue and global market share of top manufacturers: Imperva (United States), Sophos (United Kingdom), Nibusinessinfo (Ireland), Blue Planet-works Inc. (Japan), F-Secure (Finland), McAfee (United States), Kaspersky (Russia), ESET, spol. s r.o. (Slovakia), Trend Micro Incorporated (Japan), Computer Security Products Inc. (United States)

Free Sample Report + All Related Graphs & Charts @:https://www.advancemarketanalytics.com/sample-report/130164-global-server-security-solution-market

Analysts at AMA have conducted a special survey and connected with opinion leaders and industry experts from various regions to closely understand the impact on growth as well as local reforms to fight the situation. A special chapter in the study presents an impact analysis of COVID-19 on the Global Server Security Solution Market, along with tables and graphs related to various countries and segments showcasing the impact on growth trends.

Market Trend: Rising Demand for Cloud-Based Security Solutions

Adoption of Security Solutions by Enterprises

Market Drivers: Increasing Concerns over Data Breaches are Fueling the Market

Wide Range of Applications in Industries Such as Banking, Government, Retail, Healthcare and Others

Opportunities: Growing Internal and External Threats are Boosting the Market

Challenges:

Lack of Skilled Professionals

The Global Server Security Solution Market segments and Market Data Break Down are illuminated below:

by Type (Network firewall security, Server hardening), Organization size (SMEs, Large enterprises), Pricing (Monthly, Annually, One time license), Server type (Cloud Server, Local server)

Enquire for customization in Report @:https://www.advancemarketanalytics.com/enquiry-before-buy/130164-global-server-security-solution-market

Market Insights:

Merger Acquisition:

In July 2018, Imperva acquired Prevoty, an application security firm. This acquisition helps it provide comprehensive security solutions to protect application services.

Regions Included are: North America, Europe, Asia Pacific, Oceania, South America, Middle East & Africa

Country Level Break-Up: United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia and New Zealand etc.

What benefits does the AMA research study provide?
Latest industry-influencing trends and development scenarios
Open up new markets
Seize powerful market opportunities
Key decisions in planning and to further expand market share
Identify key business segments, market proposition & gap analysis
Assistance in allocating marketing investments

Strategic Points Covered in Table of Content of Global Server Security Solution Market:
Chapter 1: Introduction, market driving force, product objective of study and research scope of the Server Security Solution market
Chapter 2: Exclusive Summary: the basic information of the Server Security Solution Market
Chapter 3: Displaying the Market Dynamics: Drivers, Trends and Challenges of the Server Security Solution
Chapter 4: Presenting the Server Security Solution Market Factor Analysis: Porter's Five Forces, Supply/Value Chain, PESTEL analysis, Market Entropy, Patent/Trademark Analysis
Chapter 5: Displaying market size by Type, End User and Region 2015-2020
Chapter 6: Evaluating the leading manufacturers of the Server Security Solution market, which consists of its Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profile
Chapter 7: To evaluate the market by segments, by countries and by manufacturers with revenue share and sales by key countries (2021-2026)
Chapter 8 & 9: Displaying the Appendix, Methodology and Data Source

Finally, the Server Security Solution Market report is a valuable source of guidance for individuals and companies in their decision framework.

Get More Information: https://www.advancemarketanalytics.com/request-discount/130164-global-server-security-solution-market

Key questions answered:
Who are the leading key players and what are their key business plans in the Global Server Security Solution market?
What are the key concerns of the five forces analysis of the Global Server Security Solution market?
What are the different prospects and threats faced by the dealers in the Global Server Security Solution market?
What are the strengths and weaknesses of the key vendors?

Ultimately, this report will give you a clear perspective on every facet of the market without the need to refer to any other research report or information source. Our report will give you the facts about the past, present, and future of the market concerned.

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia.

About Author:

Advance Market Analytics is a global leader in the market research industry, providing quantified B2B research to Fortune 500 companies on high-growth emerging opportunities that will impact more than 80% of worldwide companies' revenues.

Our analysts track high-growth studies with detailed statistical and in-depth analysis of market trends and dynamics that provide a complete overview of the industry. We follow an extensive research methodology coupled with critical insights into related industry factors and market forces to generate the best value for our clients. From reliable primary and secondary data sources, our analysts and consultants derive informative and usable data suited to our clients' business needs. The research study enables clients to meet varied market objectives, from global footprint expansion to supply chain optimization and from competitor profiling to M&As.

Contact Us:

Craig Francis (PR & Marketing Manager)
AMA Research & Media LLP
Unit No. 429, Parsonage Road, Edison, NJ, New Jersey, USA 08837
Phone: +1 (206) 317 1218
sales@advancemarketanalytics.com
Connect with us at:
https://www.linkedin.com/company/advance-market-analytics
https://www.facebook.com/AMA-Research-Media-LLP-344722399585916
https://twitter.com/amareport

Read more:
Server Security Solution Market to Witness Huge Growth by Imperva ,Sophos ,Nibusinessinfo The Manomet Current - The Manomet Current

Read More..

Enfabrica Takes On Hyperdistributed I/O Bottlenecks – The Next Platform

Not so very long ago, distributed computing meant clustering together a bunch of cheap X86 servers and equipping them with some form of middleware that allowed for work to be distributed across hundreds to thousands to sometimes tens of thousands of nodes. Such scale-out approaches, which added complexity to the software stack, were necessary because normal SMP and NUMA scale up techniques, with very tightly coupled compute and shared memory across a dozen or two nodes, simply could not stretch any further.

These distributed systems, which were difficult enough to build, are child's play compared to what we at The Next Platform are starting to call hyperdistributed systems, which are evolving as disaggregation and composability have entered the imagination of system architects at the same time as a wider and wider variety of compute, memory, storage, and networking components are available and are expected to be used in flexible rather than static ways.

The problem, say the co-founders of a stealth-mode startup called Enfabrica, is that this new hyperdistributed architecture has more bottlenecks than a well-stocked bar. And they say they have developed a combination of silicon, system hardware, and software that will create a new I/O architecture that better suits hyperdistributed systems. Enfabrica is not uncloaking from stealth mode just yet, but the company's founders reached out to us as they were securing their first round of funding ($50 million from Sutter Hill Ventures) and wanted to elaborate on the problems they see in modern distributed systems before they eventually disclose how they have solved those problems.

Enfabrica was formed in 2020 by Rochan Sankar, its chief executive officer, Shrijeet Mukherjee, its chief development officer, plus other founding engineers, and its founding advisor is Christos Kozyrakis, a professor of electrical engineering and computer science at Stanford University for the past two decades who got his PhD in computer science at the University of California at Berkeley with none other than David Patterson as his PhD advisor. Kozyrakis runs the Multiscale Architecture and Systems Team (MAST) at Stanford and has done research stints at Google and Intel, among other organizations; he has done extensive work on vector processors, operating systems, cluster managers for clouds, and transactional memory systems.

Sankar got his bachelor's in electrical engineering from the University of Toronto and an MBA from the Wharton School at the University of Pennsylvania, spent seven years at Cypress Semiconductor as an application engineer and chip architect, and was notably the director of product marketing and management at Broadcom, where he drove five generations of its Trident and Tomahawk datacenter switching ASICs, which had over 300 million ports sold and generated billions of dollars in revenue for Broadcom.

Mukherjee got his master's at the University of Oregon and spent eight years at Silicon Graphics working on high-end graphics systems before joining Cisco Systems as a member of its technical staff and becoming a director of engineering on the groundbreaking California Unified Computing System converged server-network system, specifically working on the virtual interface card that is a predecessor to the DPUs we see emerging today. After that, Mukherjee spent nearly seven years at Cumulus Networks as vice president of software engineering, building the software team that created its open source switch software (now part of the Nvidia stack along with the switch ASICs, NICs, and switch operating systems from the $6.9 billion acquisition of Mellanox Technologies). When Nvidia bought Cumulus, Mukherjee did a two-year stint at Google working on network architecture and platforms, and he can't say much more about what he did there, as usual.

Sankar and Mukherjee got to know one another because it was natural for the leading merchant silicon supplier for hyperscaler and cloud builder switches to get to know the open source network operating system supplier (Cumulus needed Broadcom more than the other way around, of course). Mukherjee and Kozyrakis worked together during their stints at Google. The team they have assembled (the exact number is a secret) comprises system architects and distributed systems engineers who have deployed "planetscale software," as Mukherjee put it, including people from Amazon Web Services, Broadcom, Cisco Systems, Digital Ocean, Facebook, Google, Intel, and Oracle.

"We jointly saw a massive transformation happening in distributed computing," Sankar tells The Next Platform. "And that is being keyed by the deceleration of Moore's Law and the fact that Intel has lost the leadership role in setting the pace on server architecture iterations. It is no longer a tick-tock cycle, which then drove all of the corresponding silicon and operating system innovation. That has been completely disrupted by the hyperscalers and cloud builders. And we are now in a race with heterogeneous instances of compute, storage, and networking, where we see a diversity of solutions: cloud-sourced processors, other ASICs, GPUs, transcoders, FPGAs, disaggregated flash, potentially disaggregated memory. What we saw happening at the datacenter level in terms of the disaggregation of the architecture and the need for interconnects at the datacenter level is now headed straight into the rack."

It is hard to argue with that, and we don't. We see the same thing happening, and the I/O is way out of whack with the compute and the storage. Take AI as an example.

"AI chips are basically improving their processing capabilities by 10X to 100X, depending on who you believe," says Kozyrakis. "At the same time, systems are becoming bigger. If you look at just hyperscalers, it's an order of magnitude increase in the size of datacenters. So we have this massive increase in compute capacity. But we need to provide the 10X, the 100X, the 1000X really, in the I/O connectivity infrastructure. Otherwise, it will be very difficult to bring the benefits of this capacity to bear."

To put it bluntly, hyperscaling was relatively easy if no less impressive for its time, but hyperdistribution is much more complex and it is never going to work without the right I/O. With hyperscaling, says Sankar, distributed systems were built with parent-child query architectures mapped onto homogeneous two-socket X86 server nodes with the same memory and storage and the same network interfaces. The hardware was essentially the same, and that made it all easy and drove volume economics to boot.

"Datacenters are evolving into data pipelines," Sankar explains. "The diversity of what is happening in the software layer with respect to how data is being processed is mapping into the infrastructure layers, and it is driving increasing heterogeneity in the server architectures to make them optimized. We firmly believe that the solutions that are being sketched out today suffer from problems with scalability and performance, and they suffer from the inability to be best of breed across a wide range of composable architectures."

And without really getting into specifics, Enfabrica says it is building the hardware and software that is going to glue all of this compute, storage, and networking together in a more scalable fashion. We strongly suspect that Enfabrica will borrow some ideas from fast networks and DPUs, but that this is also more than just having a DPU in every server and lashing them together. Pensando, Fungible, Nvidia, and Annapurna Labs within Amazon Web Services are already doing that. And to be frank, what those companies will tell you is that many of the ideas that are in those smart-NICs or DPUs came from the work that Mukherjee did on the virtual network and storage interfaces in the UCS platform. The work Mukherjee did with Cumulus has also figured prominently in the way certain hyperscalers do their networking today, by the way.

Without getting into specifics, since the company is still in stealth mode, Enfabrica thinks it has come up with a better idea for massively distributed I/O.

"If you look at all of these companies, they have built a product and now they are going to try to convince people to use them," says Mukherjee. "Whereas we assembled a team of people who know what the product needs to do and how it will actually fit into the lattice of compute, network, and storage that it needs to fit into. This difference actually changes how we emphasize what's hardware and what's software, and where you need to put effort in and where you don't. For example, to make a very illustrative point: how big should a table of something be? Hardware is always going to be limited, software will always want everything to be unlimited. How do you make those decisions and how do you partition? It requires people who have delivered these kinds of solutions because they understand where people are willing to take a cutback and where absolute line performance matters."

We realize that none of this tells you what Enfabrica is doing. But we can tell you how the company is thinking about I/O in the datacenter and the market sizes and players in these areas that it plans to disrupt. Take a look at this chart we have assembled:

This is what Sankar calls the $10 billion I/O problem that Enfabrica is trying to solve, and that is roughly the total addressable market of all of the silicon for interconnects shown above. This lays out all of the shortcomings of various layers of the interconnect stack.

Whatever Enfabrica is doing, we strongly suspect that it is going to disrupt each of these layers within the server, within the rack, across the rows, and within the walls of the datacenter. The company is still in stealth mode and is not saying, but we expect to hear more in 2021 and 2022 as it works to intercept a slew of different technologies and scale-out systems that are being architected for 2023 and beyond.

Read the original here:
Enfabrica Takes On Hyperdistributed I/O Bottlenecks - The Next Platform

Read More..

China's role in the 2021 cryptocurrency crash – Economic Times

Bengaluru: Earlier this week, the price of Bitcoin dropped below $30,000 for the first time since January, after hitting an all-time high of almost $65,000 in mid-April.

While Tesla chief executive Elon Musk's tweets are one of the reasons for this price dip, another major reason is China's massive crackdown on the digital coin and cryptocurrencies in general.

The country has always had a firm stance against cryptocurrencies. Back in 2013, China's central bank had barred financial institutions from handling Bitcoin transactions when the price of the digital coin jumped from $100 to $1,000 within a few months. It had also banned fundraising through initial coin offerings and shuttered domestic Bitcoin exchanges in 2017.

In May, Chinese Vice Premier Liu He and the State Council issued a warning saying it was necessary to "crack down on Bitcoin mining and trading behavior, and resolutely prevent the transmission of individual risks to the social field."

This was after three Chinese state-backed financial associations raised concerns about risks emerging from the volatility of cryptocurrencies, and directed their members including banks and online payment firms to not provide any cryptocurrency-related services.

Crypto miners shut down

Soon after the government warning, several cryptocurrency miners including HashCow and BTC.TOP halted all or part of their China operations last month. This had huge ramifications since Chinese miners reportedly account for as much as 70% of crypto mining worldwide.

Earlier in June, Weibo, China's version of Twitter, blocked several prominent crypto-related accounts, saying each of them "violates laws and rules."

On Monday, China's central bank, the People's Bank of China (PBOC), also met with several domestic banks and payment firms such as Alipay, urging them to tighten restrictions on cryptocurrency trading and directing them to stop facilitating cryptocurrency transactions. These institutions must also comprehensively investigate and identify crypto exchanges and over-the-counter capital accounts of dealers and cut off the payment link for transaction funds in a timely manner, it said.

This crackdown has forced several miners to shut down or sell their machines in despair and exit the business. Some of them are also relocating overseas to countries like Kazakhstan, according to a Reuters report. It said that China's crackdown could cause up to 90% of crypto mining to go offline in the country, citing an estimate by Adam James, a senior editor at OKEx Insights.

Here is the original post:
Chinas role in the 2021 cryptocurrency crash - Economic Times

Read More..