
Zerto beefs up backup, DR and in-AWS-cloud protection – Blocks and Files

DR specialist Zerto is converging backup and disaster recovery (DR), and hopes to strengthen both sides of that equation with an expanded continuous data protection engine. It is also previewing an in-AWS-cloud backup service.

Zerto Enterprise Cloud Edition (ECE) is at its core a DR product, providing DR facilities for on-premises and cloud applications. DR can be provided to the cloud, from the cloud and between clouds. ECE automates both DR and backup functions and includes the Zerto Data Protection (ZDP) continuous data protection technology.

Gil Levonai, CMO and SVP of product at Zerto, offered a quote: "We are now delivering a new offering that I personally believe will change the backup market, an industry that hasn't evolved in more than 30 years. ZDP gives businesses a data protection strategy for all of their applications, with significant TCO savings, tailored to their unique needs."

ZDP delivers local continuous backup for day-to-day backup restores. Its local journaling technology enables customers to recover without the data loss, downtime, or production impact that Zerto says are inherent to traditional backup systems, ensuring business continuity and availability.

In fact ZDP should, in Zerto's view, displace traditional backup because it offers lower data loss rates and lower infrastructure costs in TCO terms, with an up to 50 per cent saving claimed. ZDP also provides long-term retention on-premises or in the public cloud, with both AWS and Azure as public cloud targets.

Updated: Oct 16

Zerto has ZDP, ECE and Zerto 8.5. How do they fit together? Zerto told us: "The core software, referred to as the Zerto platform, has now moved from v8.0 to v8.5; ZDP and ECE are the ways you consume/use the platform. They are the license types that you can have now with Zerto."

Zerto told us that ZDP, "[which] is for backup and long term retention," is a new offering, still based on continuous data protection, that is focused and priced for backup. The reason it's priced for backup is that the disaster recovery capabilities (failback, failover, DR testing, re-IP) are removed, because customers don't need those capabilities for backup.

In effect, ZDP is ECE-lite. The company tells us: "This means that we will see most customers use ECE for the mission-critical applications and ZDP to back up the rest of their environment that only requires backup and not DR."

Zerto 8.5 is the latest version of Zerto's core protection software, and is used in both the ECE and ZDP products. It follows on from the Zerto 7.0 and Zerto 8.0 versions. At the time of its release Zerto said v7 "converges backup and disaster recovery using hypervisor-based replication and journalling for short- and long-term retention." And: "Its technology allows you to achieve RPOs of seconds using journal checkpoints up to 30 days ago, instead of a 24-hour recovery time frame."

Zerto 8 advanced this, as it brought Continuous Data Protection (CDP) to VMware on Google Cloud. The company said: "Zerto's continuous data protection (CDP) offers the most effective protection for your business applications and data. Zerto automatically tracks and captures modifications, saving every version of your data locally or to a target repository, short and long term."

Zerto claimed its CDP and innovative journaling technology removes the need for snapshots and thus eliminates production impact and lengthy backup windows. Its recovery granularity reduces data loss to just seconds.

The 8.5 version takes this a step further as it expands CDP applicability beyond VMware on Google's Cloud. It is also now suited for lower-tier application backup, and not only DR for upper-tier applications. Zerto 8.5 includes:

The cmdlets enable performing specific tasks from a script rather than from within the Zerto User Interface. This could be for retrieving information such as the Virtual Protection Groups (VPGs) defined for a site, working with checkpoints, or deleting a VPG.
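Zerto also documents a REST API on the Zerto Virtual Manager that covers similar operations, so the same tasks can be scripted outside both the UI and PowerShell. The sketch below is a minimal, unverified Python illustration only: the endpoint paths, port, and session header name are assumptions and should be checked against the Zerto API documentation for your version.

```python
# Minimal sketch of scripting against the Zerto Virtual Manager REST API instead of
# the Zerto UI. Endpoint paths, the session header name, and the port are assumptions
# for illustration; check the Zerto REST API documentation for your release.
import requests

ZVM = "https://zvm.example.local:9669"   # hypothetical Zerto Virtual Manager address

def get_session(username: str, password: str) -> str:
    """Open an API session and return the session token (assumed header name)."""
    resp = requests.post(f"{ZVM}/v1/session/add", auth=(username, password), verify=False)
    resp.raise_for_status()
    return resp.headers["x-zerto-session"]

def list_vpgs(token: str) -> list:
    """Retrieve the Virtual Protection Groups (VPGs) defined for the site."""
    resp = requests.get(f"{ZVM}/v1/vpgs", headers={"x-zerto-session": token}, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    session = get_session("admin", "password")   # placeholder credentials
    for vpg in list_vpgs(session):
        print(vpg.get("VpgName"), vpg.get("Status"))
```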

Zerto also previewed an in-cloud data protection and DR product on AWS, which protects applications across regions with cloud-native resilience.

The company said it will extend its platform to offer more simplicity and orchestration across all use cases. This will cover businesses requiring a recent data restore due to user error or DR from an infrastructure outage, cloud-first businesses, or businesses just starting out in the public cloud.

It wants to be the one supplier that fits all use cases across the data protection spectrum, from SMB backup to large enterprise DR, covering on-premises virtual servers, containerised servers and in-cloud applications in multiple public clouds.

Read the original:
Zerto beefs up backup, DR and in-AWS-cloud protection Blocks and Files - Blocks and Files

Read More..

How to move your computer systems to the cloud – KnowTechie

Cloud computing has been around for many years now, but some people are still skeptical of its benefits to business operations. Does it bring more efficiency to your company than the traditional local servers? Well, based on recent reviews, there's no doubt that cloud-based storage and services enhance the success of any business project. For instance, the fact that you can access your files from wherever you are makes it a must-have technology in the modern world.

If you're running a start-up business, then perhaps one of the stumbling blocks would be the process of migration. Luckily, there are professionals who specialize in this task and are focused on making it all smooth sailing. In your research, you'll come across many success stories, such as a Chicago MSP helping a company move to the cloud. This is just proof that it's very possible to modernize your business and become even more competitive. But before you start strategizing on how to migrate to the new world of computing, it's important to understand what it entails.

This article will discuss all the fundamentals of moving your local servers to the cloud.

Simply put, the cloud is a series of servers that you can access over the internet. Others may define it as "someone else's computer," which is still correct. Cloud computing, therefore, is the act of storing programs and data on a remote server and accessing them over the internet. Now that you have this basic knowledge, let's see how you can start the migration process.

What Do You Want To Move?

Before you can start your plans, it's important to know which parts of your current system can be moved to the cloud. Every business has desktop applications, data, internet, and some peripherals. But which of these can be moved?

The most obvious group of items that should be on your list is the programs used in various departments within the business. There are two categories of these applications: cloud-based and traditional desktop programs. Cloud-based applications are those whose data is held on the cloud servers. You can access them either through a web browser or an installed program.

Traditional desktop applications, on the other hand, are those which don't have a web-based alternative and might need to be integrated with other programs for them to be operational. However, the best option is to use a hosted desktop, which basically works like your physical computer. It can accommodate as many services as you want, provided you choose the right memory size.

The advantage of using a hosted desktop is that you don't need any special IT skills to operate it. In fact, all the maintenance procedures are done by the service provider. All you have to do is install the applications you have on your local server and start working. The best part is that you can connect it to your local server and transfer various files seamlessly. Continue reading to find out more about the migration of databases.

Of course, if you decide to use cloud-based applications or hosted desktops, you'll need to also move your database. Structured data, which includes names, geolocation, card numbers, and addresses, can be easily accessed via this system. All you need is to link your local server to the hosted desktop and you'll be good to go.

Unstructured data, on the other hand, are quite difficult to deconstruct since they lack a pre-defined model. As such, they cannot be accessed in a similar way to the structured data. They include satellite imagery, videos, and audio. The best way to manage these files is by using separate storage servers.

For instance, you can store your data in Dropbox, Google Drive, or OneDrive. There are many cloud-based storage services to choose from, but not all of them are ideal for your business. Before making a final decision, consider reliability, security, performance, and flexibility. The best thing about transferring your data to these servers is the fact that you can access the files via both your local computer and the remote desktop.
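As a concrete example of that kind of transfer, here is a minimal Python sketch that copies a local folder of files up to Dropbox, one of the services mentioned above. It is only an illustration: the access token and folder names are placeholders, and very large files would need Dropbox's upload-session API rather than a single call.

```python
# A minimal sketch of pushing a local folder of unstructured files (video, audio,
# imagery) to a cloud storage service. Dropbox is used only as one example of the
# services mentioned above; the access token and folder names are placeholders.
import pathlib
import dropbox

ACCESS_TOKEN = "YOUR_DROPBOX_ACCESS_TOKEN"   # placeholder credential
LOCAL_DIR = pathlib.Path("./media_exports")  # files pulled off the local server
REMOTE_DIR = "/company-backups"

def sync_folder() -> None:
    dbx = dropbox.Dropbox(ACCESS_TOKEN)
    for path in LOCAL_DIR.rglob("*"):
        if path.is_file():
            remote_path = f"{REMOTE_DIR}/{path.relative_to(LOCAL_DIR).as_posix()}"
            with path.open("rb") as fh:
                # Overwrite keeps the cloud copy in step with the local server.
                dbx.files_upload(fh.read(), remote_path,
                                 mode=dropbox.files.WriteMode("overwrite"))
            print(f"uploaded {path} -> {remote_path}")

if __name__ == "__main__":
    sync_folder()
```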

Peripherals such as your printers cannot be moved to the cloud. However, youll need to ensure that whatever cloud service you decide to use can be linked to your printers in case you need any printing done remotely.

As you already know, any cloud-based computing relies heavily on internet connectivity. In other words, without a good internet connection, there isn't much you can do. Therefore, before you even think of transferring your data to cloud servers, make sure you find a reliable internet service provider (ISP).

The whole point of moving your computer systems to the cloud is to enhance your operations. Slow internet will render all your efforts useless regardless of how hardworking your employees are. Remember, time is money, so every second you lose because of downtime can be very costly.

Unfortunately, there is no perfect ISP, and downtime is something that can come when least expected. The best you can do is have backup internet links with other providers to create redundancy. One thing to consider, though, is that all providers should be using different underlying networks. As such, you'll rarely experience significant downtime. For instance, you can have a 4G dongle that can be switched on whenever your main ISP goes offline.
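A simple way to act on that advice is to watch the primary link and flag when the backup should be brought up. The following Python sketch is one illustrative approach, assuming a primary ISP plus a 4G dongle held in reserve; it only detects the outage and calls a placeholder hook, since actually switching links depends on your router or operating system.

```python
# A minimal sketch of a connectivity watchdog for the setup described above: a primary
# ISP plus a 4G dongle held in reserve. It only detects the outage and calls a
# placeholder hook; switching links is OS/router specific and left out.
import socket
import time

PROBE_HOSTS = [("1.1.1.1", 53), ("8.8.8.8", 53)]  # public DNS resolvers as probes
CHECK_INTERVAL = 30  # seconds between checks

def link_is_up(timeout: float = 3.0) -> bool:
    """Primary link counts as up if any probe host accepts a TCP connection."""
    for host, port in PROBE_HOSTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

def activate_backup_link() -> None:
    # Placeholder: bring up the 4G dongle / secondary route here.
    print("Primary ISP appears down: switch to the backup link")

if __name__ == "__main__":
    while True:
        if not link_is_up():
            activate_backup_link()
        time.sleep(CHECK_INTERVAL)
```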

If done correctly, implementing cloud computing can be very beneficial for your business. However, any wrong move can be quite significant and might even take you back to square one. Therefore, it's important to consult different experienced professionals and check out the profiles of various cloud service providers.

The process of migration might seem quite intimidating considering the fact that some steps require a lot of special IT skills. However, all you need to do is understand which files can be moved and how you can move them. Also, find a cloud service that will perfectly fit your needs. The most important part of this project is to have a reliable service provider to enhance your operations. In addition, you should have a backup 4G dongle or another ISP that can be used during emergencies.


Link:
How to move your computer systems to the cloud - KnowTechie

Read More..

What is Elasticsearch and why is it involved in so many data leaks? – TechRadar

The term Elasticsearch is never far away from the news headlines and usually for the wrong reasons. Seemingly every week that goes by brings a new story about an Elasticsearch server that has been breached, often resulting in troves of data being exposed. But why are so many breaches originating from Elasticsearch buckets, and how can businesses that leverage this technology use it to its fullest extent while still preventing a data leak?

To answer these questions, firstly, one must understand what Elasticsearch is. Elasticsearch is an open source search and analytics engine as well as a data store developed by Elastic.

Regardless of whether an organization has a thousand or a billion discrete pieces of information, by using Elasticsearch it has the capability to search through huge amounts of data, running calculations in the blink of an eye. Elasticsearch is a cloud-based service, but businesses can also use Elasticsearch locally or in tandem with another cloud offering.
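For a sense of what that looks like in practice, the short Python sketch below indexes a couple of documents and runs a full-text search using the official Elasticsearch client. The endpoint, index name, and documents are placeholders, and the keyword arguments follow the 8.x client, so older client versions may differ.

```python
# A small sketch of indexing and searching with the official Elasticsearch Python client.
# The endpoint, index name, and documents are placeholders; argument names follow the
# 8.x client and may differ in older versions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # local or cloud endpoint

# Index a couple of discrete pieces of information.
es.index(index="customer-notes", document={"customer": "Acme", "note": "renewal due in May"})
es.index(index="customer-notes", document={"customer": "Globex", "note": "asked about pricing"})

# Full-text search across everything in the index.
hits = es.search(index="customer-notes", query={"match": {"note": "renewal"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"])
```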

Organizations will then use the platform to store all of their information in repositories (also known as buckets), and these buckets can include emails, spreadsheets, social media posts, and files: basically any raw data in the form of text, numbers, or geospatial data. As convenient as this sounds, it can be disastrous when mass amounts of data are left unprotected and exposed online. Unfortunately for Elastic, this has resulted in many high-profile breaches involving well-known brands from a variety of industries.

During 2020 alone, cosmetics giant Avon had 19 million records leaked from an Elasticsearch database. Another misconfigured bucket, belonging to Family Tree Maker, an online genealogy service, exposed over 25GB of sensitive data. The same happened with sports giant Decathlon, which saw 123 million records leaked. Then, more than five billion records were exposed after another Elasticsearch database was left unprotected. Surprisingly, it contained a massive database of previously breached user information from 2012 to 2019.

From what has been disclosed so far, clearly those who choose to use cloud-based databases must also perform the necessary due diligence to configure and secure every corner of the system. Also, quite clearly, this necessity is often being overlooked or just plain ignored. A security researcher even went to the length of discovering how long it would take for hackers to locate, attack, and exploit an unprotected Elasticsearch server that was left purposely exposed online: eight hours was all it took.

Digital transformation has definitely changed the mindset of the modern business, with cloud seen as a novel technology that must be adopted. While cloud technologies certainly have their benefits, improper use of them has very negative consequences. Failing or refusing to understand the security ramifications of this technology can have a dangerous impact on business.

As such, it is important to realize that in the case of Elasticsearch, just because a product is freely available and highly scalable doesn't mean you can skip the basic security recommendations and configurations. Furthermore, given the fact that data is widely hailed as the new gold coinage, demand for monetising up-to-date data has never been greater. Evidently for some organizations, data privacy and security have played second fiddle to profit as they do their utmost to capitalize on the data-gold rush.

Is there only one attack vector for a server to be breached? Not really. In truth, there are a variety of different ways for the contents of a server to be leaked: a password being stolen, hackers infiltrating systems, or even the threat of an insider breaching from within the protected environment itself. The most common, however, occurs when a database is left online without any security (even lacking a password), leaving it open for anyone to access the data. So, if this is the case, then there is clearly a poor understanding of the Elasticsearch security features and what is expected from organizations when protecting sensitive customer data. This could derive from the common misconception that the responsibility of security automatically transfers to the cloud service provider. This is a false assumption and often results in misconfigured or under-protected servers. Cloud security is a shared responsibility between the organization's security team and the cloud service provider; however, as a minimum, the organization itself owns the responsibility to perform the necessary due diligence to configure and secure every corner of the system properly to mitigate any potential risks.
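On the client side, the contrast with an open, passwordless endpoint looks something like the sketch below: connect only over HTTPS, with credentials and certificate verification. The host, user, and CA path are placeholders, and the server itself still has to be configured separately with authentication enabled and without being bound to a public interface.

```python
# A minimal client-side sketch of "don't leave it open": talk to Elasticsearch only over
# HTTPS, with credentials and certificate verification, rather than to an unauthenticated
# public endpoint. Host, user, and CA path are placeholders; server-side hardening
# (enabling authentication, restricting network binding) is still required.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://search.internal.example.com:9200",  # private endpoint, not a public IP
    basic_auth=("svc_search", "CHANGE_ME"),      # placeholder credentials
    ca_certs="/etc/ssl/certs/internal-ca.pem",   # verify the server certificate
)
print(es.info())
```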

To effectively avoid Elasticsearch (or similar) data breaches, a different mindset to data security is required, one that allows data to be a) protected wherever it may exist, and b) protected by whomever may be managing it on their behalf. This is why a data-centric security model is more appropriate, as it allows a company to secure data and use it while it is protected for analytics and data sharing on cloud-based resources.

Standard encryption-based security is one way to do this, but encryption methods come with sometimes-complicated administrative overhead to manage keys. Also, many encryption algorithms can be easily cracked. Tokenization, on the other hand, is a data-centric security method that replaces sensitive information with innocuous representational tokens. This means that, even if the data falls into the wrong hands, no clear meaning can be derived from the tokens. Sensitive information remains protected, resulting in the inability of threat actors to capitalise on the breach and data theft.
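To illustrate the idea (and only the idea; real tokenization products add durable vault storage, access controls, and format-preserving tokens), here is a toy Python sketch in which sensitive values are swapped for random tokens and the token-to-value mapping lives in a separate vault.

```python
# An illustrative, not production-grade, sketch of tokenization: sensitive values are
# replaced by random tokens, and the token-to-value mapping lives only in a separate vault,
# so a leaked record reveals nothing meaningful on its own.
import secrets

class TokenVault:
    def __init__(self) -> None:
        self._vault: dict[str, str] = {}   # token -> original value, kept out of the data store

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
record = {"name": "Jane Doe", "card": vault.tokenize("4111 1111 1111 1111")}
print(record)                            # the stored/indexed record holds only the token
print(vault.detokenize(record["card"]))  # the original is recoverable only via the vault
```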

With GDPR and the new wave of similar data privacy and security laws, consumers are more aware of what is expected when they hand over their sensitive information to vendors and service providers, thus making protecting data more important than ever before. Had techniques like tokenization been deployed to mask the information in many of these Elasticsearch server leaks, that data would have been indecipherable by criminal threat actors; the information itself would not have been compromised, and the organization at fault would have been compliant and avoided liability-based repercussions.

This is a lesson to all of us in the business of working with data - if anyone is actually day-dreaming that their data is safe while hidden in plain sight on an anonymous cloud resource, the string of lapses around Elasticsearch and other cloud service providers should provide the necessary wake-up call to act now. Nobody wants to deal with the fall-out when a real alarm bell goes off!

Link:
What is Elasticsearch and why is it involved in so many data leaks? - TechRadar

Read More..

What is application hosting? – Techradar

Put simply, a hosted application is any piece of software that is running on someone elses infrastructure rather than on-premise. Such hosted applications are accessed over the Internet and provide a web-based user interface for users to interact with them. Hosted applications are usually offered as Software-as-a-Service (SaaS).

In other words, application hosting allows you to run your applications on servers or in a cloud hosted by a service provider, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), which provides the necessary foundations to host your apps.

An example of a hosted application that everyone can identify with is WordPress. If you wanted to blog, the traditional method would be to download WordPress and spend time installing and configuring it before you could publish. However, thanks to application hosting on WordPress.com, you can simply visit the website and get started immediately after registering an account. That's because WordPress.com hosts a pre-installed, pre-configured version of WordPress.

Hosting applications on remote machines has several advantages. For starters, it reduces costs since you don't have to spend money on building and maintaining the underlying hardware, software, and general IT infrastructure. This is hugely beneficial since most of the time the underlying hardware remains underutilized.

Furthermore, with application hosting, you only pay for the services you use. This also makes it very scalable as opposed to the traditional on-premise hosting, since you can provision additional resources to handle peak load requirements with just a few clicks. You can start small and grow as needed without incurring the costs of pre-purchasing excess server capacity.

Application hosting also provides improved availability by minimizing downtime as most of the reputable hosts provide enough redundancy to handle hardware failures and other faults. In the same vein, the cloud hosts also invest in enhancing the security of their servers. In fact, most of the reputable ones meet stringent ISO security standards.

Finally, since the hosted application is accessible from the Internet, all authorized users can access the app from anywhere and work remotely.

There's no dearth of cloud hosting providers that you can use to host your applications. Since they don't all offer their services at the same price, there are some important factors that influence the final cost of hosting applications.

The most important factor is the nature of the application to be hosted. Some applications take more processing power while others need a lot of storage. The final cost of application hosting will be based on these technical requirements of the hosted application.

Another factor that influences the cost is the type of server. The two most common are shared and dedicated servers. While dedicated servers are more expensive than shared hosting ones, both types have their advantages and drawbacks. You should evaluate both to determine which option works best for your application.

In addition to choosing the type of server, the duration of the plan will also have an impact on the final cost of hosting the application. Instead of charging an upfront cost, most application hosting providers offer their services on a recurring subscription. While most platforms offer flexible tenures, we'd advise you to choose a long-term plan, which will be comparatively cheaper than monthly plans.

Convinced about the benefits of application hosting? The next step is to hunt for the right hosting provider that meets all your requirements without breaking the bank. Here are some of the main features that you should look for while evaluating an application hosting vendor:

1. Application compatibility: Just like all applications are built differently, application hosting too isn't a one-size-fits-all solution. The software requirements of your application will dictate the features that the hosting platform must meet.

2. Onboarding process: Depending on the type of application you need to host, many hosting platforms will offer a one-click setup to simplify the deployment process. But deployment is only one piece of the puzzle; you'll also need to look into the platform's management tools and evaluate its documentation and other resources to help you get started with the platform.

3. Security features: You should always be on the offensive when it comes to cyber security, especially these days when data breaches happen at an alarming frequency. Keep your eyes peeled for hosting platforms that invest in the security of their infrastructure, both from physical and online attacks.

4. Reliability and uptime: Servers, whether hosted on-premise or online, do occasionally have to go offline for maintenance and for other reasons, such as faulty hardware and other disruptions. Make sure you check the amount of time a service is affected by these kinds of issues. Many reputable providers promise 99% uptime and some even back their claims with a guarantee.
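It is worth translating those percentages into hours before accepting them. The quick back-of-the-envelope calculation below shows that a 99% uptime promise still allows for more than three and a half days of downtime per year.

```python
# A quick check of what an uptime percentage actually allows, worth doing before taking
# a "99% uptime" promise at face value.
HOURS_PER_YEAR = 24 * 365

for uptime in (0.99, 0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.2%} uptime -> up to {downtime_hours:.1f} hours of downtime per year")

# 99.00% uptime -> up to 87.6 hours (~3.7 days) per year
# 99.90% uptime -> up to  8.8 hours per year
# 99.99% uptime -> up to  0.9 hours (~53 minutes) per year
```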

5. Support and service: Since most businesses service clients round the clock, you'll need the same kind of availability from your application hosting provider. Look for a platform that provides 24/7 customer support, and check the available avenues of communication, such as live chat, email, or phone.

6. Data export services: Although it's fairly common for all providers to allow you to export your data from the provider, it'll still be a good idea to check for this function. Application providers can and do go out of business, which is a legitimate concern and often cited as one of the disadvantages of hosting applications on a remote platform rather than on-premise. However, having the flexibility to export your data will help you migrate it to another platform without too much downtime.

See the original post here:
What is application hosting? - Techradar

Read More..

IBM Goes All-In On Hybrid Cloud – IT Jungle

October 12, 2020 – Timothy Prickett Morgan

Well, that was a bit of a surprise, and probably something that only obliquely matters to IBM i shops at the moment, but Big Blue's top brass has decided to carve out its managed infrastructure services business from Global Services and spin it out as a new, publicly traded company.

This business, which is tentatively being called NewCo until a real name is provided, is expected to be cut loose in a tax-free manner and distributed to IBM's shareholders by the end of 2021, so we have some time to assess the ramifications, if any, for the IBM i base. The core of the $19 billion NewCo is the outsourcing and hosting business that made Global Services gigantic and, in a very real sense that we have described many times, saved IBM because it gave the company a story to tell and then live up to in the very difficult 1990s.

Looking at the prior 12 months of sales, the NewCo business brought in about $19 billion in revenues, according to IBM, which hosted a briefing with Wall Street analysts on Thursday to go over the separation. The remaining IBM will be smaller, at $59 billion a year, but growth in its cloud sales, including Red Hat, will now seem that much larger against a smaller base. In the trailing 12 months, Red Hat revenues were up by 19 percent, from $3.5 billion to $4.2 billion, and that very good growth (smaller than many expected, mind you) gets lost in the noise of the much, much larger NewCo business that is pulling revenue down.

IBM will pay pretty handsomely for spinning out this business, which will incur a $2.3 billion charge at the end of 2020 for structural actions plus another $200 million or so as the deal closes next year, including around $1.5 billion in cash charges and around $1 billion in balance sheet charges. Presumably the IBM stock split to form NewIBM and NewCo will be proportional to the revenue streams of the two pieces of Big Blue that remain.

There is a certain amount of Power Systems iron running within the piece of the current Global Technology Services business that forms the core of NewCo, and the customers who have IBM run their applications on outsourced iron (meaning IBM literally takes over your stuff and usually some of your people and moves them to its own datacenter) or hosted iron (meaning IBM owns the iron and runs your applications on it) do upgrade their machines every couple of years as workloads dictate. This is an important source of revenue for the Systems group, and the majority of internal sales for Systems group are for servers and storage sold to Global Technology Services for this purpose. According to our 2019 revenue model, we think Global Technology Services bought $250 million in Power machinery and the Storage division bought another $226 million of Power servers to underpin the DS series of SAN storage arrays, which are basically AIX servers running storage software. Customers outside of IBM, by contrast, bought $1.78 billion in Power Systems servers, so this internal Power sales number is not an insignificant one and one big piece of it will now come from NewCo and it will be booked as an external sale. It will be interesting to see if NewCo will stretch out the lifetimes of Power Systems and System z iron as real customers do. We suspect the Global Technology Services customers might have had shinier iron than customers buying their own gear.

The most important thing about this table above, which shows what is staying and what is going, is that IBM is keeping servers, storage, operating systems, middleware, databases, break/fix and other technical support services, and IBM Cloud, and it is also retaining the core IT consulting, systems integration, process services, and application management services that are part of the Global Business Services Edition, the latter of which has a $41.1 billion services backlog against something on the order of $23.8 billion in revenues. NewCo has a $60 billion backlog against a $19 billion revenue stream, so that ratio is higher. NewCo has 4,600 customers in 115 countries and around 90,000 employees will be leaving Big Blue to go to NewCo, leaving something on the order of 260,000 employees in what we will call Littler Blue to be funny.

So, why is IBM doing this? Aside from getting slowly declining businesses out of its revenue stream, it is also because IBM wants to focus entirely on hybrid cloud. That means enterprise customers who have on-premises, mission-critical systems and want to extend out into one or more public clouds, or maybe even run solely across one or more public clouds.

Here is what IBM is really focused on: making money from hybrid cloud. It's not sexy, and some have argued that no one is going to get excited about plumbing, and they are right. But if IBM can make money with hybrid cloud, it doesn't matter if it is more boring than being Amazon Web Services, Google, or Microsoft. The way that IBM sees it, for every $1 that customers spend on core Red Hat infrastructure software for hybrid cloud (Enterprise Linux operating systems and OpenShift Kubernetes container controllers, with perhaps some storage and virtualization), they spend another $1 to $2 for the physical infrastructure, another $3 to $5 for middleware and applications, and another $6 to $8 for various kinds of cloud transformation services. IBM knows that others sell servers, storage, cloud infrastructure, applications, middleware, and cloud transformation services, so it cannot capture all of that revenue, but if it could, then that $4.2 billion annualized revenue rate from Red Hat would be somewhere between $40 billion and $80 billion in total addressable market. If IBM can get half of that, then the Red Hat deal for $34 billion can pay for itself all that more quickly and, presumably, IBM can grow that $19 billion back like a salamander regrowing a lost tail.
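As a rough reconstruction of that arithmetic (using the $1 : $1-2 : $3-5 : $6-8 spending ratios quoted above and Red Hat's roughly $4.2 billion annualized revenue, and assuming the base Red Hat dollar is counted in the total), the multiplier math lands in the range IBM is citing:

```python
# Rough reconstruction of the multiplier math in the paragraph above; the ratios are
# those IBM cites and the Red Hat figure is the trailing-twelve-month revenue mentioned.
red_hat_revenue_b = 4.2  # $ billions, trailing twelve months

low  = red_hat_revenue_b * (1 + 1 + 3 + 6)   # each Red Hat dollar plus low-end adjacent spend
high = red_hat_revenue_b * (1 + 2 + 5 + 8)   # each Red Hat dollar plus high-end adjacent spend

print(f"Total addressable market: ${low:.0f}B to ${high:.0f}B")
# -> roughly $46B to $67B, consistent with the "$40 billion to $80 billion" in the text
```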

IBM i Tries On a Red Hat

Red Hats Ansible Automation Comes To IBM i

How Big Blue Stacks Up IBM i On Premises And On Cloud

Big Blue Finally Brings IBM i To Its Own Public Cloud

IBM Takes A Hands Off Approach With Red Hat

The Impact On IBM i Of Big Blues Acquisition Of Red Hat

Tags: Global Services, Global Technology Services, IBM i, Kubernetes, Linux, NewCo, OpenShift, Power Systems, Red Hat, SAN, System z


Originally posted here:
IBM Goes All-In On Hybrid Cloud - IT Jungle

Read More..

The Role of Hybrid Cloud Technologies in Today’s Business Climate Wall Street Call – Reported Times

Oct 14, 2020, 7:00 PM ET – iCrowd Newswire

You will agree with me that the cloud has incredibly transformed business computing dynamics. Cloud technologies come with a vast range of windfalls, from little upfront costs to easy scalability and superior uptime availability.

Mike Shelah (Advantage Industries) is just one among the many Managed Service Providers who agree that using the cloud presents many benefits to their clients. He particularly singles out Microsoft SharePoint that gives customers substantial storage for internal collaboration as part of their 365 licensing.

Our focus today is the role of hybrid cloud technologies in today's business climate. To put things into perspective, let's look at the different types of clouds.

According to Google, cloud computing is "the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer."

If you have been probing cloud computing, you must be well aware of the longstanding debate on private vs. public cloud. Before you make up your mind, it's essential to interrogate the differences between the two:

We would generally define the hybrid cloud as a combination of private and public clouds and on-premises (bare metal) infrastructure, often with some level of integration or orchestration between environments.

So, How Does It Work?

At SemTech IT Solutions, clients have the main physical server on-site, but the server is paired with Office 365 and a third party file sync solution. This, according to Nick Allo, allows users who are working remotely to still access their email/files from anywhere, anytime. Besides being cost-effective, this solution enables business executives to have more control over their data.

The main drive pushing people toward cloud computing is the remote accessibility of data from anywhere. Ilan Sredni works with Palindrome Consulting in South Florida. He admits that everyone is conscious of hurricane seasons and power outages, and so clients want solutions that give them the ability to get work done even when the office may be completely inaccessible. At the same time, administrators want to control their data and how it's stored and managed. The only way out is to integrate both private and public clouds. In Ilan's words, "Hybrid cloud seems to bring the best of both worlds."


Keywords: hybrid cloud, cloud computing, cloud services, public cloud, private cloud, cloud technologies, cloud migrations

Original post:
The Role of Hybrid Cloud Technologies in Today's Business Climate Wall Street Call - Reported Times

Read More..

Is your college in a severe wildfire zone? – CALmatters

In summary

Of California's nearly 150 public colleges and universities, 18 are within areas Cal Fire deems at high risk from wildfires. In addition to evacuation plans, colleges have different fire mitigation tactics they can employ to minimize risk.

As a wall of flame drew closer to the northernmost reaches of the UC Santa Cruz campus, Saxon Stahl knew an evacuation order was imminent.

Stahl, a student living on campus during summer session, had been following the progress of the CZU Lightning Complex fires that started Aug. 16. By the time the email for voluntary evacuations reached Stahl's inbox the afternoon of Aug. 20, they leapt at the chance, accepting a voucher to stay at a hotel four miles south.

They fled the ash raining from the sky, but the smell of campfire lingered still.

Hours later on the 20th, campus police sent all 1,200 students and staff packing under a mandatory evacuation order that was only fully lifted nearly three weeks later.

"It was kind of chaotic in retrospect," the senior told CalMatters. An assignment due at 5pm the day they relocated to a hotel was only extended to midnight. After eight days of hotel dwelling, Stahl and several dozen other students lived out the rest of their evacuations at San Jose State University.

That the CZU Fires came within a mile of the northern end of UC Santa Cruz's borders shouldn't come as a surprise. The photogenic campus, nestled in a forest of redwoods, is one of several dozen public universities and community colleges near or in a fire hazard severity zone as designated by the state's fire authority, the Department of Forestry and Fire Protection.

A CalMatters analysis found that 18 public higher-education institutions in California, out of 148, have addresses in these zones. That number excludes campuses whose territories partially stretch into hazard zones but have addresses outside of them or are within a few miles of the zones. Cal Fire ranks these zones by severity (moderate, high and very high) and bases the labels on signs of fire danger, such as topography, weather, wind, fire history and flammable forest debris. The Cal Fire zones also exclude federal lands and local areas that aren't deemed a very high hazard.
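As an illustration of how an address-in-hazard-zone check like this might be scripted (this is not CalMatters' actual methodology; file names, coordinate systems, and column names are placeholders), a geopandas spatial join does the core work:

```python
# A sketch of checking geocoded campus addresses against Cal Fire hazard-zone polygons
# with a spatial join. File names, CRS, and column names are placeholders and do not
# reflect CalMatters' actual data or methodology.
import geopandas as gpd

campuses = gpd.read_file("campus_addresses.geojson")          # one point per campus address
hazard_zones = gpd.read_file("fire_hazard_severity_zones.shp")

# Put both layers in a common coordinate reference system before testing containment.
hazard_zones = hazard_zones.to_crs(campuses.crs)

in_zone = gpd.sjoin(campuses, hazard_zones, how="inner", predicate="within")
print(f"{in_zone['campus_name'].nunique()} campuses fall inside a mapped hazard zone")
```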


Already this year, California has endured its largest fire season in recorded history, with more than 4 million acres burned in fires that have claimed 8 lives and damaged or destroyed almost 5,500 structures. Fires, predicted to intensify, could threaten numerous college dorms and school buildings.

And there's a lot of wood to burn. Before the Gold Rush, California's forests had 50 to 70 trees per acre. In 2009, there were 400 trees per acre, the result of decades of fire suppression and public policies that abandoned the purposeful fires practiced by Native American tribes to limit the intensity of forest fires.

"Campuses close to a fire hazard severity zone should definitely look at ways that they can reduce the risk around the campus," said Steven Hawks, staff chief of Cal Fire's wildfire planning and engineering division. That means clearing out fallen leaves, removing brush and committing to expensive retrofitting, among other actions, especially for campuses built before fire-resistant building codes that came into effect in 2008.

While Cal Fire's hazard zones show severity, they don't show risk. Hawks and others interviewed for this story stressed that campuses can put in the work to limit the damage caused by fires. Expanding roads for emergency vehicles, swapping out single-pane for dual-pane glass and new roofs are other mitigation techniques campuses near or in fire zones could pursue, Hawks and others said. Meanwhile, some campuses close to fire zones enjoy favorable conditions that can keep wildfires at bay.

Being prepared can only get a campus so far, however, especially with the increasing menace of recent fires. The town of Paradise had "a pretty good evacuation plan," Hawks said, but the deadly 2018 Camp Fire burned so intensely and so rapidly that it cut off some of the town's evacuation routes, forcing officials to alter plans on the go.

Just how close is too close before calling an evacuation is impossible to say. The conditions determine the response. "Most structures are destroyed because of an ember," said Hawks; embers are carried by winds ahead of fires, sometimes for several miles. Fire burns faster uphill, so campuses atop an ignited slope stand a greater risk of damage than colleges in valleys where fires burn in the hills above. "Of course, if the wind shifts, all bets are off," Hawks said. The drier the season, the greater the risk.

At UCs, wildfires are a local campus response. Cal Fire issues the evacuation orders but UC campuses implement those orders in conjunction with first responders and regional emergency management personnel. Each campus also has an emergency management director who coordinates emergency planning. During emergency events the director will be at the response center and may lead it, depending on the campus. But campuses keep the UC Office of the President informed. The office knew UC Santa Cruz would declare a campus emergency before it happened, said Amina Assefa, the UC system's director of emergency management and business continuity.

"When I look at those maps, I see a lot of the state is in the fire hazard zones," said Assefa. "We are aware of this reality and the challenge that poses."

For the 23-campus California State University system, wildfire responses are largely the domain of campus chancellors and their staff, plus input from the system chancellor, said spokesperson Michael Uhlenkamp.

UC Santa Cruz is bounded by a horseshoe of public land susceptible to wildfire. The CZU fire that licked the terrain within a mile of campus burned to the north and west in mostly high hazard zones.

The sylvan landscape requires constant upkeep to reduce the risk of fire damage. The campus maintains a series of fire roads in the northern campus, where it's more heavily wooded, for fire truck access. In collaboration with Cal Fire, the university annually clears out excess leaves, branches and trees to reduce the fuel load, or material that can burn during a wildfire.

A report on lessons learned about the university's response is in the works, though university officials shared some details. For one, the campus needs to increase the sheer tonnage of material that it clears off the land, said Jean Marie Scott, an associate vice chancellor at UC Santa Cruz who oversees a budget of nearly $15 million in risk and safety services, including the campus police, transportation and fire departments.


Next, the website containing fire updates initially was hosted on physical servers until it was moved to a cloud server in case flames torched the IT equipment. UCSC also wants to ink a memorandum of understanding with the company that runs the coastal boardwalk. That's where evacuated students and staff waited for resettlement, but the campus wants the relationship formalized to move people to safety quicker next time.

The evacuation coincided with a global pandemic, further complicating the campus's response. Buses that brought students to San Jose State normally fit 40 people but in the era of COVID-19 could only carry 10 each, requiring more vehicles. Students waiting for rides to hotels stood masked in marked-off squares measuring 10 by 10 feet to keep space from one another.

The evacuation didn't include just people. UCSC sent several mammals in its marine lab down to SeaWorld of San Diego and another location for safe harbor after ash from the fire littered the animals' saltwater pools. The two dolphins, Donley and Rain, rode in a refrigerated truck on their way to San Diego, squeaking and whistling at each other the whole trip.

Other colleges close to a hazard zone are at a low risk of sustaining wildfire damage. Scan the CalMatters interactive map and Humboldt State University sits less than two miles west of an expanse of high fire hazard woodlands. But thanks to a marine layer cloaking the university and the quilt of redwoods surrounding it, "the fuel that sits on the ground of the forest is just constantly moist as a result, and so it's not a great conductor of fire," said Cris Koczera, emergency management coordinator at Humboldt State. She can't recall a single fire in the 20 years she's worked in disaster planning in the area that came close to the ridge line separating the damp area of the university and the drier forest to the east.

If emergency strikes, Koczera says Humboldt State has agreements with a fairground in Crescent City and a conference hall in Eureka to temporarily shelter evacuated campus students and staff. If an incident affects those two cities, other CSU campuses can help out, Koczera said.

Chico State also appears just a few miles from a hazard severity zone but the campus is relatively safe because its surrounded by city.

"I don't think that Chico State as a campus is at risk," said Jacquelyn Chase, a professor of geography and planning at the university who studies fires. A wildfire that jumped the wildlands boundary into the city would run out of steam before it got that far in to damage the campus, she said.

The campus maintains its land in a way that reduces the risk of a wildfire consuming the university, Chase said. Plants are juicy because they're watered often, ground crews pick up the leaves that fall to the ground and the buildings are not that close together, all of which limits ignition and fire spread.

If a crisis does strike, Chico State's evacuation plan is largely to follow the orders of county and state disaster response officials, said J Marvin Pratt, director of environmental health and safety at Chico State. While the campus could order an evacuation before the rest of the city is issued one, Pratt says that's unlikely. It didn't during the 2018 Camp Fire, one of the most destructive in recorded state history.

"It wasn't directly threatening us. So that's where it gets back to listening to the professionals and what they have to suggest," said Pratt, who added that the campus has never needed to evacuate because of a fire.

The Cal State campus also follows UC guidance on managing campus events during days with poor air quality caused by fire. The 2019 report includes a table indicating when events need to move inside or UC employees working outside should put on masks. The number of actions grows the higher the Air Quality Index indicator climbs. The UC has tweaked the guidance some this year, recommending that outdoor events be cancelled rather than move inside because of COVID-19.

The College of the Siskiyous, which is encircled by fire hazard zones, came closer to ruin. The campus was evacuated during the Sept. 2014 Boles Fire. The inferno ultimately damaged or destroyed more than 100 structures in the small town of Weed that sits at the base of Mt. Shasta an hour south of the Oregon border. Campus spokesperson Dawn Slabaugh told CalMatters that the campus president made the call because initially it seemed the fire was gunning for the college. "You can step outside into our parking lots and see the fire on the hill that is just directly across the freeway coming in our direction," she said. "Do you keep it open or wait too long?"

But the fire shifted, barreling toward town and away from the campus. Spared, the college served as a community anchor. Slabaugh said Cal Fire officials conducted town halls for the community of 2,700 residents in the colleges theater for a few days immediately following the blaze. A food assistance program for the area and a Catholic Church temporarily relocated to the campus.

California Polytechnic State University, San Luis Obispo is a sprawling 9,000-acre campus that lies partly in fire-hazard wildland. Conflagrations have approached the campus several times. "Had the wind shifted, it would have readily threatened buildings, including dorms and classrooms on campus," said Christopher A. Dicus, a renowned professor of wildland fire at Cal Poly San Luis Obispo. More recently, the 2020 CZU August fire torched a remote campus site after students, faculty and livestock were safely relocated.

In recent years Dicus and other fire professionals have argued for stricter rules removing combustible material that's within five feet of buildings. "Anytime you have anything combustible that is touching a building, that is a really, really bad idea," he said, adding that vegetation, mulch, and flammable lawn furniture near buildings should either be removed or unable to catch fire.

A 2020 law creates a new ember-resistant zone within five feet of a structure in a fire hazard area. The law could go a long way to making some of Dicus' recommendations a reality. What won't be permitted in the zone is still to be determined. The law's rules now have to be fleshed out by Cal Fire and the Board of Forestry and Fire Protection. The campus relies on Cal Fire and the San Luis Obispo fire department, "but we can't just rely completely on the cavalry to come rescue us," said Dicus. "We, like all California campuses, have to work with the fire service to shape that battlefield, to be such that the firefighters have a much easier chance at saving our buildings."

More:
Is your college in a severe wildfire zone? - CALmatters

Read More..

Fujitsu Verifies Effectiveness of Private 5G in Manufacturing Sites with Microsoft Japan – Latest Digital Transformation Trends | Cloud News – Wire19

Fujitsu today announced that, in collaboration with Microsoft Japan Ltd., it has recently verified the effectiveness of a system that uses private 5G to visualize real-time data within the facility, with a view toward manufacturers' digital transformation (DX).

Using Microsoft Azure IoT Edge(1) in the Fujitsu Collaboration Lab, a private 5G verification facility in Kawasaki, Japan, this system analyzes high-definition images of people moving in the private 5G network and operating data from cameras, mobile terminals, servers, and other equipment. This enables integrated visualization of the status of people, unmanned vehicles, and equipment with the Fujitsu Manufacturing Industry Solution COLMINA(2), unifying private 5G and cloud environments to bring about a system optimized for network and processing load.
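For a sense of the telemetry path being described, here is a minimal Python sketch that forwards equipment status readings to Azure IoT using the azure-iot-device SDK. The connection string and payload fields are placeholders, and the actual Fujitsu/Microsoft system also runs image analysis at the edge with Azure IoT Edge rather than a simple device-to-cloud feed.

```python
# Minimal sketch of sending equipment status readings from the factory side of a private
# 5G network to Azure IoT, using the azure-iot-device SDK. The connection string and the
# payload fields are placeholders; this is an illustration, not the verified Fujitsu system.
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder

def read_equipment_status() -> dict:
    # Placeholder for data gathered from cameras, mobile terminals, and servers on-site.
    return {"device": "camera-01", "cpu_load": 0.42, "link": "private-5g", "ts": time.time()}

def main() -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        for _ in range(10):
            client.send_message(Message(json.dumps(read_equipment_status())))
            time.sleep(5)
    finally:
        client.shutdown()

if __name__ == "__main__":
    main()
```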

Based on the findings of the verification test, Fujitsu will collaborate with Microsoft Japan to conduct verification tests at Fujitsu's plant in Oyama, Japan, Fujitsu's manufacturing base for network products, by the end of fiscal 2020 and jointly develop solutions with a view to achieving global expansion going forward.

This verification will be showcased at Fujitsu ActivateNow, to be held as an online virtual conference on October 14, 2020.

Background

In the new normal society, the manufacturing industry is being called upon to improve the efficiency, automation, and remote capabilities of their operations through digitization, while maintaining quality that will transform manufacturing sites to make them more resilient to changes. Private 5G is attracting attention as one of the key technologies supporting this.

Private 5G enables enterprises to flexibly construct and operate 5G networks in their own buildings and premises and is expected to be used for unmanned and remote controls at manufacturing sites. On the other hand, in order to achieve these goals, a large amount of sensor data and high-definition video must be utilized to construct an optimal system according to the requirements of network and application processing load.

As the very first achievement of the Private 5G Partnership Program, a co-creation program that enables the use of Fujitsu's expertise and technologies such as private 5G along with advanced technology from partners, a system integrating Fujitsu's private 5G technology with Azure IoT Edge and Azure was constructed and its effectiveness was verified.

Summary of Verification

Future Developments

Fujitsu plans to deploy this verification system with Microsoft Japan at the Oyama Plant by the end of FY 2020 and to verify it on-site. In this verification test, Fujitsu will use an AI technology for video-based behavioral analysis developed by Fujitsu Laboratories to recognize various human behaviors and to improve the quality and efficiency of operations at manufacturing sites.

In addition, Fujitsu will consider jointly developing an edge computing solution utilizing 5G with Microsoft Japan from a global perspective.

References:

(1) Azure IoT Edge: Locally deployed cloud intelligence on IoT Edge devices. (2) COLMINA: A digital data solution that connects various information on manufacturing, from design to manufacturing and maintenance.

See the article here:
Fujitsu Verifies Effectiveness of Private 5G in Manufacturing Sites with Microsoft Japan - Latest Digital Transformation Trends | Cloud News - Wire19

Read More..

New Neoverse Platforms Take on the Cloud, HPC, and the Edge – Electronic Design

Last year, Arm delivered the first of its Neoverse solutions targeting the enterprise and cloud computing. The Neoverse N1 and E1 platforms are available now (Fig. 1). The N1 architecture is applied in Amazons Graviton processor, which is used in a number of AWS installations.

1. The Neoverse N2 extends the family; the Neoverse V1 expands the application space for the family.

Now, Arm has unveiled its next-generation systems: the Neoverse V1 (code-named Zeus) and Neoverse N2 (code-named Perseus) platforms. The V1 and N2 deliver 50% and 40% more performance, respectively, compared to the existing Neoverse N1 implementations.

This space is dominated by Intel platforms, but the Neoverse solutions have given them considerable competition thanks to their power efficiency. Data centers are changing significantly, from incorporation of FPGA-based SmartNICs to GPGPUs to machine-learning (ML) accelerators.

The new Neoverse platforms address ML and high-performance-compute (HPC) applications with bFloat16 support and the addition of the Scalable Vector Extensions (SVE). SVE handles SIMD integer, bfloat16, or floating-point instructions on wider vector units (Fig. 2). A key feature of SVE is that the programming model is data-width agnostic. Vectors can be 128 bits up to 2048 bits. This simplifies programming in addition to providing increased performance.

2. Scalable Vector Extensions (SVE) is vector-width agnostic, supporting up to 2048-bit vectors.

Although the cloud and HPC environments are one primary use of the Neoverse architecture, it's not the only space being targeted by Arm. The high-performance edge is where Neoverse is at home, albeit with fewer cores (Fig. 3). On the plus side, power requirements are also reduced. The 5G edge infrastructure can benefit from the large number of power-efficient cores the Neoverse architecture is able to bring to the design table.

3. Neoverse targets the edge as well as the cloud. Chips may have fewer cores on the edge, but there are still lots in the mix. In addition, power requirements are scaled down.

Different configurations can fill rackspace depending on application requirements. The Neoverse V1 looks to pack a lot of HPC cores into a chip, while the Neoverse N2 packs in more cores (Fig. 4). A typical dual-socket motherboard will house hundreds of cores.

4. Designers have a balancing act when it comes to chip design, with the V1 on the performance side and the N2 packing in more cores.

The V-series delivers maximum performance with larger buffers, caches, and queues. The N-series is optimized for performance and power, while the E-series is optimized for power efficiency and area. Right now, the N2 comes with dual 128-bit SVE pipelines while the V1 has two 256-bit SVE pipelines.

CCIX (pronounced see six) and CXL are two PCI Express-based interprocessor communication links (Fig. 5). Earlier versions of Neoverse supported CCIX; the latest incarnations support both. CXL has been used to split out memory expansion, and CCIX provides coherent, heterogeneous multicore interconnects.

5. CCIX and CXL will be supported by the Neoverse family.

Neoverse is already invading the data center. Platforms like NXP's LX210A Neoverse-based system-on-chip (SoC) target 5G RAN solutions. This latest crop of new architectures is likely to improve on these existing solutions.

Follow this link:
New Neoverse Platforms Take on the Cloud, HPC, and the Edge - Electronic Design

Read More..

Quantum Computing Market 2020 | Outlook, Growth By Top Companies, Regions, Types, Applications, Drivers, Trends & Forecasts by 2025 – PRnews…

Market Study Report, LLC, has added a research study on Quantum Computing market which delivers a concise outline of the market share, market size, revenue estimation, geographical outlook and SWOT analysis of the business. The report further offers key insights based on growth opportunities and challenges as experienced by leaders of this industry, while evaluating their present standing in the market and growth strategies.

The new Quantum Computing market research report presents a granular analysis of the business outlook and also covers the world market overview. It throws lights on various market segmentations based on product type, application spectrum, well-established companies, and regions.

Request a sample Report of Quantum Computing Market at:https://www.marketstudyreport.com/request-a-sample/2855012?utm_source=prnewsleader.com&utm_medium=SK

Additionally, the document analyses the impact of COVID-19 on the market growth.

Key features of Quantum Computing market report:

Regional Analysis of Quantum Computing market:

Quantum Computing Market Segmentation: Americas, APAC, Europe, Middle East & Africa

Overview of the regional terrain of Quantum Computing market:

Product types and application scope of Quantum Computing market:

Product landscape:

Product types: Hardware, Software and Cloud Service

Key factors enclosed in the report:

Ask for Discount on Quantum Computing Market Report at:https://www.marketstudyreport.com/check-for-discount/2855012?utm_source=prnewsleader.com&utm_medium=SK

Application Landscape:

Application segmentation: Medical, Chemistry, Transportation, Manufacturing and Others

Details stated in the report:

Other details specified in the report:

Competitive spectrum of the Quantum Computing market:

Competitive landscape of Quantum Computing market: D-Wave Solutions, IBM, Microsoft, Rigetti Computing, Google, Anyon Systems Inc., Intel, Cambridge Quantum Computing Limited and Origin Quantum Computing Technology

Major features as per the report:

For More Details On this Report: https://www.marketstudyreport.com/reports/global-quantum-computing-market-growth-status-and-outlook-2020-2025

Related Reports:

1. Global Mortgage Brokerage Services Market Growth (Status and Outlook) 2020-2025Read More: https://www.marketstudyreport.com/reports/global-mortgage-brokerage-services-market-growth-status-and-outlook-2020-2025

2. Global Auto Leasing Services Market Growth (Status and Outlook) 2020-2025Read More: https://www.marketstudyreport.com/reports/global-auto-leasing-services-market-growth-status-and-outlook-2020-2025

Related Report : https://www.marketwatch.com/press-release/latest-figures-global-smart-home-healthcare-market-to-witness-us-30-billion-by-2025-2020-10-15

Contact Us: Corporate Sales, Market Study Report LLC | Phone: 1-302-273-0910 | Toll Free: 1-866-764-2150 | Email: [emailprotected]

Go here to see the original:
Quantum Computing Market 2020 | Outlook, Growth By Top Companies, Regions, Types, Applications, Drivers, Trends & Forecasts by 2025 - PRnews...

Read More..