
Want to work at Microsoft? Dice.com looks at top jobs, skills the tech giant is looking for – OnMSFT

Despite CDC recommendations, parts of the US are beginning to re-open, which means some companies that furloughed or laid off staff early in quarantine are now looking to refill positions, and Microsoft is no different.

A report found over at Dice.com looks at the top jobs Microsoft has recently posted for employment. The data used in the report comes via employment analysis firm Burning Glass, which aggregates info from job postings across the United States.

Within the last two months, Microsoft has posted over 25 listings, ranging from software development engineer to sales specialist, with a mix of managerial roles thrown in as well.

As for the most sought-after technical skills, anyone with a background in cloud computing might find themselves working at Microsoft in the near future. Over the same 60-day window, postings seeking Microsoft Azure-related skills outnumbered those seeking any other technical skill by a wide margin.

Some 620 postings called for Microsoft Azure-related skills, whereas the next most crowded category, general software engineering, had a comparatively modest 266. Other skills Microsoft requested included software development, Microsoft C#, SQL, Microsoft PowerShell, Python, C++, Java, SAP, Salesforce, Oracle, Excel, and Windows, among others.

As more businesses place their data on cloud servers to allow employees to remote into virtual workstations from home, Microsoft's current job listings remain in line with the company's push to build out its Azure cloud platform.


The Best VPNs for Businesses and Teams – PCMag.com

Since long before most office workers began full-timing it from home, VPNs have been the tool of choice for remote workers who need to access corporate networks. While the average security-conscious person might use a personal VPN to access region-locked streaming video or just to protect their privacy, VPNs have a much longer history in the workplace. VPNs let employers offer protection to their workforce, and in some cases, let remote employees access corporate resources as if they were sitting in their offices.

PCMag has done extensive testing of personal VPN services for years. That being the case, we decided that our first foray into the space of business-class VPNs would be to examine the business and team offerings of some of our favorite personal VPNs. Note that there are other products and services tailored exclusively for enterprise customers and IT departments. We haven't examined those services for this piece.

All of these VPNs provide all the assurance of privacy you get with any VPN. When anyone on your team connects to one of these services, their traffic is routed through an encrypted tunnel between their machine and a server operated by the VPN company (or by your company, but more on that later). Nobody, not even those on the same Wi-Fi network, will be able to monitor or intercept their traffic. Even ISPs will be blinded, and unable to sell anonymized data about their movements. Out on the web, your team members will have their true IP addresses hidden behind the IP address of the VPN server. They'll also be able to spoof their locations by connecting to a distant server.

This doesn't directly help your employees do their jobs, but it does protect their privacy and their data. Some VPN companies argue that it helps protect sensitive files and corporate data, but I'd argue that those shouldn't be sent over any system that doesn't already encrypt and protect them. Or, better yet, don't let those files out of your secure network.

If your workforce travels often, or works remotely, they may frequently be in situations where the available Wi-Fi is far from trustworthy. Similarly, remote workers may need to spoof their locations to access localized versions of sites. Also, providing the tools and training to improve their privacy and keep themselves safe may spill over into keeping your corporate information safe.

Keep in mind that your employees and your company will still need basic security protection. Using password managers and two-factor authentication will help protect against account takeovers that can expose corporate data and be used for phishing attacks. Antivirus protects machines against malware that could cost valuable time and money to repair.

Several of PCMag's top-rated VPN companies said that while they do not offer corporate or team options, they are aware that some companies do procure their products to secure employee privacy online. These include CyberGhost, Surfshark, and Editors' Choice winner ProtonVPN.

Some of these VPNs go further, letting you access your local network and network resources as if your employees were physically on the network. Before the bewildering advent of consumer VPNs, this is what VPNs were primarily used for: connecting securely to work stuff. While terminology sometimes differs, the companies we spoke with usually call this a "site-to-site VPN."

With this setup, all VPN traffic is routed through a server controlled by your company, usually on company premises. This lets employees access resources like shared drives, and work as if they were connected to their office internet. Jack Murray, Senior Researcher at NordVPN Teams, told me that this model has some issues. The on-premises server requires upkeep, and can become a bottleneck since all the VPN traffic has to flow through the corporate network. "The connection between the outside networks and the company network gets jammed at the edge just as we saw with numerous companies during the COVID19 lockdown," said Murray.

Like so many business operations, some VPN companies have moved to the cloud. In this scenario, traffic is routed from employees not through a server in your office but through a dedicated server operated by NordVPN. "Allocating different connection capacities, the traffic is split into the flow that goes to the local network and the rest of the internet, instead of sending all traffic through local network as traditional corporate VPNs do," explained Murray.

Golden Frog VyprVPN has a similar offering called VyprVPN Cloud. "Although the on-premises server has similar characteristics, it is slightly different from a site-to-site VPN," a VyprVPN representative told me. By connecting to a dedicated, cloud-based server, Golden Frog's customers can securely access more company resources, not just those attached to an on-premises server. The representative explained that corporate customers can add the static IP address of the dedicated server to an access list, letting remote employees connect to cloud-based resources.

The differences between these setups can be very confusing. If you're exploring purchasing a VPN for your company, it's important to understand what you want a VPN to provide your team. Once you understand that, you can work with potential vendors and ensure that you're getting exactly what you pay for.

Among the VPNs listed here, Encrypt.me, Golden Frog VyprVPN, and NordVPN allow for accessing corporate resources remotely. In some cases this can include on-site, dedicated server deployment. You should contact these companies for more information if this sounds right for you.

(Note: Encrypt.me is owned by J2 Global, which also owns PCMag's publisher Ziff Media.)

Keep in mind that routing employee traffic through corporate networks can make things a little complicated. Unless there's been some very specific configuration, anyone connected to the VPN will have all of their traffic routed through the corporate network. This might include some things that suck up company bandwidth, such as streaming videos, or may be against the policies of the company, like BitTorrent seeding.

It can also lead to embarrassing situations. An employee could easily forget they're connected to the corporate VPN before streaming porn, or any content that's inappropriate for the workplace. When using corporate VPNs that connect to private networks, be sure you know how they work, how to tell when they're active, and how to shut them off.

When you connect your personal device to a VPN, all of your traffic flows through its infrastructure. If the VPN company chose to abuse that position, it could learn as much about you as your ISP. That's different with some of the team and corporate options, since it might be your company that's in control of the server. But signing on with any third party means being aware of the risks to your business and your employees.

Unfortunately, the consumer VPN industry is still fairly young and extremely volatile. It can sometimes be hard to tell who the good actors are. When we review VPNs for PCMag, we send the companies a questionnaire that asks about what country's legal framework the VPN company operates under, what efforts are made to secure server infrastructure, whether the company sells user data, and so on. We try to include as much information as possible in our reviews so readers can make an informed decision. For one reader, a US-based VPN might be a total nonstarter. For another reader, being based in the US might be a critical need. Read our full reviews for more on how each service protects users' privacy.

Again, it's probably a different story if you're hosting the VPN server yourself. But consider that your employees may be using the service's commercial servers for day-to-day browsing, and may be using the VPN company's app. If you're looking at a VPN company, take some time to ask about their privacy policies, what protections are in place for your information, and what efforts they make to protect customers.

A VPN goes a long way toward protecting individual privacy. It can also protect your corporate data, whether it's by connecting your workforce to a secure network or ensuring that your employees are safe in their day-to-day lives. While you can set up your own VPN, opting for a team or business account from a consumer VPN means you'll get more servers, more support, and apps made for everyday use.


Mullvad VPN is clearly a company of principles, and is fanatically dedicated to customer privacy. When you sign up with Mullvad, you're issued a number instead of a login or username. The company works hard to know as little about its customers as possible, and to protect its customers above all else. The service is also a flat fee: €5 per month ($5.50 USD, at the time of writing), and does not offer annual discounts or other promotions.

This dedication to fairness and privacy earned Mullvad VPN an Editors' Choice award. However, that laudable stance may have some drawbacks in a business setting. The company says that the entire model of enterprise VPNs is antithetical to its security and privacy practices. A centralized billing system, a company representative pointed out, requires information Mullvad simply does not have. A company may also have to log hacking attempts, which Mullvad also does not do.

That said, the company tells us it offers discounts on its base price. Small teams can expect a 10 percent discount, while larger teams can get a 50 percent discount. Mullvad cannot be used for accessing corporate resources, and does not offer dedicated servers. A company representative did say, however, that the company can segment its servers and offer a portion to corporate customers.

We appreciate the simplicity of TunnelBear, and it has helped earn the product an Editors' Choice award. With a bright yellow color scheme and a cadre of powerful bears, TunnelBear makes it super simple to get online with a VPN.

TunnelBear embraces a similar simplicity with its teams option. The VPN does not offer access to corporate resources, nor does it offer dedicated servers. It does offer standard VPN protection for $69 per person, per year. Customers can provide access to any employee with a certain email domain, making it easy to grant employees access. TunnelBear prorates the cost of adding new team members mid-year, and a company representative says that TunnelBear credits customers for unused time when employers remove a user.

ExpressVPN is notable for offering servers in 94 countries, without relying on virtual servers to boost that figure. Among the services we have reviewed, it offers the most hardware in the most locations, making it a strong choice for a frequent traveler.

ExpressVPN offers a discount model for teams that's similar to Mullvad's. Depending on the number of licenses, corporate customers can expect up to 40 percent off the normal price. Unlike Mullvad, Express does offer centralized billing and license management. It does not offer access to corporate resources, nor does it provide dedicated servers.

NordVPN is a juggernaut in the VPN space, boasting an enormous number of servers and a strong global presence. It offers many additional privacy features that other VPNs ignore. This includes multi-hop connections, which let you route a VPN connection through an additional server for added privacy, and VPN access to the Tor anonymization network. It has also made a major investment deploying WireGuard, a new open-source VPN technology. The company has worked to mend fences after a limited security incident on one of its servers.

On the business side, the company has a very robust offering. NordVPN's team option starts at $7 per user, per month for its Basic plan, and goes up to $9 per user, per month for its Advanced plan. A company representative tells us that dedicated servers are available at the higher tier. The company offers dedicated servers in 31 countries, but more can be offered upon request. NordVPN does provide remote access to corporate resources, and can provide on-site deployment of its service.

One of the more established companies on our list is Golden Frog, with its VyprVPN product. VyprVPN has a smaller number of servers, but does far better than most with a wide array of server locations available across the world. Its app is simple and easy to use.

Golden Frog VyprVPN has two team offerings: VyprVPN for Business and VyprVPN for Business Cloud. The former starts at $299 per year for the first three users, and increases by $99 per year for each additional user. VyprVPN for Business customers get access to all of the company's consumer features, but are limited to just three simultaneous connections per user.

VyprVPN for Business Cloud, on the other hand, starts at $349 per year for the first three users, and each additional user costs $99 per year after that. Business Cloud customers get all of the features provided by the Business plan, but also get the option for adding on-premises servers or dedicated cloud-based servers.

The consumer version of IVPN is notable for its affordable price and the wide array of server locations it offers. IVPN also includes port forwarding and a multi-hop connection option, both of which are rarely seen among VPN products, and are included in the team offering.

IVPN offers a tiered pricing system for teams. The first five seats will cost you $50 a month or $500 per year. This scales up in increments of 10, and tops out at 91-100 seats for $500 per month or $5,000 per year. It's notable in that you're not charged per person, per month. The company tells us that customized plans for teams larger than 100 people are available.

This product cannot be used to access corporate resources remotely, nor does it offer dedicated servers.

As a consumer product, Encrypt.me is especially notable for allowing an unlimited number of simultaneous connections. Most companies limit you to just five, but Encrypt.me skips limitations entirely. It also has an excellent distribution of servers across the globe, offering locations in 75 countries.

In some ways, Encrypt.me may be more successful as a business or teams VPN than as a consumer product. Encrypt.me forgoes the cash-up-front model and instead charges monthly with no long-term commitments. Teams of one to 25 employees cost $7.99 per user, per month. Teams of 25 to 100 users cost $6.99 per user, per month. For teams beyond 100 users, Encrypt.me charges $5.99 per user, per month.

Encrypt.me does let customers access corporate resources with either an on-site server, or an AWS cloud-deployed server. The company also says that it offers content filtering. This feature can block known dangerous sites, but can also block advertising, porn, gambling, and social media.

PureVPN boasts an array of server locations across the globe, meaning there's always one close at hand (or far away for spoofing your location). The service is reasonably priced, but it's in need of a UI refresh.

When we spoke with PureVPN, the company emphasized its dedicated servers and dedicated IPs available with PureVPN for Business. PureVPN notes that administrators can assign sub admins, and assign shared or dedicated IPs to team members.

PureVPN for Business starts at five accounts and goes up to 50, although the company says it can provide custom configurations for larger organizations. An account with shared IP addresses costs $8.45 per person, per month. If you want dedicated IP addresses, the price jumps to $9.99 per user, per month. Adding a dedicated server takes the cost much higher, to $399 per month, depending on the server requirements.

PureVPN offers dedicated IP addresses in Australia, Canada, Hong Kong, Germany, Malta, Singapore, the UK, and the US. Currently, PureVPN does not provide on-site deployment or the ability to access corporate resources via VPN. However, it does offer custom port forwarding.


NexTech AR Solutions tie-up with Fastly cloud platform leads to video security breakthrough – Proactive Investors USA & Canada


NexTech AR Solutions Corp said that partnering its technology with Fastly (NYSE:FSLY), which provides an edge cloud platform, had greatly enhanced user security for video streaming.

Fastly's edge cloud platform provides a content delivery network, internet security services, and video and streaming services.

Nextech's InfernoAR is an advanced cloud-based AR video conferencing and video learning experience platform for events.

"We were happy to work with NexTech AR Infernos team in bringing their JWT authentication to the edge,"Scott Bishop, who looks after sales inFastly said in a statement on Thursday.

"Our platforms design allows us to compute at the edge and authenticating JWT at this stage is exactly the kind of use case we envisioned. Computing these at the edge enhances both Inferno ARs security as well as performance. What we have achieved with InfernoAR is fantastic.

JWT authentication works by a server generating a token that certifies the user identity and sending it to the client.

The client will send the token back to the server for every subsequent request, so the server knows the request comes from a particular identity.
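As a generic illustration of that flow (not NexTech's or Fastly's actual implementation), here is a minimal sketch using the PyJWT library; the secret and the claims are placeholders:

```python
import jwt  # PyJWT (2.x)

SECRET = "replace-with-a-strong-shared-secret"  # placeholder signing key

# Server side: issue a signed token that certifies the user's identity.
token = jwt.encode({"sub": "user-123", "role": "viewer"}, SECRET, algorithm="HS256")

# Client side: the token is sent back with every subsequent request,
# typically in an Authorization header.
headers = {"Authorization": f"Bearer {token}"}

# Server (or edge) side: verify the signature before serving the request.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])  # "user-123"
```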

Security is a core feature of InfernoAR, and the platform services many Fortune 500 companies, highlighted NexTech AR in the statement.

NexTech is one of the leaders in the rapidly growing augmented reality (AR) industry, estimated by Statista to hit $120 billion by 2022.

Evan Gappelberg, the CEO of NexTech AR, described InfernoAR as a "category killer product" for NexTech.

Meanwhile, Fastly's customers include many of the world's most prominent companies, including Vimeo, Pinterest, The New York Times, and GitHub.

NexTech AR shares in Toronto added almost 21% to stand at C$3.38 each.




Moving to the cloud: Migrating Blazegraph to Amazon Neptune – idk.dev

During the lifespan of a graph database application, the applications themselves tend to only have basic requirements, namely a functioning W3C standard SPARQL endpoint. However, as graph databases become embedded in critical business applications, both businesses and operations require much more. Critical business infrastructure is required not only to function, but also to be highly available, secure, scalable, and cost-effective. These requirements are driving the desire to move from on-premises or self-hosted solutions to a fully managed graph database solution such as Amazon Neptune.

Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run business-critical graph database applications. Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Neptune is designed to be highly available, with read replicas, point-in-time recovery, continuous backup to Amazon Simple Storage Service (Amazon S3), and replication across Availability Zones. Neptune is secure with support for AWS Identity and Access Management (IAM) authentication, HTTPS-encrypted client connections, and encryption at rest. Neptune also provides a variety of instance types, including low-cost instances targeted at development and testing, which provide a predictable, low-cost, managed infrastructure.

When choosing to migrate from current on-premises or self-hosted graph database solutions to Neptune, what's the best way to perform this migration?

This post demonstrates how to migrate from the open-source RDF triplestore Blazegraph to Neptune by exporting the data from Blazegraph, staging the exported files in Amazon S3, bulk loading them into Neptune, and validating the migration.

This post also examines the differences you need to be aware of while migrating between the two databases. Although this post is targeted at those migrating from Blazegraph, the approach is generally applicable for migration from other RDF triplestore databases.

Before covering the migration process, let's examine the fundamental building blocks of the architecture used throughout this post. This architecture consists of four main components:

The following diagram summarizes these resources and illustrates the solution architecture.

Although it's possible to construct the required AWS infrastructure manually through the AWS Management Console or CLI, this post uses a CloudFormation template to create the majority of the required infrastructure.

The process of exporting data from Blazegraph involves three steps:

The first step is exporting the data out of Blazegraph in a format thats compatible with the Neptune bulk loader. For more information about supported formats, see RDF Load Data Formats.

Depending on how the data is stored in Blazegraph (triples or quads) and how many named graphs are in use, Blazegraph may require that you perform the export process multiple times and generate multiple data files. If the data is stored as triples, you need to run one export for each named graph. If the data is stored as quads, you may choose to either export data in N-Quads format or export each named graph in a triples format. For this post, you export a single namespace as N-Quads, but you can repeat the process for additional namespaces or desired export formats.

There are two recommended methods for exporting data from Blazegraph. Which one you choose depends on whether the application needs to be online and available during the migration.

If it must be online, we recommend using SPARQL CONSTRUCT queries. With this option, you need to install, configure, and run a Blazegraph instance with an accessible SPARQL endpoint.

If the application is not required to be online, we recommend using the Blazegraph Export utility. With this option, you must download Blazegraph, and the data file and configuration files need to be accessible, but the server doesn't need to be running.

SPARQL CONSTRUCT queries are a feature of SPARQL that returns an RDF graph matching the specified query template. For this use case, you use them to export your data one namespace at a time with a query that matches every statement in the namespace.

Although a variety of RDF tools exist to export this data, the easiest way to run this query is by using the REST API endpoint provided by Blazegraph. The following Python (3.6+) script demonstrates the approach for exporting data as N-Quads.
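The version below is a rough sketch rather than a drop-in tool; the endpoint URL, namespace (kb), output file name, and Accept value are placeholders to adjust for your own deployment and desired serialization:

```python
import requests

# Placeholder: the SPARQL endpoint of the namespace you want to export.
ENDPOINT = "http://localhost:9999/blazegraph/namespace/kb/sparql"

# CONSTRUCT query that returns every statement in the namespace.
QUERY = "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }"

response = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    # The Accept header selects the export serialization; use the value for
    # N-Quads (or N-Triples, RDF/XML, Turtle) given in the Blazegraph docs.
    headers={"Accept": "application/n-quads"},
    stream=True,
)
response.raise_for_status()

# Stream the response to disk so large exports don't sit in memory.
with open("export.nq", "wb") as out:
    for chunk in response.iter_content(chunk_size=1024 * 1024):
        out.write(chunk)
```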

If the data is stored as triples, you need to change the Accept header parameter to export data in an appropriate format (N-Triples, RDF/XML, or Turtle) using the values specified on the GitHub repo.

Although performing this export using the REST API is one way to export your data, it requires a running server and sufficient server resources to process this additional query overhead. This isnt always possible, so how do you perform an export on an offline copy of the data?

For those use cases, you can use the Blazegraph Export utility to get an export of the data.

Blazegraph contains a utility method to export data: the ExportKB class. This utility facilitates exporting data from Blazegraph, but unlike the previous method, the server must be offline while the export is running. This makes it the ideal method to use when you can take the application offline during migration, or the migration can occur from a backup of the data.

You run the utility via a Java command line from a machine that has Blazegraph installed but not running. The easiest way to run this command is to download the latest blazegraph.jar release located on GitHub. Running this command requires several parameters, including the Blazegraph properties file, the journal file containing the data, and the desired export format.

For example, if you have the Blazegraph journal file and properties file, you can invoke the ExportKB class from the command line to export the data as N-Quads.

Upon successful completion, the utility prints a confirmation message.

No matter which option you choose, you can successfully export your data from Blazegraph in a Neptune-compatible format. You can now move on to migrating these data files to Amazon S3 to prepare for bulk load.

With your data exported from Blazegraph, the next step is to create a new S3 bucket. This bucket holds the data files exported from Blazegraph for the Neptune bulk loader to use. Because the Neptune bulk loader requires low latency access to the data during load, this bucket needs to be located in the same Region as the target Neptune instance. Other than the location of the S3 bucket, no specific additional configuration is required.

You can create a bucket in a variety of ways, such as through the AWS Management Console, the AWS CLI, or one of the AWS SDKs.
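As a rough sketch of the SDK route using boto3 (the bucket name is a placeholder, and the Region must match the Region of your target Neptune cluster):

```python
import boto3

REGION = "us-east-1"                   # must match the Neptune cluster's Region
BUCKET = "my-neptune-staging-bucket"   # placeholder; bucket names are globally unique

s3 = boto3.client("s3", region_name=REGION)

if REGION == "us-east-1":
    # us-east-1 does not accept a LocationConstraint.
    s3.create_bucket(Bucket=BUCKET)
else:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )
```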

You use the newly created S3 bucket location to bulk load the data into Neptune.

The next step is to upload your data files from your export location to this S3 bucket. As with the bucket creation, you can do this through the console, the AWS CLI, or an SDK.
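For example, a minimal boto3 upload, with the local file name, bucket, and key as placeholders:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # same Region as the bucket

# Placeholders: the exported data file and its destination in the bucket.
s3.upload_file("export.nq", "my-neptune-staging-bucket", "blazegraph/export.nq")
```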

Although this example code only loads a single file, if you exported multiple files, you need to upload each file to this S3 bucket.

After uploading all the files to your S3 bucket, you're ready for the final task of the migration: importing data into Neptune.

Because you exported your data from Blazegraph and made it available via Amazon S3, your next step is to import the data into Neptune. Neptune has a bulk loader that loads data faster and with less overhead than performing load operations using SPARQL. The bulk loader process is started by a call to the loader endpoint API to load data stored in the identified S3 bucket into Neptune. This loading process happens in three steps:

The following diagram illustrates how we will perform these steps in our AWS infrastructure.

You begin the import process by making a request into Neptune to start the bulk load. Although this is possible via a direct call to the loader REST endpoint, you must have access to the private VPC in which the target Neptune instance runs. You could set up a bastion host, SSH into that machine, and run the cURL command, but Neptune Workbench is an easier method.

Neptune Workbench is a preconfigured Jupyter notebook, hosted as an Amazon SageMaker notebook, with several Neptune-specific notebook magics installed. These magics simplify common Neptune interactions, such as checking the cluster status, running SPARQL and Gremlin traversals, and running a bulk loading operation.

To start the bulk load process, use the %load magic, which provides an interface to run the Neptune loader API.
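The magic is essentially a front end for the Neptune loader endpoint. As a rough illustration of the underlying request (the cluster endpoint, bucket, Region, and IAM role ARN are placeholders, and the call must originate inside the cluster's VPC), a direct invocation could look like the following:

```python
import requests

# Placeholder cluster endpoint; the loader listens on port 8182.
LOADER = "https://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/loader"

payload = {
    "source": "s3://my-neptune-staging-bucket/blazegraph/",  # exported files
    "format": "nquads",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
}

response = requests.post(LOADER, json=payload)
response.raise_for_status()
print(response.json())  # includes a loadId you can poll for job status
```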

The result contains the status of the request. Bulk loads are long-running processes; this response doesn't mean that the load is complete, only that it has begun. This status updates periodically to provide the most recent loading job status until the job is complete. When loading is complete, you receive notification of the job status.

With your loading job completed successfully, your data is loaded into Neptune and you're ready to move on to the final step of the import process: validating the data migration.

As with any data migration, you can validate that the data migrated correctly in several ways. These tend to be specific to the data you're migrating, the confidence level required for the migration, and what is most important in the particular domain. In most cases, these validation efforts involve running queries that compare the before and after data values.
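One simple check of this kind is to compare total statement counts between the source and the target. The sketch below queries each database's SPARQL HTTP endpoint directly; both endpoint URLs are placeholders, and it assumes the Neptune endpoint is reachable from where the script runs and does not require IAM request signing:

```python
import requests

COUNT_QUERY = "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }"

def statement_count(endpoint):
    # Run the count query and pull the single binding out of the JSON results.
    resp = requests.post(
        endpoint,
        data={"query": COUNT_QUERY},
        headers={"Accept": "application/sparql-results+json"},
    )
    resp.raise_for_status()
    return int(resp.json()["results"]["bindings"][0]["n"]["value"])

# Placeholders for the source and target endpoints.
source = statement_count("http://localhost:9999/blazegraph/namespace/kb/sparql")
target = statement_count("https://my-neptune-cluster:8182/sparql")

print("Blazegraph:", source, "Neptune:", target)
assert source == target, "Statement counts do not match"
```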

To make this easier, the Neptune Workbench notebook has a magic (%%sparql) that simplifies running SPARQL queries against your Neptune cluster.

This Neptune-specific magic runs SPARQL queries against the associated Neptune instance and returns the results in tabular form.

The last thing you need to investigate is any application changes that you may need to make due to the differences between Blazegraph and Neptune. Luckily, both Blazegraph and Neptune are compatible with SPARQL 1.1, meaning that you can change your application configuration to point to your new Neptune SPARQL endpoint, and everything should work.

However, as with any database migration, several differences exist between the implementations of Blazegraph and Neptune that may impact your ability to migrate. The following major differences either require changes to queries, the application architecture, or both, as part of the migration process:

However, Neptune offers several additional features that Blazegraph doesn't offer:

This post examined the process for migrating from an on-premises or self-hosted Blazegraph instance to a fully managed Neptune database. A migration to Neptune not only satisfies the requirements of many applications from a development viewpoint, it also satisfies the operational business requirements of business-critical applications. Additionally, this migration unlocks many advantages, including cost-optimization, better integration with native cloud tools, and lowering operational burden.

It's our hope that this post provides you with the confidence to begin your migration. If you have any questions, comments, or other feedback, we're always available through your Amazon account manager or via the Amazon Neptune Discussion Forums.

Dave Bechberger is a Sr. Graph Architect with the Amazon Neptune team. He used his years of experience working with customers to build graph database-backed applications as inspiration to co-author Graph Databases in Action by Manning.


Cloud-Based Automation Is a Reality; Now What? – Radio World

The challenges of Big C must be overcome before we can truly untether radio

By Adam Robinson Published: June 24, 2020

The author of this commentary is VP of operations at DJB Radio Software Inc. This commentary is excerpted from the Radio World ebook Trends in Automation.

Virtualization. Cloud. Untethered Radio.

A couple of years ago I was invited to give a chat at my local AES chapter about remote broadcasts. As a lifelong radio guy I have stories aplenty (as most of us do), and the AES folk were fascinated by my tales of guerilla engineering.

On this particular occasion I gave a humorous history of radio remotes, starting from the days of literally bringing the radio station to the remote site via a cargo van (or horse-drawn carriage) to today's more rational events. These might include a small mixer, a couple of mics and a laptop or two, but are still firmly rooted at a table and plugged into a wall.

I then got all "what if" and started talking about the radio remote of the future. I envisioned the radio host as a one-man band, going from place to place in a shopping mall with nothing more than a tablet strapped to their arm and a headset mic (Bluetooth, of course) on his or her head. I raved like a lunatic about cloud-based this and virtualized that, with AES67 to deliver audio and AES70 managing control protocols. No wires or other obsolete shackles to hold our fearless host back, no broken folding table and threadbare chairs, just untethered freedom!

Little did I know my seemingly far-fetched Roddenberry-esque model would start coming to life in short order, but it would also become a model for brick-and-mortar radio stations, not just remotes.

Virtualization is here. Cloud is here. The question is: how do we make it work?

LITTLE C, BIG C

In 2018 I took on my current position with my lifelong friend Ron Paley at his second automation venture, DJB Radio Software. Among the challenges presented was to come up with a cloud model for the newly minted DJB Zone radio automation platform.

No problem! We'll go get some space at AWS, spin up a cloud server and off we go. Right? Well, partly.

If all we want to do is run an automation system in the cloud, DJB Zone, or any of the popular automation platforms, can accomplish the task by simply using the cloud to house data or to run the software virtually on a cloud-based server. An HTML interface or third-party remote access software can get you to the dance, so to speak, and virtual sound drivers can send audio back to your studio or direct to your transmitter site. Let's call that model "Little C" cloud.

Expectations are high among the decision makers in the industry that we can further rationalize operations by employing this wonderfully cost-effective place called the cloud to replace expensive brick-and-mortar studios. We'll call that model "Big C" cloud, and it is a complex beast.

SHOWING BACKBONE

If what we need is something that resembles the traditional radio model of mics and phones and multiple audio sources and codecs, with a host (or hosts) in multiple locations all contributing to one broadcast without so much as a single physical fader, we've got quite the hill to climb. Getting automation in and out of the cloud is one thing, but what about the backbone?

First and foremost, there's the issue of reliable internet connections; even the most robust fiber pipe suffers from downtime. Next, we have to tackle multipoint latency, not only in audio but in LIO controls. And then there's the issue of a virtualized, cloud-based mixing console that can handle inputs from all over the place and sync all of this disparate audio.

"It works for the streaming services, why not for traditional radio?" asks the most vocal member of the peanut gallery.

For starters, radio has a very different business model: it is not an on-demand service, nor is it entirely canned content. It also has a fickle audience; for generations now, radio listeners have been trained to be impatient. With that in mind, I generally respond to our vocal friend with the following: if it takes a few extra seconds for Apple or Spotify or Pandora to buffer, the average listener happily sits there watching the little wheel or hourglass go around. If a radio station disappears for a few seconds, that same listener will hit seek and move on to the next available frequency that IS playing something.

Live. Local. Immediate. The three hallmarks of radio since the dawn of the golden age. Lose those and we may just lose radio as we know it. This is the challenge facing not only the software companies but the hardware manufacturers too.

"Little C" cloud-based automation is a reality; there are some rough corners to smooth out yet, but we're getting there. It's the challenges of "Big C" that must be overcome before we can truly virtualize and untether radio. In the meantime, we can happily enjoy the many benefits of virtualizing radio automation systems in a central TOC or a cloud platform, saving money and increasing synergies among markets. Let's invest those reclaimed resources in coming up with a new model for radio that will see it into its second century.

Adam Robinson is a 25-year radio veteran who has worked on both sides of the mic. An early adopter of radio automation and AoIP systems, he is now VP operations for DJB Radio Software. Contact him at adam@djbradio.com.



How Azure, AWS, Google handle data destruction in the cloud – TechTarget

Market research from Cybersecurity Insiders indicates that 93% of organizations have concerns about the security of the public cloud. This healthy distrust likely stems from a lack of information. Cloud service providers know their customers; they understand these concerns and have developed a plethora of documentation and sales collateral to earn our trust. One very welcome documentation improvement by the leading cloud providers is the amount of transparency pertaining to data destruction. Here is a review of this documentation so you can form a more complete picture of what exactly happens when we tell our cloud service provider to delete our data.

To frame the problem, let us take an inductive look at the question: why do we care about data destruction? Well, it is one of many security controls that the Cloud Security Alliance teaches us to review before engaging with a cloud service provider (CSP). When I teach about cloud security, I remind my audience that a given security control only mitigates specific risks and that in cybersecurity, there are no magic security bullets. Hence, it is essential to understand what a particular security control mitigates and what it does not.

Okay, but why is data destruction an important security control? Sensitive data must be destroyed when it is no longer needed to prevent unauthorized access to it. Until data is destroyed, it must be properly secured. How could an unauthorized person access sensitive information in the cloud that was not properly destroyed? They could recover it forensically from a retired hard drive, find residual data on storage that has been reprovisioned to a new tenant, abuse insider access, or retrieve it from a backup.

For many people, the first tactic that comes to mind to gain unauthorized access to sensitive information would be to obtain a hard disk somehow and use forensic tools to extract data from the drive. Now, let's break this down into its two parts -- getting access to a hard disk and then extracting meaningful data from it.

The large, mature cloud service providers excel at physical security. The only personnel who have access to a CSP data center are the few people who have job duties inside it, and only a subset of those employees is responsible for the lifecycle of the hard disks. Hard disk drives (HDDs) have a limited lifespan, and cloud service providers consume them by the thousands. The CSP uses software to track each HDD by serial number and accounts for its exact location at any point in time. When the drive has reached the end of its useful life, the cloud service provider will shred it or use a similar means of complete physical destruction. Independent audit firms closely scrutinize this process.

If an attacker is somehow able to obtain access to a physical hard drive, they may attempt to use various forensic techniques to extract sensitive data from the device. However, unlike the disk drive in your laptop, each hard drive that cloud service providers use contains fragments of data (called shards) from potentially hundreds of different tenants. Even if these fragments are not encrypted, it would be nearly impossible for an attacker to associate a fragment with a specific tenant. Note that I stated "nearly impossible" because the fragment could contain an identifying data element. Likewise, lacking the mapping information, it would be impossible for an attacker to identify all the drives for a specific target. I'll cover the benefits of encrypting customer data with tenant-specific encryption keys later.

Many of us have had the experience of renting an apartment only to find that the previous occupant left us with cleaning supplies, trash and possibly even a lost diamond earring. We certainly do not want that to happen when we become a tenant in the cloud. Amazon Web Services, Microsoft Azure and Google Cloud Platform have designed their cloud systems to prevent this from happening. The "Amazon Web Services: Overview of Security Processes" white paper states:

"When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately, and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system."

"Amazon EBS volumes are presented to you as raw unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse so that you can be assured that the wipe process completed."

To continue with the apartment building analogy, it is as if the apartment is completely obliterated, (walls, floors, ceiling and all) and the elevator (which provides the access control) no longer stops at that floor. In the case of Amazon Elastic Block Store (EBS), the data is not securely wiped until a new EBS volume is provisioned for a tenant and sized according to the cloud user specifications.

Some readers may be initially concerned that Amazon waits to wipe the data until it is reprovisioned for a new user. However, that is most efficient and preserves the life of the solid-state hard drives. Also, do not inaccurately assume that an EBS volume is hosted on a single physical hard drive. The AWS documentation states, "Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component."

John Molesky, a senior program manager at Microsoft, made a similar statement:

"The sectors on the disk associated with the deleted data become immediately available for reuse and are overwritten when the associated storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity but is rarely more than two days. This is consistent with the operation of a log-structured file system. Azure Storage interfaces do not permit direct disk reads, mitigating the risk of another customer (or even the same customer) from accessing the deleted data before it is overwritten."

I appreciate the additional information Microsoft provided in the above blog excerpt because this is the type of disclosure we need from our cloud service providers. I applaud the relatively recent acknowledgments that data is not wiped until it is provisioned by a new customer and appreciate the additional context Azure provided that reminds us that these highly optimized resources are overwritten naturally within days due to high use.

It should be noted that Google Cloud Platform also uses log-structured file systems. I would like to see all cloud service providers supply additional technical details of these systems along with the relevant security implications. Given the fact that the cloud service provider maintains strict physical security over its hard drives, my professional belief is that this data handling is acceptable for any classifications of data suitable for storage in the public cloud.

Cloud customers expect their data to be protected throughout its lifecycle until the data is destroyed and can no longer be accessed. I have already covered the safeguards in place to protect customer data from external parties prior to its destruction, but what about protecting data from trusted insiders? AWS, Azure and Google Cloud Platform have security documentation that covers the applicable security controls, including background checks, separation of duties, supervision and privileged access monitoring.

The primary concern with insider threats is that employees and contractors have detailed system knowledge and access to lower-level systems that are not exposed to public cloud customers. The CERT National Insider Threat Center has detailed guidance, and cloud customers should explore what controls are in place to protect data that has been deleted and is pending destruction. As thoughtful technical customers ask probing questions of their cloud service providers, en masse, the best cloud providers listen and respond with increasingly transparent documentation.

Encryption is a security control that can mitigate unauthorized insider access when appropriately applied. Unfortunately, encryption is often used as a Jedi mind trick. Some customers stop asking the tough questions once they hear that the service uses encryption. Encryption is a technique to control access. The person or system that controls the encryption key controls the access. For example, with transparent database encryption, the database management system controls the key and therefore controls the access. A database administrator (DBA) can query the data in the clear, but the administrator of the storage system that the database uses can only see the ciphertext. However, if the application controls the key, both the DBA and the storage system administrator can only see ciphertext.

With cryptographic erasure, the only copies of the encryption keys are destroyed, thereby rendering the encrypted data unrecoverable. NIST Special Publication 800-88, Revision 1 recognizes cryptographic erasure as a valid data destruction technique within certain parameters that are readily enforced in modern public cloud environments.
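As a toy illustration of the idea (not how any particular provider implements it), the sketch below uses the Python cryptography library: once every copy of the key is gone, the ciphertext that remains on disk or in backups is useless.

```python
from cryptography.fernet import Fernet, InvalidToken

# A tenant-specific data-encryption key; in practice this lives in a key
# management service, not in application code.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive customer record")

# Cryptographic erasure: destroy every copy of the key.
del key

# Without the original key, the remaining ciphertext cannot be decrypted.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Data is effectively destroyed: no key, no plaintext.")
```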

Azure documentation states that encryption is enabled for all storage accounts and cannot be disabled -- the same for Google. However, in AWS, it is a configuration option for services such as S3 and EBS.

Unfortunately, AWS and Azure fail to tout the benefits of the cryptographic erasure technique even though they are using it to destroy customer data. Also, it is often unclear when a CSP is using a tenant-specific encryption key to perform encryption at rest for their various services. When a tenant-specific encryption key is used in conjunction with cryptographic erasure, only the data belonging to a single tenant is destroyed. Cryptographic erasure is a very attractive alternative to overwriting data, especially for customers with hundreds of petabytes of data in cloud storage.

The last attack vector involves an adversary attempting to recover sensitive data from a backup. I always caution my clients not to assume that a CSP is backing up their data unless the contract clearly specifies it. Unless stated otherwise, cloud service providers are primarily using backups or snapshot techniques to meet service-level agreements regarding data durability and availability.

If data is being backed up, the backup must be protected with at least the same level of security as the primary data store. Among the top three cloud service providers, Google's documentation, "Data deletion on Google Cloud Platform," provides the most transparency concerning how deleted data expires and is rotated out throughout its 180-day regimen of daily/weekly/monthly backup cycles. To Google's credit, this document even covers the important role of cryptographic erasure in protecting the data until it expires from all the backups.

Without a doubt, the top three cloud service providers have expended great effort to make their system secure. All cloud service providers must balance the need to protect against leaking too much information that would aid an adversary while providing enough transparency to maintain the trust of their customers. As cloud customers speak with their cloud service providers and seek the appropriate information necessary to make intelligent risk decisions, the cloud service providers are improving their messaging that explains their security investments.

About the author: Kenneth G. Hartman is an independent security consultant based in Silicon Valley and a certified instructor for the SANS Institute. Ken's motto is "I help my clients earn and maintain the trust of their customers in its products and services." To this end, he consults on a comprehensive program portfolio of technical security initiatives focused on securing client data in the public cloud. Ken has worked for a variety of cloud service providers in architecture, engineering, compliance, and security product management roles. Ken has earned the CISSP, as well as multiple GIAC security certifications, including the GIAC Security Expert.


AMD EPYC Processor Adoption Expands with New Supercomputing and High-Performance Cloud Computing System Wins – GlobeNewswire

2nd Gen AMD EPYC-powered system lands in the Top 10 on new TOP500 list ahead of AMD delivering the first ever exascale system next year

CERN, Indiana University, and Purdue University adopt AMD EPYC processors for advanced research

SANTA CLARA, Calif., June 22, 2020 (GLOBE NEWSWIRE) -- AMD (NASDAQ: AMD) today announced multiple new high-performance computing wins for AMD EPYC processors, including that the seventh fastest supercomputer in the world and four of the 50 highest-performance systems on the bi-annual TOP500 list are now powered by AMD. Momentum for AMD EPYC processors in advanced science and health research continues to grow with new installations at Indiana University, Purdue University and CERN as well as high-performance computing (HPC) cloud instances from Amazon Web Services, Google, and Oracle Cloud.

"The leading HPC institutions are increasingly leveraging the power of 2nd Gen AMD EPYC processors to enable cutting-edge research that addresses the world's greatest challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "Our AMD EPYC CPUs, Radeon Instinct accelerators and open software programming environment are helping to advance the industry towards exascale-class computing, and we are proud to strengthen the global HPC ecosystem through our support of the top supercomputing clusters and cloud computing environments."

From powering the upcoming exascale supercomputers Frontier and El Capitan, expected to be the world's fastest, to supporting workloads in the cloud and driving new advancements in health research, the high core count and extensive memory bandwidth of AMD EPYC processors are helping meet the growing demand from HPC providers for improved performance, scalability, efficiency, and total cost of ownership.

AMD Continues Expanding Share of TOP500 Supercomputers

Four AMD EPYC powered supercomputers are now among the 50 highest-performance systems in the world and there are now ten AMD EPYC-powered supercomputers on the TOP500:

"Atos is proud to provide its customers with cutting-edge technology, integrating 2nd Gen AMD EPYC processors as soon as they are released, and demonstrating increased performance on HPC applications in production environments," said Agnès Boudot, group senior vice president, Head of HPC and Quantum at Atos.

AMD Powered Supercomputing Systems Drive Research of the Future

Two universities announced new research supercomputing systems powered by AMD EPYC processors in Dell EMC PowerEdge servers.

Indiana University will deploy Jetstream 2, an eight-petaflop distributed cloud computing system powered by upcoming 3rd Gen AMD EPYC processors. This system will be used by researchers in a variety of fields such as AI, social sciences, and COVID-19 research. AMD EPYC processors already power Big Red 200 at the Indiana University campus. "Jetstream 2 bundles computation, software and access to storage for individuals and teams of researchers across an array of areas of research," said David Hancock, Director in Research Technologies, affiliated with the Pervasive Technology Institute at Indiana University. "With the next generation AMD EPYC processor, Jetstream 2 will provide 8 petaflops of cloud computing power, giving more access to high-end technologies to enable deep learning and artificial intelligence techniques."

Purdue University will deploy Anvil, a supercomputer powered by next generation AMD EPYC processors, which will provide advanced computing capabilities to support a wide range of computational and data-intensive research. AMD EPYC will also power Purdue's latest community cluster, Bell, scheduled for deployment early this fall.

In addition, CERN, the largest particle physics laboratory in the world, recently selected 2nd Gen AMD EPYC processors in Gigabyte servers to harness the massive amounts of data from their latest Large Hadron Collider (LHC) experiment to rapidly detect subatomic particles known as beauty quarks. A new case study details how combining the increased bandwidth of PCIe 4.0, DDR4 memory speed, and the 64 core AMD EPYC 7742 processor allows researchers to collect the raw data streams generated by 40 terabytes of collision data occurring every second in the LHC.

High Performance Computing in the Cloud with AMD EPYC

As the HPC industry evolves to support new workload demands, cloud providers continue to adopt 2nd Gen AMD EPYC processors to provide leadership performance and flexible solutions. With recent cloud wins among technology partners like Amazon Web Services, Google Cloud and Oracle Cloud, AMD is helping industry leaders push the boundaries in the new era of HPC and cloud computing.

AMD and Microsoft Azure have continued to build upon their cloud partnership with the recently announced HBv2-Series VMs for high-performance computing workloads. The 2nd Gen AMD EPYC processors provide Microsoft Azure customers with impressive core scaling, access to massive memory bandwidth and are the first x86 server processors that support PCIe 4.0, enabling some of the best high-performance computing experiences in the industry. Together, AMD and Microsoft Azure will support real-world HPC workloads, such as CFD, explicit finite element analysis, seismic processing, reservoir modeling, rendering, and weather simulation.

AMD Updates ROCm For Heterogeneous Software Support

Community support continues to grow for AMD Radeon Open eCosystem (ROCm), AMD's open source foundation for heterogeneous compute. Major development milestones in the latest update include:

Join AMD CTO and executive vice president, Mark Papermaster, for a webinar on July 15th to discuss the full range of AMD solutions and upcoming innovations in HPC. Click the link for the time most convenient for you to register: 9 AM EDT, 12 PM EDT or 9 PM EDT.


About AMD

For 50 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies, the building blocks for gaming, immersive platforms and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.

AMD, the AMD logo, EPYC, Radeon Instinct, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Amazon Web Services (AWS) is a trademark of Amazon.com, Inc. or its affiliates in the United States and/or other countries, Google Cloud Platform is a trademark of Google LLC., LLVM is a trademark of LLVM Foundation, OpenCL is a trademark of Apple Inc. used by permission by Khronos Group, Inc., Oracle is a registered mark of Oracle and/or its affiliates. PCIe is a registered trademark of PCI-SIG Corporation. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.

This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) including features, functionality, availability, timing, deployment and expectations of 2nd Gen AMD EPYC CPU powered supercomputer systems, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "intends," "believes," "expects," "may," "will," "should," "seeks," "intends," "plans," "pro forma," "estimates," "anticipates," or the negative of these words and phrases, other variations of these words and phrases or comparable terminology. Investors are cautioned that the forward-looking statements in this document are based on current beliefs, assumptions and expectations, speak only as of the date of this document and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporations dominance of the microprocessor market and its aggressive business practices may limit AMDs ability to compete effectively; AMD relies on third parties to manufacture its products, and if they are unable to do so on a timely basis in sufficient quantities and using competitive technologies, AMDs business could be materially adversely affected; failure to achieve expected manufacturing yields for AMDs products could negatively impact its financial results; the success of AMDs business is dependent upon its ability to introduce products on a timely basis with features and performance levels that provide value to its customers while supporting and coinciding with significant industry transitions; if AMD cannot generate sufficient revenue and operating cash flow or obtain external financing, it may face a cash shortfall and be unable to make all of its planned investments in research and development or other strategic investments; the loss of a significant customer may have a material adverse effect on AMD; AMDs receipt of revenue from its semi-custom SoC products is dependent upon its technology being designed into third-party products and the success of those products; global economic and market uncertainty may adversely impact AMDs business and operating results; the ongoing novel coronavirus (COVID-19) pandemic could materially adversely affect AMDs business, financial condition and results of operations; AMDs worldwide operations are subject to political, legal and economic risks and natural disasters which could have a material adverse effect on AMD; government actions and regulations such as export administration regulations, tariffs and trade protection measures, may limit AMDs ability to export its products to certain customers; AMD products may be subject to security vulnerabilities that could have a material adverse effect on AMD; IT outages, data loss, data breaches and cyber-attacks could compromise AMDs intellectual property or other sensitive information, be costly to remediate and cause significant damage to its business, reputation and 
operations; uncertainties involving the ordering and shipment of AMDs products could materially adversely affect it; AMDs operating results are subject to quarterly and seasonal sales patterns; the agreements governing AMDs notes and the Secured Revolving Facility impose restrictions on AMD that may adversely affect its ability to operate its business; the markets in which AMDs products are sold are highly competitive; the conversion of the 2.125% Convertible Senior Notes due 2026 may dilute the ownership interest of its existing stockholders, or may otherwise depress the price of its common stock; the demand for AMDs products depends in part on the market conditions in the industries into which they are sold. Fluctuations in demand for AMDs products or a market decline in any of these industries could have a material adverse effect on its results of operations; AMDs ability to design and introduce new products in a timely manner is dependent upon third-party intellectual property; AMD depends on third-party companies for the design, manufacture and supply of motherboards, software, memory and other computer platform components to support its business; if AMD loses Microsoft Corporations support for its products or other software vendors do not design and develop software to run on AMDs products, its ability to sell its products could be materially adversely affected; and AMDs reliance on third-party distributors and AIB partners subjects it to certain risks. Investors are urged to review in detail the risks and uncertainties in AMD's Securities and Exchange Commission filings, including but not limited to AMD's Quarterly Report on Form 10-Q for the quarter ended March 28, 2020.

More here:
AMD EPYC Processor Adoption Expands with New Supercomputing and High-Performance Cloud Computing System Wins - GlobeNewswire

Read More..

What it Means To Be Software-Defined in Retail and How We Got Here – Retail Info Systems News

What does it mean to be software-defined at the retail edge? It's all about the technology that underpins software-defined architecture, and the innovation and benefits it can unlock.

To understand the value it can create within retail technology, it's important to look back at how we got here.

There was a time when mainframes were the size of small data centers and minicomputers were the size of a closet, with no network to connect them. LANs and WANs started to tie everything together in the late '80s, and by the mid-'90s client-server systems drove the emergence of large data centers.

Meanwhile, the internet created flexible ways of communicating with these data centers. However, they were still one-application-to-one-server configurations. The hardware configuration was in control: the data center's pattern and application capability were set once the hardware was built.

Then the first instance of software-defined came along: virtualization. The business case was simple: if the relationship between application and server is implemented in software, it is more flexible, which helps you squeeze more from your server investment.

See also: How Regatta Weathers Retail With Cloud Technology

The cloud took virtualization to another level by scaling up and further centralizing the data center on a continental scale. It added a crucial component: sophisticated management. Virtualization and cloud have evolved to make all aspects of centralized computing software-defined, not just reliant on the hardware alone.

We now have software-defined storage and software-defined networking, in which developers can create arbitrary combinations of compute, storage and networking and scale them quickly. The key point here is that configuration is controlled by the software.

Device-centric architecture served the retail industry well through the '80s and '90s, as electronic POS was married to LANs for the first time. In early generations, it was convenient to deploy one application per device: one computer, one operating system and one POS.

However, this arrangement comes with high overhead. Complex logistics are required to get devices, networking, power, operating systems and security software into the right configuration, in the right location, to support an application.

In today's fast-paced environment, where customer expectations of speed and new experiences are elevated, retailers need a more flexible, dynamic model that is not tied to this one-device, one-application architecture. They can no longer wait six months to get a new computer into every store to launch a new service, or send a field engineer to every site just to make a change. Retailers must deploy at the speed of the cloud, in just a few clicks, or risk being held back.

Edge computing is just one of countless emerging technologies transforming retail stores. However, because it's infrastructure, it is a critical underpinning of the technology adoption cycle for retailers.

Edge computing is about transforming the way in which workloads are hosted and managed in distributed locations. There are plenty of reasons why these workloads will never end up being centralized and will require a software-defined edge strategy:

In retail, edge computing provides a new paradigm for running existing applications like POS and, at the same time, is oven-ready for the new world of distributed microservices. Edge computing and its associated software-defined concept give retailers the power to make things happen instantly across the business from a single point, without having to rely on armies of engineers roaming the country.

Brian Buggy is CTO and co-founder of Zynstra, an NCR Company.

See the original post:
What it Means To Be Software-Defined in Retail and How We Got Here - Retail Info Systems News

Read More..

Cloud Storage Market 2020: Challenges, Growth, Types, Applications, Revenue, Insights, Growth Analysis, Competitive Landscape, Forecast- 2025 – Cole…

The report scope includes a detailed competitive outlook covering market shares and profiles of key participants in the global Cloud Storage market. Major industry players with significant revenue share include Tencent, Virtustream, Fujitsu, Box, Carbonite, AWS, IBM, Microsoft, Google, Oracle, and others.

The global Cloud Storage market report provides geographic analysis covering regions, such as North America, Europe, Asia-Pacific, and Rest of the World. The Cloud Storage market for each region is further segmented for major countries including the U.S., Canada, Germany, the U.K., France, Italy, China, India, Japan, Brazil, South Africa, and others.

Note that you will receive the latest updated version of the report, which accounts for the impact of COVID-19 on this industry. Our updated reports now feature detailed analysis to help you make critical decisions.

The global Cloud Storage market is segregated on the basis of Component as Services and Solutions. Based on Type the global Cloud Storage market is segmented in File Storage, Block Storage, and Object Storage. Based on Deployment Model the global Cloud Storage market is segmented in Hybrid Cloud, Public Cloud, and Private Cloud.

Browse Full Report: https://www.marketresearchengine.com/cloud-storage-market

Competitive Rivalry

Tencent, Virtustream, Fujitsu, Box, Carbonite, AWS, IBM, Microsoft, Google, Oracle, and others are among the major players in the global Cloud Storage market. The companies are involved in several growth and expansion strategies to gain a competitive advantage. Industry participants also follow value chain integration with business operations in multiple stages of the value chain.

Cloud storage allows enterprises to store information on remote servers that can be accessed over the internet. These remote servers are operated, maintained, and managed by cloud storage providers; it is essentially a virtual mode of data storage. Data stored in the cloud can be accessed and shared from any device with an internet connection. Cloud storage adoption has increased rapidly across several industry verticals, including banking, government and education, healthcare, manufacturing, telecommunications and IT, defense, and many others, driven by the need for standardized, low-cost data storage. Cloud storage enhances business operations by supporting a mobile workforce with easy data accumulation, archiving, access, and recovery, and it enables storage to scale at minimal cost compared with on-premise data centers. It has earned serious consideration among enterprises because it offers a flexible, reliable, and secure mode of data storage and availability.

Growing demand for low-cost data storage, backup, and data protection drives the growth of the cloud storage market across user groups including small, medium, and large enterprises. In addition, industry verticals such as BFSI, retail, healthcare, and the public sector, which serve large customer bases, store critical business information in the cloud because of data privacy and client-information requirements, which in turn expands the global cloud storage market. By deployment model, the public cloud segment is expected to hold the largest share of the cloud storage market, as industries implementing cloud storage solutions are increasingly inclined toward cloud-based deployment models. The high price of private cloud and the risk of open threats in public cloud storage among end users, including the banking, healthcare, and government sectors, are generating the need for hybrid storage, which provides the flexibility to switch between private and public storage space as required.

Based on Organization Size, the global Cloud Storage market is segmented in Small and Medium-Sized Enterprises and Large Enterprises. The report also bifurcates the global Cloud Storage market based on Vertical in Healthcare and Life Sciences, Government and Public Sector, Manufacturing, Media and Entertainment, Consumer Goods and Retail, Banking, Financial Services, and Insurance, Telecommunications, IT and ITeS, Energy and Utilities, and Education.

The Cloud Storage Market has been segmented as below:

Cloud Storage Market, By Component

Cloud Storage Market, By Type

Cloud Storage Market, By Deployment Model

Cloud Storage Market, By Organization Size

Cloud Storage Market, By Vertical

Cloud Storage Market, By Region

Cloud Storage Market, By Company

The report covers:

Report Scope:

The global Cloud Storage market report scope includes detailed study covering underlying factors influencing the industry trends.

The report covers analysis on regional and country level market dynamics. The scope also covers competitive overview providing company market shares along with company profiles for major revenue contributing companies.

Reasons to Buy this Report:

Customization

A customized report can be provided on request, along with appropriate recommendations.

Request Sample Report from here: https://www.marketresearchengine.com/cloud-storage-market

Table of Contents:

6. Cloud Storage Market, By Type

7. Cloud Storage Market, By Deployment Model

8. Cloud Storage Market, By Organization Size

9. Cloud Storage Market, By Vertical

About MarketResearchEngine

Market Research Engine is a global market research and consulting organization. We provide market intelligence on emerging, niche technologies and markets. Our market analysis, powered by rigorous methodology and quality metrics, provides information and forecasts across emerging markets, emerging technologies and emerging business models. Our deep focus on industry verticals and country reports helps our clients identify opportunities and develop business strategies.

Media Contact

Contact Person: John Bay

Email: [emailprotected]

Phone: +1-855-984-1862

Country: United States

Website: https://www.marketresearchengine.com

Read the original here:
Cloud Storage Market 2020: Challenges, Growth, Types, Applications, Revenue, Insights, Growth Analysis, Competitive Landscape, Forecast- 2025 - Cole...

Read More..

How to Host Your Own VPN with Algo and Cloud Hosting – How-To Geek

Companies all over the world sell VPN services to secure your online activity, but can you really trust a VPN provider? If you want, you can create your own virtual private network with the open-source Algo software, and the cloud-hosting provider of your choice.

Regardless of what the privacy policy says or boasts about security audits on a company blog, there's nothing stopping a VPN from monitoring everything you do online. In the end, choosing a VPN service all comes down to trust.

If trusting faceless online services isn't your thing, one alternative is to run your own VPN server. This used to be a daunting task, but thanks to the open-source project Algo from security company Trail of Bits, creating your own VPN is now easy.

For $5 per month, you can run and control your own full-time VPN server. Even better, you can use Algo to set up and tear down VPN servers as you need them, and save money in the process.

To set up Algo, you have to use the command line. If that's off-putting, don't worry; we'll walk you through every step.

These instructions might seem like a lot, but that's only because we're explaining as much as we can. Once you've created a VPN with Algo a few times, it shouldn't take very long at all. Plus, you only have to set up Algo's installation environment once. After that, you can create a new VPN server with a few keystrokes.

But can you trust that Algo's scripts aren't doing anything untoward? Well, the good news is Algo's code is public on GitHub for anyone to look at. Plus, many security experts are interested in the Algo project, which makes misdeeds less likely.

RELATED: What Is a VPN, and Why Would I Need One?

A VPN is a good way to protect your online activity, especially on a public Wi-Fi network in an airport or coffee shop. A VPN makes web browsing more secure and stymies any malicious actors who might be on the same local Wi-Fi network. A VPN can also help if your ISP restricts certain kinds of traffic, like torrents.

But watch out, pirates! Downloading booty through your own VPN isn't a good idea, as the activity can more easily be traced back to you.

Also, if you wanna watch Netflix over your VPN, you'll have to look elsewhere; Algo doesn't work with it. However, there are many commercial services that do support Netflix.

To get an Algo VPN server up and running, you need a Unix Bash shell. On a Mac or Linux system, you can use your Terminal program, but on Windows, you'll have to activate the Windows Subsystem for Linux (WSL). Here's how to install and use the Linux Bash shell on Windows 10.

You'll also need an account at a cloud server hosting provider. Algo supports all of the following:

If you've never used any of these services, we recommend DigitalOcean, as it's very user-friendly. It's also the service we're using in this tutorial. The process will be a bit different if you use a different provider.

When your DigitalOcean account is ready to go, sign in, and then, from the primary dashboard, select API from the left rail under the Account heading.

On the next page, click Generate New Token. An access token is a long string of letters and numbers that permits access to account resources without a username and password. You'll need to name the new token. Generally, it's a good idea to name it after the application you're using, such as algo or ian-algo (if your first name happens to be Ian).

After the new token is generated, copy and paste it into a text document on your desktop. You'll need it in a few minutes.

Back on your desktop, open a fresh terminal window, type cd (for change directory, which is what folders are called in the Unix world), and hit Enter. This will ensure you're working from the terminal's home directory.

At this writing, Algo requires Python 3.6 or later. Type the following into your terminal program:
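For example, a quick version check along these lines (assuming the python3 command is on your PATH) will tell you what you have:

    python3 --version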

If you get a response like Python 3.6.9, you're good to go; if not, you'll have to install Python 3.

To install Python 3 on Mac, you can use the Homebrew package manager. When Homebrew's ready to go, type the following command in a Terminal window:
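Assuming Homebrew is already installed, a one-liner like this does the job:

    brew install python3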

If you're using Ubuntu Linux or WSL on Windows, they should have Python 3 by default. If not, installation methods vary depending on your version of Linux. Search online for "install Python 3 on [insert your version of Linux here]" for instructions.

Next, you need to install Python 3's Virtualenv to create an isolated Python environment for Algo. Type the following in Bash on a Mac:
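Something along these lines, installing Virtualenv through pip, is typical on a Mac (the exact invocation may vary):

    python3 -m pip install --upgrade virtualenv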

On Ubuntu Linux and WSL, the command is the following:
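On Ubuntu and WSL the distribution package is the usual route (package name assumed to be python3-virtualenv):

    sudo apt install -y python3-virtualenv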

Note that we're tailoring this tutorial for Ubuntu and related distributions, but these instructions will also work for other versions of Linux with some minor alterations. If you're using CentOS, for example, you'd substitute the instructions using apt with dnf.
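For instance, the CentOS equivalent of the Virtualenv install above would look roughly like this (the package may require the EPEL repository):

    sudo dnf install -y python3-virtualenv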

Next, we need to download Algo with the wget command. Macs don't have wget installed by default, so to get it via Homebrew, type the following:
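Assuming Homebrew, that's:

    brew install wget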

Now, let's download Algo's files:
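A wget command pointed at the project's master archive should work (the URL follows GitHub's standard archive path for the trailofbits/algo repository):

    wget https://github.com/trailofbits/algo/archive/master.zip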

After wget finishes, there will be a compressed file called master.zip in your terminal's home directory; let's check that with ls.

If you see master.zip in the list of files and folders that appears, you're good to go. If not, try running wget again.

Now, we need to unzip the file, so we type the following:
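The standard unzip tool handles it:

    unzip master.zip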

After that's done, hit ls again. You should now see a new folder in your home directory called algo-master.

We're almost ready for action, but first, we need to set up our isolated environment and install a few more dependencies. This time we'll work inside the algo-master folder.

Type the following to switch to the folder:
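Assuming the archive was extracted into your home directory, that's:

    cd ~/algo-master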

Make sure you're there with this command:
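That command is pwd:

    pwd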

This stands for print working directory, and it should show you something like /home/Bob/algo-master or /Users/Bob/algo-master. Now that we're in the right place, let's get everything ready.

Either copy and paste or type the command below on a single line (don't press Enter until the end):
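This is the dependency-setup one-liner from the Algo project's installation instructions; it creates the isolated environment, activates it, and installs Algo's Python requirements. At the time of writing it looked roughly like this (the exact flags may change between Algo releases):

    python3 -m virtualenv --python="$(command -v python3)" .env && source .env/bin/activate && python3 -m pip install -U pip virtualenv && python3 -m pip install -r requirements.txt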

This triggers a whole lot of action inside the Algo directory to prepare to run.

Next, you have to name your users for the VPN. If you don't name all of them now, you'll either have to hold onto the security keys (which is less secure) or start a new server from scratch later on.

Either way, type the following in terminal:
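The user list lives in Algo's config.cfg file, which you can open in Nano like so:

    nano config.cfg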

This opens the user-friendly command-line text editor, Nano. The Algo config file has a lot of information in it, but we're only interested in the part that says users. All you have to do is remove the default usernames (phone, laptop, desktop), and type a name for each device you want on your VPN.

For example, if I'm creating a VPN for myself, Bill, and Mary, the config file might look like the following:
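A sketch of the relevant users section, assuming Algo's YAML list format and using Ian as a stand-in for "myself":

    users:
      - Ian
      - Bill
      - Mary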

Once you've named everyone, press Ctrl+O to save the file, followed by Ctrl+X to exit.

We're almost ready for action, but first Windows folks need to take a little detour. WSL usually doesn't set the correct user permissions for the Algo folder, which upsets Ansible (the tool Algo relies on to deploy a server).

On WSL, type the following to go back to your home directory:
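A bare cd (or cd ~) does that:

    cd ~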

Then, type the following:
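The goal is to strip world-write permissions from the Algo folder so Ansible stops complaining; a chmod along these lines should work (the exact mode isn't critical, as long as the folder is no longer world-writable):

    sudo chmod -R 744 ~/algo-master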

To go back to the Algo folder, type:
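That's the same cd as before:

    cd ~/algo-master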

And now is the moment of truth.

From the algo-master folder, type the following in the terminal window:
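The deployment is kicked off by Algo's main script:

    ./algo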

The Algo configuration should start running. You'll know it's working when it asks which cloud provider you'd like to use. In our case, we select the number (1) for DigitalOcean.

If Algo fails, it could be for a number of reasons we can't possibly predict here. If the error says your directory is configured as world writable, follow the instructions above for changing permissions.

If you get a different error, check the troubleshooting page in the Algo project repository on GitHub. You can also copy the error message and paste it in Google to search for it. You should find a forum post that will help, as it's unlikely you're the first person to receive that error.

Next, you'll be asked for the access token you copied earlier from your DigitalOcean account. Copy and paste it into the terminal. You won't see anything because Bash doesn't display characters for password- and security-phrase entries. As long as you hit paste, and then press Enter, though, it should be fine.

If it fails, you might have just messed up the paste, which everyone does in Bash. Just type the following to try again:
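That is, run the script again:

    ./algo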

When Algo is running, answer the questions it asks. These are all pretty straightforward, like what you want to name your server (using algo in the name is a good idea).

Next, it will ask if you want to enable Connect on Demand for Mac and iOS devices. If you're not using any of those devices, type N for no. It will also ask if you want to keep the PKI keys to add more users later; generally, you'll type N here, as well.

That's it! Algo will now take about 15 to 30 minutes to get your server up and running.

When Algo finishes its setup, the terminal returns to a command-line prompt, which means the VPN is ready to go. Like a lot of commercial services, Algo uses the WireGuard VPN protocol, which is the hottest new thing in the world of VPNs. That's because it offers good security and greater speeds, and is easier to work with.

As an example of what to do next, we'll activate Algo on Windows. To set up other devices, you can refer to the Algo repository on GitHub.

First, we'll install the generic Windows desktop client from the WireGuard site. Next, we have to feed the program our config file for the PC. The configuration files are stored deep in the algo-master folder at: ~/algo-master/configs/[VPN server IP address]/wireguard/.

There are two types of files for configuring VPN client devices: .CONF and .PNG. The latter are QR codes for devices, like phones, that can scan them. The .CONF (configuration) files are text files for the desktop WireGuard clients.

On Mac and Ubuntu, it shouldn't be hard to find the algo-master folder outside of the command line. On Macs, algo-master is in the Home folder; just use Finder > Go > Home to get there. On Ubuntu, you can open Nautilus, and it'll be in the Home folder.

On Windows, however, WSL is separate from the rest of the OS. For this reason, it's just easier to copy the files over with the command line.

Using our previous example, let's say we want the Mary-PC.conf configuration file to use on a Windows 10 PC. The command would look something like this:
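A cp command along these lines should do it (the bracketed parts are placeholders for your server's IP address and your Windows username):

    cp ~/algo-master/configs/[VPN server IP address]/wireguard/Mary-PC.conf /mnt/c/Users/[your Windows username]/Desktop/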

Note the space between Mary-PC.conf and /mnt/; that's how Bash knows where the file to be copied is located, and where it's going. Case also matters, so make sure you type capitals where specified.

It's natural on Windows to want to capitalize the C in C: drive, but in Bash you don't. Also, don't forget to replace the bits in brackets with the actual information for your PC.

For example, if your user folder is on the D: drive, not the C:, then replace /mnt/c/ with /mnt/d/.

Once the file is copied, open the WireGuard for Windows client. Click Import Tunnels From File, and then select your configuration file on the desktop. After that's done, click Activate.

In just a few seconds, you'll be connected to your very own VPN!

See the original post here:
How to Host Your Own VPN with Algo and Cloud Hosting - How-To Geek

Read More..