
Source Code Escrow Agreements Are Reaching For The Cloud – Law360

Law360 (February 28, 2020, 3:22 PM EST) -- Source code escrow agreements have long been accepted by software providers in traditional on-premises software sales. But how often do we see on-premises software licenses today? An overwhelming number of vital business functions are now offered through cloud applications, including software-as-a-service solutions.

When it comes to SaaS, the customer is often at a greater risk of losing access to the solution than it would be with traditional software, and yet the traditional source code escrow model is not sufficient to mitigate that risk. As tech transactions practitioners who negotiate SaaS agreements on a near-daily basis, we are seeing, in real time,...


Here is the original post:
Source Code Escrow Agreements Are Reaching For The Cloud - Law360

Read More..

Two Rare Cloud Features Appear Over New Hampshire’s Mount Washington – Smithsonian.com

Mount Washington, the tallest peak in New Hampshire's White Mountains, has long been the site of unusual and extreme weather. On Monday, its mountaintop observatory shared a striking view: not one, but two rare cloud features hovering side by side, lit by the sunrise.

The photograph, taken around 7 a.m., features a smooth, round lenticular cloud that swoops into a curled Kelvin-Helmholtz wave. The pair was likely a fleeting feature in the morning sky, as clouds like these are unstable and usually last less than a minute.

Seeing either type of cloud is unusual, meteorologist Matthew Cappucci writes for the Washington Post. But for both features to be wrapped up in one cloud? An exotic sight and then some.

As Cappucci explains, lenticular clouds are common around mountains that force incoming air to rise. If that air is carrying water vapor, then when it rises through the atmosphere and its temperature drops, the water will condense into a crowd of water droplets, the same way that droplets cover a cold glass on a warm, humid day. In the sky, the massive collection of suspended droplets forms a lenticular cloud, which erodes as the air sinks back down.

A single lenticular cloud recently appeared downwind of California's Mount Shasta, per the Post. But Mount Washington's cloud had another trick to share. Unstable conditions in the atmosphere can lead air currents to rise, while fast, low-density wind sweeps along the top. These wind currents can sculpt a cloud into the curlicue shape of a Kelvin-Helmholtz wave, meteorologist Jesse Ferrell tells AccuWeather's Lauren Fox.

Kelvin-Helmholtz waves normally appear in a repetitive series, making Mount Washington's one-off wave unique. What's more, the waves usually last for only a matter of seconds. A series of clear Kelvin-Helmholtz waves photographed in Virginia last June appeared for less than a minute, according to the photographer Amy Hunter. The waves are named for the scientists who discovered the weather patterns that cause them: Hermann von Helmholtz and William Thomson, also known as Lord Kelvin, meteorologist Brett Rossio told Ashley Williams at AccuWeather last year.

Mount Washington is no stranger to mind-boggling winds. For 62 years, the Mount Washington Observatory held the record for the fastest gust of wind recorded on Earth's surface. At 231 miles per hour, the measurement taken in April 1934 is still the fastest surface windspeed observed directly by humans. An autonomous instrument recorded the current windspeed record, 251 mph, in 1996 during Cyclone Olivia in Australia.

The observatory is staffed year-round, and has a history of hosting cats, like Elmer, Manx, Oomph and a litter of kittens who huddled by the coal stove the winter the record windspeed was recorded. This Monday, temperatures at the observatory were in the low 20s, and winds gusted near 50 miles per hour.

The Mount Washington Observatory regularly shares sunrises and sunsets on social media. On a foggy day, visibility is severely limited in all directions. But on a clear day, viewers from the 6,288-foot mountain's peak can see the summits of Mounts Adams, Jefferson and Madison in the Presidential Range.

Mondays view was special because, as the observatory said in its original post, "Mother Nature waved back."

More here:
Two Rare Cloud Features Appear Over New Hampshire's Mount Washington - Smithsonian.com

Read More..

Millennials to Drive Bitcoin Higher: 4 Stocks to Watch – Nasdaq

U.S. stock markets fell for the sixth straight day on Thursday after the coronavirus epidemic sparked fears of a slowdown in the major economies of the world. Following this, investors rotated out of stocks into safe-haven assets such as gold and U.S. Treasury bonds.

Bitcoin prices rose 1.4% to $8,902. The world's favorite cryptocurrency rose along with other popular cryptocurrencies such as ether. While some might argue, as they have in the past, that Bitcoin is nothing but a fad, millennials have a different take on it when it comes to investing.

Approximately one-third of millennials in the country prefer to hold Bitcoin over shares. Further, a staggering 43% of U.S. millennials stated that they trusted cryptocurrency exchanges more than America's stock exchanges.

A report by Edelman stated that approximately 25% of the country's millennials who earn at least $100,000 in individual or joint income, or own $50,000 worth of investable assets, admitted to either holding or using cryptocurrencies. Further, the report stated that another 31% expressed interest in using them.

Quite unsurprisingly, a shift toward alternative banking options such as PayPal PYPL and Square Cash has also popularized Bitcoin. Tech-savvy millennials, who find the digital way of doing business efficient and normal, choose to take this route.

Bitcoin or Digital Gold?

It is the norm to save for the future by investing in gold and bonds, a lesson that has been passed down through generations. However, technology and money have revolutionized the way investing is done today. Millennials, who vouch for bitcoin democratization, also prefer the world's leading cryptocurrency as a form of investment to save for the future.

A survey revealed that millennials are five times more likely than baby boomers to say that Bitcoin is the best way to save for the future. Such sentiments have only cemented the cryptocurrencys place as the digital gold of the modern world.

Transfer of Wealth from Baby Boomers to Millennials

Now that it has more or less been established that millennials prefer Bitcoin for long-term savings, an increase in millennials' wealth should only prove to be a boon for Bitcoin. A generational shift of wealth from the baby boomers, currently the richest generation in America's history, is set to take place through the 2020s.

After living the American dream and enjoying a long period of economic prosperity, baby boomers are all set to pass on the baton of wealth to the largest generational cohort currently in America, the millennials. It is estimated that millennials will collectively receive approximately $7 trillion from their elders by 2030. This is going to shape the cryptocurrency space over the next decade, not to mention technology.

4 Stocks to Watch Out For

As with any revolutionary technology, the Lindy effect applies to Bitcoin as well. The Lindy effect is a theory which states that the future life expectancy of certain non-perishable things, such as a technology or an idea, is proportional to their current age. This means that every additional period the technology survives implies a longer remaining life expectancy. This only strengthens the argument that Bitcoin is the future.

In this context, we have selected four stocks that are expected to gain from these factors.

Microsoft Corporation MSFT became the first major cloud hosting provider to integrate blockchain into its Azure cloud last year. Furthermore, the company's affinity for Bitcoin is well known: the tech giant has, in the past, also launched a decentralized identity system on Bitcoin.

The company carries a Zacks Rank #1 (Strong Buy) and is based out of Redmond, WA. It has an expected earnings growth rate of 18.72% for the current year. The Zacks Consensus Estimate for the current year has improved 5.4% over the past 60 days.

You can see the complete list of today's Zacks #1 Rank stocks here.

International Business Machines Corporation IBM has been one of the early providers of blockchain technology. Broad-based availability of the IBM Blockchain World Wire, a blockchain-driven global payments network, has driven the company's performance in the past.

The company carries a Zacks Rank #2 (Buy) and is based out of Armonk, NY. It has an expected earnings growth rate of 4.30% for the current year. The Zacks Consensus Estimate for the current year has improved 1.2% over the past 60 days.

PayPal Holdings, Inc. is another bitcoin-related stock to watch. The leader in digital payments has done well recently to strike a deal with three major bitcoin payment processors, BitPay, GoCoin and Coinbase, to help PayPal merchants accept Bitcoin as a mode of payment.

The company carries a Zacks Rank #3 and is based out of San Jose, CA. It has an expected earnings growth rate of 11.35% for the current year. The Zacks Consensus Estimate for the current year has improved 1.4% over the past 60 days.

Social media giant Facebook Inc. FB is another stock worth taking note of. Facebook currently has a Zacks Rank #3. The Zacks Consensus Estimate for its current-quarter earnings has increased 2.7% over the past 60 days. It has an expected earnings growth rate of 44.79% for the current year.



To read this article on Zacks.com click here.

Zacks Investment Research

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Read more from the original source:
Millennials to Drive Bitcoin Higher: 4 Stocks to Watch - Nasdaq

Read More..

Harnessing the benefits of HSCN requires more than just ‘connectivity’ – Open Access Government

It's evident that NHS Digital has a vision for greater collaboration and improved access across health and social care services. However, the onus is on health and social care organisations to ensure they adopt its recommended framework and that their digital journey keeps pace with the rest of the sector.

As part of this, the Health and Social Care Network (HSCN) has been widely adopted by NHS bodies across the UK. Those that have procured HSCN have subsequently been able to move to internet and cloud-based systems and protect themselves against potentially devastating cybersecurity threats.

As one of the building blocks in an organisation's digital ecosystem, HSCN gives health and social care organisations the robust IT infrastructure they need to align with NHS Digital's mission to harness the power of information and technology to improve health and care.

However, to truly reap the benefits of the new network, and deliver the best outcomes for the people they serve, organisations must look beyond the initial procurement stage and maximise everything the network has to offer.

There are three phases of IT implementation that an organisation can follow to ensure it is maximising HSCN. Phase one involves optimising the bandwidth at each site based on the needs of the organisation and cost-efficiency requirements, ensuring that HSCN is supported by a reliable network. With mobile failover and proactive network monitoring too, an organisation will have total peace of mind that a robust system is in place. Networks can also be opened up with Wi-Fi, letting different users access the network in different ways and creating an always-connected environment.

With a reliable network in place, organisations can then explore phase two. This includes exploring hosted voice services for efficient communication between staff. Moving to hosted platforms for voice needs provides clarity, flexibility and cost savings. Organisations should then consider cloud adoption, providing flexible, always-on access to applications and data the moment their workforce needs it. And with direct cloud connections to these platforms, organisations can offer speed, security and reliability when it's needed most.

The next thing to consider is quick, secure and reliable data and application access, via direct, uncontended cloud connections. Organisations can get a safe, flexible hybrid hosting solution using Microsoft Azure, AWS and Google Cloud, alongside their UK-based supplier data centres. Office 365 can then be integrated and leveraged in order to let employees work smarter, anytime, anywhere.

The third and final phase that an organisation can undertake to maximise HSCN post-procurement is to gain more network control with SD-WAN. A 21st century network with performance and service at its heart, SD-WAN offers resilience and security, and is easily configured via the cloud.

HSCN connectivity to the public cloud will also be key. This will enable health and social care workers to securely access public cloud providers (e.g. AWS and Microsoft Azure) to leverage the associated cost and operational benefits. Finally, an organisation can enable a truly mobile workforce by using mobility services that provide the applications and devices employees need to access whatever they require, wherever they are.

Whilst NHS Digital's strategy is still evolving, and the migration to a new network may seem onerous, HSCN represents a major milestone in how health and social care is delivered. Every health and social care organisation stands to benefit from the greater collaboration, standardised networks and simpler access to national systems and applications that HSCN offers. By creating a robust IT infrastructure that combines connectivity, cloud and hosted voice, organisations can deliver the best possible care to those that need it.


Here is the original post:
Harnessing the benefits of HSCN requires more than just 'connectivity' - Open Access Government

Read More..

Why CFOs are turning to hosting as they move to in-house accounts – Financial Director

So, you've realised the benefits of running accounts in-house rather than outsourcing.

You want to make the move to improve visibility and control, and to save your organisation significantly on costs.

And you've seen the great benefits that IRIS accountancy software can bring to your employer.

But you've then discovered it can take many months to set up and deploy the software because of internal security and privacy procedures put in place by your IT department.

Some organisations can have very stringent IT operations and, if they are taking on new software only relevant to the finance department, it can be very difficult to get it set up and brought on board.

Some CFOs have reported having to go through a conference call involving 30 senior managers to assess whether a software provider passes numerous tests before they can get approval. These issues can relate to budget, security, infrastructure or privacy.

But if the decision can be taken outside the IT operation, sometimes it can be easier to push through without having to go through that lengthy and frustrating approval process.

There is a way around this. We do it through IRIS Hosting: essentially, putting your finance department and our accountancy software in the cloud. By doing so, you isolate your finance team's IT from the wider organisation's network. Many businesses that have already adopted it have found this means avoiding the usual hold-ups in the IT approval process.

You get a fully virtualised, secure and continually updated IT environment with an independent working platform for your accountancy software.

And, in terms of approvals, those already with IRIS, or about to partner with us, benefit from a single-supplier relationship. The due diligence and checks have already been carried out on IRIS, making the move to Hosting even easier.

There are many other good reasons that CFOs might want to consider choosing IRIS Hosting.

Firstly, there's value for money. Hosted assets can be scaled up and down quickly against an agreed cost structure, so IT spend can be more closely aligned with growth, and Finance Directors can keep related costs in sync with business levels.

Costs for IT security are also kept in check. Keeping IT up to date is expensive, and the costs of a breach can be severe, but with Hosting you get highly secure protection and continual updates without having to take any action yourself.

Furthermore, many customers say that the experience of using a Hosted system is more satisfying because they always have the latest versions and the latest functions.

There are many further advantages that IRIS Hosting can provide your corporation.

Want to gain the benefits from IRIS Hosting for your finance department? We can help you understand more and discuss options for getting your organisation set up.

Give us a call on 0344 844 9644 or email [emailprotected].

Excerpt from:
Why CFOs are turning to hosting as they move to in-house accounts - Financial Director

Read More..

The Cloud Snooper malware that sneaks into your Linux servers – Naked Security

SophosLabs has just published a detailed report about a malware attack dubbed Cloud Snooper.

The reason for the name is not so much that the attack is cloud-specific (the technique could be used against pretty much any server, wherever it's hosted), but that it's a sneaky way for cybercrooks to open up your server to the cloud, in ways you very definitely don't want, from the inside out.

The Cloud Snooper report covers a whole raft of related malware samples that our researchers found deployed in combination.

It's a fascinating and highly recommended read if you're responsible for running servers that are supposed to be both secure and yet accessible from the outside world: for example, websites, blogs, community forums, upload sites, file repositories, mail servers, jump hosts and so forth.

In this article, we're going to focus on just one of the components in the Cloud Snooper menagerie, because it's an excellent reminder of how devious crooks can be, and how sneakily they can stay hidden once they're inside your network in the first place.

If you've already downloaded the report, or have it open in another window, the component we're going to be talking about here is the file called snd_floppy.

That's a Linux kernel driver used by the Cloud Snooper crooks so that they can send command-and-control instructions right into your network, hidden in plain sight.

If you've heard of steganography, which is where you hide snippets of data in otherwise innocent-looking files such as videos or images, where a few noise pixels won't attract any attention, then this is a similar sort of thing, but for network traffic.

As we say in the steganography video that we linked to in the previous paragraph:

You don't try to scramble the message so nobody can read it, so much as deliver the message in a way that no one even realises you've sent a message in the first place.

The jargon term for the trick that the snd_floppy driver uses is in-band signalling, which is where you use unexceptionable but unusual data patterns in regular network traffic to denote something special.

Readers whose IT careers date back to the modem era will remember, probably unfondly, that many modems would helpfully interpret three plus signs (+++) at any point in the incoming data as a signal to switch into command mode, so that the characters that came next would be sent to the modem itself, not to the user.

So if you were downloading a text file with the characters HELLO+HOWDY in it, you'd receive all those characters, as expected.

But if the joker at the other end deliberately sent HELLO+++ATH0 instead, you would receive the text HELLO, but the modem would receive the text ATH0, which is the command to hang up the phone, and so HELLO would be the last thing you'd see before the line went dead.
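To make the idea concrete, here is a toy sketch of our own (not from the report) of that modem-style in-band signalling: data and commands share one stream, and a magic byte sequence flips between them.

```python
def split_inband(stream: str, escape: str = "+++"):
    """Toy in-band signalling: everything before the escape sequence is
    ordinary user data; everything after it is treated as a command
    aimed at the device itself."""
    idx = stream.find(escape)
    if idx == -1:
        return stream, None          # no escape: it's all user data
    return stream[:idx], stream[idx + len(escape):]

# The article's own example: an innocent file versus a booby-trapped one.
print(split_inband("HELLO+HOWDY"))   # ('HELLO+HOWDY', None)
print(split_inband("HELLO+++ATH0"))  # ('HELLO', 'ATH0')
```

The receiver never asked for a command channel; the mere shape of the data created one, which is precisely the property the malware exploits at the network layer.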

This malware uses a similar, but undocumented and unexpected, approach to embedding control information in regular-looking data.

The crooks can therefore hide commands where you simply wouldn't think to watch for them, or know what to watch for anyway.

In case you're wondering, there isn't a legitimate Linux driver called snd_floppy, but it's a sneakily chosen name, because a typical Linux system ships plenty of genuine audio drivers with names of the form snd_somethingorother.

In real life, the bogus snd_floppy driver has nothing to do with floppy disks, emulated or real, and nothing to do with sound or audio support.

What snd_floppy does is to monitor innocent-looking network traffic to look for in-band characteristics that act as secret signals.

There are lots of things that sniffer-triggered malware like this could look out for: slightly weird HTTP headers, for instance, or web requests of a very specific or unusual size, or emails with an unlikely but not-too-weird name in the MAIL FROM: line.

But snd_floppy has a much simpler and lower-level trick than that: it uses what's called the network source port for its sneaky in-band signals.

You're probably familiar with TCP destination ports: they're effectively service identifiers that you use along with an IP address to denote the specific program you want to connect to on the server of your choice.

When you make an HTTP connection, for example, it's usually sent to port 80, or 443 if it's HTTPS, on the server you're reaching out to, denoted in full as http://example.com:80 or https://example.com:443. (The numbers are typically omitted whenever the standard port is used.)

Because TCP supports multiple port numbers on every server, you can run multiple services at the same time on the same server: the IP address alone is like a street name, with the port number denoting the specific house you want to visit.

But every TCP packet also has a source port, which is set by the other end when it sends the packet, so that traffic coming back can be tracked and routed correctly, too.

Now, the destination port is almost always chosen to select a well-known service, which means that everyone sticks to a standard set of numbers: 80 for HTTP and 443 for HTTPS, as mentioned above, or 22 for SSH, 25 for email, and so on.

But TCP source ports only need to be unique for each outbound connection, so most programmers simply let the operating system choose a port number for them, known in the jargon as an ephemeral port.

Ports are 16-bit numbers, so they can vary from 1 to 65535; ephemeral ports are usually chosen (randomly or in sequence, wrapping around back to the start after the end of their range) from the set 49152 to 65535.

Windows and the BSD-based operating systems use this range; Linux does it slightly differently, usually starting at 32768 instead. You can check the range in use on your Linux system by reading /proc/sys/net/ipv4/ip_local_port_range.

On our Linux system, for example, ephemeral (also known as dynamic) ports vary between 32768 and 60999.
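A quick check, runnable on any Linux box (the exact numbers will vary by distribution and configuration):

```python
# Read the ephemeral ("dynamic") source-port range the Linux kernel
# assigns to outbound connections.
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())

print(low, high)  # commonly 32768 60999 on modern Linux
```

Anything below that lower bound is, by convention, never chosen automatically, which is exactly the gap the attackers' signalling scheme lives in.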

But there are no rules to say you can't choose numbers outside the ephemeral range, and most firewalls and computers will accept any legal source port on incoming traffic, because it is, after all, legal traffic.

You can see where this is going.

The devious driver snd_floppy uses the usually unimportant numeric value of the TCP source port to recognise secret signals that have come in from outside the firewall.

The source port, just 16 pesky bits in the entire packet, is what sneaks the message in through the firewall, whereupon snd_floppy will perform one of several secret functions selected by the specific port number.
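To see how little effort it takes to plant a chosen value in the source port, here is a minimal, self-contained sketch of our own (not the malware's code): a client binds an explicit low source port before connecting, and a loopback server reads that number straight off the incoming connection. The value 6060 is an arbitrary illustration, not one of Cloud Snooper's actual trigger ports.

```python
import socket
import threading

SIGNAL_PORT = 6060  # arbitrary illustrative value below the ephemeral range

seen = []

# A throwaway loopback "server" that records the peer's source port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # any free destination port
srv.listen(1)

def accept_once():
    conn, (peer_host, peer_port) = srv.accept()
    seen.append(peer_port)          # the 16 bits the "signal" rides in
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

# The "client" stamps its chosen source port by binding before connect().
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
cli.bind(("127.0.0.1", SIGNAL_PORT))
cli.connect(srv.getsockname())
t.join()
cli.close()
srv.close()

print(seen[0])  # 6060: the message arrived in the source port alone
```

A firewall that inspects only destination ports and payloads would wave this connection through without a second glance, which is exactly the blind spot the malware exploits.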

Sure, the crooks are taking a small risk that traffic that wasn't specially crafted by them might accidentally trigger one of their secret functions, which could get in the way of their attack.

But most of the time it won't, because the crooks use source port numbers below 10000, while conventional software and most modern operating systems stick to source port numbers of 32768 and above.

For details of the port numbers used and what they are for, please see the full Cloud Snooper report.

As suggested above, there is a small chance that source port filtering of this sort might block some legitimate traffic, because it's not illegal, merely unusual, to use source port numbers below 32768.

Also, the crooks could easily change the secret numbers in future variants of the malware, so this would be a temporary measure only.

There are five TCP source port numbers that the driver watches out for, and one UDP source port number. Ironically, leaving just TCP source port 9999 unblocked would allow any kill payload commands to get through, thus allowing the crooks to stop the malware but not to start it up again.

Using real-time anti-virus protection, even on your servers, will help you to spot and stop dangerous files of many types, including rogue kernel drivers, unwanted userland programs and malicious scripts.

Crooks need administrator-level access to your network to load their own kernel drivers, which means that by the time you are vulnerable to an attack like Cloud Snooper, the crooks are potentially in control of everything anyway.

Many network-level attacks where criminals need root or admin powers are made possible because the crooks find their way in through a legitimate remote access portal that wasn't properly secured.

Yes, crooks who already have root powers can tamper with your logging configuration, and even with the logs themselves, making it harder to spot malicious activity.

But it's rare that crooks are able to take over your servers without leaving some trace of their actions, such as log entries showing unauthorised or unexpected kernel drivers being activated.

The only thing worse than being hacked is realising, after you've been hacked, that you could have spotted the attack before it unfolded, if only you'd taken the time to look.

Read more:
The Cloud Snooper malware that sneaks into your Linux servers - Naked Security

Read More..

Cloud Security Risks Will Be a Top Concern for Organizations in 2020 – Security Magazine

Cloud Security Risks Will Be a Top Concern for Organizations in 2020 | 2020-02-28 | Security Magazine

See the rest here:
Cloud Security Risks Will Be a Top Concern for Organizations in 2020 - Security Magazine

Read More..

Cloud Snooper firewall bypass may be work of nation state – ComputerWeekly.com

Next-gen security specialist Sophos has revealed details of a sophisticated new attack known as Cloud Snooper, which enables malware on servers to communicate freely with its command and control (C2) servers through its victims' firewalls, and which may have been developed by a nation state actor.

The attack technique was uncovered by SophosLabs threat research manager Sergei Shevchenko while investigating a malware infection of some AWS-hosted cloud servers. However, it is not an AWS-specific attack; rather, it represents a method of piggybacking C2 traffic on legitimate traffic to get past firewalls and exfiltrate data.

Cloud Snooper uses three main tactics, techniques and procedures (TTPs) in tandem: a rootkit to circumvent firewalls; a rare technique to gain access to servers while disguised as legitimate traffic (essentially a wolf in sheep's clothing); and a backdoor payload that shares the malicious code between both Windows and Linux systems. Each of these elements has been seen before, but never yet all at once.

"This is the first time we have seen an attack formula that combines a bypassing technique with a multi-platform payload targeting both Windows and Linux systems," said Shevchenko.

IT security teams and network administrators need to be diligent about patching all external-facing services to prevent attackers from evading cloud and firewall security policies.

IT security teams also need to protect against multi-platform attacks. Until now, Windows-based assets have been the typical target, but attackers are more frequently considering Linux systems because cloud services have become popular hunting grounds. It's a matter of time before more cyber criminals adopt these techniques.

Shevchenko said that the complexity of the attack and the use of a bespoke advanced persistent threat (APT) toolkit strongly suggest that the malware and its operators are highly advanced and possibly backed by a nation state actor.

He added that it was possible, indeed highly likely, that this specific package of TTPs would trickle down to the lower rungs of the cyber criminal hierarchy, and eventually form a blueprint for widespread firewall bypass attacks.

"This case is extremely interesting as it demonstrates the true multi-platform nature of a modern attack," said Shevchenko.

"A well-financed, competent, determined attacker will be unlikely ever to be restricted by the boundaries imposed by different platforms; building a unified server infrastructure that serves various agents working on different platforms makes perfect sense," he added.

Shevchenko said that in terms of prevention against this or similar attacks, while AWS Security Groups (SGs) provide a robust boundary firewall for EC2 instances, this does not in and of itself remove the need for network admins to fully patch all their outward-facing services.

He added that the default installation for the SSH server also needs extra steps to harden it, turning it into a rock-solid communication daemon.
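The article doesn't prescribe a specific configuration, but a typical first pass at hardening an SSH server touches directives like these in sshd_config (an illustrative fragment; the whitelisted account name is hypothetical):

```
# Illustrative sshd_config hardening fragment
PermitRootLogin no            # never allow direct root logins
PasswordAuthentication no     # keys only; no guessable passwords
PubkeyAuthentication yes
MaxAuthTries 3                # slow down brute-force attempts
AllowUsers deploy             # hypothetical: whitelist named accounts only
```

Restricting logins to key-based authentication for a short list of named accounts removes the most common footholds an attacker needs before they can load something like a rogue kernel driver.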

Sophos shared a number of steps proactive admins should take. These include creating a full inventory of all network-connected devices and keeping their security software updated; fully patching outward-facing services above and beyond what Amazon or your cloud service of choice might provide; checking and double-checking all cloud configurations; and enabling multi-factor authentication on security dashboards or control panels to stop attackers disabling your defences, or at least to make it harder for them to do so.

Continued here:
Cloud Snooper firewall bypass may be work of nation state - ComputerWeekly.com

Read More..

Windows 10: Containers are the future, and here’s what you need to know – TechRepublic

With two use cases for its containers, and five different container models, it would seem that Microsoft's container strategy is ripe for confusion. But that's not the case.

Microsoft offers many different container models on Windows. If you're running Windows 10 you're running several without even realising it: wrapping and isolating all your UWP apps; using thin virtual machines to deliver security; and, if you're a developer, either Windows or Linux Docker instances.

That layered container model is key to the future of Windows -- one that reaches into the upcoming Windows 10X and out into the wider world of public and private clouds, with Docker Windows containers now officially part of Kubernetes. Microsoft is working on shrinking Windows Server to produce lightweight container base images with a more capable Windows.

While the desktop containers are intended to both simplify and secure your desktop applications, providing much-needed isolation for apps installed via appx or MSIX (and in Windows 10X for any other Win32 code), Windows 10's containers are based on Windows' own process isolation technology. It's not the familiar Docker model that we find in our cloud-hosted enterprise applications.

That's not to say Windows 10 can't run Docker containers. Microsoft is using Docker's services to underpin its Windows Server containers. You can build and test code running inside them on Windows PCs, running either Pro or Enterprise builds, and the upcoming 2004 release of Windows 10 brings WSL2 and support for Linux containers running on Windows.

Docker has been developing a new version of its Docker Desktop tools for Windows around WSL2, making it as easy to develop and test Linux containers on Windows 10 as it is to work with Windows' own containers. With Microsoft positioning Windows as a development platform for Kubernetes and other cloud platforms, first-class Docker support on Windows PCs is essential.

It's not only Linux containers in the cloud. Windows containers have a place too, hosting .NET and other Windows platforms. Instead of deploying SQL Server or another Windows server application in your cloud services, you can install it in a container and quickly deploy the code as part of a DevOps CI/CD deployment. Modern DevOps treats infrastructures (especially virtual infrastructures) as the end state of a build, so treating component applications in containers as one of many different types of build artifact makes a lot of sense.

What's important here is not the application, but how it's orchestrated and managed. That's where Kubernetes comes in, along with Red Hat's OpenShift Kubernetes service. Recent releases have added support for Windows containers alongside Linux, managing both from the same controller.

While both OpenShift and Kubernetes now support Windows containers, they're not actually running Windows containers on Linux hosts. There's no practical reason why they couldn't use a technique similar to the one Docker uses to run Linux containers on Windows; however, Windows Server's relatively strict licensing conditions would require a Windows licence for each virtual machine instance hosting the Windows containers.

Using Windows containers in Kubernetes means building a hybrid infrastructure that mixes Linux and Windows hosts, with Windows containers running on Windows Server-powered worker nodes. Using tools like OpenShift or the Azure Kubernetes Service automates the placement of code on those workers, managing a cross-OS cluster for your application. .NET code can be lifted into a Windows Docker container and deployed via the Azure Container Registry. You can manage those nodes from the same controller as your Linux nodes.
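In practice, keeping Windows containers on Windows Server worker nodes in such a hybrid cluster comes down to node selection on the standard `kubernetes.io/os` label. A minimal pod spec fragment might look like this (the pod name, registry and image are hypothetical):

```yaml
# Pin a Windows container to Windows Server worker nodes in a mixed cluster
apiVersion: v1
kind: Pod
metadata:
  name: dotnet-api            # hypothetical workload name
spec:
  nodeSelector:
    kubernetes.io/os: windows # scheduler only considers Windows nodes
  containers:
  - name: dotnet-api
    image: myregistry.azurecr.io/dotnet-api:latest  # hypothetical image
```

Linux pods need no change; without the selector (or with `kubernetes.io/os: linux`) they land on the Linux workers as usual.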

SEE: Serverless computing: A guide for IT leaders (TechRepublic Premium)

There's no need to learn anything new if you're coming to Windows containers from Linux. You're using familiar Docker tools to build and manage your container images, and then the same Kubernetes tooling as you'd use for a pure Linux application. Mixing and matching Windows and Linux microservices in a single application allows you to take advantage of OS-specific features and to keep the expertise of existing developer teams, even as you're switching from a traditional monolithic application environment to a modern distributed system.

Microsoft is building a suite of open-source tools to help manage Windows containers, with a GitHub repository for the first one, a logging tool. Improving logging makes sense for a distributed application, where multiple containers interact under the control of Kubernetes operators.
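Microsoft's tool aside, the underlying idea of per-container structured logging can be sketched in a few lines. Python is used here purely for illustration, and the container names are made up; the point is that each container tags its own log lines so a cluster-level collector can aggregate them:

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, tagged with the container name,
    so a cluster-level collector can group entries across containers."""

    def __init__(self, container):
        super().__init__()
        self.container = container

    def format(self, record):
        return json.dumps({
            "container": self.container,
            "level": record.levelname,
            "message": record.getMessage(),
        })


def make_logger(container):
    """Build a stdout logger for one container's worth of output."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter(container))
    logger = logging.getLogger(container)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

Real container log pipelines add timestamps, pod metadata and shipping to a collector, but the tagging step is the part that makes multi-container logs searchable.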

Outside of Kubernetes, Windows containers on Windows Server have two different isolation modes. The first, process isolation, is similar to that used by Linux containers, running multiple images on a host OS, using the same kernel for all the images and the host. Namespaces keep the processes isolated, managing resources appropriately. It's an approach that's best used when you know what all the processes running on a server are, ensuring that there's no risk of information leaking between different container images. The small security risk that comes with a shared kernel is why Microsoft offers a more secure alternative: isolated containers.

Under the hood of Windows Server's isolated containers is, of course, Hyper-V. Microsoft has been using it to improve the isolation of Docker containers on Windows, using a thin OS layer running on top of Hyper-V to host a Docker container image, keeping performance while ensuring that containers remain fully isolated. While each container is technically a virtual machine with its own kernel, they're optimised for running container images. Using virtualization in this way adds a layer of hardware isolation between container images, making it harder for information to leak between them and giving you a platform that can host multiple tenant images for you.

It's easy enough to make and run a Hyper-V container: just set the isolation parameter in the Docker command line to 'hyperv', which launches the container using virtualisation to protect it. The default on desktop PCs is to use Hyper-V isolation; on servers it's Docker's standard process isolation. As a result, you may prefer to force Hyper-V containers on your Windows Server container hosts.
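Concretely, switching modes is a single flag on the Docker command line. A sketch, assuming a Windows host with the Server Core base image available (the image tag shown matches Windows Server 2019; use the tag for your host version):

```shell
# Run a Windows container under Hyper-V isolation (own kernel per container)
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

# Explicitly request shared-kernel process isolation instead
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver
```

Note that Windows container base image versions must be compatible with the host kernel under process isolation; Hyper-V isolation relaxes that constraint because each container brings its own kernel.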

Microsoft has been working hard to reduce the size of the Hyper-V server image that's used for Windows containers. It's gone down from nearly 5GB with Windows Server 1809 and 1903, to half the size at 2.46GB in the upcoming 2004 release. And that's Windows Server Core, not Nano! Building on Windows Server Core makes sense as it has a larger API surface, reducing the risk of application incompatibility.

With two use cases for its containers, and five different container models, it would seem that Microsoft's container strategy is ripe for confusion. But that's not the case. Windows' own application isolation technologies are managed automatically by the installer, so all you need to consider is whether your server applications run using process isolation or in Hyper-V. And that's a decision best made by whether you're running your applications on your own servers in your own data centre, or in the public cloud.


See more here:
Windows 10: Containers are the future, and here's what you need to know - TechRepublic

Read More..

Arm-based AI Inference Edge Server Takes on GPU Price/Performance – EnterpriseAI


Edge computing specialist SolidRun and ASIC solutions company Gyrfalcon Technology this week announced an Arm-based AI inference edge server that the companies say outperforms GPU performance for less cost and power consumption.

The server, called the Janux GS31, can be configured with up to 128 Gyrfalcon Lightspeeur SPR2803S neural accelerator chips, delivering a maximum of 24 TOPS per watt, outperforming SoC- and GPU-based systems by orders of magnitude, while using a fraction of the energy required by systems with equivalent computational power, the companies said in a joint announcement. The hardware supports low latency decoding and video analytics of up to 128 channels of 1080p/60Hz video designed for such edge AI use cases as monitoring smart cities and infrastructure, intelligent enterprise/industrial video surveillance applications and tagging photos and videos for text-based searching.

"AI is rapidly moving to the edge of the network to address the performance and security needs of many applications, said Jim McGregor, founder and principal analyst, Tirias Research. As a result, new networks will drive increasing demand for processing performance and efficiency. The SolidRun platform, leveraging the GTI AI acceleration technology, will provide a powerful and efficient way to build a new intelligent network bridging the gap between devices and the cloud."

Milpitas, CA-based Gyrfalcon bills itself as a developer of high-performance AI accelerators that use low power small-sized chips. SolidRun is an Israeli Arm and x86 computing and network technology company focused on AI edge deployment and 5G.

"Powerful, new AI models are being brought to market every minute, and demand for AI inference solutions to deploy these AI models is growing massively," said Dr. Atai Ziv, CEO at SolidRun. "While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability."


Go here to see the original:
Arm-based AI Inference Edge Server Takes on GPU Price/Performance - EnterpriseAI

Read More..