
Bitcoin's Bounce Bumped Into The Resistance – Yahoo Finance

In June and October this area served as significant support, and it may now become serious resistance. Meanwhile, trading volumes have declined to $20bn during the move higher, down from $45bn at the beginning of the week, which calls into question the short-term outlook for the benchmark cryptocurrency and points to a wait-and-see approach among investors.

Judging by the record statistics from Bakkt's platform on Wednesday, institutional investors are showing much greater determination. The company's daily volume reached a record $42.5mln. After an extremely disappointing start, the current figures may indicate a gradual increase in demand from institutional investors. Even more positive is the fact that the sharp rise in Bakkt's volume coincided with the rebound of the Bitcoin price from $7K to $7.5K.

Everyone knows that the crypto market is relatively small compared to the traditional sector, which makes it a much easier target for manipulation. The collapse at the beginning of the working week may stem from the fact that hedge funds on CME hold $50 mln in short positions on Bitcoin. That is quite enough to amplify the adverse reaction of a market already affected by the actions of the Chinese authorities. In addition, the miners' exodus helped to form a perfect storm.

Some analysts call the current crypto market rally a dead cat bounce, treating it as nothing more than a bounce within the descending channel. Some media outlets have also suggested that the crypto winter never ended at all, which provoked a fierce reaction from the crypto community, which is not famous for its tolerance of opposing views and supports only the growth theory.

Nevertheless, there is a relatively large camp of experts who still believe that the value of Bitcoin is zero. Regulators have effectively shut down all massive projects based on blockchain or any other monetary surrogate. There is growing confidence that cryptocurrency will not be allowed to become mainstream unless it is the right state-owned cryptocurrency.

Ceteris Paribus has estimated that the current cost of producing one bitcoin is $7,100. Earlier average estimates of around $4,300 have been called incorrect, as they did not take equipment depreciation into account. That is a lousy situation for small miners, who may gradually leave the market, strengthening consolidation in the industry.
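To see why depreciation moves the number so much, here is a back-of-the-envelope sketch of the arithmetic. Every input is an illustrative assumption chosen to land near the cited figures; none of it is Ceteris Paribus's actual data.

```python
# Back-of-the-envelope cost of mining one bitcoin, with and without
# hardware depreciation. All inputs are assumptions for illustration,
# not Ceteris Paribus data.
power_kw = 3.25          # assumed rig power draw, kW
electricity = 0.05       # assumed electricity price, $/kWh
hours_per_btc = 26_000   # assumed rig-hours to mine 1 BTC via a pool
rig_price = 2_000.0      # assumed hardware cost, $
rig_life_hours = 18_000  # assumed useful life before obsolescence

energy_cost = power_kw * electricity * hours_per_btc
depreciation = rig_price / rig_life_hours * hours_per_btc

print(f"energy only:       ${energy_cost:,.0f} per BTC")
print(f"with depreciation: ${energy_cost + depreciation:,.0f} per BTC")
# -> roughly $4,225 vs $7,114: ignoring depreciation understates the
#    true production cost by about 40%.
```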

This article was written by FxPro

This article was originally posted on FX Empire


Startup jobs of the week: Marketing Communications Specialist, Oracle Architect, Machine Learning Scientist – BetaKit

Every day, Canada's tech startups post their latest and greatest job opportunities on Jobs.BetaKit, powered by Jobbio. From early-stage to Series B and beyond, Jobs.BetaKit helps startups from all over the country hire Canada's top tech talent.

Each week, BetaKit will highlight a selection of the job roles posted to Jobs.BetaKit. If you're a candidate looking for a position at a Canadian tech startup, survey the selection below or view all the posted positions here. For companies in need of top candidates, scroll to the bottom of this post to learn how to get your roles posted to Jobs.BetaKit!

TalentMinded – Controller

The company's client, a leader in its industry, is adding a Controller to its global finance organization. Reporting to the VP of Finance, the Controller is an individual who takes initiative, capitalizes on opportunities, and wants to build and maintain processes and guidelines. This is a role for someone who wants to manage relationships with external investors, take a hands-on approach to reporting and analysis, and oversee the milestones and monthly activities while leaving the day-to-day with their team. The client is seeking an insightful influencer who can inspire, find solutions, and motivate resources toward goals; someone who understands this is an active individual-contributor role in a dynamic environment working closely with the VP of Finance.

iNTERFACEWARE – Marketing Communications Specialist

You are a Marketing Coordinator looking for a chance to step up, or you are currently a Marketing Communications Specialist looking to move to a company with purpose. You have B2B experience from working in a corporate environment, PR firm, or marketing agency. You are looking for a new challenge at a software company building technology with a purpose, occupying a powerful market niche; a place that needs your creative writing skills to further propel it along an impressive growth trajectory.

Sensibill – Demand Generation Specialist

This role will create, test, deploy and measure demand gen campaigns and programs to acquire and nurture sales qualified leads that will contribute to revenue growth for Sensibill. The Demand Generation Specialist is responsible for marketing campaign elements including but not limited to: email blasts, email nurture campaigns, automation workflows, webinar campaigns, paid media, landing page and website optimization, A/B testing, SEO, list segmentation and data cleanliness. The role requires strategic thinking and a data-driven mindset to create campaigns that convert and add to the bottom line.

Chisel AI – Machine Learning Scientist

Reporting to the Data Science Lead, you will collaborate with a team of Machine Learning Scientists to explore and understand the latest in AI, NLP, and ML, working together to implement best practices and further iterate on the company's core AI information extraction competency. This is a unique opportunity to get in at the ground level as the company scales; to define AI models for the entire organization and to revolutionize an industry with data that's actionable.

ventureLAB – Director, Business Development

Reporting to the Vice President, Partnerships, this role is responsible for the development and end-to-end delivery of a robust private sector, industry pipeline and revenue stream. The successful candidate will possess a hunter mentality, and have the capability to switch seamlessly between strategic thinking, planning and tactical work. They will also be a strong communicator with the ability to clearly articulate the ventureLAB value proposition to prospective partners, and strategically align partnership opportunities.

CBC/Radio-Canada – Android Developer, Digital Audio

You are an Android Developer looking to apply and build upon your skills, creating end-to-end user-facing products with an emphasis on discoverability, engagement, and personalization. You understand the importance of accessibility and know what it takes to meet the needs of all users. You are adaptable and willing to jump into different areas, features, and products; wherever you are needed, you are happy to contribute. Whether you are currently working in a startup, the corporate world, or somewhere in between, you want to be part of a fun team, engaged in a continuous learning culture, where you can take on new challenges and be a significant contributor to engaging a national audience.

HiMama – Marketing Operations Specialist

Are you an analytical, technical, and metrics-obsessed marketing ops professional? Have you helped scale tech stacks for marketing teams, implemented rigorous reporting frameworks, and built funnels from scratch? HiMama is looking for amazing people like you to join their RevOps team and act as a key business partner to our Marketing team!

Vidyard – Director of Growth and Digital

As the Director of Growth and Digital, you are responsible for defining and executing Vidyard's growth and digital strategy. You thrive working at the intersection of product, marketing, and operations to build and maintain the industry's best web experience, as well as to define and execute high-velocity experiments, both on web properties and within the product, to achieve business growth objectives. The successful candidate will be curious by nature, not afraid to push the limits of the possible, and an advocate for the benefits of working within high-performing cross-functional teams to drive results.

Motion – Mobility & Accessibility Consultant

Right now, Motion is adding a Mobility & Accessibility Consultant to its team in Comox, and is looking for someone who shares its passion for enriching the lives of the clients they work with. They need a compassionate and innovative self-starter looking to make an impact. You are an extraordinary salesperson. People trust you. You seek opportunities to enhance the lives of your clients and find satisfaction in solving their needs. You want to be part of a team: a small, tight-knit community that's more like a family, but with the backing, support, and reputation of a recognized national brand.

Viziya – Oracle Architect

VIZIYA is a quickly expanding software company that thrives on innovation to solve challenges. VIZIYA needs an Oracle Architect Consultant to join its team. As the Oracle Architect, you will plan, direct, coordinate, and implement Oracle software applications. You have an M.S. or a Bachelor's degree (or equivalent) plus five years of experience in Computer Science, Engineering, or a related field, along with hands-on experience with Oracle SQL*Plus, PL/SQL, RDBMS (11g), XML Publisher, and others.

BetaKit and Jobbio have joined forces to create a digital careers marketplace targeting BetaKit's 1.8 million annual visitors.

For tech companies, that means access to an incredible audience of engaged passive and active candidates. Powered by Jobbio's innovative technology, your jobs are delivered directly to a targeted audience of Canadian tech professionals.

Follow this link to post your job on Jobs.BetaKit today.


Arm servers attack cloud and HPC – here’s why enterprises should care – Diginomica

[Image: NVIDIA Arm reference server]

The engines powering the modern data center have long been made by Intel. However, the increasing diversity of workloads, coupled with this decade's secular deceleration of x86 performance improvements, has prompted processor designers, application developers, and infrastructure operators to look for alternatives.

While Intel won't lose its dominant position in data centers anytime soon, several trends eroding that position have been on full display recently, including:

These trends have been percolating for several years, mostly below the attention of enterprise technology executives. However, rapidly maturing technology and a relentless pursuit of higher performance by hyperscale cloud builders and high-performance computing (HPC) users have incubated an environment favorable to non-Intel alternatives. Recent events illustrate the rapid architectural changes within the AI and HPC communities that have longer-term implications for the average enterprise.

The SC supercomputing conferences were once niche events tailored to and dominated by researchers in government labs, academia, and HPC vendors seeking to score benchmarking victories with their latest products. While the target workloads haven't changed (namely, numerical simulations for fundamental scientific research), they have been supplemented by practical applications of HPC computational techniques and distributed systems to problems in numerous industries, such as resource extraction, social networks, online marketing, cybersecurity, and manufacturing.

Expanding the applicability of HPC to new industries and problems has created an environment that fosters tremendous innovation in areas like processor architecture, workload-specific hardware acceleration, distributed software management, and application development frameworks and libraries. Thus, a conference once dominated by Cray and, later, by custom distributed systems commissioned by government labs is being disrupted by the likes of NVIDIA, Arm, and the cloud vendors.

As is often the case, NVIDIA and its charismatic founder and CEO Jensen Huang (aka the "World's Top CEO") are leading the innovative changes, making several significant announcements at SC19. In sum, they show a company that is one of the catalysts of this decade's AI renaissance, fostering greater hardware diversity with workload-optimized system designs that substitute Arm processors for traditional x86 CPUs. Specifically, NVIDIA announced:

There is a renaissance in high-performance computing. Breakthroughs in machine learning and AI are redefining scientific methods and enabling exciting opportunities for new architectures. Bringing NVIDIA GPUs to Arm opens the floodgates for innovators to create systems for growing new applications, from hyperscale cloud to exascale supercomputing and beyond.

While the SC19 announcements focus on the HPC market and related applications, NVIDIA's latest moves are indicative of broader changes reshaping data center computing and application development that will eventually benefit mainstream enterprises. They also indicate growing acceptance of Arm as a data center platform and come amid other evidence of significant improvements in Arm server technology. Some examples include:

The newly energized market for data center Arm SoCs and systems would be of only passing interest to enterprise IT leaders if not for the existence of cloud services that interpose an API-centric abstraction layer between the developer/user and the hardware implementation. Few organizations have the stomach for an architectural shift as fundamental to their enterprise software as changing processor platforms, even if it means saving money and hopping onto a steeper performance growth curve.

Admittedly, most initial cloud offerings are IaaS compute instances that still expose the user to the processor's architectural differences. Even here, however, growing support for the Arm platform by Linux distros and by software libraries and tools like NVIDIA CUDA-X eliminates major roadblocks for developers and IT operators. That said, given how secretive cloud operators like AWS and Google are about their internal workings, we have no idea how many services are already delivered from non-Intel hardware, or whether new service features and performance gains result from a willingness to deploy hardware customized to the task. Chances are they often do, particularly given comments from some of the Arm vendors like Ampere.

The real significance to enterprises of the news summarized here will come over the long term, as vigorous hardware competition fuels the proliferation of hardware (GPUs, TPUs, FPGAs) designed for particular workloads, meaning that the average business can access features and supercomputer performance levels once limited to the HPC priesthood in massive research labs. Indeed, the scale of what Microsoft can provide as a service is mind-boggling. Our friends at The Next Platform estimate that a maxed-out NDv2 cluster delivers 5.36 petaflops of floating-point performance, which would rank number 40 on the Top 500 list of the world's supercomputers. All for a mere $2,661 per hour, versus the millions it would take to buy and operate such a beast. While an extreme example, it illustrates the tremendous democratizing force of combining cloud services with hardware competition.
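To put that hourly rate in perspective, here is a quick rent-versus-buy calculation; the purchase price is an assumed stand-in for the "millions" mentioned above, not a quoted figure.

```python
# Rent-vs-buy arithmetic for a maxed-out NDv2 cluster (5.36 petaflops).
# The hourly rate is from the article; the purchase price is an assumption.
rental_per_hour = 2_661          # $/hour, per The Next Platform estimate
assumed_purchase_price = 5e6     # illustrative "millions" price tag, $

breakeven_hours = assumed_purchase_price / rental_per_hour
print(f"{breakeven_hours:,.0f} hours (~{breakeven_hours / 24:.0f} days) of "
      "nonstop rental before buying would have been cheaper")
# -> 1,879 hours (~78 days), and that ignores the power, cooling and
#    staff costs of operating your own machine.
```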

Most organizations can't use a Top 500 supercomputer, but they do have computationally intensive problems that could yield significant new business insights, problems that can only be tackled on a new generation of AI- and HPC-optimized hardware: systems and services that can now be rented as needed. The combination of Arm servers, GPUs and other accelerators, and cloud services allows enterprise leaders to unleash their creativity on previously intractable business problems.


Recognizing the Right Stuff in Cloud Security – eWeek

A plethora of cybersecurity breaches at both public and private sector entities over the past few months shows not only how far the industry has to go to shore up its defenses, but how much more sophisticated attackers have become, and how much more sophisticated they will likely get.

Many businesses have now concluded (correctly, most experts believe) that cloud computing offers a greater level of security than on-premises data centers managed by the businesses themselves. After all, cybersecurity is not core to what most companies do (and even those who live by the mantra that every company is a technology company didn't sign up for the arduous, tedious, and thankless work of securing data); even having the right tools and technology isn't enough, given that great talent is in such short supply.

Cloud providers can generally provide a greater level of security because security is (or should be) their business, and because chances are that they employ the best talent for this purpose. Yet not all clouds are created equal, and businesses should look closely under the hood to see what those differences are.

Businesses should consider the following six characteristics of any cloud provider to determine whether they have the right security stuff to help ensure their data remains safe and ahead of the cyber threat curve:

Many cloud vendors offer only a portion of a complete computing and storage stack, which is why, in many cases, they cannot guarantee the security of their customers' systems. Infrastructure-only providers don't have, and thus can't control, the platform or application layers used by their customers.

Others only provide platforms and applications but don't control the infrastructure.

Customers should look for providers who control all aspects of the cloud: infrastructure, platforms, and application layers. They should also look for providers who have proven that security isn't an afterthought but part of their fundamental design philosophy.

Applications, platform, and infrastructure should all be designed with security and functionality in mind, in equal measure. One specific design principle to look for is isolation. Most providers allow customer data and the control code used to manage the cloud to exist on the same server, which is a prime opportunity for bad actors to pose as customers and then use malicious software to manipulate the cloud's control code.

Look for vendors using next-generation cloud infrastructure, in which the control code is isolated from customer data, so that no customer data can ever affect the control code.

"Customers should consider working with cloud providers who have security as the most important design element, period; not just a high priority or an add-on, but the top priority above everything else," said Wim Coekaerts, Senior Vice President of Software Development at Oracle. "Autonomous capabilities are a requirement to address proactive and reactive handling of cloud and cybersecurity issues."

Unpatched software is the root cause of many, if not most, big cybersecurity breaches. Thus, applying security fixes in a timely manner is fundamental to overall security. Yet in complex systems, applying security fixes takes time and often requires systems to be taken offline for hours at a time, hardly ideal for any business that relies on processing transactions promptly.

It's also important that those security fixes be applied without the end user being aware of it or suffering material downtime. Autonomous patching can apply software updates and security fixes as soon as they are available, with no need for system shutdowns.

Cloud users often leave themselves exposed to potential hacks by leaving server ports open when doing so is unnecessary. Or they leave computing or storage resources running even though they are no longer needed or in use. Bad guys can gain entry via these fallow resources and then more easily penetrate critical, running systems.

A modern cloud provider can help spot unused but still-operating computing resources and proactively shut them down, dramatically lessening the attack surface hackers can exploit and helping customers maintain a more secure stance.

Cloud providers should also help ensure that data-access permissions are secure by default, by enforcing a rule that documents in object storage are never publicly accessible. They should automatically detect permission changes that could potentially expose a system or data, and send alerts whenever that is the case.

In addition, cloud providers that mandate encryption by default, for both primary and backup databases, make it harder for bad guys to steal usable information.

And adoption of pervasive, always-on encryption, for both data at rest and data in transit, means that even if hackers do get their hands on data, it will more likely be gibberish.
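As an illustration of the secure-by-default checks described above, here is a minimal sketch using AWS's boto3 SDK as a stand-in (the article is not describing any particular provider's API). It flags S3 buckets that permit public access or lack default encryption:

```python
# Sketch: flag S3 buckets that permit public access or lack default
# encryption. Illustrative only; real tooling needs paging, IAM care
# and an alerting pipeline.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        if not all(cfg["PublicAccessBlockConfiguration"].values()):
            print(f"{name}: public access is not fully blocked")
    except ClientError:
        print(f"{name}: no public-access block configured")
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError:
        print(f"{name}: no default encryption configured")
```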

Thus far, machine learning and artificial intelligence have not been broadly deployed to help cloud providers with early detection of malicious code and of unusual patterns and activities that threaten customer data.

That is changing in the face of the cybersecurity hacking onslaught. But it's important to remember that the best cloud provider needs more than top-flight AI expertise. The optimal provider needs both a deep knowledge of what large business workflows look like and the ability to use AI to spot possibly malignant activity before it gets painful.

A provider that can offer a solid one-two punch of historical enterprise experience and modern technology expertise is a winning proposition for customers looking for a highly secure cloud to entrust with their key business data and workloads.

Business customers should look for a cloud provider that will act as a full partner, not a passive participant who takes customers' money in exchange for access to a raft of cloud servers. Likewise, cloud providers need to see themselves as full partners with customers when it comes to deploying and securing those all-important business workloads.

Michael Hickins is a former eWEEK and Wall Street Journal editor and reporter who now works at Oracle. This report is special to eWEEK.


Save over 90% on 5TB of cloud storage with this Black Friday deal – Boing Boing

Many of us rely on a single hard drive to store precious files. This strategy is risky, but many alternative backup solutions are pretty expensive. Polar Cloud Backup breaks the mold, providing secure storage at a price that anyone can afford. The service runs on reliable Amazon architecture and gives users total control over their data.

Available to download on PC and Mac, Polar Backup allows users to choose which files and folders to upload. All file types are supported, including videos, images, audio, documents, and more. The apps also have a scheduling feature, which ensures that the backup won't disrupt your Netflix movie.

Polar Backup provides enough space for thousands of files, and the storage is both secure and private. The data is protected by military-grade AES-256 encryption, while the servers comply with GDPR regulations.
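For readers curious what AES-256 protection looks like in code, here is a minimal round-trip sketch using Python's cryptography package and AES-256-GCM; it illustrates the cipher family generally, not Polar Backup's actual implementation.

```python
# Minimal AES-256-GCM round trip using the "cryptography" package
# (pip install cryptography). Illustrates the cipher family only,
# not Polar Backup's actual pipeline.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a 256-bit key: "AES-256"
nonce = os.urandom(12)                     # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"precious file contents", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"precious file contents"
```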

Along with files on your PC or Mac, Polar Backup covers any connected external drives. Even if you delete files locally, they won't be removed from cloud storage.

Ahead of Black Friday, the Polar Backup lifetime 5TB plan is just $69.99. You will struggle to find a cloud storage provider that offers more for this price.

Don't wait for Black Friday; you can get these top-sellers at deep discounts today!


Kubernetes Is the Future of Computing. Everything You Should Know. – Barron’s


Nearly all major technology companies are saying the same thing: Kubernetes is the next big thing in computing.

The Greek word for helmsman or pilot, Kubernetes is accelerating the transition away from legacy client-server technology by making cloud-native software development easier, better, and faster.

Last week, more than 12,000 developers and executives gathered in San Diego for KubeCon, the largest annual Kubernetes conference. That's up from just 550 attendees four years ago. The conference-goers are all looking for ways to take advantage of Kubernetes and its ability to automatically deploy, manage, and scale software workloads in the cloud.

To understand the trend, let's start with the changing dynamics of software in the cloud. Cloud apps increasingly run in aptly named containers. A container holds an application, its settings, and other related instructions. The trick is that these containers aren't tied down to one piece of hardware and can run nearly anywhere, across different servers and clouds. It's how Google manages to scale Gmail and Google Maps across a billion-plus users.

Alphabet's (ticker: GOOGL) Google long ago developed software called Borg to orchestrate its in-house containers, spinning them up and down as needed. In 2014, the search giant opted to make a version of Borg open source, calling it Kubernetes. Today, the major cloud providers all offer a Kubernetes option to customers.
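For a concrete taste of that orchestration, here is a minimal sketch using the official Kubernetes Python client to scale a deployment; the deployment name and namespace are hypothetical.

```python
# Sketch: scale a hypothetical "web" deployment to 5 replicas with the
# official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="web",            # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)
# Kubernetes then converges the cluster toward five running copies,
# rescheduling containers across nodes as needed.
```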

Aparna Sinha, the director of product for Kubernetes at Google, notes that Kubernetes is built by the same team that created Borg. "We are quite confident in its ability and how it enables applications to run more reliably, more efficiently, and more affordably," Sinha says. "Kubernetes has really taken off."

Gartner says more than 75% of global companies will run containerized applications by 2022, up from less than 30% today. Kubernetes has become the de facto standard for managing these containers.

"As enterprises modernize their infrastructure and adopt a hybrid multicloud strategy, we see Kubernetes and containers rapidly emerging as the standard," Jason McGee, chief technology officer of IBM Cloud Platform, told Barron's in an email.

In terms of who will thrive in the shift to Kubernetes, there are some early leaders. Last month, Microsoft (MSFT) Azure Chief Technology Officer Mark Russinovich told Barron's he thinks Microsoft's Kubernetes service is best of breed.

Some industry analysts point to other companies. When asked which Kubernetes vendors come up most during discussions with customers, Gartner analyst Arun Chandrasekaran listed Amazon Web Services (AMZN), Google Cloud, and IBM's (IBM) Red Hat OpenShift. For on-premise companies looking to use multiple clouds, IDC analyst Gary Chen added, "Red Hat right now is the leader in Kubernetes software. They have the early lead."

It is still early in this new big trend. One thing is for sure: get ready to hear a lot more from technology companies about their Kubernetes strategies. The race is on.

Write to Tae Kim at tae.kim@barrons.com


Dell, HPE, IBM And Lenovo Face Competition From Cloud-Based Supercomputing – Forbes

HPE recently completed its acquisition of Cray, after doing the same with SGI just three years ago. Due to consolidation, the top merchant supercomputer vendors (as opposed to government entities building their own) now include server OEMs Dell, HPE, and IBM in North America, Fujitsu in Japan, and Atos (Bull) in Europe. Inspur, Lenovo, and Sugon lead a growing group of supercomputer-focused OEM server vendors in China.

Cloud Competition

The challenge for all these supercomputer vendors is that public cloud vendors are also targeting the high-performance computing (HPC) and supercomputing markets. Public cloud providers are changing the market's demand for functionality. This competition will challenge branded server OEMs' ability to push upmarket toward higher-end customers with traditional on-prem HPC server clusters.

Alibaba Cloud, AWS, and Azure are already deploying HPC- and supercomputing-worthy infrastructure and services. I expect the public cloud giants' HPC- and supercomputing-focused deployments to improve continuously over time. As predictions go, that's fairly tame; it's what they do.

Therefore, I believe it will be increasingly hard for HPC infrastructure vendors to sell clusters directly to end customers as customers opt to simply configure a supercomputer out of available public cloud instance types and networking options.

Public vs. Private Infrastructure

All the same arguments heard for the past decade about private infrastructure vs. public cloud are now surfacing in HPC market positioning.

Set aside all of the tired tropes about security, availability, latency, etc. Public clouds provide infrastructure service and support as good as or better than most IT departments can manage on their own, and they have been doing so for years.

For public cloud-based HPC and supercomputing services to be successful, they cannot:

Alibaba Cloud, AWS, and Azure have all recently deployed new HPC and supercomputing instance types and sizes, implementing fast Ethernet networking (Azure also offers high-end InfiniBand networking) and shared-memory clustering capabilities that enable customers to meet both of the above requirements.

GCP asks HPC customers to use Preemptible instance types, which require refactoring of existing HPC applications. In addition, Preemptible instances have many other restrictions, including the lack of any Service Level Agreements (SLAs). GCP's Cloud TPU Pods cannot be programmed with traditional supercomputing software development tools.

Data Gravity

Another important point of tension between using private and public infrastructure is data gravity.

Data gravity is somewhat similar to real gravity. The more massive a celestial object (planet, star, galaxy, or whatever), the more it influences objects in its vicinity. From a spaceflight perspective, getting spaceships out of Earth's gravity well is very expensive. It is another order of magnitude more expensive to send spacecraft out of the Sun's gravity well; humanity has only done that twice (Voyagers 1 and 2).

Data gravity urges IT customers to include data transfer and storage costs when considering the total cost of migrating applications from on-prem infrastructure to public cloud infrastructure. In practical use, the key data gravity consideration for most applications is simple: does the application send a lot of data back out of the cloud?

Sending a lot of data into a public cloud may take time, but most clouds do not charge data ingress fees. Moving data within a public cloud can run up data transfer charges. Sending a lot of data out of a cloud most certainly will run up data transfer expenses.
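A quick worked example makes the asymmetry concrete; the egress rate used here is an assumed, illustrative price, since real pricing is tiered and provider-specific.

```python
# Worked example of data-gravity economics for a 100 TB dataset.
# The egress rate is an assumption for illustration; real pricing is
# tiered and provider-specific, and ingress is typically free.
dataset_gb = 100 * 1_000      # 100 TB expressed in GB
ingress_per_gb = 0.00         # $/GB, typically free
egress_per_gb = 0.09          # $/GB, assumed

print(f"into the cloud:   ${dataset_gb * ingress_per_gb:,.0f}")
print(f"out of the cloud: ${dataset_gb * egress_per_gb:,.0f}")
# -> $0 in, $9,000 out: results that stay in the cloud are cheap;
#    results you must pull back out are not.
```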

A different kind of data gravity is defined by government and military security. While public clouds are more than competitive with private infrastructure for commercial-grade security, some datasets must be air-gapped (not connected to the public internet) and must not leave the facility or organization that created them.

Customers using public cloud HPC resources most likely do not want to:

The combination of shared-memory architecture and data gravity points to several areas where public cloud-based HPC and supercomputing clusters have distinct advantages in the short term:

Market Disruption

For the most part, disruptive technological change starts at the low end of markets (despite a handful of Apple and Tesla counterexamples). Public cloud HPC and supercomputing capabilities will further disrupt on-prem server and data center infrastructure sales. It's just a matter of how quickly this happens.

The author is an employee of Liftr Insights. The author and Liftr Insights may, from time to time, engage in business transactions involving the companies and/or the products mentioned in this post. The author has not made an investment in any company mentioned in this post. The views expressed in this post are solely those of the author and do not represent the views or opinions of any entity with which the author may be affiliated.


The Risk Of Complexity And How To Fix It – Forbes

Starting on March 22, 2019, Capital One bank was the victim of one of the largest data breaches in history. A former employee of Amazon Web Services used her knowledge to bypass security and download the credit applications of approximately 100 million people. She was eventually caught, but it's still unclear whether the information from those applications was sold or otherwise made available to hackers.


The breach happened because the hacker was able to take advantage of a misconfigured web application firewall on an AWS cloud application used by Capital One. That was just one firewall out of potentially hundreds. You'll note that there isn't a specific number of firewalls, because the number changes as cloud servers are spun up or taken down, or as the bank's network is changed.

Because firewalls (a firewall is a device that restricts access to the network to authorized people or devices) are frequently added to an enterprise when specific portions of the network are initially set up, they may not have any commonality. Each firewall may be one of only a few of its kind in the company.


This lack of standardization means that each one has to be configured individually, in what can only be described as an intensely manual process. And even one mistake can render a firewall ineffective, especially against someone who knows the details of how it works.


The problem goes beyond firewalls. In many cases a company's servers need manual configuration, as do other security and network appliances. The number of such devices can reach the hundreds, and with cloud accounts, it can reach into the thousands. You can see how managing all of this, even for a properly staffed IT department, can be overwhelming.

Getting a Handle

"The problem that most companies have, especially big companies, is that a lot of this stuff was put in piecemeal, but not with any strategy," said Jack Gold, principal analyst at J. Gold Associates. "They had an ad hoc process."

"We need to retire some of those tasks more quickly," said Tim Woods, VP of technology alliances at FireMon. He said that these overly complex, manual processes are making it hard for companies to get a handle on security.

"They're running away from security," Woods said. He said that Capital One tried to make its changes manually, which ultimately led to the breach. A better way, he said, is to automate firewall management.

Woods said that doing all of this manual work also wastes resources. "They have their best people doing mundane routine tasks," he said. By automating tasks such as firewall management, "we believe you may be able to reap as much as a 40 percent reduction in repetitive work cycles."

How to Automate

A company needs to look for that low-hanging fruit, Woods said. "If I can automate those low-level tasks, it'll make a big difference."


Woods also said that preparing to automate your configuration process gives you a good opportunity to evaluate your processes and validate your expectations of what you're trying to do.

Setting sensible requirements is an important first step in automating your firewall management. The goal should be to automate the activities that require the most time and are performed most frequently. Activities that occur only occasionally and require little time aren't good targets for automation.

To determine what the good targets are, study the workflow of your IT staff with regard to firewall, server, and other types of configuration. Find out where they spend the most time and which tasks require tedious steps, and start with those, because they are the low-hanging fruit Woods mentioned.
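As a minimal sketch of that triage, scoring each task by frequency times duration surfaces the best automation candidates; the task names and figures below are made up for illustration.

```python
# Rank configuration tasks by total monthly effort (frequency x duration)
# to find the "low-hanging fruit". Task names and figures are made up.
tasks = [
    ("firewall rule change",   40, 45),   # runs per month, minutes per run
    ("server OS patching",      8, 120),
    ("certificate renewal",     3, 30),
    ("load balancer reconfig",  2, 60),
]

for name, runs, minutes in sorted(tasks, key=lambda t: t[1] * t[2], reverse=True):
    print(f"{name:24s} {runs * minutes / 60:5.1f} hours/month")
# The biggest lines at the top are the best automation candidates.
```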

Once you've determined your beginning tasks, it's time to look for an automation provider. FireMon is one; Tufin is another, among other companies.

As you're implementing the automation platform, you will also need to determine your larger goals. Are you primarily focused on overall security? Compliance? To some extent, your goals will dictate your process.

Or Maybe Outsource

Jack Gold isn't so sure that automating your firewall management is necessarily the best answer. Instead, he suggests looking at ways to ease the management effort for starters.

"What they should be thinking about is whether they can standardize on something," he said.

"It's much easier to build out an orchestration platform of some sort if you get all of them pretty much the same."


"Another possibility is that companies are outsourcing," Gold said. "They do networks as a service (NaaS). They just pay a monthly fee. There's a high value in preventing security breaches and data loss, but there's not a high value in managing all of this. Managing those devices is overhead."

"The complexity has always been there and probably always will be," Gold explained. "Networking as a service is getting some legs because companies don't want to deal with it. Also, IT doesn't have the resources to deal with it."

He said that it's important to ask whether there are better places to put those resources.

Gold pointed out that the nature of networking is changing with the nature of IT. "In the past, companies felt they had to control their destinies themselves," he said. "That's changing with the cloud-based services."


Gold explained that the growth of SDN (software-defined networking) means you need to consider whether you should even be running your own networking.

"Things are moving to SDN," Gold said. "New gear needs to be SDN-capable. NFV (network function virtualization) needs to be supported."

NFV, in which the basic functions of networking exist in software, is critical to the operation of virtualization and thus to cloud computing.

Outsourcing your networking and network operations lets you focus your efforts on the activities of your business, rather than on hiring even more staff just to run the network and its related security requirements.

Major network companies, including Cisco and IBM, offer NaaS, and that part of their business is growing fast. It may be that outsourcing your network is the real solution to the overwhelming complexity of managing its configuration and operation.


The 20 top tech skills that employers want and that can help you find a job, according to recruiting site Indeed – Business Insider

If you're trying to break into tech, learning certain new skills will help push you forward.

The job search site Indeed released a report this month about the top tech skills of 2019 based on job descriptions that are being posted.

Andrew Flowers, an economist at Indeed, says that in today's job market, there are two major trends that drive the top skills in tech. The first is the rise of data science, machine learning, and artificial intelligence. The second is the rise of cloud computing.

Some languages, like Java and C++, still remain some of the most important skills in tech, but they have been around for decades. On the other hand, the programming language Python is a relative newcomer and today is one of the top skills on the list.

"Python has had explosive growth," Flowers told Business Insider. "If I'm around the dinner table and a nephew asks what should I learn? Having done this report, I would say, learn Python."

Python has exploded because of the rise of data science and machine learning, Flowers said.

"Some tools are good and will always be around," Flowers said. "Python is just so notable because it's easy to learn. It's used in data science and it's also used in web development. Its explosive growth is the #1 takeaway from this research."

Besides that, the report also shows growth in cloud technical skills for Amazon Web Services and Microsoft Azure.

"AWS is still the dominant cloud computing tool but Azure is growing really fast, too," Flowers said.

Here are the top 20 tech skills, according to Indeed:


Who is responsible for cloud security? It’s a bit foggy, finds McAfee – Verdict

UK businesses are steaming ahead to become cloud-only companies, but efforts to establish who is responsible for cloud security in an organisation are struggling to keep pace.

Research by cybersecurity firm McAfee found that 40% of large UK businesses expect to be cloud-only by 2021, with 70% expecting to be cloud-only at some point in the future.

Yet the survey of over 2,000 senior IT staff and employees in the UK, France and Germany found a lack of consensus as to who in the business is ultimately responsible for cloud security.

14% said the CEO should take responsibility, while 19% believe it should be the chief information officer. Just 5% said the chief information security officer is responsible for cloud security.

The role of IT manager drew the largest number of votes, with 34% believing them ultimately responsible for cloud security.

The findings echo those of a recent Big Data LDN survey, which found data responsibility to be spread thinly across the c-suite.

"What scares me about this is that the answers are, dare I say it, sort of all over the place," said Nigel Hawthorn, EMEA director of cloud security business at McAfee, speaking at a media roundtable.

"And I think this is why cloud security is not necessarily being addressed in a holistic manner, because it has to have an owner and has to have a team, led by someone, to actually make sure that it's being addressed."

Hawthorn said that the so-called shared responsibility models put forward by Microsoft and Amazon, the two largest cloud vendors, are not enough.


Drawing a parallel with renting a car, he pointed out that the manufacturer is responsible for safety features such as airbags, the rental firm for oil, and the driver for driving safely.

"There's no point in saying 'it's your fault, Ford' when I drove the car at 100 miles an hour into a wall," he said.


While 84% said the cloud improved their organisation's data security, cloud computing presents a unique set of security problems.

Data repositories containing sensitive business or customer information can be misconfigured by businesses, providing easy pickings for cybercriminals.

Previous research conducted by McAfee found that 99% of misconfigured cloud servers go undetected.

"You can outsource the work, but you can't outsource the risk," said Raj Samani, chief scientist and McAfee fellow. "And the reality is [that] in cloud computing, we see organisations and people migrating and outsourcing over to cloud services with the belief that it absolutely absolves them of any risk or any concerns."

So what's the solution? Hawthorn and Samani believe that educating users about cloud security at the right time and in the right context can help. But ultimately, an organisation needs to decide who is responsible for cloud security, give that person adequate resources, and allow their voice to be heard by the board.

"I think we're in a dangerous place if we're going to cloud as fast as possible but we haven't decided who's responsible for the security," added Hawthorn.
