
dinCloud Takes Its Security to the Next Level with Sophos Intercept X – IT News Online

PR.com | 2020-08-16

Clarksville, TN, August 16, 2020 --(PR.com)-- dinCloud, a digital transformation platform that offers hosted workspaces and cloud infrastructure, has announced the addition of Sophos Intercept X for endpoints as well as servers to make its cloud infrastructure nearly impregnable to cyber threats.

Sophos is a British company that specializes in cyber security solutions and is rated among the best in the security business by many independent international entities. dinCloud is pleased to announce the integration of Sophos Intercept X into its services.

Intercept X is a feature-rich and highly capable cyber security suite offered by Sophos. dinCloud has fully integrated Sophos Intercept X for Endpoints as well as Servers to make its already secure cloud infrastructure nearly impregnable to cyber threats.

Sophos Intercept X for Endpoint has been integrated into dinCloud's Hosted Virtual Desktops (dinHVD). This welcome addition will take the security of vulnerable endpoint devices to a whole new level.

The Intercept X solution offered by Sophos for servers has been fully integrated with dinCloud Hosted Virtual Servers. This addition will immensely consolidate the cyber security profile of dinCloud's otherwise highly secure global data centers.

For the convenience of its valued cloud users, dinCloud has fully integrated the management of Sophos Intercept X for Endpoints into dinManage. This is dinCloud's unified cloud management portal, which gives its valued users unmatched autonomy.

"The Sophos Intercept X solution adopts a holistic approach to cyber security and at dinCloud, security is a core policy. We are offering Intercept X by Sophos with our Cloud Hosted Virtual Desktops as well as Hosted Virtual Servers. We believe the deployment of this solution will add to the already robust security parameters in place at our data centers. At dinCloud, we always strive to remain ahead of cyber miscreants, which is why we have added Sophos to our security arsenal," said Walid Elemary, CTIO at dinCloud.

Regardless of whether you are availing its industry-leading Hosted Virtual Desktops (dinHVD) or Hosted Virtual Servers, you can easily incorporate the world's best cyber security solution (Sophos Intercept X) in a matter of a few clicks.

Unlike many cloud security solutions in the market that focus on reacting to cyber threats, Intercept X takes a preventive and proactive approach to security. Using Deep Learning Technology, Intercept X will be able to detect both known and undiscovered threats.

About dinCloud

dinCloud offers digital transformation services to organizations through its cloud platform. Each customer's hosted private cloud offers hosted workspaces and cloud infrastructure that the customer controls. Services are available through dinCloud's network of Value Added Resellers (VARs) and Managed Service Providers (MSPs). Organizations interested in business process outsourcing (BPO) can leverage Premier BPO to extend services from IT to other back office and front office functions as well.

Contact Information:

dinCloud

Sam Aslam

424-286-2379

Contact via Email

http://www.dincloud.com

Read the full story here: https://www.pr.com/press-release/819085

Press Release Distributed by PR.com


How Configr Relies on the Alternative Cloud – Channel Futures

The objective of alternative cloud platforms is simple: provide useful services and reliable support at a transparent price.

Alternative cloud providers are becoming popular for small and midsize businesses across the world, offering IT solutions that fit their needs, providing dependable customer support and not forcing any unnecessary features they'll never use. The objective of alternative cloud platforms is simple: provide useful services and reliable support at a transparent price.

Linode prides itself on offering transparent and consistent pricing, and a customer support team that is highly technically qualified and available 24/7/365 with no tiers, no bots and no hand-offs. The big three public cloud providers (AWS, Azure and Google Cloud) can't as easily make the same claim.

This mission is what made Linode the right choice for cloud services provider Configr. Founded in 2013 by Arthur Furlan and Felipe Tomaz, Configr serves customers throughout South America by democratizing cloud computing for agencies, freelancers and businesses of all sizes. One of Configr's core values has always been the ability to simplify running a business in the cloud.

"Our clients are not cloud infrastructure experts," said Felipe, Configr's COO. "They are from industries like digital marketing or e-commerce that don't typically have the knowledge or staff to install, configure and maintain their own cloud infrastructure. We provide technical and people expertise so they can focus on growing their business."

The co-founders had worked with hyperscale cloud providers in the past and found the complexity, lack of transparency and high costs a challenge. For their business, they needed a cloud partner that could provide high-performance infrastructure at low, predictable rates coupled with a people-focused service approach.

Configr uses Linode Backups, Block Storage, Dedicated CPU and High Memory compute plans, allowing its customers to deliver highly reliable and scalable web applications. Configr has grown to hosting more than 3,000 servers with Linode.

Like Configr, many of your own clients don't want, or need, the hassle of a large-scale cloud provider. They'll end up with more tools and features than they need and a particularly generalized way of setting up and running the service.

"Linode offers us a great price point without a compromise in performance or forcing tools and features on us we don't want," said Arthur. "They also give us the service experience we need to help our customers grow."

Simplicity is likely what your clients are asking for, and it is what alternative cloud providers like Linode offer. If you need an uncomplicated cloud services solution, consider making the switch to an alternative cloud provider like Linode.

Read the entire Configr story: People-Focused Cloud Technology Delivers for Solutions Provider.

Sam Smith is a Senior Customer Success Specialist at Linode, where he works with the company's customers and partners, which include managed service providers, systems integrators and specialized service providers.

This guest blog is part of a Channel Futures sponsorship.


How PyTorch And AWS Come To The Rescue Of ML Models In Production – Analytics India Magazine

Today, more than 83% of the cloud-based PyTorch projects happen on AWS.

The Computer Vision Developer Conference (CVDC) 2020 is a two-day event (13-14 August) organized by the Association of Data Scientists (ADaSci). ADaSci is a premier global professional body of data science & machine learning professionals. Apart from the tech talks covering a wide range of topics, CVDC 2020 also features paper presentations, exhibitions and hackathons. There is also a full-day workshop on computer vision that comes with a participation certificate for attendees.

CVDC 2020 kicked off with Suman Debnath's talk on how to deploy PyTorch models in production on AWS with TorchServe. Suman is a Principal Developer Advocate at AWS. Prior to joining AWS, he worked at organisations including IBM Software Lab, EMC, NetApp and Toshiba.

Though PyTorch has seen a sudden rise in popularity amongst ML practitioners, it does come with a few challenges:

TorchServe addresses the difficulty of deploying PyTorch models.

Today, more than 83% of the cloud-based PyTorch projects happen on AWS, so it is crucial to address these challenges. This is where TorchServe comes in handy. TorchServe is a PyTorch model-serving library that makes it easy to deploy trained models at scale without writing custom code. It was developed by AWS in partnership with Facebook.

Model serving is the process of situating a trained ML model within a system so that it can take new inputs and return inferences to the system. TorchServe allows users to expose a web API for their model that can be accessed directly or via an application.

In this intriguing talk, Suman detailed how to deploy and manage machine learning models in production, which is often considered the most challenging part of an ML pipeline. Suman, who has vast experience working with AWS cloud services, introduced the attendees to the many advantages of using AWS in conjunction with PyTorch. With TorchServe, one can deploy PyTorch models in either eager or graph mode using TorchScript, serve multiple models simultaneously, version production models for A/B testing, and load and unload models dynamically.

Using an EC2 instance as a VM, Suman demonstrated how to launch TorchServe. Here's a snippet of code that gives an idea of how TorchServe works:

Install torchserve and torch-model-archiver

pip install torchserve torch-model-archiver

To serve a model with TorchServe, first archive the model as a MAR file.

Download a trained model.

wget https://download.pytorch.org/models/densenet161-8d451a50.pth
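The commands from the demo are not reproduced in full here, but a minimal sketch of the archiving and serving steps looks roughly like the following; the model file, handler and directory names are assumptions taken from the standard TorchServe densenet161 example (and assume the pytorch/serve repository has been cloned locally):

# Create a directory to hold model archives
mkdir -p model_store

# Package the downloaded weights into a .mar archive using the example model definition and the built-in image_classifier handler
torch-model-archiver --model-name densenet161 --version 1.0 \
  --model-file serve/examples/image_classifier/densenet_161/model.py \
  --serialized-file densenet161-8d451a50.pth \
  --handler image_classifier \
  --extra-files serve/examples/image_classifier/index_to_name.json \
  --export-path model_store

# Start TorchServe and register the archived model
torchserve --start --ncs --model-store model_store --models densenet161=densenet161.mar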

To get predictions from a model, test the model server by sending a request to the server's predictions API.
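Assuming the server from the sketch above is running locally on TorchServe's default inference port (8080), such a request might look like this; the image file name is a placeholder for any local JPEG:

# Send an image to the predictions API and get back the top classes as JSON
curl http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg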


Talking about the real-world applications of TorchServe, Suman cited the examples of Toyota and Matroid. While Toyota Research Institute Advanced Development, Inc. (TRI-AD) trains its computer vision models with PyTorch, the framework lacked a model-serving component. As a result, the car maker spent significant engineering effort creating and maintaining software for deploying PyTorch models to its fleet of vehicles and cloud servers.

With TorchServe, Toyota now has a performant and lightweight model server that is officially supported and maintained by AWS and the PyTorch community. In the case of Matroid, a maker of computer vision software, TorchServe allows it to simplify model deployment using a single servable file that also serves as the single source of truth and is easy to share and manage.

Stay tuned for more updates from CVDC 2020.


I have a master's degree in Robotics and I write about machine learning advancements. Email: ram.sagar@analyticsindiamag.com


New Oracle Cloud Chief Clay Magouyrk: ‘We’ve Thrown Everything Behind The Cloud’ – CRN: Technology news for channel partners and solution providers

When Clay Magouyrk crossed Seattle six years ago to join the effort of building Oracle's public cloud from the ground up, the decisions he and the nascent team faced were foundational.

"We started out a few people in the corner office thinking about where to put some data centers and build a physical network," Magouyrk, who was recently promoted to Oracle's cloud chief, told CRN.

As the first hire of his predecessor, Don Johnson, Magouyrk has been instrumental every step along the way as Oracle Cloud Infrastructure (OCI) went from ideation to strategizing around a bare-bones first-generation product to bridging to the Gen2 Oracle Cloud Infrastructure now running in more than two dozen regions and countless customer data centers.

Thousands of hires and billions of dollars later, Magouyrk is taking over for Johnson as executive vice president for Oracle Cloud, a succession plan long in the works that puts him reporting directly to Oracle founder and CTO Larry Ellison.

[Related: Despite Big Customer Wins, Oracle Says Coronavirus Crisis Weighed Down Q4 Revenue]

Magouyrk told CRN he's ready to guide Oracle's public cloud into its next phase of growth that establishes it as a leading force in the all-important and still rapidly growing market.

Oracle hasn't yet penetrated the top tier of the highly competitive public cloud leader board, where Magouyrk's previous employer, Amazon Web Services, dominates and, together with rivals Microsoft Azure and Google Cloud, controls some 60 percent of the market, according to a recent tabulation by Synergy Research.

Oracle, with 2 percent share, comes eighth in the category of IaaS, PaaS and hosted private cloud, still trailing Alibaba, IBM, Salesforce and Tencent, Synergy estimated. Those eight leaders together control 77 percent of the market.

But Magouyrk sees enormous potential to leapfrog competitors in a still largely untapped market by emphasizing a unique value proposition that will drive Oracle, and its network of cloud partners, to new heights.

"Oracle is not the same Oracle it was five to 10 years ago," he told CRN. "It has become very much a cloud-first company. We've thrown everything behind the cloud."

One thing that differentiates Oracle's cloud, and gives those partners an advantage when selling to the enterprise, is the larger Oracle application portfolio: products like Oracle E-Business Suite, Fusion, JD Edwards, PeopleSoft and its industry-leading database footprint, including the new, self-driving Oracle Autonomous Database.

"Those workloads run better on our platform, and we work with our customers to understand that value proposition," Magouyrk said. And customers successful in running those mission-critical business apps translate to partners successful in selling additional cloud services.

Oracle leaders appreciate that systems integrators are essential in migrating Oracle and third-party workloads to Oracle Cloud, as are MSPs in maintaining production operations while adding additional services.

Oracle now enables and rewards those partners appropriately through a revamped channel program launched at the end of last year that implements a cloud-first approach to partner engagement, Magouyrk said.

But there are still many solution providers holding an outdated view of the company.

"I think a lot of partners still think of Oracle primarily as an on-premises database company or a SaaS applications company," Magouyrk said. "The thing I would ask them to do is take a look at what we offer with OCI: services, infrastructure, partnership benefits."

Those capabilities are paired with a price-performance ratio that Oracle believes its larger competitors can't come close to matching, he said.

"We're massively cheaper," Magouyrk told CRN, "somewhere between 50 [percent] and 75 percent cheaper than all of our competitors when it comes down to any real-world comparison of cloud bills."

But scaling Oracle's share of the market doesn't mean waking up every morning thinking about how to poach customers from AWS or Microsoft.

The real growth opportunity lies in the 85 percent of all server-side computing workloads that still run in colocation facilities and corporate data centers, Magouyrk said.

"You focus on enabling customers that have existing workloads on-premises that have been unable to move to the cloud and how you help them move to the cloud," Magouyrk said. "We have to show them the vast value OCI can provide compared to having to deal with on-premises infrastructure."

One important weapon in Oracle's arsenal when making that pitch to enterprises is its new on-premises cloud: Dedicated Region Cloud@Customer.

Where competitive solutions like Amazon Outposts and Microsoft Azure Stack offer only a few of their many public cloud services in those on-premises environments, Oracle's Cloud@Customer now incorporates every OCI service, delivering the complete public cloud experience behind the customer's firewall, Magouyrk said.

Then there's Oracle Autonomous Database, which enterprises are starting to recognize for its compelling benefits in offloading management, patching, versioning and most other administrative chores to an AI-powered system that eliminates the potential for human error.

"For customers who understand the power of relational databases, but are tired of administration and overhead, autonomous is a leap forward for them," Magouyrk said.

Where its hyperscale competitors are reworking existing open-source databases to run in their clouds, Oracle is the only company still innovating in the relational database space, he said.

Before spending six years at Oracle solely focused on OCI, Magouyrk worked as an engineer across town at Amazon, and then went over to AWS as the pioneering cloud player was starting to see adoption skyrocket.

The OCI team has taken an entirely different approach to cloud development, largely because of the differences between where Amazon and Oracle started from.

AWS built a cloud from the bottom up, Magouyrk said, first creating infrastructure, then moving up the stack to platform and later dipping its toe in applications. But Oracle's application dominance encouraged a cloud path following an opposite trajectory.

Oracle started with Software as a Service, making massive investments in bringing to the cloud its ERP and HCM systems.

"We came later to the infrastructure space," Magouyrk said. "But we knew when we decided to invest in it, infrastructure was critically important as an underlying foundation for our application portfolio."

More recently, Magouyrk has spearheaded work on some of OCI's latest capabilities, including support for cloud-native technologies like serverless and Kubernetes, Dedicated Region Cloud@Customer and Oracle VMware Cloud Solution.

Those products maintain the initial trajectory that Johnson, Magouyrk and the rest of the founding team laid out at their new Seattle offices in 2014.

"The reason a bunch of us joined Oracle to build a new cloud was we looked around at the industry and saw the vast majority of developers and projects don't get to take advantage of the cloud," Magouyrk said.

"We've had the same strategy since the beginning," he added.


Denial-of-Wallet attacks: How to protect against costly exploits targeting serverless setups – The Daily Swig

Attackers look to drain their victims' cloud computing resources and their bank accounts

Over recent years, the popularity of serverless computing has exploded, as organizations continue to realize the benefits of this easily scalable, cloud-based infrastructure model.

In fact, one study estimates that the number of serverless customers will exceed seven billion by 2021.

With this trend, however, comes the added risk of cyber-attacks specifically targeting cloud-based infrastructure.

Counted among this growing list of exploits is the Denial-of-Wallet attack, a lesser-known but easy-to-execute technique that can leave victims severely financially damaged.

Denial-of-Wallet (DoW) exploits are similar to traditional denial-of-service (DoS) attacks in the sense that both are carried out with the intent to cause disruption.

However, while DoS assaults aim to force a targeted service offline, DoW seeks to cause the victim financial loss.

In addition, while traditional web-based distributed denial-of-service (DDoS) attacks flood the server with traffic until it crashes, DoW attacks specifically target serverless users.

Contrary to its name, serverless does not mean the user isn't connected to a server, but rather that they pay for access to a server maintained by a third party.

Denial-of-Wallet attacks exploit the fact that serverless vendors charge users according to the amount of resources consumed by an application, meaning that if an attacker floods a website with traffic, the site owner could be landed with a huge bill.
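To get a feel for the scale involved, here is a rough, illustrative calculation, assuming AWS Lambda list prices of roughly $0.20 per million requests and about $0.0000167 per GB-second (actual bills depend on region, memory settings and free-tier allowances). If an attacker drives 100 million junk requests, each running for one second with 1 GB of memory:

Request charges: 100,000,000 / 1,000,000 x $0.20 = $20
Compute charges: 100,000,000 GB-seconds x $0.0000167 ≈ $1,670
Total: roughly $1,690 from a single burst of meaningless traffic.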


An attacker doesn't personally gain from DoW attacks in the same way they might through other exploits, except, of course, from causing their target financial distress.

"When you have servers in a data center and an attacker just wants to bring you hurt, they can DDoS you and your site goes down," explains Scott Piper, AWS security consultant at Summit Route.

"When you run in the cloud, an attacker can do things such that your site might stay up, but you'll be bankrupt."

Make it rain: Denial-of-Wallet attacks can cause huge financial losses for serverless users

Serverless computing is when backend services are supplied on a pay-per-use basis. The company pays a serverless vendor to provide the infrastructure and maintain the server.

Popular serverless brands include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), which together count many millions of users.

Serverless pros

There are obvious upsides to using a serverless model, one being that it allows smaller organizations to get their services live without the need to invest in hardware.

Another positive is that because the service is provided on a pay-as-you-go basis, the user isn't charged for any bandwidth or resources they don't use.


"Serverless computing is stateless architecture for stateless applications," Erica Windisch, founder of serverless vendor IOpipe, told The Daily Swig. "I think there's security benefits to such an architecture, as it enforces immutability."

Because serverless environments are constantly being updated, this makes it difficult for malware or nefarious applications to stay dormant inside the infrastructure for too long.

Serverless cons

Employing serverless computing does, however, come with risks. For example, Windisch notes, it can sometimes hinder the opportunity to perform an in-depth infrastructure analysis.

"Serverless also creates some challenges around security observability, such as if a compromised container is destroyed every five minutes to eight hours, how do you do a post-mortem? There are no tools to freeze or save those environments for analysis," she said.

The serverless model also means the user relies on the vendor's security practices. If the server is insecure, Denial of Wallet isn't the only cyber-attack that administrators should be worrying about.

A victim will likely notice something is up when their bill is higher than expected. However, there are ways you can stop a Denial-of-Wallet attack before it becomes too costly.

In the first instance, Piper of Summit Route suggests setting up a billing alert. This will notify the user if they are exceeding a predefined spending limit.
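As a concrete illustration, such an alert can be created with the AWS CLI as a CloudWatch billing alarm; the threshold, alarm name and SNS topic ARN below are placeholders, and billing metrics are only published in the us-east-1 region once billing alerts are enabled in the account preferences:

# Alarm when estimated monthly charges exceed $100 and notify an SNS topic
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name monthly-spend-over-100-usd \
  --namespace "AWS/Billing" \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts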

Users should also employ limits to mitigate any runaway code, especially lines that can trigger an infinite loop scenario.
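On AWS, one simple guardrail of this kind is a reserved concurrency cap on each Lambda function, which puts a hard ceiling on how many copies can run at once; the function name and limit below are placeholders:

# Cap the function at 20 concurrent executions so runaway or attacker-driven invocations cannot fan out indefinitely
aws lambda put-function-concurrency \
  --function-name my-api-handler \
  --reserved-concurrent-executions 20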

"Many people have stories about infinite loops happening in AWS that caused resources to be created over and over again, or a Lambda to trigger that caused itself to trigger again," Piper said.

"This is a common enough problem that the CloudWatch Events rule documentation even has a warning about it."

He added: "Without these limits an attacker could try to spin up a million EC2s, but due to these limits, the attacker might only spin up a few dozen EC2s."

Serverless users should put limits in place to trigger billing alerts

The origin of the DoW attack can be traced back to 2008, Piper told The Daily Swig, when it was termed Economic Denial of Sustainability in a blog post by Rational Security.

Piper suggests that the term Denial of Wallet was first used in 2013, pointing to a Twitter user named @gepeto42.

There is no actual bulletproof protection against Denial-of-Wallet attacks. Instead, serverless users should put in place the above limits to trigger alerts should they become a target.

The OWASP Top 10 Serverless Threats (PDF) describes the risk of DoW:

"To protect against such attacks, AWS allows configuring limits for invocations or budget. However, if the attacker can achieve that limit, he can cause DoS to the account availability."

"There is no actual protection that is not resulting in DoS. The attack is not as straightforward in traditional architecture as in serverless. Therefore, the risk should be high."

Measures should also be put in place to secure credentials associated with a serverless account.

Piper said that if an attacker is able to make costly API calls to a victim's AWS account, they likely also have the ability to "delete all your files in S3, terminate all your instances, and cause other mayhem that has the potential to cause worse business impact."

He suggested mitigating against this scenario by implementing least privilege services, enforcing multi-factor authentication on all users, and implementing service control policies.



Folding@home infectious disease research with Spot Instances – idk.dev

This post was contributed by Jarman Hauser, Jessie Xie, and Kinnar Kumar Sen.

Folding@home (FAH) is a distributed computing project that uses computational modeling to simulate protein structure, stability, and shape (how it folds). These simulations help to advance drug discoveries and cures for diseases linked to protein dynamics within human cells. The FAH software crowdsources its distributed compute platform, allowing anyone to contribute by donating unused computational resources from personal computers, laptops, and cloud servers.

In this post, I walk through deploying EC2 Spot Instances optimized for the latest Folding@home client software. I describe how to be flexible across a combination of GPU-optimized Amazon EC2 Spot Instances configured in an EC2 Auto Scaling group. The Auto Scaling group handles launching and maintaining a desired capacity, and automatically requests resources to replace any that are interrupted or manually shut down.

Spot Instances are spare EC2 capacity available at up to a 90% discount compared to On-Demand Instance prices. The only difference between On-Demand Instances and Spot Instances is that Spot Instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back. This makes Spot Instances a great fit for stateless, fault-tolerant workloads like big data, containers, batch processing, AI/ML training, CI/CD and test/dev. For more information, see Amazon EC2 Spot Instances.

In addition to being flexible across instance types, another best practice for using Spot Instances effectively is to select the appropriate allocation strategy. Allocation strategies in EC2 Auto Scaling help you automatically provision capacity according to your workload requirements. We recommend using the capacity-optimized strategy to automatically provision instances from the most-available Spot Instance pools by looking at real-time capacity data. Because your Spot Instance capacity is sourced from pools with optimal capacity, this decreases the possibility that your Spot Instances are reclaimed. For more information about allocation strategies, see Spot Instances in the EC2 Auto Scaling user guide and configuring Spot capacity optimization in this user guide.
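For readers who want to see what this looks like outside the CloudFormation template, here is a minimal sketch of creating such an Auto Scaling group with the AWS CLI, requesting Spot capacity with the capacity-optimized allocation strategy across several GPU instance types; the group name, launch template, subnets and instance types are placeholders rather than the values used by the actual template:

# Auto Scaling group that draws 100% Spot capacity from the most-available GPU pools
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name fah-spot-asg \
  --min-size 2 --desired-capacity 2 --max-size 12 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --mixed-instances-policy '{
    "LaunchTemplate": {
      "LaunchTemplateSpecification": {"LaunchTemplateName": "fah-gpu-template", "Version": "$Latest"},
      "Overrides": [{"InstanceType": "g4dn.xlarge"}, {"InstanceType": "g3s.xlarge"}, {"InstanceType": "p3.2xlarge"}]
    },
    "InstancesDistribution": {
      "OnDemandPercentageAboveBaseCapacity": 0,
      "SpotAllocationStrategy": "capacity-optimized"
    }
  }'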

The deployment also uses Amazon CloudWatch instance metrics and logs for real-time monitoring of the protein folding progress.

To complete the setup, you must have an AWS account with permissions to the resources listed above. When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon EC2. If you don't have an AWS account, find more info about creating an account here.

The AWS CloudFormation (CFn) template includes customizable configuration parameters. Some of these settings, such as instance type, affect the cost of deployment. For cost estimates, see the pricing pages for each AWS service you are using. Prices are subject to change. You are responsible for the cost of the AWS services used. There is no additional cost for using the CFn template.

Note: There is no additional charge to use Deep Learning AMIs; you pay only for the AWS resources while they're running. Folding@home client software is free, open-source software that is distributed under the Folding@home EULA.

Tip: After you deploy the AWS CloudFormation template, we recommend that you enable AWS Cost Explorer. Cost Explorer is an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage; for example, you can break down costs to show hourly costs for your protein folding project.

The first thing you must do is download the template, then make a few edits to it.

Once downloaded, open the template file in your favorite text editor to make a few edits to the configuration before deploying.

In the User Information section, you have the option to create a unique user name, join or create a new team, or contribute anonymously. For this example, I leave the values set to default and contribute as an anonymous user on the default team. More details about teams and leaderboards can be found here, and details about PASSKEYs here.

Once the template is edited and saved to a location you can easily find later, the next section shows how to upload it in the AWS CloudFormation console.

Next, log into the AWS Management Console, choose the Region you want to run the solution in, then navigate to AWS CloudFormation to launch the template.

In the AWS CloudFormation console, click on Create stack. Upload the template we just configured and click on Next to specify stack details.

Enter a stack name and adjust the capacity parameters as needed. In this example, I set desiredCapacity and minSize to 2 to handle protein folding jobs assigned to the client, and maxSize to 12. Setting maxSize to 12 ensures you have capacity for larger jobs that get assigned. These parameters can be adjusted based on your desired capacity.
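If you prefer scripting to the console, the same deployment can be launched with the AWS CLI; this is a sketch in which the stack name and template file name are placeholders and the parameter keys mirror the capacity settings described above (the exact key names depend on the template you downloaded):

aws cloudformation create-stack \
  --stack-name folding-at-home-spot \
  --template-body file://fah-spot-template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=desiredCapacity,ParameterValue=2 \
               ParameterKey=minSize,ParameterValue=2 \
               ParameterKey=maxSize,ParameterValue=12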

If breaking out usage and cost data is required, you can optionally add additional configurations like tags, permissions, stack policies, rollback options, and more advanced options in the next stack configuration step. Click Next to review and then create the stack.

Under the Events tab, you can see the status of the AWS resources being created. When the status is CREATE_COMPLETE (approx. 35 minutes), the environment with Folding@home is installed and ready. Once the stack is created, the GPU instances will begin protein simulation.

The AWS CloudFormation template creates a log group, fahlog, that each of the instances sends log data to. This allows you to visualize the protein folding progress in near real time via the Amazon CloudWatch console. To see the log data, navigate to the Resources tab and click on the cloudWatchLogGroup link for fahlog. Alternatively, you can navigate to the Amazon CloudWatch console and choose fahlog under log groups. Note: Sometimes it takes a bit of time for Folding@home Work Units (WU) to be downloaded to the instances and to allocate all the available GPUs.

In the CloudWatch console, check out the Insights feature in the left navigation menu to see analytics for your protein folding logs. Select fahlog in the search box and run the default query that is provided for you in the query editor window to see your protein folding results.
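The same default query can also be run from a terminal; this sketch assumes the fahlog log group created by the stack and uses the Logs Insights default query over the last hour:

# Kick off a Logs Insights query against fahlog and fetch the results
QUERY_ID=$(aws logs start-query \
  --log-group-name fahlog \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | sort @timestamp desc | limit 20' \
  --output text --query queryId)

aws logs get-query-results --query-id "$QUERY_ID"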

Another thing you can do is create a dashboard in the CloudWatch console that automatically refreshes at the time intervals you set. Under Dashboards in the left navigation bar, I was able to quickly create a few widgets to visualize CPU utilization, network in/out, and protein folding completed steps. This is a nifty tool with which, given a little more time, you could configure more detailed metrics like cost per fold and GPU monitoring.

You can let this run as long as you want to contribute to this project. When you're ready to stop, AWS CloudFormation gives us the option to delete the stack and the resources created. On the AWS CloudFormation console, select the stack, and select Delete. When you delete a stack, you delete the stack and all of its resources.

In this post, I shared how to launch a cluster of EC2 GPU-optimized Spot Instances to aid in Folding@home's protein dynamics research that could lead to therapeutics for infectious diseases. I leveraged Spot best practices by being flexible with instance selections across multiple families, sizes, and Availability Zones, and by choosing the capacity-optimized allocation strategy to ensure our cluster scales optimally and securely. Now you are ready to donate compute capacity with Spot Instances to aid disease research efforts on Folding@home.

Folding@home is currently based at the Washington University School of Medicine in St. Louis, under the directorship of Dr. Greg Bowman. The project was started by the Pande Laboratory at Stanford University, under the direction of Dr. Vijay Pande, who led the project until 2019. Since 2019, Folding@home has been led by Dr. Greg Bowman of Washington University in St. Louis, a former student of Dr. Pande, in close collaboration with Dr. John Chodera of MSKCC and Vince Voelz of Temple University.

With heightened interest in the project, Folding@home has grown to a community of 2M+ users, bringing together the compute power of over 600K GPUs and 1.6M CPUs.

This outpouring of support has made Folding@home one of the world's fastest computing systems, achieving speeds of approximately 1.2 exaFLOPS, or 2.3 x86 exaFLOPS, by April 9, 2020, making it the world's first exaFLOP computing system. Folding@home's COVID-19 effort specifically focuses on better understanding how the viral protein's moving parts enable it to infect a human host, evade an immune response, and create new copies of the virus. The project is leveraging this insight to help design new therapeutic antibodies and small molecules that might prevent infection. They are engaged with a number of experimental collaborators to quickly iterate between computational design and experimental testing.


A China-based loan app exposed millions of Indians’ data in an unsecured server – The Next Web

China-based lending company Moneed's unprotected database has exposed the names and phone numbers of millions of Indians, putting them at risk of identity theft. Security researcher Anurag Sen found the database on an open elastic server that held more than 389 million phonebook records. Moneed has offices in Hangzhou, New Delhi, and Hong Kong.

Sen told TNW that the data is stored on a server provided by Hangzhou Alibaba Advertising Co., Ltd. in China. The discovery comes in the wake of anti-China sentiment among government authorities and citizens in India, who are wary of their powerful neighbor's operations in cyberspace. Recently, India banned 59 Chinese apps, including TikTok, for allegedly stealing and surreptitiously transmitting users' data in an unauthorized manner to servers located outside India.

Looking at the database entries, especially the names, the app seems to have uploaded the phonebooks of people who might've installed Moneed's apps. The company has two Android apps for securing loans, called Moneed and Momo, on the Play Store; both of them have more than a million downloads. Both of these apps ask for a ton of permissions, including contacts, phone, storage, and location.

Shockingly, I managed to find my own contact details in the database. However, there were three entries against the same phone number; it's likely that different users will have saved my number under different names for that contact.

The database contained data gathered between August 2019 and July 2020. Despite multiple emails to Moneed, we received no reply at the time of writing. We contacted the host of the server, and the Alibaba Security Response Center (ASRC) took the database offline for security.

Meanwhile, Moneed's loan service itself appears to be in violation of Google's app store policy. You can apply for a short-term loan with a tenure of 14 or 28 days. However, Google's developer policy states that the company doesn't allow apps that demand full repayment of loans in under 60 days. We've reached out to the company for an explanation, and we'll update the story when we hear back.

In the past few months, several reports have noted that Moneed and several other Chinese microloan apps have been harassing borrowers in India for repayment. One of the methods these companies use is reportedly to call the borrower's family and friends to ask for money. They also create a WhatsApp group with the borrower's family to ask for their whereabouts.

In this tense political climate, it's worrisome that the data of so many Indian citizens was captured and stored on a foreign server without explicit consent or disclosure. Recently, Cyble reported that more than 150,000 IDs of Indians were leaked on the dark web by a Mandarin-speaking actor.

Moreover, despite the large amount of data stored in the database, there were no security precautions, and this data could be used for illegal extortion of money or other malicious purposes. The company has a responsibility to keep customer data safe and respond to security threats in a timely manner, and it has clearly failed its users in this case.




Remote Workers Aren’t Always Served Best by the Cloud – ITPro Today

At first glance, the cloud seems more valuable today than ever, especially for remote workers. As companies large and small rethink the nature of work in the wake of the coronavirus pandemic, the cloud is being hailed as a critical asset for organizations that want to give remote employees the flexibility to access enterprise IT resources from any location. To an extent, the cloud does offer these benefits. However, it is not a silver bullet. Several drawbacks limit the cloud's ability to support the seamless work-from-anywhere scenarios that many employers are now prioritizing.

As cloud solution vendors have been eager to point out during the pandemic, the cloud offers some key advantages to companies that want to support remote workers.

The most obvious is that the cloud enables data and applications to be accessed from anywhere with an Internet connection. If your line-of-business apps are hosted on a SaaS platform, and your corporate file share can be accessed from the public internet, no one needs to be in the office to use these basic IT resources.

Solutions like cloud-based desktop-as-a-service offerings are attractive, too, as a means of making employees' workstations accessible from any location. So are hosted collaboration and productivity platforms, like Microsoft Teams, G Suite and Microsoft 365, which make it easy for employees to collaborate without being dependent on software that they can access only in a local office.

It's certainly true that the cloud can solve some of the pain points of supporting remote workers. But to suggest that building a remote workforce is as simple as migrating every IT resource to the cloud would be an overstatement.

There are some critical limitations that prevent the cloud from being a good solution for every type of remote-work need.

To be sure, most cloud-based resources that employees typically access don't require a high-performance network connection. But, in certain cases, a lack of bandwidth could disrupt productivity. For example, if you need to upload and download very large files from a company server, doing so could take quite a long time if the server is hosted in the cloud and must be accessed via the public internet. In contrast, a local server that is in the same office as a worker will generally deliver a much faster network experience.

Another example most of us are all too familiar with at this point is video conferencing. In theory, being able to use cloud-based conferencing platforms to hold virtual meetings regardless of where individual employees are located is great. In practice, the connections are often spotty, which is one reason why virtual meetings are so taxing.

This would be much less likely to pose an issue if all meeting participants were on the same local network or, quaint as it may now sound, located in the same room, holding a non-virtual meeting.

In a traditional office, employees don't need special tools to access corporate applications or data. They log into their workstations and get to work. Because everything they need is on their local network, they don't have to worry about running additional tools to get access.

When working remotely via the cloud, however, they need more software to be productive. Remote workers probably need a VPN client to access restricted resources. They may need an RDP or VNC tool, too, to log into remote workstations hosted in the cloud. Remote workers might require password managers to help keep track of all the passwords they must juggle for their various cloud-based apps.

In this respect, the software stack required to work via the cloud is larger. This is a challenge that can certainly be managed, but it increases the complexity that IT teams have to manage.

Hybrid cloud platforms are growing increasingly sophisticated as public cloud vendors vie to outdo each other's hybrid offerings. Unfortunately, hybrid architectures don't jibe well with remote workers.

When you use a hybrid solution like AWS Outposts or Azure Stack, you host some cloud resources in your own data center. The advantages of doing this include faster access (because you can connect via the local network, instead of the public internet) and fewer compliance issues (because data remains on your local infrastructure, instead of the public cloud).

But when employees work remotely, these hybrid cloud advantages disappear. Remote workers won't enjoy the speed benefits of a hybrid architecture if they have to connect to the on-premises portion of a hybrid cloud via the public internet. And if they download data from the hybrid environment to their local devices when working remotely, they undercut the compliance benefits that a hybrid cloud stands to provide by keeping data on-premises.

A final limitation of using the cloud for remote work is that not every application can be hosted in the cloud.

Most companies that have been around for a while have at least some legacy apps that were designed long before anyone was thinking about SaaS as a delivery model. Moving those apps to the cloud could require a major overhaul, which would demand more development resources than companies can spare.

And then there are apps that, for technical or compliance-related reasons, just won't work in the cloud at all, no matter how hard you try. You may have an app that consumes massive amounts of data and just can't perform adequately when that data has to be uploaded or downloaded over the public internet. Or you could have an app that requires ultra-low latency rates. Or you may be subject to compliance policies that make it impossible to move certain applications or data to the cloud.

If employees depend on apps like these, the cloud is not a complete solution for enabling remote work.

For a variety of reasons, simply moving applications or data to the cloud is not always a solution for streamlining remote-work needs. It helps in many situations, but effective remote-work solutions will require a mix of cloud-based resources and on-premises ones, with the latter filling in the gaps that the former cannot address. If you think surging remote-work needs will drive more companies to go all-in on the cloud, think again.


Our back office and cloud apps are not aligned. Middleware hasn’t solved this problem – but an API architecture can – Diginomica


Many organizations have not one but two IT departments - one runs core back-office systems such as SAP and Oracle, while the other runs cloud-based productivity and customer-facing apps such as Office365, Salesforce and so on.

Too often, the lack of alignment and friction between these two IT departments forms a major obstacle when it comes to overarching transformation and harmonization of application landscapes.

The back-office IT (also called the SAP practice) function is often perceived as merely "keeping the lights on," while their colleagues who look after topics such as big data, IoT and machine learning are seen as nurturing "the innovative cloud stuff." How can these two camps come closer together and agree on a common denominator of digital products to drive a successful digitalization strategy for the company?

There are many reasons for the existence of heterogenous IT landscapes. Business imperatives often dictated that projects move forward without waiting to achieve enterprise-wide alignment. Acquisitions and local initiatives by the various LOBs (line of business) brought in new platforms that gradually extended their reach into other parts of the organization.

Most companies today are operating a hybrid cloud architecture, which recognizes the need to adopt SaaS solutions while simultaneously maintaining their on-premise digital core of central Enterprise Resource Planning (ERP) and Transport Management (TM) solutions etc.

Fortunately, it is possible to combine these two worlds in a way that brings out the best in both. We, at Neptune Software, have seen cross-functional teams and a modern approach to technology integration resulting in successful digitalization campaigns among a number of our clients. The technology foundation of these success stories is an API-based approach to integration. API-based integration differs from classic middleware-based integration technologies such as Enterprise Service Bus (ESB) solutions like SAP NetWeaver Process Orchestration. Instead of complex connectors that depend on overstretched IT specialists to build or adjust each integration, the connections are published as APIs to an intermediary layer, where they can be plugged into new digital applications on demand.

This API-based integration layer acts as a standardized 'membrane' around the back-end systems, allowing data to flow freely across systems and into modern frontend applications such as Progressive Web Apps (PWAs), hybrid mobile apps, native mobile apps, or websites. These front-end applications are able to consume and merge data as well as workflows from multiple back-end systems.

Security and user access controls are managed by connecting into LDAP-based single sign-on (SSO) systems such as Microsoft Azure Active Directory. This provides authentication through the familiar user experience that is typically already in place for Office 365 and other applications.

Together, these three elements (the API-based integration layer, the modern front-end applications that consume it, and LDAP-based single sign-on) are the key to successfully providing integration across SaaS, cloud and back-end systems and securely unlocking all of the data and capabilities of your existing IT systems to new digital and mobile applications.
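As an illustration of how clients typically consume such an SSO-backed API layer, the sketch below obtains an OAuth 2.0 token from Azure Active Directory using the client-credentials flow and then calls a back-end OData endpoint exposed through the integration layer; the tenant, client, scope and endpoint values are placeholders, not Neptune-specific APIs:

# Request a token from Azure AD (client credentials flow; all values are placeholders)
TOKEN=$(curl -s -X POST \
  "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<app-client-id>" \
  -d "client_secret=<app-client-secret>" \
  -d "scope=api://<integration-layer-app-id>/.default" \
  | jq -r .access_token)

# Call a back-end OData service through the API layer, authenticated with the bearer token
curl -H "Authorization: Bearer $TOKEN" \
  'https://integration.example.com/odata/WorkOrders?$top=5'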

Here are some examples drawn from the Neptune customer base that show the benefits of introducing this API-based integration to combine the power of back-end systems with today's cloud-based development platforms and SaaS applications:

Tobacco industry - self-service access to ERP. A major player in the tobacco industry in the UK has been working with numerous system integrators to provide employees with native applications and PWAs to provide self-service access to ERP resources across the company. The goal is to provide modern, consumer-style apps that allow staff to self-serve HR-related employee functions and other functional processes across the ERP landscape, regardless of background or training.

Connection to the ERP back-ends was implemented through the OData protocol, but the company's IT architects had struggled to harmonize the logon functionality to provide a cohesive SSO experience to all user groups. Integrating Azure Active Directory with Neptune Software's DX Platform provided an authentication layer that supports both offline and server-side authentication to log on to multiple SAP back-ends with the matching SAP end-user. Users can now authenticate using their Office 365 logon to access the existing native apps, while the same capability will enable future digital products to access any desired back-end system within the constantly evolving IT landscape.

Automotive industry - integrating SCM silos. A global player within the automotive industry in the Nordics was challenged by integrating their global SCM architecture to gain central control and insights into isolated silos along the company's world-spanning supply chain. Managing and supporting more than 6,000 suppliers including a reverse supply chain for packaging materials, this monster of a process had been based on a scattered IT landscape, with over 50 major back-ends and thousands of proprietary and often bi-directional interfaces.

It took years for a team of enterprise IT architects to create a technology-driven strategy that finally broke up the technology silos and cracked open organizational silos that were stalling performance improvements along the supply chain. Starting to free up major systems by providing integrated applications and dashboards on top of Oracle's Transport Management solution, Microsoft Azure Data Lake Analytics, as well as multiple SAP S/4HANA back-ends, the team began to use the Neptune DX Platform to replace proprietary interfaces and gateways with a common REST standard and created a central control system, adding application by application with a common UX design and Active Directory-based single sign-on.

Finally, we ourselves at Neptune Software have successfully implemented an SAP S/4HANA solution in combination with our own platform to integrate Microsoft Azure AD as our identity provider via Office 365, and to connect multiple third-party services ranging from Salesforce CRM to banking APIs. We have experienced the ability to rapidly adapt to changing business needs and services at scale, as well as our staff's noticeably heightened enthusiasm about the digital tools they are using day in and day out.

Read more about our approach to SAP integration in our white paper.


Pure and Cohesity team up with FlashRecover data protection – Blocks and Files

Pure Storage is reselling Cohesity software with its FlashBlade storage array to provide a single flash-to-flash-to-cloud data protection and secondary data management system.

Mohit Aron, Cohesity CEO and founder, said in a statement: "We are thrilled to partner with Pure in bringing to market a solution that integrates exceptional all-flash capabilities and cutting-edge data protection offerings that together unleash new opportunities for customers."

Called Pure FlashRecover, the joint effort combines the FlashBlade array with a white box server that runs Cohesity's DataPlatform software. This is not an appliance in the sense of a dedicated and purpose-built product, but it can be used in an appliance-like manner. Pure FlashRecover is a jointly engineered system with disaggregated and independently scalable compute and storage resources. The FlashBlade array can perform functions beyond providing storage for the Cohesity software.

FlashRecover can function as a general data protection facility for Pure Storage and other suppliers' physical, virtual, and cloud-native environments, with faster-than-disk restore and throughput from the all-flash FlashBlade array. Most functionality of the hyperconverged, scale-out Cohesity DataPlatform is also available to customers. Features include tiering data off to a public cloud, ransomware protection, copy data management, and supplying data to analytics and test and dev.

Pure has become a Cohesity Technology Partner and the two companies have integrated their environments. Cohesity Helios management software auto-discovers FlashBlade systems and Cohesity uses FlashBlade snapshots.

Cohesity spreads the data across available space on FlashBlade to maximise restore performance and enhance efficiency. The software is optimised to provide performance even when the storage for the data is from disaggregated FlashBlades.

FlashRecover will be sold by Pures channel and supported by Pure. Cohesity and Pure are looking forward to further joint technology developments from this point.

Last month, Pure canned the FlashBlade-based ObjectEngine backup appliance. The company told us it was working with select data protection partners, "which we see as a more cohesive path to enhancing those solutions with native high performance and cloud-connected fast file and object storage to satisfy the needs in the market." Now we see that Cohesity replaces the ObjectEngine software and FlashRecover replaces the ObjectEngine appliance.

Pure FlashRecover, Powered by Cohesity, is being tested by joint customers today and will be generally available in the United States in the fourth quarter, and elsewhere at unspecified later dates. Proofs of concept are available now for select customers.
