
Cloud and the future of healthcare – IT World Canada

One silver lining of the COVID-19 pandemic is that we're seeing faster adoption of cloud-based solutions in sectors that were previously slow to adopt them, such as healthcare. With an urgent need to enable a remote workforce, provide virtual care and track hospital resources, health care providers are now increasingly relying on cloud-based workflows and applications.

They also have unique requirements for patient privacy and safety, along with data certification and classification, and often face budgetary constraints. Despite these challenges, the future of healthcare is patient-centric, digitally enabled and evidence-based, and cloud is well-positioned to support this paradigm shift.

During the pandemic, we've seen how patients can benefit from improved access to care through advances in telemedicine and chatbots powered by artificial intelligence (AI). With these types of cloud-based solutions, patients typically experience faster responses to health inquiries and reduced wait times, as well as increased autonomy through access to their own health data and interactive scheduling.

Care providers can also benefit through solutions that deliver improved demand forecasts and automated triage. Cloud provides a foundation for evidence-based, insight-driven care, such as AI-assisted diagnostics and clinical decision support at the point of care. It also allows for safe data sharing and clinical collaboration between care providers to promote seamless patient management, which in turn lays the foundation for a connected, data-driven healthcare ecosystem.

Cloud allows any organization to quickly provision and manage scalable computing services, but it's particularly beneficial for the health-care sector. Consider COVID-19 vaccination management: scalable, flexible infrastructure that can handle a rapid increase in website traffic is critical for online vaccination registrations. With cloud, this can be achieved with the click of a button, without the need to expand local servers and hosting capabilities.

When registering for vaccines, patients are often required to enter personal data, including their health information. Major cloud service providers have the capabilities to provide best-in-class security and compliance in their cloud environments, especially when compared to the on-premises capabilities of healthcare organizations.

The ability to easily and securely transfer healthcare data across providers and platforms allows for the efficient collection and aggregation of vaccination data from various sources. Cloud-based intelligence solutions can analyze and visualize this data in real time, drawing insights relevant to population health management and enabling an insight-driven approach in forecasting future capacity and demand.

When adopting cloud solutions, management may be concerned about perceived risks and challenges, such as inadequate security and compliance in cloud environments. After all, patient data is sensitive and the consequences of a data breach are significant, as reflected by stringent health-care regulations such as the Personal Information Protection and Electronic Documents Act (PIPEDA) and Bill C-11 in Canada, the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH) in the U.S., and Europe's General Data Protection Regulation (GDPR).

IT professionals will have to demonstrate to management that it's often more advantageous to opt for cloud-based solutions versus an on-premises solution, since large cloud service providers have more sophisticated capabilities to ensure up-to-date and rigorous security and compliance measures on their cloud services compared to the local capabilities of healthcare organizations.

Cloud is complex: There are public, private, hybrid and multi-cloud environments. Choosing the right mix of cloud is often a challenge for healthcare organizations with small IT teams and limited access to the skills, training and subject matter expertise to develop, provision and manage cloud-based services and applications. This is where third-party consultants and cloud service organizations can help, by providing access to skilled resources with deep implementation experience.

IT professionals should conduct proper due diligence for any proposed cloud solution before implementation, such as ensuring up-to-date compliance with industry regulations. They should also consider third-party supplier risks from contracted partners of cloud service providers who may have less sophisticated security and compliance measures.

As part of this process, they should map out contingency, business continuity and disaster recovery plans, as well as strengthen their cybersecurity and IT risk management capabilities. This can be done through regular risk assessments and by planning for worst-case scenarios to mitigate security breaches or non-compliance issues.

Healthcare organizations can further prepare for cloud adoption by devising a cloud-conscious technology roadmap and architecture to ensure their cloud adoption is in line with the organization's overall IT strategy. This roadmap should include a plan for current state and gap analysis, multi-cloud governance and management processes, change management processes and the evolution of resource capabilities and training.

Moving forward, Canadian healthcare organizations will need to consider how they can sustain and scale digital services post-pandemic, while dealing with system-wide financial strain. The idea is to create a more patient-centred, connected health system that benefits patients and care providers. To successfully adapt to this digital world, organizations should start now to prepare for a cloud-enabled future of care.

Read this article:
Cloud and the future of healthcare - IT World Canada

Read More..

Veeam survey: Big cloud impact on backup and disaster recovery – ComputerWeekly.com

The rise of the cloud has had a massive impact on data protection, making backup processes almost unrecognisable from just a decade ago. The cloud is increasingly popular as a site for production workloads and their backups, while physical and virtual servers on-site decline.

Meanwhile, disaster recovery (DR) using the cloud is in widespread use, despite some challenges. And native cloud-based backup of software-as-a-service (SaaS) platforms such as Microsoft Office 365 is largely untrusted.

Those are some of the findings of the 2021 Veeam cloud protection trends report, which questioned 1,551 IT decision-makers in 14 countries about data protection and the cloud.

The most general finding of the survey is that use of the cloud as a location for data protection is increasing hugely, especially compared with before the pandemic.

According to respondents' estimates, use of physical servers in the customer datacentre will decline from 38% of the organisation's data in 2020 (pre-Covid) to 24% in 2023.

Meanwhile, use of virtual machines in the datacentre will decline from 30% in 2020 to 24% in 2023. But use of virtual machines in the cloud is set to increase from 32% in 2020 to 52% in 2023.

In keeping with that finding, the cloud is now a mainstream location for high-priority and normal production workloads for around half of respondents (47% and 55% respectively). One-fifth (21%) use the cloud as a secondary site for DR and 36% use it for development.

Despite talk of cloud repatriation (bringing workloads back from the cloud to the customer datacentre), this mostly happens to workloads that were developed in the cloud but intended for use on-prem (58% of those questioned had done this).

Only 7% had had second thoughts and repatriated cloud workloads back in-house. About one-quarter (23%) had brought workloads back on-site after failing over to the cloud during a disaster.

Data protection strategy in the cloud is increasingly not handled by the data protection team in the IT department. Only about 33% of those questioned said this was how they do things, with central IT, the cloud decision-making team and application owners more likely to be involved.

Use of the cloud as a DR and secondary data location is well established, with 40% reporting its use for these purposes. Only one-fifth (19%) said they do not use any cloud services as part of their DR strategy.

For the largest group (40%), data is mountable in the cloud but run from the customer location. For 25% of respondents, data has to be pulled back from the cloud first. About one-eighth (12%) are fully cloud-based in their ability to spin up servers and start work again.

Despite DR being a good fit for cloud deployment, there are challenges. Hosting restored servers that were in one location and bringing them back up elsewhere can be fraught with problems, including how to reconnect networks while ensuring they are secure. If there is a mix of cloud and on-prem, the difficulties can be multiplied.

Key challenges in cloud DR identified by those questioned included network configuration (54%), connecting users in the office (47%), securing the remote site (43%) and connecting home workers (42%).

For those not using the cloud for DR, key concerns are security (20%), already using a third-party DR location (18%), cloud infrastructure being too expensive (14%), existing use of multiple datacentres for data protection (14%) and lack of manageability in cloud DR (12%).

The Veeam survey also asked specifically about Office 365 and found that about one-third (37%) of respondents use backup other than that provided by native features, so-called cloud-to-cloud backup.

Key reasons given were to protect against accidental deletion of data (54%), cyber attack (52%) and internal threats (45%), to provide better restore functionality than in-built capabilities (45%), and to meet compliance requirements (36%).

Finally, when it came to protecting data used in containerised applications, the largest number of respondents (37%) said stateful data was protected separately and backed up in that location, possibly indicating that it is held in dedicated local or shared storage, such as an array.

Meanwhile, 19% said their containerised applications data did not need to be backed up, and 28% said their container architecture is natively durable.

Only 7% use a third-party backup tool to protect containers' stateful data, while 7% do not back up container data and are looking for a solution.

See the original post:
Veeam survey: Big cloud impact on backup and disaster recovery - ComputerWeekly.com

Read More..

Get to know 8 core cloud team roles and responsibilities – TechTarget

Cloud adoption can be a stressful and risky decision. It's the choice to step away from the total ownership and control of the local IT environment and embrace an uncertain partnership with third-party cloud and SaaS providers. While the cloud delivers an astonishing array of resources, it requires skill to perfect.

A key element in cloud success involves finding people with the right skills and expertise. Let's take a closer look at a modern cloud team structure, consider some of the most important roles, and review the tasks and responsibilities needed for cloud computing success.

There is no single universal team structure -- no single set of cloud team skills or tasks. In fact, a busy enterprise can support numerous cloud teams. The goals, however, will be similar from organization to organization. Specifically, cloud teams will be asked to:

The skills, knowledge and actions needed to complete each of these project examples vary widely. Because of this, some teams will only need broad expertise, while others require a tighter and more efficient focus.

Consider the creation of a new cloud-centric application. This may require cloud-savvy software developers, as well as cloud architects or engineers to assemble the appropriate infrastructure for that application. Setting the standards for configuring and securing cloud resources may demand greater participation from security-minded cloud engineers, along with business leaders with detailed compliance insights. The trick is to match the skills and mindsets of cloud team members with the specific needs of the project.

While teams are typically tailored to meet a project's specific technical and business needs, there are eight key cloud team roles and responsibilities commonly found in a cloud team structure.

Business leaders are typically the project stakeholders or executive sponsors who manage the budget for a cloud project and anticipate the tangible benefits from the project's outcome. They serve as liaisons between the cloud team and upper management. Additionally, they establish the cloud project's goals, gather metrics and evaluate success.

One business leader, such as a CTO or CIO, can be responsible for many, or even all, of an organization's cloud projects. In other cases, department or division heads may be involved with cloud initiatives, decision-making, business policy development favoring the cloud and training.

Business leaders can handle project management, but they may not possess the skills and IT background needed to organize and manage the technical aspects of a cloud project. To fill this gap, an organization often turns to a project manager. The project manager in a cloud team structure serves as the bridge between the project's stakeholders and the technical team.

Project managers should be outstanding communicators and motivators. They understand both the business and technical implications of the cloud project and are often involved with staffing, vendor selection, scheduling and budgeting. They use established key performance indicators to measure costs, availability, productivity and other actionable aspects of the cloud project. Project managers are also excellent troubleshooters, able to recognize and resolve problems before they cause delays or blow the budget.


The cloud architect is a senior IT member with solid knowledge and expertise of cloud applications, resources, services and operations. Because they have extensive hands-on experience with specific cloud environments, such as AWS, Azure and Google, they will understand the subtle nuances within each provider's services.

Cloud architects often help to design applications so apps function effectively in the cloud. They can also be involved with the creation of an efficient, reliable cloud infrastructure that enables applications to achieve high availability. The emphasis on design requires architects to understand cloud technologies in detail and remain current with cloud developments.

A cloud engineer is primarily responsible for cloud implementation, monitoring and maintenance. They set up and operate the cloud infrastructure designed by the architects. This requires engineers to possess detailed knowledge of a cloud's operation and be able to set up and configure resources, including servers, storage, networks and an array of cloud services. This may involve a significant amount of automation.

A project could include multiple engineers to focus on different areas of cloud operations, such as networks, compute, databases, security and so on. Once the cloud infrastructure is set up, engineers will provide the first line of support and maintenance. For example, if metrics report faltering performance of a cloud application, it's the engineers who get the call to investigate. Engineers also frequently handle project documentation and reporting.

Cloud software developers are expert programmers, testers and communicators -- often working in CI/CD environments. Most cloud projects focus on three goals:

All of these use cases involve a team of professional cloud software developers responsible for designing, coding, testing, tuning and scaling applications intended for cloud deployment.

Developers who specialize in cloud projects understand specific cloud resources, services, architectures and service-level agreements in order to create scalable and extensible software products. A cloud project may involve multiple software development teams, each focusing on a particular aspect of the project -- be it the user interface, network code or back-end integration.

While cloud providers are responsible for the security of the cloud, cloud users are responsible for security in the cloud. This is the notion of shared responsibility popularized by AWS.

A cloud security specialist sometimes oversees the architected infrastructure and software under development and ensures cloud accounts, resources, services and applications meet security standards. Security specialists also review activity logs, look for vulnerabilities, drive incident post-mortems and deliver recommendations for security improvements.

Policies and processes guide the access and use of business data, and they protect that data from misuse, loss or theft. Cloud providers are working to accommodate major compliance standards, including HIPAA, PCI DSS and GDPR. Compliance specialists understand and monitor cloud compliance certifications and confer with legal staff. They also create, implement, review and update processes to meet evolving requirements.

In some organizations, an existing corporate compliance officer, the project's business leader or security specialists may take responsibility for compliance. Because security and compliance are so tightly aligned, compliance specialists work closely with the security team.

While serious problems or disruptions are typically directed to engineers and architects, systems and performance analysts gather metrics and work to ensure workload capacity and performance remain within acceptable parameters. They may watch help desk tickets and categorize incidents to recommend additional updates or improvements.

The rest is here:
Get to know 8 core cloud team roles and responsibilities - TechTarget

Read More..

Google’s newest cloud region taken out by ‘transient voltage’ that rebooted network kit – The Register

On July 25th, Google Cloud launched a new region with all sorts of fanfare about how the new facility australia-southeast2 in Melbourne would accelerate the nation's digital transformation and make the world a better place in myriad ways.

And on August 24th, the region went down quite hard. Late in the afternoon, local time, users of the region lost the ability to create new VMs in Google Compute Engine. Load balancers became unavailable, as did cloud storage. In all, 13 services experienced issues.

Things improved an hour or so later, with some services resuming, but the number of services impacted blew out to 17.

That list grew by one by the time all services were restored, and Google's final analysis of the incident named 23 impacted services.

That analysis stated that while the underlying impact of the incident lasted 40 minutes, services remained hard to use for a couple of hours afterwards.

Google says the core of the incident was a failure of "Public IP traffic connectivity" and its preliminary assessment of the cause was "transient voltage at the feeder to the network equipment, causing the equipment to reboot."

"Transient voltage" is a phenomenon that sees enormous but very short spikes of energy, sometimes because of events like lightning strikes.

Data centres are built to survive them, or at least they're supposed to be. Yet within a month of opening its virtual doors, australia-southeast2 succumbed to one.

Google hasn't said if the networking equipment that rebooted belonged to it, or a supplier. Either way, it's another lesson that clouds are far from infallible.

Visit link:
Google's newest cloud region taken out by 'transient voltage' that rebooted network kit - The Register

Read More..

SolarWinds and the Holiday Bear Campaign: A Case Study for the Classroom – Lawfare

Author's Note: Have you been looking for a detailed-but-accessible case study of the Russian cyberespionage campaign that targeted SolarWinds (among others)? The following piece is adapted from my newly-released eCasebook Cybersecurity Law, Policy, and Institutions (v.3.1), which is available free and in full (270+ pages) in pdf format here. My aim is for this excerpt to be useful especially for teachers and students who want an account that takes the technical aspects quite seriously but is written in a way that non-technical readers can digest.

Southwest Parkway is a wide and winding road that leads away from Austin towards the Texas Hill Country. Along its length are neighborhoods, schools, and long stretches of open landscape. It is not where one might expect to find the epicenter of a major cybersecurity episode. But Southwest Parkway is also where one can find the unassuming headquarters of SolarWinds, a name that burst into the headlines in December 2020.

SolarWinds specializes in network-management tools, that is, software that large enterprises use to monitor and control conditions throughout their information technology environment. Its products are in widespread use around the world, including by a wide array of prominent private sector entities and government agencies. Among its most successful products is a network-monitoring system called Orion. Orion is an "on premises" platform, meaning that it does not reside in the cloud (that is, on remote servers controlled by SolarWinds itself). Rather, customers install Orion as a software package on their own networks and run it from there. And this has consequences for the process by which Orion periodically is updated. Like any software, Orion periodically requires updates, both for security purposes and to improve performance. Thus, Orion customers periodically receive software updates from SolarWinds and routinely install them, much as all of us periodically accept vendor-provided updates for the operating system on our phones. In both contexts, the provider digitally signs the update in a way that can be verified technically, ensuring that the update really is coming from the provider. Trusting that the provider took all necessary safety precautions, and often lacking the means to conduct an independent security check in any event, most of us accept these verified updates as a matter of course. This is true for us as individuals with our phones, and it is true to some extent for many a large organization, including those using Orion.
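The signature check described above can be illustrated with a short, generic sketch. This is a minimal example of verifying a detached RSA signature with the Python cryptography library, under the assumption that the vendor publishes its signing public key; it is not SolarWinds' actual Authenticode-based signing process, and the file paths are hypothetical.

```python
# Minimal sketch: verify that an update file carries a valid detached signature
# from the vendor's published signing key before installing it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def update_signature_is_valid(update_path: str, sig_path: str, pubkey_path: str) -> bool:
    with open(pubkey_path, "rb") as f:
        vendor_key = serialization.load_pem_public_key(f.read())
    with open(update_path, "rb") as f:
        update_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # Raises InvalidSignature if the bytes were not signed by the vendor's key.
        vendor_key.verify(signature, update_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


# Hypothetical paths for illustration only.
if update_signature_is_valid("orion-update.msi", "orion-update.sig", "vendor-signing-key.pem"):
    print("Signature checks out -- but that only proves who shipped it, not that it is safe.")
```

As the episode shows, a valid signature only proves the update came through the vendor's pipeline; if that pipeline itself is compromised, the malicious code gets signed along with everything else.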

Therein lay an extraordinary opportunity for espionage. If a would-be spy could trojan an Orion update, that is, if one could find a way to embed malicious code somewhere within an otherwise legitimate update, then customers by the thousands would open their virtual gates and let that code into their networks. And given the particular function of Orion, spanning across a user's IT infrastructure, the resulting backdoors, if employed discreetly enough, might then pave the way for deployment of further malware directly into those now-compromised networks. The end result could be an intelligence bonanza.

The opportunity would have been tempting for any foreign intelligence service engaged in collection against the U.S. government. And at least one such service did spot it: Russia's Foreign Intelligence Service, better known in English as the SVR (Sluzhba Vneshney Razvedki).

SVR has a well-deserved reputation for its ability to conduct espionage through cyber means. Hackers associated with SVR sometimes are referred to as Advanced Persistent Threat 29 (APT29), under the anodyne labeling system frequently used in the information security industry as a way to track government hackers without having to expressly attribute particular campaigns or entities to the actual government involved. Others have used the label Cozy Bear, following the more-entertaining naming system popularized by Dmitri Alperovitch and the security firm CrowdStrike. With CrowdStrike's nomenclature, groups believed to be linked to the Russian government are named some variation of "Bear." A group thought to be associated with Russia's military intelligence agency (GRU), for example, is known in this system as Fancy Bear. And so, when a possibly new group of hackers linked to Russia emerges, a new name may follow. And in this case, when the SolarWinds story began to break in December 2020, the initial framing offered by Dmitri Alperovitch was the seasonally appropriate Holiday Bear.

Since that time, attribution has focused firmly and reliably on SVR, but I will still refer periodically to Holiday Bear. This will remind us that analysts wrestling with attribution amidst unfolding attacks often are drawing on forensic and contextual clues that may be specific to particular groups within larger organizations.

What follows is a detailed account of the complex sequence of operations SVR conducted as part of the Holiday Bear campaign. As we shall see, exploiting SolarWinds was a central part of the campaign, but there is far more to the story than that (indeed, the intense media focus on SolarWinds has had the unfortunate effect of deflecting attention from the shortcomings of other companies and government agencies).

Step one: accessing the SolarWinds build environment

It is one thing to recognize that SolarWinds customers might not detect a trojaned Orion update, but quite another to compromise the update system in the first place. The task SVR first faced, accordingly, was to sort out how it could penetrate without detection the build environment (aka development environment) used by SolarWinds engineers to draft and tinker with Orion's code. Then SVR would need to find a way to inject malicious code into an Orion build without detection. These were tall orders.

SVR managed both tasks, but we do not (yet) know precisely how. There are plenty of plausible explanations, however. Perhaps it spearphished an employee's credentials (that is, tailoring a fake message for that particular employee, tricking them into opening a malware-laced file, and thereby gaining access to the person's legitimate login credentials). Perhaps it simply engaged in password spraying until it hit upon something that worked (famously, a security researcher in 2019 had discovered the password for an unrelated SolarWinds server stored in a GitHub repository; the password was solarwinds123). Perhaps SVR hacked its way into the build environment, taking advantage of vulnerabilities or configuration errors in the software used by SolarWinds engineers to perform their development work. All of these are common routes to initial exploitation, and any could have been the culprit here. At any rate, SVR managed it. No later than early September 2019 (and possibly as early as January 2019), SVR established access to the build environment.

Step two: injecting malware into an Orion update

SVR's next step was to test-drive its access by inserting some innocuous code into the build environment, to see if this could be done without detection. In fall 2019, SVR dipped its toes into the water, inserting a modest batch of innocuous code. It worked; the addition was not detected. Exhibiting remarkable patience, SVR continued with similar experiments for months before at last taking advantage of this access to inject actual malware into an Orion build. It took that step in February 2020.

To accomplish this, SVR used malware now known to us as SUNSPOT. SUNSPOT was thoughtfully designed for its task. Once deployed into the build environment, it determined whether a particular development tool (Microsoft Visual Studio) was in use by the SolarWinds engineers at that time. If so, SUNSPOT then checked to see if that tool was being used for an Orion build in particular. If so, SUNSPOT monitored to determine precisely when the developers used that tool to access a particular part of the Orion source code, and at that moment SUNSPOT injected a file with the plausible-sounding name SolarWinds.Orion.Core.BusinessLayer.dll into the Orion build. That .dll file was, in fact, a custom malware package known to us now as SUNBURST.

It might help at this point to draw out the analogy to the Trojan Horse. If the eventual distribution of a corrupted Orion update was the cyber equivalent of that giant wooden horse being brought within the walls of the city of Troy, then SUNBURST was the equivalent of the small squad of Greek warriors hiding within the horse, prepared to sneak out and quietly open a side gate in those walls and thus allow their fellows to pour in undetected. From that perspective, SVR's success in loading SUNBURST into the Orion build was equivalent to loading that squad of Greek warriors into the horse prior to presenting the horse at the city's gates; it remained to be seen if the Trojans would take the horse within their walls or if the squad hidden within could manage to open a gate from the inside without getting caught.

In the event, it unfolded exactly as SVR hoped. SolarWinds did not detect the compromise of its latest Orion update, and hence proceeded to digitally sign it and roll it out to customers through the existing, trusted remote-update mechanism. That process began in March 2020 and continued for a few months. The horse was now within the walls for a vast array of customers. The spread might have gone further, in fact, but at that point SVR apparently concluded that it had achieved sufficient distribution. Rather than risk someone at SolarWinds detecting the compromise of its build environment and unspooling the entire campaign, SVR at that point did what it could: it deleted its access to the build environment, covering its tracks as well as it could.

At this point, some 18,000 customers had downloaded the trojaned Orion update. Among them were several U.S. government agencies, including the Departments of Treasury, State, Commerce, Energy, and Homeland Security.

Step three: deciding where to take advantage of SUNBURST

In theory, SVR could have blindly exploited all infected systems to the maximum extent possible. But doing so would have increased the opportunities for detection, with the potential to expose and thus put an end to the entire campaign. Rather than run such risks, SVR instead chose to proceed slowly and narrowly, with an emphasis on remaining undetected while carefully identifying which of the infected customers best fit their specific collection priorities.

In every instance, SUNBURST took no action at all for approximately two weeks once installed on the customer's premises. This put some distance between its eventual actions and the update process. Then, when it did finally come alive, its first action was to perform a complex series of checks designed to determine whether various security products were in operation on the system. Where such products were detected, SUNBURST disabled them if possible, but otherwise simply shut itself down without taking further action. Only if and when it passed these safety checks would SUNBURST go on to its next step: communicating with an external command-and-control (C2) server. In Trojan horse terms, the warriors in the horse had waited two full weeks, and after making sure no one was watching they at last moved to open up a side gate.

To minimize the risk that its communications with a C2 server would draw attention, SVR attempted to disguise this malicious traffic. This was made easier by the fact that on-premises instances of Orion routinely had to communicate with SolarWinds as part of what SolarWinds calls the Orion Improvement Program (much as an iPhone user might click yes when asked if it is ok for their phone to share information with Apple about how they are using Siri, in order to help Apple improve the product or better tailor it to their usage patterns). SVR accordingly designed SUNBURST's communications with C2 servers so as to appear, on casual inspection, to be part of that legitimate traffic flow. The ruse appears to have been highly effective. Among other things, this illustrates that victims typically were not restricting Orion's external communications to specifically pre-approved (allow-listed) domains, or at least flagging for attention any communications to a non-pre-approved domain.
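As a concrete illustration of the allow-listing point above, here is a minimal, hypothetical sketch of flagging egress to non-approved domains in proxy or DNS logs. The log format and the allow-list contents are assumptions made for the example, not details drawn from any victim's environment.

```python
# Minimal egress-log check: flag any destination that is not on a pre-approved list.
ALLOWED_DOMAINS = {"solarwinds.com", "update.vendor.example"}  # assumed allow-list


def flag_unexpected_destinations(log_lines):
    """Assumes each line looks like: '<timestamp> <source-host> <destination-fqdn>'."""
    alerts = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        timestamp, host, dest = parts[0], parts[1], parts[2]
        # Crude registrable-domain guess: keep the last two labels of the FQDN.
        domain = ".".join(dest.lower().rstrip(".").split(".")[-2:])
        if domain not in ALLOWED_DOMAINS:
            alerts.append((timestamp, host, dest))
    return alerts


sample = ["2020-06-01T12:00:00Z orion-srv01 api.solarwinds.com",
          "2020-06-01T12:05:00Z orion-srv01 backdoor-c2.example.net"]
print(flag_unexpected_destinations(sample))  # flags only the second, non-approved destination
```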

Critically, the initial communication did not automatically result in SVR dispatching additional malware. The first task, instead, was to enable SVR to decide whether to exploit that particular victim or instead to cover its tracks by uninstalling SUNBURST from that location. Towards that end, SUNBURST dispatched location information that SVR in turn used to make a judgment about the identity of the infected system. In most cases, it appears, the system in question proved not to be of interest, and SVR frequently uninstalled SUNBURST. But not in all cases. SVR had hit the jackpot in plenty of instances, and now it was time to move on to the active-exploitation stage.

Step four: injecting the tools needed to act effectively within targeted systems

Notably, SUNBURST did not itself contain the tools needed to engage in lateral movement within the compromised networks, to conduct data exfiltration, and to engage in other aspects of active exploitation. It was a slim design, reflecting the premium SVR placed on detection avoidance at the initial stage. SUNBURST contained no more than was necessary to reach the point that it could open a backdoor to a C2 server. Put simply: in order to actively exploit an infected system, SVR needed to use SUNBURST to inject additional malware.

But how to do so without that fresh injection of malware being detected at the victim's perimeter by their network's antivirus capability? This is a classic problem that hackers have long faced. Of course, one might try to overcome it by developing sophisticated, never-seen-elsewhere custom malware that might not be detected by security software that scans for known indicators of compromise. That's not a realistic option for many attackers, and even where it is, it remains tempting to instead make use of retail malware that already exists and has a long track record of reliability and effectiveness. If such commonplace tools can be injected without detection, even sophisticated entities like SVR will make use of them.

These considerations over time have led to the development of a category of malware tools known as droppers and loaders. These are programs that hide within themselves some otherwise-detectable malware payload. The idea with a dropper is that the attacker might be able to install the dropper on a targeted system without detection, at which point the dropper will unpack and install the core malware payload. Loaders are similar, but rather than containing the malware payload within themselves all along, they provide a (more secure) means to download the payload surreptitiously from an external server. Both have the effect, at any rate, of reducing detection risk at the stage when a malware payload must transit a system's perimeter, thus making it more attractive to rely on retail malware.

That's the route SVR took with the SUNBURST-infected systems that it targeted for active exploitation. Specifically, SVR developed what is now called TEARDROP. TEARDROP was a novel dropper that SVR could load into compromised systems via SUNBURST, with little risk of detection at the victim's network perimeter. SVR also deployed a novel loader, RAINDROP, for the same purpose.

In this way, SVR successfully injected the toolkit it needed for active exploitation of its priority targets, despite the fact that the toolkit it chose to use was, very much, a well-known one: Cobalt Strike.

Cobalt Strike is a commercial product sold by the Minnesota company HelpSystems as a tool for use in attack simulations and penetration testing. That is to say, it is meant to be a pro-security product, emulating what a real attacker might do. The Cobalt Strike BEACON capability, for example, provides a range of capabilities comparable to what a sophisticated, genuine attacker might employ. All of which is quite useful from the defense-improvement perspective, when actually used for testing purposes. But real attackers also find these tools useful. As the firm Intel471 recently wrote, criminals love Cobalt Strike, and it has become a very common second-stage payload for many malware campaigns.

SVR apparently loves it too, for rather than using TEARDROP and RAINDROP to inject bespoke (SVR-made) tools at this stage in the operation (as they had done at every prior stage), they instead went with what amounted to a somewhat customized version of Cobalt Strike BEACON.

Some have suggested that this was a significant mistake on SVR's part, since there are techniques available to detect the presence of BEACON on a network. Yet it does not appear that any of the targeted entities actually detected BEACON in use before otherwise learning of the SolarWinds campaign. SVR did take steps to hide what was occurring, after all. To execute a file, for example, it would temporarily replace a legitimate file with a malicious one, execute that file (at times doing so by temporarily modifying a legitimate scheduled task so that it would issue the command to execute the file), and then restore the original. And, in any event, SVR may have had the notion that using such common tools might actually make it more difficult for investigators to attribute the operation to it, and by extension make it less likely that the larger campaign would be undone by exposure of any one instance (relatedly, SVR ensured that each deployed instance of BEACON varied from the others in terms of file names and other otherwise-matchable details, thus reducing the chance that detection by one victim would result in the sharing of indicators-of-compromise that could be used effectively by other victims).

Step five: swimming upstream to the cloud

At this point, SVR was well-established within victims' own local (on-premises) servers and systems. And if that was where to find the richest intelligence, the table would have been fully set for espionage success. But for a growing number of organizations, including government entities, the on-prem environment is not where one will find most of the information an intelligence collector would value. Increasingly, organizations rely on cloud services for email and documents. This is true, for example, for organizations that rely on the Microsoft 365 (M365) product suite, which encompasses, among other things, the Microsoft Office software suite (Outlook for email, Word for documents, PowerPoint for slides, and Excel for spreadsheets), or that use Microsoft's Azure cloud services (just as the same would be true for an organization that instead used the cloud services of other providers, such as Amazon's AWS cloud services).

Not surprisingly, this was true for many of the entities SVR had compromised via Orion, especially with respect to using M365. In those cases, accordingly, SVR had more work to do even after getting BEACON into the on-prem environment. To get the intelligence it actually sought, to realize the purpose of the whole effort, it next needed to swim upstream from its on-premises foothold into the M365 cloud environment.

This brings us to an important and underappreciated aspect of the SolarWinds story: it is just as much a Microsoft story. The SolarWinds framing is understandable, of course, given the breadth of the compromises that resulted from the breach of that one company's own security. But from SVR's perspective, what ultimately mattered was accessing the M365 cloud environment. Orion was one (very attractive and scalable) pathway to get to the doorstep, but it was not the only such pathway. Indeed, in a security advisory released on January 8, 2021, CISA revealed that it was investigating instances in which [SVR] may have obtained initial access not via Orion but, rather, by "Password Guessing, Password Spraying, and/or exploiting inappropriately secured administrative or service credentials" instead of utilizing the compromised SolarWinds Orion products. Soon thereafter, the firm Malwarebytes revealed publicly that it too had been recently targeted by the same threat actor, and that it could confirm the existence of another intrusion vector that works by abusing applications [other than Orion] with privileged access to Microsoft Office 365 and Azure environments.

How did SVR make the leap to the cloud from within customer systems? It appears it used multiple pathways, one of which is known as the Golden SAML approach. This method overcomes the critical identity-authentication process that normally precludes unauthorized users from accessing cloud-based accounts. More specifically, it is a method that works for systems that use Active Directory Federation Services, or ADFS, to connect users with cloud-based services like M365. The basic idea with ADFS is that when a user goes to log in to the cloud service, that service directs the request to a particular server (the ADFS server) to validate the user's identity. If the ADFS server approves the credentials, it issues a token to the cloud service that confirms the user's legitimacy and enables access. For obvious reasons, then, the security of the ADFS server itself is critical. In particular, the private encryption key and signing certificate for that server are critical. Which brings us to the heart of the Golden SAML approach: if the attacker gains access to the private key, this in turn leads to the ability to issue tokens, and that then opens the door to accessing the accounts associated with that ADFS server.
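To make the trust relationship concrete, the sketch below uses plain RSA signatures over a JSON payload rather than real SAML XML signing, which is a deliberate simplification. It shows why possession of the federation server's private signing key is decisive: the relying party only checks that the token verifies against the trusted key, so anyone holding that key can mint tokens that pass.

```python
# Simplified illustration (not actual SAML): a relying party accepts any assertion
# whose signature verifies against the federation server's trusted signing key.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the ADFS signing key pair; the public half is what the cloud service trusts.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
trusted_public_key = signing_key.public_key()


def issue_token(private_key, claims):
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
    return payload, signature


def service_accepts(payload, signature):
    try:
        trusted_public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


# Any holder of signing_key -- the legitimate ADFS server or someone who has stolen the
# private key -- can assert arbitrary claims that the service will accept.
payload, sig = issue_token(signing_key, {"user": "admin@example.org", "groups": ["GlobalAdmin"]})
print(service_accepts(payload, sig))  # True
```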

That was one way that SVR began strolling through the 365 accounts of its victims. One of the other methods later identified by FireEye was similar: the creation within Azure Active Directory (Azure AD) of entirely new domains approved to function as a trusted third-party identity provider, that is, as a provider of authentication services such as those described above.

Whichever particular pathway was taken, however, the fundamental approach was the same: leveraging the client's own credentials so as to circumvent cloud identity-authentication safeguards from the inside. At this point, it was simply a question of whether anyone would notice unusual patterns of account access and other account activity. For a long time, no one did.

Step six: don't get caught

What changed? SVR's luck ran out. It was extracting material from a variety of private victims, not just government agencies. One of these, as it happened, was the cybersecurity company FireEye, and at some point FireEye detected what was happening.

On December 8, 2020, Kevin Mandia published the jaw-dropping news that someone had penetrated FireEye's systems and gotten away with their pen-testing tools. The sophistication of the attack suggested it was the work of a nation-state actor, in Mandia's opinion, as did the fact that the attacker also apparently attempted to learn things about FireEye's government customers. Five days later, Mandia announced as a follow-up that FireEye had identified a global campaign that introduces a compromise into the networks of public and private organizations through the software supply chain. In particular, Mandia explained, the campaign exploited the update mechanism for SolarWinds Orion. It was a stunning turn of events, with holiday-wrecking implications for thousands upon thousands of SolarWinds customers.

Given the scale and potential impact of the operation, it was clear early on that the episode constituted a significant cyber incident, that is, an incident of sufficiently serious nature to warrant formal federal government interagency coordination efforts under the U.S. government's National Cyber Incident Response Plan. In practical terms, this meant that the federal government would form an interagency Cyber Unified Coordination Group (UCG) as the focal point for cooperation and deconfliction among FBI (as the lead agency for what the government calls threat response), CISA (as the lead agency for asset response), and ODNI with NSA support (as the lead agency for intelligence support and related activities). By January 5, the UCG was ready to go on the record with a (very) limited degree of public attribution, declaring their collective belief that the attacker was "likely Russian in origin." This was a far cry from blaming the Russian government directly, but it was a beginning. Meanwhile, private-sector experts were more explicit about the attribution. Dmitri Alperovitch and David Cross used the label Holiday Bear in an interview on Patrick Gray's Risky Business show (a weekly must-listen for everyone interested in cybersecurity) as a way to capture their view that a Russian government actor was responsible, even if it was not yet clear which precise actor it was.

Eventually, the US government would state in no uncertain terms that it was the SVR, specifically, that conducted the operation: "Today the United States is formally naming the Russian Foreign Intelligence Service (SVR), also known as APT 29, Cozy Bear, and The Dukes, as the perpetrator of the broad-scope cyber-espionage campaign that exploited the SolarWinds Orion platform and other information technology infrastructures. The U.S. Intelligence Community has high confidence in its assessment of attribution to the SVR."

Read more:
SolarWinds and the Holiday Bear Campaign: A Case Study for the Classroom - Lawfare

Read More..

IBM Re-Architects The Mainframe With New Telum Processor – Forbes

The new IBM Z Telum processor can scale up to 32 chips and 256 CPU cores

Similar to what the company did with the new Power10 processors for cloud systems, IBM also started from scratch in designing a new processor for the company's IBM Z mainframe. The IBM Z has a long history and is unique in that it still uses processors specially designed for enterprise security, reliability, scalability, and performance. The new Telum processor for the next-generation IBM Z enhances all these aspects and adds embedded acceleration, something most systems are accomplishing through discrete accelerators. IBM introduced the new Telum processor at the annual Hot Chips technology conference this morning.

IBM Z Telum processor die photo

A key to the design of the Telum processor was to put everything on one die for performance and efficiency. The Telum processor features 8 CPU cores, on-chip workload accelerators, and 32MB of what IBM calls semi-private cache. Each chip module will feature two closely coupled Telum die for a total of 16 cores per socket. Just to indicate how different this architecture is, the prior z15 processor featured twelve cores and a separate chip for a shared cache. The Telum processor will also be manufactured on the Samsung 7nm process as opposed to the 14nm process used for the z15 processor.

Besides the processing cores themselves, the most significant change is in the cache structure. Each CPU core has a dedicated L1 cache and 32MB of semi-private, low-latency L2 cache. The reason it is semi-private is that the L2 caches are used together to build a shared virtual 256MB L3 between the cores on the chip. The L2 caches are connected through a bi-directional ring bus for communications, capable of over 320 GB/s of bandwidth with an average latency of just 12ns. The L2 caches are also used to build a virtual shared L4 cache between all chips in a drawer. There are up to four sockets per drawer, and two processors per socket, for a total of up to eight chips and 64 CPU cores with 2GB of shared L4 cache per drawer. That can then be scaled up to four drawers in a rack for up to thirty-two chips and 256 CPU cores.
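For readers keeping track of the arithmetic, the cache and core totals quoted above tally as follows; this is simply a restatement of the article's figures, not additional IBM data.

```python
# Tally of the Telum cache hierarchy using only the figures quoted in the article.
l2_per_core_mb = 32          # semi-private L2 per CPU core
cores_per_chip = 8
chips_per_socket = 2         # dual-die chip module
sockets_per_drawer = 4
drawers_per_rack = 4

virtual_l3_per_chip_mb = l2_per_core_mb * cores_per_chip                     # 256 MB virtual L3
chips_per_drawer = chips_per_socket * sockets_per_drawer                     # 8 chips
virtual_l4_per_drawer_gb = virtual_l3_per_chip_mb * chips_per_drawer / 1024  # 2 GB virtual L4
chips_per_rack = chips_per_drawer * drawers_per_rack                         # 32 chips
cores_per_rack = cores_per_chip * chips_per_rack                             # 256 cores

print(virtual_l3_per_chip_mb, virtual_l4_per_drawer_gb, chips_per_rack, cores_per_rack)
# 256 2.0 32 256
```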

The IBM Z Telum processor dual-die chip module and four-chip drawer configuration

The cache architecture was matched with improvements in the CPU cores and accelerators. The Telum CPU cores are an out-of-order design with SMT2 (Simultaneous Multithreading) that can operate at or above a 5GHz base frequency. The CPU cores also feature, amongst other things, enhancements in branch prediction for large footprint and diverse enterprise workloads. The Telum processor also features encrypted memory and improvements to the trusted execution environment for enhanced security and dedicated on-chip accelerators for sort, compression, crypto, and artificial intelligence (AI) to scale with the workload.

One of the key dynamics of the electronics industry today is accelerated computing. Everything from smartphones to cloud servers is using custom or programmable processing blocks to perform tasks more efficiently than general-purpose CPUs. This is occurring for two reasons. The first is that as certain tasks mature, it becomes more efficient to perform them through dedicated hardware than through software. Even though some of these tasks may still be performed using a programmable processing engine, there are many programmable engines, such as DSPs, GPUs, NPUs, and FPGAs, that may be able to perform certain tasks more efficiently than CPUs due to the nature of the workload and/or the design of the processing cores.

The second reason for the rise in accelerators is the slowing of Moore's Law. As it becomes difficult to improve CPU performance and efficiency through semiconductor manufacturing technology, the industry is shifting more towards heterogeneous architectural improvements. By designing more efficient processing cores, whether they are dedicated to a specific function or optimized around a specific type of workload or execution, significantly improved performance and efficiency can be achieved in the same or a similar amount of space. As a result, the direction going forward is accelerated computing. Even innovative technologies like quantum and neuromorphic computing, two areas where IBM research is leading the industry, are really forms of accelerated computing that will enhance traditional computing platforms.

AI is one of the most common workloads being accelerated and there is a wide variety of processors and accelerators under development for both AI training and inference processing. The benefits of each will depend on how efficiently the accelerator processes particular workloads. For servers, most AI accelerators are discrete chips. While this does offer more silicon area for higher peak performance, it also increases costs, power consumption, latency, and variability in performing AI tasks. IBM's approach of adding the AI accelerator onto the chip and interfacing it directly with the CPU cores and sharing the memory will allow for secure real-time or close to real-time processing of AI models while increasing overall system efficiency. And because the processor is aimed at enterprise-class workloads, as opposed to large research workloads like scientific or financial modeling, the demands are likely to be spread across multiple AI models with low-latency requirements. The AI accelerators were designed for business workloads like fraud detection, as well as system and infrastructure management like workload placement, database query plans, and anomaly detection.

The AI accelerator features a matrix array with 128 processing tiles designed for 8-way FP-16 SIMD operations and an activation array with thirty-two tiles designed for 8-way FP-16/FP-32 SIMD operations. The reason for the two arrays is to divide the operations between more straightforward matrix multiplication and convolution functions and more complex functions like sigmoid or softmax, while optimizing the execution of each. The two arrays are connected through an Intelligent Data Mover and Formatter capable of 600 GB/s of bandwidth internally and have programmable prefetchers and write-back engines connected to the on-chip caches with more than 120 GB/s of bandwidth. According to IBM, the AI processor multiplexes AI workloads from the various CPUs and has an aggregate performance of over 6 TFLOPS per chip, anticipated to be over 200 TFLOPS for a fully populated rack. The AI accelerator also uses the AI tools designed to work with other IBM platforms, from the IBM Deep Learning Compiler for porting and optimizing trained models to the Snap ML model libraries.
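A quick sanity check of the aggregate figure: IBM quotes "over 6 TFLOPS" per chip, so the 6.25 TFLOPS used below is an assumed value for illustration, chosen only to show that 32 chips lands in the neighborhood of the 200 TFLOPS rack figure.

```python
# Rough aggregate check of the quoted AI-accelerator performance figures.
tflops_per_chip = 6.25       # assumed; the article only says "over 6 TFLOPS" per chip
chips_per_rack = 32          # 8 chips per drawer x 4 drawers
print(tflops_per_chip * chips_per_rack)  # 200.0, consistent with "over 200 TFLOPS" per rack
```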

IBM Z Telum low-latency AI performance scales with the number of chips

According to IBM, the new cache structure has resulted in an estimated 40% increase in per-socket performance. This is impressive given a platform that has evolved into a scalable mainframe optimized across the stack and all the way down to the processor. Ironically, just as Moore's Law once drove the industry away from customized processors and systems, it is now driving the industry back to customization in the era of accelerated computing. While IBM revenue is driven by software and services, having expertise in everything from semiconductor manufacturing to custom chips and systems gives IBM a competitive advantage in this new accelerated world focused on workload optimization.

See more here:
IBM Re-Architects The Mainframe With New Telum Processor - Forbes

Read More..

ThycoticCentrify Enhances DevOps Security with Certificate-Based Authentication and Configurable Time-to-Live for All Cloud Platforms | Scoop News -…

Wednesday, 25 August 2021, 11:57 amPress Release: ThycoticCentrify

Auckland, New Zealand, August 25, 2021: ThycoticCentrify, a leading provider of cloud identity security solutions formed by the merger of privileged access management (PAM) leaders Thycotic and Centrify, today announced enhancements to its PAM solution for DevOps, Thycotic DevOps Secrets Vault, and new and expanded capabilities for its award-winning PAM solution, Thycotic Secret Server.

The latest version of DevOps Secrets Vault offers certificate-based authentication and the ability to configure Time-to-Live (TTL) for secrets, leading to even tighter DevOps security and easier management.

"With the latest enhancements to Thycotic DevOps Secrets Vault, we're continuing our commitment to deliver usable security solutions," said Richard Wang, Director of Product Management at ThycoticCentrify. "Today's organisations require a DevOps solution that's as agile as their development while satisfying the needs of IT and security teams."

Certificate-based authentication designed for privileged machines

Thycotic's DevOps Secrets Vault addresses all scenarios in a DevOps flow where secrets are exchanged between machines, including databases and applications for software and infrastructure deployment, testing, orchestration, configuration, and Robotic Process Automation (RPA). In sync with the high-speed workflow, DevOps Secrets Vault creates digital authentication credentials that grant privileged access to systems and data.

With the latest release, organisations can use certificate-based authentication for enhanced security and easier management. Unlike authentication solutions designed for people (such as biometrics and one-time passwords), certificate-based authentication can be used for machines, non-human privileged users such as systems, devices, and the growing Internet of Things (IoT), to identify a machine before granting access to a resource, network, or application. Certificates are stored locally and securely, which alleviates the headache of managing passwords and distributing, replacing, and revoking tokens.
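As a generic illustration of certificate-based machine authentication (mutual TLS), the sketch below uses Python's requests library with a client certificate. The endpoint and certificate paths are hypothetical, and this is not ThycoticCentrify's actual API; it only shows the pattern of a machine identifying itself with a certificate rather than a password or token.

```python
# A machine authenticates to a service by presenting its client certificate over mutual TLS.
import requests

response = requests.get(
    "https://vault.example.internal/v1/secret/app-db",                    # hypothetical endpoint
    cert=("/etc/pki/machine/client.crt", "/etc/pki/machine/client.key"),  # this machine's identity
    verify="/etc/pki/machine/ca-bundle.pem",                              # trust anchor for the server
    timeout=10,
)
response.raise_for_status()
print(response.json())
```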

Time-to-Live eliminates standing secrets for all cloud platforms

In a DevOps workflow, resources are created quickly and must expire automatically to meet compliance requirements and avoid the risk of standing privilege. When cloud platform administrators, developers, applications, or databases need to access a target, DevOps Secrets Vault generates just-in-time, dynamic secrets.

DevOps Secrets Vault has long supported automatically expiring secrets for AWS and Azure, and now extends this capability to Google Cloud Platform. Now, no matter which environment organisations choose, they can set a predetermined time for secrets to expire automatically.
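The TTL concept itself is straightforward. The following is a conceptual sketch of a short-lived dynamic secret, again not the product's API: the issuer stamps an expiry on the credential, and the consumer is expected to discard it and request a fresh one once it expires.

```python
# Conceptual sketch of a dynamic secret with a time-to-live (TTL).
import secrets
import time
from dataclasses import dataclass


@dataclass
class DynamicSecret:
    value: str
    issued_at: float
    ttl_seconds: int

    def is_expired(self) -> bool:
        return time.time() >= self.issued_at + self.ttl_seconds


def issue_secret(ttl_seconds: int = 300) -> DynamicSecret:
    # A real vault would mint scoped credentials on the target cloud platform;
    # here we just generate a random token that callers must treat as short-lived.
    return DynamicSecret(value=secrets.token_urlsafe(32),
                         issued_at=time.time(),
                         ttl_seconds=ttl_seconds)


cred = issue_secret(ttl_seconds=300)
assert not cred.is_expired()  # freshly issued, still valid for five minutes
```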

"Security and identity teams are working in lockstep with DevOps to meet the requirements of these high-speed processes," said Wang. "They require a powerful solution that delivers immediate value while serving the needs of agile innovation."

Combined with Thycotic Secret Server, the industry-leading vault for digital credentials, DevOps Secrets Vault provides security and IT teams full visibility and control over secrets management throughout an organisation. Specifically, DevOps Secrets Vault replaces the need for hardcoded credentials used in the DevOps process and CI/CD toolchains.

To learn more about DevOps Secrets Vault, visit https://thycotic.com/products/devops-secrets-vault-password-management/.

With the addition of the new Secret Erase feature and enhancements to Secret Server's mobile application, Connection Manager, and Web Password Filler, Thycotic Secret Server now more than ever helps reduce cyber risk, expand discovery, and increase productivity for IT administrators as well as business users.

Removal of privileged account information after it's no longer needed is critical to security and compliance standards, especially when organisations are working with contracted third-party administrators. With Secret Erase, secrets and related data, such as usernames, passwords, and email addresses, are purged completely from the database, while still providing an audit trail to meet documentation and compliance requirements.
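
Loosely, the idea is a purge that removes the secret material itself while leaving a minimal, non-sensitive audit record behind. The sketch below illustrates that split generically; the tables and column names are invented and are not Secret Server's schema.

```python
# Generic sketch of "erase the secret, keep the audit trail".
# Table and column names are invented for illustration only.
import sqlite3
import time

def erase_secret(conn: sqlite3.Connection, secret_id: int, actor: str) -> None:
    # Record *that* a secret existed and was erased, without keeping its value.
    conn.execute(
        "INSERT INTO audit_log (secret_id, action, actor, at) VALUES (?, 'erased', ?, ?)",
        (secret_id, actor, time.time()),
    )
    # Purge the sensitive material itself (username, password, email, etc.).
    conn.execute("DELETE FROM secrets WHERE id = ?", (secret_id,))
    conn.commit()

# Tiny in-memory demo of the pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secrets (id INTEGER PRIMARY KEY, username TEXT, password TEXT)")
conn.execute("CREATE TABLE audit_log (secret_id INTEGER, action TEXT, actor TEXT, at REAL)")
conn.execute("INSERT INTO secrets VALUES (1, 'contractor', 'hunter2')")
erase_secret(conn, secret_id=1, actor="it-admin")
print(conn.execute("SELECT * FROM audit_log").fetchall())   # trail remains, secret is gone
```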

"After a third-party engagement with a privileged user is completed, removing secrets and related data is a best practice," said Jason Mitchell, Senior Vice President of Engineering at ThycoticCentrify. "Our latest release of Secret Server adds this important capability with Secret Erase, prioritising both security and compliance. Now IT administrators can rest a little easier knowing no historic or unnecessary credentials are left available for cyber criminals to exploit and gain privileged access."

SSH management for Unix/Linux

An accurate record of all SSH keys is essential to properly secure them. Locating and tracking SSH public keys can be an arduous task for IT administrators. To save time and effort, Secret Server's Discovery tool now includes the ability to locate existing SSH keys associated with Linux and Unix servers. Additional SSH session management capabilities in the release simplify sudo/su elevation and enable select command blocklisting during SSH proxied sessions.
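
Outside of Secret Server, the same kind of inventory can be approximated by scanning each user's authorized_keys file on a host. The sketch below only illustrates the general idea on a single Linux/Unix machine; it does not describe how Secret Server's Discovery tool actually works.

```python
# Rough sketch: enumerate SSH public keys authorised on a Linux/Unix host by
# reading each user's ~/.ssh/authorized_keys. Illustrative only.
from pathlib import Path

def list_authorized_keys() -> list[tuple[str, str]]:
    found = []
    for home in Path("/home").iterdir():              # plus /root when run as root
        auth_file = home / ".ssh" / "authorized_keys"
        if auth_file.is_file():
            for line in auth_file.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    key_type = line.split()[0]        # e.g. ssh-ed25519, ssh-rsa
                    found.append((home.name, key_type))
    return found

for user, key_type in list_authorized_keys():
    print(f"{user}: {key_type}")
```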

Usable security for greater productivity

The onslaught of daily alerts and notifications can be fatiguing for many users. With so much noise, it's difficult to digest information quickly and understand which notifications require action. To reduce alert fatigue, Secret Server's Inbox now provides a customisable toolset to manage how email and notifications are sent and received by users. Inbox allows for configuration of notification scheduling, collecting notifications into digest format, creation of message templates, rules, and more.
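
In the simplest terms, a digest just buffers notifications and flushes them on a schedule instead of sending one message per event. A toy sketch of that pattern follows; it is not Secret Server's implementation and the class and method names are invented.

```python
# Toy sketch of collecting notifications into a periodic digest instead of
# sending one alert per event. Names are invented; not Secret Server's code.
from collections import defaultdict

class DigestInbox:
    def __init__(self):
        self.pending = defaultdict(list)          # user -> queued notifications

    def notify(self, user: str, message: str) -> None:
        self.pending[user].append(message)        # buffer instead of emailing now

    def flush_digests(self) -> None:
        """Called on a schedule, e.g. once a day, to send one summary per user."""
        for user, messages in self.pending.items():
            body = "\n".join(f"- {m}" for m in messages)
            print(f"Digest for {user} ({len(messages)} items):\n{body}")
        self.pending.clear()

inbox = DigestInbox()
inbox.notify("admin", "Secret 'prod/db' was accessed")
inbox.notify("admin", "New SSH key discovered on host web-01")
inbox.flush_digests()
```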

Organisations can test drive the latest version of Thycotic Secret Server for free at https://thycotic.com/products/secret-server/.

ThycoticCentrify is a leading cloud identity security vendor, enabling digital transformation at scale. ThycoticCentrify's industry-leading privileged access management (PAM) solutions reduce risk, complexity, and cost while securing organisations' data, devices, and code across cloud, on-premises, and hybrid environments. ThycoticCentrify is trusted by over 14,000 leading organisations around the globe, including over half of the Fortune 100, and customers include the world's largest financial institutions, intelligence agencies, and critical infrastructure companies.



MONITORAPP Brings Their AIWAF-VE to Microsoft Azure – PRNewswire

RANCHO CUCAMONGA, Calif., Aug. 25, 2021 /PRNewswire/ -- MONITORAPP, Inc., a leading cybersecurity vendor, is not only expanding their security solutions to the cloud, but to the market as well with listings on AWS and Microsoft Azure. MONITORAPP is bringing competitive pricing and various payment options, including licensing and pay-as-you-go options, to help buyers utilize their extensive protection. Their success with physical security appliances has led them to adapt their AIWAF (Application Insight Web Application Firewall) to the cloud with their AIWAF-VE (Virtual Edition).

While many infrastructures and applications are moving to the cloud, the threat of potential attackers comes with them. Cloud security is more important now than ever. MONITORAPP AIWAF-VE is a viable means of bringing comprehensive protection to the cloud.

MONITORAPP was founded in 2005 and has invested heavily in developing top-performing cybersecurity solutions over the last 16 years. Their success with physical appliances has led them to bring the reliability and stability of those appliances to the flexibility and accessibility of the cloud. AIWAF-VE provides strong protection against major web vulnerability attacks such as the OWASP Top 10, application exploits, and web application-based DoS/DDoS attacks. This has helped MONITORAPP expand the reach of their industry-leading services to AWS and, more recently, Microsoft Azure as well.

AIWAF-VE also helps protect against unknown threats. Working together with the Application Insight Cloud Center (AICC) and machine learning systems, AIWAF-VE can defend against unknown attacks that cannot be blocked by firewalls alone, and can seamlessly filter encrypted traffic. Its TCP stack ensures high performance and reliable traffic handling, and it provides its own load balancing and health checks without the need for a separate load balancer. This allows for efficient traffic handling for multiple web servers serving the same domain.
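
As a generic illustration of what built-in load balancing with health checks means (this is not MONITORAPP's code, and the backend addresses are made up), a balancer keeps probing each upstream web server and only forwards traffic to the ones that currently respond:

```python
# Generic sketch of round-robin load balancing with simple TCP health checks.
# Backend addresses are hypothetical; this does not represent AIWAF-VE.
import itertools
import socket

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]

def is_healthy(host: str, port: int, timeout: float = 1.0) -> bool:
    """A very simple health check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

_counter = itertools.count()

def pick_backend():
    """Round-robin over the backends that currently pass the health check."""
    healthy = [b for b in BACKENDS if is_healthy(*b)]
    if not healthy:
        raise RuntimeError("no healthy backends available")
    return healthy[next(_counter) % len(healthy)]
```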

AIWAF-VE can be deployed using the PAYG (Pay as You Go) plan through the AWS or Microsoft Azure Marketplace, or by purchasing a license and deploying it under the BYOL (Bring Your Own License) plan on AWS. MONITORAPP is currently working on bringing their leading technology to GCP as well.

More information on AIWAF-VE or any other MONITORAPP products can be found by visiting the website below or contacting us at [emailprotected].

Contact Us: MONITORAPP | Monitorapp.com | Azure AIWAF-VE | AWS AIWAF-VE | Twitter | Facebook | LinkedIn | YouTube

SOURCE MONITORAPP


What's *THAT* on my 3D printer? Cloud bug lets anyone print to everyone - Naked Security

Are you part of the Maker scene?

If so, you probably have your very own 3D printer (or, depending on how keen you are, several 3D printers) stashed in your garage, shed, basement, attic or local makerspace.

Unlike an old-school 2D plotter that can move its printing mechanism side-to-side and top-to-bottom in order to skim across a horizontal surface, a 3D printer can move its print head vertically as well.

To print on a surface, a 2D plotter usually uses some sort of pen that releases ink as the print head moves in the (X,Y) plane.

A 3D printer, however, can be instructed to emit a stream of liquid filament from its print head as it moves in (X,Y,Z) space.

In hobbyist printers, this filament is usually a spool of fine polymer cord that's melted by a heating element as it passes through the head, so that it emerges like gloopy plastic dental floss.

If emitted close enough to a part of the output that's already been printed, the melted floss gloms onto the existing plastic, hardens, and ultimately forms a complete model, built up a little at a time.
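
Under the hood, the printer is driven by G-code: a stream of move commands with X/Y/Z coordinates and an E value telling the extruder how much filament to push. A tiny sketch that emits one square perimeter for a single layer; the coordinates, layer height and extrusion amounts are arbitrary illustrative values, not tuned for any real printer.

```python
# Tiny sketch: emit G-code for one square perimeter of a single layer.
# All values are arbitrary and illustrative, not tuned for a real printer.
def square_layer(x0=50.0, y0=50.0, size=20.0, z=0.2, e_per_mm=0.05):
    corners = [(x0, y0), (x0 + size, y0), (x0 + size, y0 + size),
               (x0, y0 + size), (x0, y0)]
    e = 0.0
    lines = [f"G1 Z{z:.2f} F300"]                                   # lift/lower to layer height
    lines.append(f"G0 X{corners[0][0]:.2f} Y{corners[0][1]:.2f}")   # travel move, no extrusion
    for (x1, y1), (x2, y2) in zip(corners, corners[1:]):
        length = abs(x2 - x1) + abs(y2 - y1)                        # axis-aligned edges only
        e += length * e_per_mm                                      # filament to push for this edge
        lines.append(f"G1 X{x2:.2f} Y{y2:.2f} E{e:.4f} F1200")      # print move with extrusion
    return "\n".join(lines)

print(square_layer())
```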

As you can imagine, there's a lot that can go wrong when printing a model in this way, notably if the fine stream of molten gloop doesn't emerge near an existing surface onto which it can stick and solidify.

If the model becomes poorly balanced and falls over; if the print head gets out of alignment; if the polymer is not quite hot enough to stick, or is too hot to harden in time; if there's even a tiny mistake in any of the (X,Y,Z) co-ordinates in the print job; if an already-printed part of the model buckles out of shape or warps slightly; if the print nozzle suffers a temporary blockage

then you can end up with the print head spewing out a detached swirl of unattached plastic thread, like a giant toothpaste tube that's been squeezed, and squeezed, and squeezed.

And once your 3D printer has got itself into the squeeze-and-squeeze-the-toothpaste-tube state, it will almost certainly keep on squishing out disconnected strands of plastic floss, with nothing to adhere to, until the filament runs out, the printer overheats, or you spot the problem and hit the [Cancel] button.

This produces what makerpeople refer to as a "spaghetti monster", as this Reddit poster reveals in a plea for help entitled "What makes spaghetti happen?", complete with a picture of one that got away.

The problem with most 3D print jobs is that they don't take minutes, they take hours, perhaps even days, so it's difficult to keep an eye on them all the time.

Many hobbyists rig up webcams that they can connect to remotely, so that they can intermittently check up on running print jobs while they're out and about running other jobs such as shopping and going to work, which gives them a chance to shut down a failed job without using up a whole spool of filament first.

But even with remote access enabled, you can't keep watch all the time, especially if you're sleeping while an overnight job completes.

Enter The Spaghetti Detective (TSD), an open source toolkit that uses automated image recognition techniques to detect the appearance of spaghetti in or around a running print job so that it can warn you or shut down the job automatically.

Alternatively, if you don't want the hassle of setting up a working TSD server of your own (there's quite a lot of work involved, and you'll probably need a spare computer), then the creator of TSD, Kenneth Jiang, offers a cloud-based version that's free for occasional use, or $48 a year if you want 50 hours of online webcam monitoring a month that you can use to detect spaghettified jobs automatically.
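
In outline, a detector like this grabs webcam frames periodically, scores each one with an image classifier, and pauses the printer if the "failure" score stays high for several frames in a row. The sketch below is only a schematic of that loop, with the model and printer interface stubbed out; it is not TSD's actual code.

```python
# Schematic of a spaghetti-detector loop: grab a frame, score it with an image
# classifier, pause the print if the failure score stays high. The model and
# printer interface are stubs; this is not The Spaghetti Detective's code.
import time

def grab_webcam_frame() -> bytes:
    """Stub: in reality this would pull a JPEG from the printer's webcam."""
    return b"...jpeg bytes..."

def spaghetti_score(frame: bytes) -> float:
    """Stub for an image-recognition model returning P(failed print)."""
    return 0.1

def pause_print() -> None:
    """Stub: a real version would call the printer controller's API."""
    print("Print paused: possible spaghetti detected")

CONSECUTIVE_ALERTS_NEEDED = 3            # avoid reacting to a single noisy frame

def monitor(checks: int = 360, interval_s: float = 10.0) -> None:
    alerts = 0
    for _ in range(checks):              # e.g. roughly an hour of monitoring
        score = spaghetti_score(grab_webcam_frame())
        alerts = alerts + 1 if score > 0.8 else 0
        if alerts >= CONSECUTIVE_ALERTS_NEEDED:
            pause_print()
            return
        time.sleep(interval_s)
```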

Jiang himself says that he identifies as a hacker, not a coder, which he admits means he favours getting features built fast, as well as being sloppy about coding styles and terrible at algorithm questions.

Well, those comments came back to bite him late last week when he made some modifications to the TSD cloud code and inadvertently opened up printers on private networks, such as a home Wi-Fi setup, to the internet at large.

As one Reddit user dramatically claimed (the original post has since been deleted for undisclosed reasons), they "[woke] up this morning and [saw] this on my 3D printer", with a picture allegedly showing a job kicked off by someone they didn't know, from a location they couldn't determine.

The good news is that Jiang has now fixed the problem he mistakenly created, written up a full mea culpa article to describe what happened, and thereby retained the goodwill of many, if not most, of the makerpeople that find his service useful:

"I made a stupid mistake last night when I re-configured TSD cloud to make it more efficient and run faster. My mistake created a security vulnerability for about 8 hours. The users who happened to be linking a printer at that time were able to see each other's printer through auto-discovery, and were able to link to them too! We were notified of a case in which a user started a print on someone else's printer. [...] My sincere apologies to our community for this horrible mistake."

(If you're looking for lessons to learn from this response, take note that Jiang didn't start with the dreaded words, "We take your security seriously"; he didn't excuse himself by saying, "At least credit card numbers weren't affected"; and he didn't downplay the bug because it only lasted eight hours and apparently affected fewer than 100 people.)

The bad news is that although the immediate bug is fixed, the underlying system for deciding what devices are supposed to be able to discover which printers is still fundamentally flawed.

Jiang, it transpires, was permitting two devices to discover each other automatically based on whether they showed up on the internet with the same IP number, as they typically would if they were on the same private network behind the same home router.

That's because most home routers, and many business firewalls, too, implement a feature called NAT, short for Network Address Translation, whereby outbound traffic from any internal device is rewritten so that it appears to have come directly from the router.

The replies, therefore, officially terminate at the router, which then rewrites the incoming traffic so that it reaches the true recipient, forwarding it inwards to the device that originated the connection.

This process is necessary (and, indeed, has been used since the 1990s) because there are fewer than 4 billion regular (IPv4) network numbers to go around, but far more than 4 billion devices that want to get online these days.

NAT allows entire networks, whether they consist of 5, 555 or 5555 different devices, to get by with just one internet-facing network number, and permits ISPs to reallocate network numbers on demand, instead of allocating them permanently to individual customers, where they might not be needed or even used.

The bug that opened up Jiang's TSD cloud so that anyone could discover everyone was caused by the fact that he accidentally started supplying the IP number of one of his own servers, a load balancer through which he passed all incoming traffic, as the source IP address of every incoming connection.

Loosely speaking, he turned the load balancer into a second layer of NAT, so that everyone seemed to be connected to the same public network, thus making all the connected devices seem to belong to the same person.

Unfortunately, reverting the misconfiguration that caused this bug has only papered over the problem, for the simple reason that IP numbers aren't suitable for identification and authentication.

Firstly, two devices with different IP numbers may very well be on the same physical network, as all devices were in the early days of the internet, back before NAT became necessary.

Secondly, two devices with the same IP number may very well be on different networks, for example if an ISP applies a second level of NAT in order to group different customers together and therefore to reduce the quantity of public IP numbers they need.

Likewise, if several companies in a shared building decide to pool their funds and share a firewall and high-speed internet connection, thus effectively letting the building act as an ISP, they may end up with the same public IP number, even though the individual devices are on independent networks operated by different businesses.
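
The flaw is easy to state in code: if "same source IP" is the grouping key, then any shared NAT layer (the buggy load balancer, a carrier-grade NAT, a shared office firewall) collapses unrelated users into one group. A sketch of the broken rule and why it fails, using made-up documentation-range addresses:

```python
# Sketch of why "same public IP" is not an identity. Addresses are made up
# from the documentation range 203.0.113.0/24.
from collections import defaultdict

def group_by_source_ip(connections):
    """The flawed auto-discovery rule: same source IP => same owner."""
    groups = defaultdict(list)
    for device, source_ip in connections:
        groups[source_ip].append(device)
    return dict(groups)

# Two unrelated households behind a carrier-grade NAT (or a misconfigured
# load balancer) arrive with the same public address...
connections = [
    ("alice-printer", "203.0.113.7"),
    ("alice-laptop",  "203.0.113.7"),
    ("bob-printer",   "203.0.113.7"),   # different customer, same public IP
]
print(group_by_source_ip(connections))
# => all three devices land in one group, so Bob can "discover" Alice's printer.
```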

Jiang, in the meantime, says he's looking to replace the current TSD auto-discovery system with one that's more precise and presumably also more secure, so if you're a TSD user, keep an eye on his website to see how that project is getting along.


Digital health is a vital tool: here’s how we can make it more sustainable – The Conversation UK

The pandemic has shown us the extraordinary potential of digital health to fight global health inequalities, both by providing expanded access to healthcare and by better informing our responses to health crises.

Tools such as wearable monitoring devices, video consultations, and even chat-bots driven by AI can provide care from a distance and often cost less than a face-to-face meeting with a doctor or nurse. This, in turn, can improve global access to high-quality treatment.

Throughout the pandemic, being able to collect real-time data from cases across the world has been vital to local and global responses to combat the virus and track its progress. Machine learning analysis of viral gene sequences, track-and-trace mobile apps and telehealth services have also played their part. But as this monumental shift towards digital health accelerates, the environmental issues it raises are often overlooked.

Climate change disproportionately affects developing countries. Places that already face poor health outcomes are further subjected to the health effects of environmental change. Plus, considering that emissions from computing devices, data centres and communications networks already account for up to 4% of global carbon emissions, leaving environmental factors out of digital health debates is a significant omission.

As we continue to roll out this indispensable infrastructure, we also need to assess how we can minimise its environmental impact. My research shows three main ways that digital health technologies can contribute to environmental change and what can be done.

First, raw materials needed to produce digital health technologies, including robotic tools, smartphones and cameras, are taken from mines, which are mostly located in developing countries.

The toxic waste spillages that can occur when mining these materials create serious environmental degradation, potentially exposing workers to dangerous toxins. Meanwhile, at the other end of the process, the mishandling of discarded electrical devices can also release toxic chemicals into the environment, creating severe health risks for local populations including organ damage.

On top of this, the carbon required to produce electronic devices makes up around 8% of all carbon produced globally. Increased demand for devices, driven by digital health's expansion, will only push emissions higher.

Steps including developing green mining (mining practices that minimise environmental damage and emissions while maximising recycling and supply-chain efficiency) are vital to protect our planet alongside our health.

Second, from electronic health records to biometric data collected by wearable technologies, the digital health industry produces vast amounts of information. Health data accounts for around 30% of the world's data.

This data and the insights it provides on population health are key to improving people's health. But due to the electricity needed to run the huge servers that host cloud services, safely storing data in the cloud can take up to one million times more energy than saving data directly to devices.

To reduce the environmental impacts of data centres, initiatives like green cloud computing (which aims for carbon-neutral data processing, for example by investing in carbon offsets) and virtualisation (which reduces the physical number of servers needed to store data by shifting that data to virtual servers) should become key priorities.

The carbon costs of running artificial intelligence and blockchain health technologies to better support patients are also significant. As such, environmentally conscious technologies such as tiny machine learning and compact AI, which reduce software size and power consumption, need to be adopted.
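
One concrete ingredient of the "tiny ML"/compact AI idea is post-training quantization: storing model weights as 8-bit integers instead of 32-bit floats cuts model size, and typically memory and energy per inference, by roughly four times. A minimal numpy sketch of the storage-size effect, using made-up random weights rather than a real model:

```python
# Minimal sketch of weight quantization, one ingredient of compact AI:
# store weights as int8 instead of float32, roughly a 4x size reduction.
# The weights here are random placeholders, not a real model.
import numpy as np

weights = np.random.randn(1000, 1000).astype(np.float32)    # a made-up layer

scale = np.abs(weights).max() / 127.0                        # map the value range onto int8
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale           # what inference would use

print(f"float32 size: {weights.nbytes / 1e6:.1f} MB")        # ~4.0 MB
print(f"int8 size:    {quantized.nbytes / 1e6:.1f} MB")      # ~1.0 MB
print(f"mean abs quantization error: {np.abs(weights - dequantized).mean():.5f}")
```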

Third, we need to consider whether the promise that digital health will lower carbon emissions due to reducing travel to physical health centres is likely to materialise.

Although the increase in telehealth tech means that more patients are accessing healthcare from their homes or workplaces, these reductions in local travel are shown to have minimal effects on emissions and only become cost-effective when telehealth replaces local trips of at least 7.2km (or just over four miles).
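
That break-even point can be reasoned about with a simple comparison: a remote consultation only saves carbon once the emissions of the avoided car trip exceed the emissions of the consultation itself (devices, network, data centres and call-centre overhead). The figures in the sketch below are rough assumptions chosen purely to illustrate the shape of the calculation, and happen to land near the cited threshold; they are not the study's actual inputs.

```python
# Illustrative break-even calculation for telehealth vs. driving to a clinic.
# Both constants are assumptions for demonstration, not the cited study's data.
TELEHEALTH_KG_CO2_PER_CONSULT = 1.2   # assumed: devices, network, call-centre share
CAR_KG_CO2_PER_KM = 0.17              # assumed average passenger car

def break_even_km() -> float:
    """Trip length at which the avoided drive offsets the consultation's footprint."""
    return TELEHEALTH_KG_CO2_PER_CONSULT / CAR_KG_CO2_PER_KM

print(f"Break-even trip length: {break_even_km():.1f} km")   # ~7.1 km with these inputs
```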

A more pressing and overlooked concern, however, is the cost associated with housing large telehealth operations in call centres. As with cloud servers, telecommunications centres need vast amounts of energy to power and cool equipment.

The NHS has recently pledged to achieve a net zero carbon footprint by 2040. However, as the recent IPCC report assessing the state of the world's climate indicates, change must be more rapid.

In the Philippines, home to a large hub of international telehealth operators, green information technologies such as recyclable office equipment and remote working are used to reduce the environmental costs associated with communication. Such practices must become commonplace.

Green initiatives should be adopted across the healthcare sector as far as possible. The problem is that many digital health technologies result from design decisions made beyond the field of healthcare, so big tech must also do its part in creating more sustainable systems.

Without taking such steps, we run the risk that digital health will only lead to additional global health burdens, particularly among the worlds most vulnerable populations.
