
New Georgia Bills Will Affect Public’s Access to Cybersecurity Details – University of Georgia

Georgia House Bill 156, signed by Gov. Brian Kemp in late March, increases data sharing between different parts of government about data breaches and cyber-attacks, according to Sarah Brewerton-Palmer, chair of the Georgia First Amendment Foundation's Legislative Committee.

However, Brewerton-Palmer is concerned the bill could exempt an entire report about cybersecurity breaches from the Open Records Act depending on the interpretation of the law.

Georgia House Bill 156 went into effect March 25. It allows proceedings related to cybersecurity to be held in executive session and provides for certain information, data, and reports related to cybersecurity and cyber attacks to be exempt from public disclosure and inspection, according to the bill's summary.

Another bill, House Bill 134, which has passed both chambers in the Georgia General Assembly, says that "if you're discussing anything related to cyber attacks or cyber security, that could all be done in executive sessions," according to Brewerton-Palmer. That doesn't include contracts and payment for services; those would still be made public.

Brewerton-Palmer is concerned, though, that these bills will affect the public's access to information regarding cybersecurity.

"They're sort of a one-two punch," said Brewerton-Palmer. While House Bill 156 is a reporting requirement, House Bill 134's main function is to amend the Open Records Act and Open Meetings Act. House Bill 134 would allow government agencies to go into executive sessions, which are portions of open meetings closed to the public.

Georgia House Rep. Todd Jones (R-South Forsyth) is a sponsor of House Bill 134. He said, "The open records and open meetings law have measures that exempt certain information that would affect public safety."

Jones said the issue of cybersecurity warrants the use of executive sessions and the exemption of information regarding cybersecurity plans from the Open Records and Open Meetings acts in order to ensure the protection of the public's private data.

"I think our citizens would say that the security of their personal information is tantamount," said Jones.

However, Brewerton-Palmer said the language of the bill is too broad and could allow government agencies to interpret the law so that any discussion regarding cybersecurity could be held in executive session.

"Information like the existence of a data breach or attack, or more generic information like what kind of information was disclosed, is information the public might want to know that would probably not compromise security efforts going forward," said Brewerton-Palmer.

"We're not asking for the code to the software to be posted; we're not asking for the details for how hackers got into the system to be made public. What should be made public is how much the government is spending on internet security, how much the system has been compromised," said Richard Griffiths, media ethicist and member of the board of directors for the Georgia First Amendment Foundation.

Georgia's sunshine laws were updated in 2012 to make open records and open meetings laws easier for the public to use. Georgia's open records laws already include exemptions for information regarding public safety.

While journalists often make use of open records laws, the public is most affected by legislation that limits the public's right to information, according to Griffiths.

"If the public doesn't know what's going on in government, they can't hold the people they elect to account for the good decisions and the not-so-good decisions they make," Griffiths said.

According to Jones, if exemptions to the open records and open meeting laws are to take place, the use of executive sessions should be made transparent.

"Government and sunshine is one of the key ways that we ensure that the government is working for the people. They should know, if we're going to give any exceptions to that rule, it should be done through statute, and it should be done in the sunshine," said Jones.

"Government transparency is good not just for journalists; it's good for every person in this country. Government transparency allows the public to keep the government accountable and for rational decisions to be made on the best possible information," said Griffiths.

Micheal Prochaska, editor for the Oconee Enterprise, said he believes citizens have a right to know what their government is working on and how they are using taxpayer dollars.

"It's important that people know what their taxpayer dollars are going toward. Keep in mind, all these government entities levy taxes on citizens, and citizens have a right to know where their tax money is going," said Prochaska.

Information regarding cybersecurity details warrants exemptions from open records and open meetings laws, according to Jones.

"The bill was crafted in such a way that the actual detail planning could be done in work session, so in that way it was not public information, so potential hackers, potential ransomware perpetrators, they wouldn't have the key to determine how you would break through on any of the cyber security plans," said Jones.

According to Prochaska, local journalism depends on the transparency of government in order to disseminate information to the public.

"A lot of what we write about is government meetings; things that we cover are city council meetings for the municipalities in Oconee County," said Prochaska.

According to Pew Research Center, trust in government has been on a decline for several years. Open government allows for greater transparency and gives the public the opportunity to hold government officials accountable, according to Griffiths.

"As our society increasingly goes online, the public will need to know how their data is being protected," said Griffiths. Holding government officials accountable is vital to democracy.

As of now, Georgia House Bill 134 has passed both the Georgia House and Senate and has been sent to Gov. Kemp for his signature. Unless vetoed, the bill will become law.

The Georgia General Assembly's legislative session concluded this spring. Brewerton-Palmer hopes concerns regarding these bills can be addressed during future Georgia legislative sessions.

Fabian Munive is a senior majoring in journalism at the Grady College of Journalism and Mass Communication.

More here:
New Georgia Bills Will Affect Public's Access to Cybersecurity Details - University of Georgia

Read More..

G7 Nations Sign Declaration to Keep the Internet Safe and Open – Infosecurity Magazine

G7 nations have signed a new declaration that promises to boost online safety worldwide in accordance with open democratic principles.

The joint ministerial declaration, signed by tech leaders from the UK, Canada, France, Germany, Italy, the US, and the EU, agreed on a range of principles to tackle cyber-risks. These emphasize that any action taken to tackle cybercrime must support democratic ideals and respect human rights and fundamental freedoms.

The announcement has come amid growing concerns about the influence of nations with illiberal values, such as China, in cyberspace, and the market power of big tech platforms, which potentially threatens competition and even free speech online.

The agreements relate to the following areas:

During the virtual meeting, hosted by UK digital secretary Oliver Dowden, the representatives of the G7 also discussed the need to enhance security and resilience in critical digital infrastructure, especially in telecommunications technologies such as 5G.

Dowden commented: "As a coalition of the world's leading democracies and technological powers, we want to forge a compelling vision of how tech should support and enhance open and democratic societies in the digital age."

"Together we have agreed a number of priorities in areas ranging from internet safety to digital competition to make sure the digital revolution is a democratic one that enhances global prosperity for all."

The agreements are part of the first of seven ministerial declarations expected to be signed this year by the G7 governments.

View post:
G7 Nations Sign Declaration to Keep the Internet Safe and Open - Infosecurity Magazine

Read More..

Letter: The ‘big lie’ and voter integrity – INFORUM

Having been the target of fraud rhetoric, Dominion Voting Systems and Smartmatic have responded by bringing defamation lawsuits against Rudy Giuliani (former mayor of New York), Sidney Powell (former advisor to President Trump), Mike Lindell (CEO of My Pillow), Fox News, and three of its broadcasters because they have tied voting irregularities to these two companies.


It has been almost six months since the election. So has any evidence of fraud been uncovered, or is the big lie just that: a big lie?

John Poulos, CEO of Dominion, allegedly told the Michigan State Oversight Committee on Dec. 15, 2020, that voting systems are by design meant to be used as closed systems that are not networked, meaning they are not connected to the Internet. Why is this important? If a voting machine is not connected to the internet, it cannot be manipulated remotely. However, hackers have found Dominion voting machines through the internet.

Former US Army Col. Phil Walgrin (who worked in an information warfare unit and now works in cybersecurity) and Mary Fanning (a national intelligence researcher) investigated internet traffic to and from voting machines, starting before the 2020 election and ending after it. What did they find? They found that voting machines were connected to the internet and that servers receiving and storing votes resided in foreign countries. Server locations, down to street addresses, were found in Germany, Spain, Serbia and Toronto.

Data collected by Walgrin, Fanning and colleagues showed that voting machines were manipulated during the election from IP and MAC addresses in China, Iran and other places, modifying votes in Biden's favor.

Voting machines' vulnerabilities to hacking have been known for a while. Sen. Amy Klobuchar, D-Minn., for example, acknowledged this and voiced concerns in a 2018 interview.

While it is not possible to know whether the FBI and CISA (the U.S. cybersecurity agency) are investigating this, no reports have been issued yet. Gen. Thomas McInerney (ret.) has said the November 2020 election was the most severe cyber attack in history and that, to his knowledge and concern, there have not yet been any audits.

Mike Lindell has stated he does not fear Dominion's suit against him. His reason: an insurmountable defense against defamation is truth. So his take is basically "bring it." The facts are on his side and he can now issue subpoenas to force the release of additional evidence. In light of this, perhaps the big question is how do we fix this before the 2022 election?

Scott Hoaby lives in Fargo.

This column does not necessarily reflect the opinion of The Forum's editorial board nor Forum ownership.

See more here:
Letter: The 'big lie' and voter integrity - INFORUM

Read More..

Fact check: Hackers using visually similar characters to deceive in phishing schemes – USA TODAY


Online attackers bent on stealing personal information are using a visual deception to trick people into visiting malicious websites, a post circulating on social media claims.

The April 20 Facebook post shows two web addresses that, at first glance, appear identical. A closer look, though, shows that one character, in this case the letter "a," is slightly different in each one.

"An average internet user can easily fall for this," the post reads. "Be careful for every mail requiring you to click on a link."

The post has been shared hundreds of times on Facebook.

The claim appears to be true. Credible sources dating back to the early 2000s give a similar warning against this kind of spoof of the website a user intends to visit. But similar exploitations have emerged recently as well.

The user who shared the post could not be reached for comment.

The attack is a form of spoofing, when someone poses as a legitimate institution in an attempt to obtain personal information.

"Most people by now have gotten a little bit suspicious. ... The idea is how can they trick you into thinking you know who it is or what it is when it isn't," said Stuart Madnick, founding director of Cybersecurity at MIT Sloan.

In this instance, it exploits the visual similarities between characters in the Roman alphabet used in the English language and the Cyrillic alphabet, which Britannica.com said was developed for Slavic-speaking people and is used in more than 50 languages, including Russian.

By substituting Cyrillic characters for Roman letters that look similar, such as the lowercase "a," hackers can redirect a user who intended to visit one website to another. Madnick said there are other ways to deceive without changing the alphabet, such as replacing a lowercase "L" with a capital "I" in some fonts.
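To make the substitution concrete, here is a minimal Python sketch (my illustration, not from the article) showing that a domain spelled with a Cyrillic "a" is a different string, and therefore a different domain, from its Latin look-alike:

```python
import unicodedata

latin_domain = "apple.com"         # starts with LATIN SMALL LETTER A (U+0061)
spoofed_domain = "\u0430pple.com"  # starts with CYRILLIC SMALL LETTER A (U+0430)

# The two strings render almost identically but are not equal.
print(latin_domain == spoofed_domain)  # False

# unicodedata exposes the swap.
for ch in (latin_domain[0], spoofed_domain[0]):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

# Internationalized domain names are encoded as Punycode on the wire,
# which is how browsers and security tools can reveal the spoof.
print(spoofed_domain.encode("idna"))  # e.g. b'xn--pple-43d.com'
```

Any name that encodes to an "xn--" Punycode form while appearing to be plain ASCII is a red flag.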

"Instead of going to a legitimate site, you may be directed to a malicious site, which could look identical to the real one," notes a 2008 security notice from the U.S. Cybersecurity & Infrastructure Security Agency. "If you submit personal or financial information while on the malicious site, the attacker could collect the information and then use and/or sell it."


The scheme is possible because of internationalized domain names and how web browsers read them, according to the agency's notice, which was updated in 2019.

The so-called homograph attacks have been around since the early 2000s. A 2005 post on The Register, an online technology news publication, called them a "new vector for phishing attacks."

But they have popped up again recently. Last year, researchers discovered domain names designed to deceive users into thinking they were going to a legitimate website, The Register reported, despite efforts to contain the problem.

"These bogus sites are designed to look real while phishing (to gather) credentials or distributing malware," according to the March 2020 post. "You think you're logging into Google.com from an email or instant-chat link, but really you're handing over your password to a crook."

CISA also warned of the potential for homograph attacks in a December 2020 alert about cyber attacks designed to disrupt remote learning as children attended virtual classrooms during the COVID-19 pandemic.

Phishing scams lure you to a phony website. The American Red Cross, its individual state chapters and the Canadian Red Cross have seen several coronavirus phishing scams that claim to be from their organizations.

Spoofed hyperlinks and websites are a red flag for a potential attempt to steal personal information, according to CISA, part of the U.S. Department of Homeland Security. CISA recommends three steps to avoid falling victim to the scheme.

People should assume they eventually will be the target of an attack and take steps in advance to mitigate any damage, MIT's Madnick said. He recommended using software to protect against viruses and malware and having data backups that would make ransomware attacks less effective.


The claim that hackers use letters that look similar but come from another alphabet to deceive people in online phishing schemes is TRUE, based on our research. The deception known as a homograph attack has been going on since at least the early 2000s. Letters from the Cyrillic alphabet are substituted for those that are visually similar in the Latin alphabet to direct unknowing users to malicious websites.


Our fact check work is supported in part by a grant from Facebook.



The rest is here:
Fact check: Hackers using visually similar characters to deceive in phishing schemes - USA TODAY

Read More..

CISA tests cloud log aggregation to ID threats – GCN.com


The Cybersecurity and Infrastructure Security Agency is testing how well aggregated cloud logs can feed its cybersecurity analysis efforts and improve cloud network visibility.

CISA's Cloud Log Aggregation Warehouse collects, aggregates and analyzes national cybersecurity protection system data from agencies that use commercial cloud services. It combines that information with data from Einstein sensors in a cloud-based architecture for improved situational awareness.
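The article does not detail how CLAW works internally, but the basic mechanics of aggregating logs from many sources into one time-ordered stream for analysis can be sketched in Python; the file and field names below are hypothetical:

```python
import heapq
import json

def read_events(path):
    """Yield events from a newline-delimited JSON log file,
    assumed to be pre-sorted by timestamp."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def aggregate(paths):
    """Merge several sorted log streams into one time-ordered stream."""
    streams = [read_events(p) for p in paths]
    return heapq.merge(*streams, key=lambda e: e["timestamp"])

# A single timeline across sources is what makes cross-source correlation
# (e.g., the same indicator appearing in two clouds) possible.
for event in aggregate(["agency_a.jsonl", "agency_b.jsonl"]):
    print(event["timestamp"], event.get("source"), event.get("indicator"))
```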

"CISA wants to see if it can make sense of [the logs] as a community together," CISA CTO Brian Gattoni said at an April 28 event hosted by FCW. "We've run pilots through the [Continuous Diagnostics and Mitigation] program team, through our capacity building team, to look at end point visibility capabilities to see if that closes the visibility gap for us."

In public settings, CISA officials have made clear the government's current programs were not designed to monitor the vectors that Russian intelligence agents exploited during their espionage campaign. They have begun seeking out new capabilities that present a clearer picture on individual end points in agency networks.

In March, Eric Goldstein, a top CISA official, told House lawmakers that "CISA is urgently moving our detective capabilities from that perimeter layer into agency networks to focus on these end points, the servers and workstations where we're seeing adversary activity today."

Gattoni said during his panel discussion that some cloud providers already have the infrastructure built into their service to help CISA aggregate the security information it wants, but he also said the federal government can't depend on that always being the case.

"There's a lot of slips between the cup and the lip when it comes to data access rights for third-party services, so we at CISA have got to explore the use of our programs like [CDM] as way to establish visibility and also look at possibly building out our own capabilities to close any visibility gaps that may still persist," he said.

This article was first posted to FCW, a sibling site to GCN.

About the Author

Justin Katz covers cybersecurity for FCW. Previously he covered the Navy and Marine Corps for Inside Defense, focusing on weapons, vehicle acquisition and congressional oversight of the Pentagon. Prior to reporting for Inside Defense, Katz covered community news in the Baltimore and Washington D.C. areas. Connect with him on Twitter at @JustinSKatz.

See the rest here:
CISA tests cloud log aggregation to ID threats - GCN.com

Read More..

The evolution and future of cloud-native security – SDTimes.com

With the acquisition of my company, StackRox, by cloud-native technology vendor Red Hat, it seems like a good time to reflect on the state of cloud-native security. Security in the cloud has been my life for the past five years, and it's changed very quickly as new cloud-native platforms have taken over the industry. We've had to create new tools and approaches to meet the new technologies and workflows of today's cloud and will need to continue evolving them to meet the challenges of tomorrow's.

Before we get into the future of cloud-native security, though, let's look at where we started in the distant past of seven years ago.

Our industry started with a focus on basic security hygiene for containers, which formed the basis for container security. While container-related technologies had existed for over a decade, Docker provided the toolset that popularized the Linux container as a standard distribution format for applications, making it widely accessible and adopted. While it started out with developers building and running containerized apps on their local machines, Docker containers rapidly found their way into many software environments.


Suddenly, with thousands of applications being distributed via Docker Hub, people realized this new, emerging area of the stack created new security problems. One of the most straightforward to address first was preventing obviously vulnerable software from being introduced into production environments. Container image scanning became commonplace, with many different options available, including open-source scanners like Clair and OpenSCAP, paid offerings like Black Duck, and ones proprietary to cloud providers.

"The Clair team built it in 2015 to detect vulnerabilities as soon as images were pushed to a registry. By making your container contents more visible, we helped mitigate the distribution of vulnerable applications across servers and workstations. This may sound historical, but many popular public container images are still vulnerable," remarked Louis DeLosSantos of the Clair project.
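As a sketch of the kind of gate image scanning enables, the snippet below shells out to a scanner from a Python CI step. It uses Trivy, an open-source scanner in the same space as those the article names; Trivy is my stand-in here, not a tool from the article:

```python
import subprocess
import sys

def image_is_clean(image: str) -> bool:
    """Return True if the scanner finds no HIGH or CRITICAL vulnerabilities.

    Trivy's --exit-code flag makes it return non-zero when findings match
    the severity filter, which is what lets a pipeline gate on the result.
    """
    result = subprocess.run(
        ["trivy", "image", "--exit-code", "1",
         "--severity", "HIGH,CRITICAL", image]
    )
    return result.returncode == 0

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "python:3.12-slim"
    if not image_is_clean(image):
        print(f"Blocking deploy: {image} has HIGH/CRITICAL findings")
        sys.exit(1)
```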

Image scanning was good enough for most users since they were still running containers in a limited context, such as for non-sensitive web apps, or strictly in development and testing. But then organizations started running containers in production, and everyone had to think about baseline security best practices for the underlying container infrastructure, which led to the Center for Internet Security (CIS) Benchmark for Docker and other tools and guidelines, such as those published by the National Institute of Standards and Technology (NIST). A few platforms, like OpenShift and CoreOS, extended this approach with security modules to further lock down the operating system on the underlying nodes.

Generally speaking, this combination of image scanning and secure infrastructure configuration then became the new "good enough" for production deployments, partly because there was no standard for container orchestration yet. The major competing orchestration systems (including Kubernetes, Fleet, Docker Swarm, Marathon, and others) each varied in their feature set, meaning that security tools would have to play to the lowest common denominator to support all of them. Where the security functionality they provided wasn't sufficient for users, a new ecosystem of container security vendors quickly emerged to fill in the gaps and augment the major platforms. They provided and continue to provide solutions for security use cases such as runtime security, compliance, and network segmentation.

As Kubernetes became the dominant orchestration platform, container security evolved into Kubernetes security, the foundation for cloud-native security today. Enterprises rapidly increased their adoption of cloud-native technologies and matured their usage patterns of containerized applications: running in production, deploying sensitive workloads, scaling to hundreds of nodes, and implementing multi-tenant and multi-cluster scenarios. As a result, it eventually became clear that the only way to effectively manage security is to align with the system that is managing the applications that need to be protected.

As a result, we started extending security use cases into the Kubernetes infrastructure itself. Vulnerability management meant supplementing image scanning with scanning for, and fixing, vulnerabilities within the Kubernetes control plane and node components. Configuration management evolved to encompass securing Kubernetes configurations rather than just container configurations. CIS released a Kubernetes security benchmark. Security vendors developed threat detection methodologies focused on finding exploits to Kubernetes components like the Dashboard and malicious activity such as cryptojacking; Microsoft researchers published a Kubernetes Threat Matrix based on the well-known MITRE ATT&CK framework.

This shift to Kubernetes security was also reflected in community efforts that focused on identifying security issues within, and protecting, Kubernetes itself. The Cloud Native Computing Foundation performed a security audit of the main Kubernetes components. The Kubernetes community launched SIG-Security, as well as requiring all component teams to have a member responsible for security, and switching the default settings for controls such as Role-Based Access Control (RBAC) in Kubernetes from optional to mandatory.

The next phase of cloud-native security is already underway, and we are progressing from Kubernetes security to Kubernetes-native security, as we describe in our whitepaper. The small difference between those two phrases belies a widespread evolution in integration, tooling, and approaches. Kubernetes-native security ensures that security is tightly coupled with the underlying Kubernetes platform (such as OpenShift) and extends security controls by taking advantage of the extensibility of Kubernetes. Features like Custom Resource Definitions (CRDs), created to enable application automation, also allow us to achieve security automation.

A key element of Kubernetes-native security is making the stack secure by default. We know that users frequently stick to default configurations, which historically have been left insecure for operational convenience or backwards compatibility. With Kubernetes-native security, there is also the opportunity to provide all the capabilities that someone needs across the full application lifecycle for many different common scenarios, whether dev/test or production, single or multi-cluster, and public web apps or ones that process and store sensitive data.
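As one concrete instance of users sticking to permissive defaults: Kubernetes does not require workloads to declare that they run as a non-root user. A short audit with the official Kubernetes Python client (a sketch, assuming kubeconfig access to a cluster) can surface pods that rely on that default:

```python
from kubernetes import client, config  # pip install kubernetes

def pods_allowing_root():
    """Return containers that never set runAsNonRoot, at either the
    pod or the container level, and so may run as root by default."""
    config.load_kube_config()  # use load_incluster_config() inside a pod
    findings = []
    for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
        pod_sc = pod.spec.security_context
        pod_level = bool(pod_sc and pod_sc.run_as_non_root)
        for container in pod.spec.containers:
            c_sc = container.security_context
            if not pod_level and not (c_sc and c_sc.run_as_non_root):
                findings.append(
                    f"{pod.metadata.namespace}/{pod.metadata.name}:{container.name}"
                )
    return findings

if __name__ == "__main__":
    for item in pods_allowing_root():
        print("runAsNonRoot not enforced for", item)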

Aside from integration with native Kubernetes extension points, cloud-native security will also succeed through close integration with DevOps practices and teams, allowing them to manage their security declaratively the same way they manage their infrastructure and workloads. This is what we mean when we refer to the phrase "shift left": embed and automate security in the workflows that people already use instead of making it an exception. DevOps teams are the new security users we must enable, and our security tooling must be built with them in mind.

"By shifting security left with DevSecOps and leveraging Kubernetes to define security controls as code with a trusted, automated application and deployment pipeline, organizations can achieve highly scalable security and compliance, while spending less time remediating and more time innovating," explained Chris Van Tuin, West Region Chief Solutions Architect, Red Hat.
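A minimal illustration of security controls as code in a deployment pipeline, in the spirit of the quote above: a pre-deploy check that fails the build when a manifest requests a privileged container. The policy itself is illustrative; the sketch assumes PyYAML and plain Kubernetes manifests:

```python
import sys
import yaml  # pip install pyyaml

WORKLOAD_KINDS = {"Pod", "Deployment", "DaemonSet", "StatefulSet", "Job"}

def privileged_containers(manifest_path):
    """Return (kind, name, container) triples that request privileged mode."""
    findings = []
    with open(manifest_path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("kind") not in WORKLOAD_KINDS:
                continue
            spec = doc.get("spec", {})
            # Controllers nest the pod spec under spec.template.spec;
            # a bare Pod keeps containers directly under spec.
            pod_spec = spec.get("template", {}).get("spec", spec)
            for c in pod_spec.get("containers", []):
                if (c.get("securityContext") or {}).get("privileged"):
                    findings.append((doc["kind"], doc["metadata"]["name"], c["name"]))
    return findings

if __name__ == "__main__":
    bad = privileged_containers(sys.argv[1])
    for kind, name, container in bad:
        print(f"{kind}/{name}: container '{container}' is privileged")
    sys.exit(1 if bad else 0)
```

Run against a manifest in CI, a non-zero exit stops the pipeline, giving the same enforcement point before deploy that admission controllers provide inside the cluster.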

Newer technologies like serverless platforms and service meshes, like early orchestration, are still more fragmented and as a result don't yet have comprehensive security practices. However, since most of these are built on top of Kubernetes, they too benefit from a Kubernetes-native security approach. We can also extend our approach to cover the new security use cases that arise when they are used.

Cloud-native security continues to evolve and improve rapidly. Since so much of it is open source, you can keep current on it by participating in the Kubernetes and CNCF security SIGs and following projects like Clair, StackRox, OpenShift, and many others. As you continue on your journey with Kubernetes, you can expect security to continually evolve to meet the demands of your business.

To learn more about the transformative nature of cloud-native applications and open source software, check out KubeCon / CloudNativeCon Europe 2021, a virtual event hosted by the Cloud Native Computing Foundation, which takes place May 4 through May 7. For more information or to register for the event, go here.

Go here to see the original:
The evolution and future of cloud-native security - SDTimes.com

Read More..

Questions to ask when modernising IT infrastructure using the cloud – Finextra

As financial organisations continue their digital transformation, using a cloud-based infrastructure is no longer a choice: it has become a must-have. The debate lies in choosing the right type of cloud environment and the associated tools and processes. There are multiple aspects to bear in mind, and this article does not cover every single one of those (a whole book could be written on the topic). However, to aid with the decision-making process, here are some questions and considerations.

A good starting point is defining the aim of digital transformation. Often there are multiple interconnected reasons. Taking a hypothetical mid-sized bank with 200 banking centres across a European country and with a full suite of financial products, here are a few examples:

Different flavours of cloud

Once the end goals are clear, the next step is to look at what type of cloud environment to use, together with other supporting technologies. There are various types of cloud, mainly multi-cloud, hybrid, hybrid-multi cloud and distributed cloud.

Multi-cloud means using multiple public cloud providers, and the benefits include vendor independence and improving disaster recovery by replicating workloads across different cloud providers. Hybrid cloud refers to a combination of both public and private clouds (and they could all be from the same provider). For financial service providers, the appeal is they can choose to have certain data reside within their own data centres.

In a distributed cloud environment, a public cloud can be run in multiple locations: on the cloud provider's infrastructure, on-premise, even in other cloud providers' data centres, but all managed from a single point of control. Eventually, it will also support edge computing as it evolves, whereby servers and applications are brought closer to where consumers are located.

Container consideration

Another consideration is which container orchestration to use. There is no doubt that containers have revolutionised how software is developed, deployed and managed, speeding up time-to-market and reliability. They have become fundamental to flexible, cloud-based digital transformation.

Various orchestration technologies each have their pros and cons, but often they can co-exist and run side by side, so banks can pick and mix. Another question is whether to use an orchestration tool from a cloud vendor or to install your own choice.

Factors in containerisation orchestration technology choice include the number of clusters that need management and how to address that. Typically banks and other financial institutions find they are managing multiple clusters, perhaps even hundreds, especially when IoT and edge devices such as mobile payment terminals, cheque scanners, and ATMs are involved. The greater the complexity, the higher the level of risk, which in turn can jeopardise security.

Therefore, the selected containerisation orchestration tool needs to reduce complexity, not contribute to it. Simplicity of implementation, management and troubleshooting is a vital requirement. Associated with that is how well security can be implemented across clusters. For many organisations, straightforward scalability is going to be necessary too.

Since maintaining regulatory compliance is a big consideration for financial services firms, the overall cloud environment must continue to comply with GDPR, Sarbanes-Oxley and other local regulations. How is authentication between different components of the environment handled?

As is usually the case with technology, when it comes to choosing the right cloud environment, there is no one-size-fits-all solution. What matters is selecting the cloud infrastructure and supporting tools that best fit the financial organisation today and for years to come. Digital transformation is not a one-time event but rather a continuing evolving process, which is why embarking on modernising IT infrastructure sooner rather than later is so essential.

Originally posted here:
Questions to ask when modernising IT infrastructure using the cloud - Finextra

Read More..

Developer asks, is AWS and Azure killing Linux? – MSPoweruser – MSPoweruser

While Microsoft is building Linux into Windows 10, the company's cloud services may be quietly killing Linux on the server.

More specifically, Engineering Director Mariano Rentera argues that the cloud, in the form of Amazon's AWS and Microsoft's Azure, is killing off Linux jobs.

Whereas before when companies had an IT project they would host it themselves on their own (likely Linux-based) server farm, these days companies build to the cloud, and they do not even build to Linux virtual machines, but rather platform-agnostic APIs and micro-services which are abstracted from the OS they are built on.

While the cloud may still be built on Linux servers, they are now centrally administered by a much smaller number of technicians, and Rentera argues that if Amazon wanted to, they could easily shift their servers to another operating system without affecting the APIs companies connect to.

Given the move away from writing to (and managing) the metal, the interest in becoming a Linux architect has plunged while the interest in becoming a cloud architect has soared.

It is also cheaper to certify as a cloud architect than a Linux architect, with the AWS exam costing $150 and the Red Hat Certified Engineer exam costing $400.

Rentera concludes:

I see less useful to know Linux in a cloud first era, where the number of people getting certified to be a Cloud Architect is growing, while the number of people looking to get a Linux certification is decreasing.

The current tools make a great abstraction of service without needing to have strong knowledge of Linux, are more developer friendly and allow to build products faster.

I'm not saying this is a bad thing, this is just something that could happen sooner than we have thought about.

While Linux is finding application outside of company server rooms, such as in IoT devices, it seems it makes increasing sense for new IT trainees to look elsewhere for a career path.

Original post:
Developer asks, is AWS and Azure killing Linux? - MSPoweruser - MSPoweruser

Read More..

Anticipating Amazon’s Formal Disruptive Entry Into The PC Space | eWEEK – eWeek

Next month, Amazon will launch its first PC-like product, a version of the Amazon Fire tablet bundled with a keyboard; it includes one year of Microsoft Office 365 use. At a price well below $300, this product provides an interesting alternative to offerings like the iPad Pro and Surface Go, both of which are more expensive.

This new Fire Tablet 10+ has things like wireless charging that low-end products often don't get, and it comes preloaded with Amazon's core offerings and access to a curated app store with a subsection of Android apps.

But, if you put the Microsoft Virtual Desktop on the device and connected it to a robust cloud solution like, oh, I don't know, AWS, and then you provided more extensive screen offerings, Amazon could jump early to where everyone else seems to be going: the Virtual Desktop. And, because they have no PC installed base, this move wouldn't put any existing products for Amazon or current customers at risk.

Let's talk about the promise and danger of Amazon's entry into the PC market this week.

As we improve performance and drop networking latency with products like Wi-Fi 6E and 5/6G, we increase the ability to provide a mobile thin-client, or terminal-like, experience. The market has wanted this experience back ever since it moved from Windows but has been hampered by the lack of performance of thin-client solutions. But new servers created by companies like IBM (ZLinux) and those coming based on the new NVIDIA Grace processor promise I/O capability we haven't yet seen in the cloud. And these advancements should further enable virtual PCs.

But this wave needs a champion, and, other than Microsoft, none of the major PC vendors has the necessary cloud back-end prominence to make this work. But Amazon certainly does. While they've mostly played in the low-cost tablet and digital assistance space until now, this latest Amazon Fire 10+ tablet comes close to providing the foundation for this appliance-like virtual PC future.

Currently, the Amazon offer only uses a subscription to Office 365 as the bridge technology to the PC, but for most people working from home who live in Office and a web browser, that may be adequate. This offering will allow Amazon, with their sub-$300 PC, to explore the opportunities of PAAS (PCs As A Service) with a relatively small hardware and software commitment to the effort.

That knowledge and the related data should allow them to carve out an AWS service that mirrors what we once had with mainframes: a host-centric, centrally managed PC service with very low-cost hardware, high cloud-based security and performance options, and a far less complex (read: lower operating cost) alternative to PCs.

If successful, this initial offering will likely lead to larger-screened alternatives tied to more and more enterprise-class cloud features that Amazon has and could uniquely bundle into desktop offerings. This effort stands as a warning that at any time, any one of the major cloud vendors, here or in China, could massively disrupt the PC market in both the consumer and business space, much like Netflix and Amazon took out Blockbuster in movies and Amazon took out some bookstores with Kindle and online purchasing.

The economies of scale, security, performance, and cost advantages of a cloud-based PC offering could, driven by any of these Cloud vendors, do to the PC market what Apple did to the Smartphone market, forcing some existing players to exit the market prematurely.

Markets can change dramatically over short periods, through changes in products or regulations. While many of the changes last century, like the collapse of Standard Oil, RCA, and AT&T, were partially based on regulatory changes, later changes were more driven by vendor disruption.

Netscape took out America Online and CompuServe. Microsoft took out Netscape Navigator; Google took out most prior search products like Ask Jeeves; Apple crippled or took out Microsoft's phone efforts, Nokia, Research in Motion, Palm, Motorola, and the Sony Walkman.

This move by Amazon could foreshadow a very similar pivotal event, and the shifting WFH (Work From Home) requirements could significantly accelerate this pivot. The market is moving to a cloud-centric, terminal model for PC productivity for reliability, security, cost of ownership, and remote management benefits tied to working from home. It looks like Amazon wants to pull an Apple and get there first.

Link:
Anticipating Amazon's Formal Disruptive Entry Into The PC Space | eWEEK - eWeek

Read More..

Opinion: Actually, the new Mighty browser is the Chrome Cloud Tabs feature Ive been waiting for – 9to5Google

Mighty is a new browser project that puts Google Chrome in the cloud and streams it to your PC. While I don't know if Mighty will end up making sense for my personal situation, it does perfectly resemble a product I've found myself hoping Google itself would build as a feature in Chrome. How worthwhile is Mighty itself, though? And will (should) Google copy it?

First, the "should the web be apps or documents?" debate. We all know where various Silicon Valley companies land on this. If you don't, it's basically the Google side, which is that the web is a glorious operating system built on technologies that naturally supplant the need for many native applications (hence things like Chrome OS and Instant Apps), and the Apple side, which is that the web should primarily be lightweight, static documents, and native apps are the best place to build more involved use cases.

The convergence of these two lines of thinking is where we all live today. Web apps have taken over the world, but depending on the platform you use or the task, there are lots of native applications to use as well. But the reality is that a good percentage of people do use the web as an OS, whether that's good or not. This reality is pervasive. Electron apps, for example, which are basically web apps in a native macOS app container, are pervasive on the Mac, and very controversial.

That world isn't without its ills. Enter Mighty, a new app that wants to make web apps feel more like highly optimized native apps and eliminate the various bottlenecks that make using lots of apps in Chrome at the same time a drag. So the first question: for those people who run into this issue occasionally (I am one of them), does Mighty (as pitched today) realistically or practically solve two problems: running lots of web apps in Chrome being 1) a RAM hog and 2) a battery hog?

(As an aside, there's actually something of a parallel with Electron apps, which basically solve a problem (needing to build-it-once-and-fast-and-ship-it-everywhere) by putting things in a container. Mighty solves the limitations of Chrome's resource hogginess by putting it in a container. This probably makes the entire idea of Mighty a non-starter for lots of native-app purists, but not for me, really.)

From what I can tell, Mighty does what it says it does on the tin, which is offload all the resource-hungry parts of Chrome to the cloud, meaning the only thing your local PC has to do is just stream a video feed. Mighty obviously does all the tricks necessary to send your keyboard and mouse input to the cloud, as well as connect itself to all the normal browser connections to other areas of your desktop (default browser, links, downloads, etc.).

Mighty has its own drawbacks, though. For one, it's expensive (supposedly $30+/mo), and two, as mentioned, it sends all your keystrokes and the entirety of your browser activity through Mighty's servers. Mighty is also up against ongoing hardware innovation that makes this less of a problem for people over time (see: M1 battery efficiency and super fast RAM swaps).

Given all these various factors, my initial impression is that Mighty does indeed make sense and solves a real problem (today) for a tiny subset of Chrome users: someone running four instances of Figma, swapping between four Slack channels, and editing 15 Google Docs at once, who wants to do all that on relatively underpowered hardware. A newer Mac could handle all of that surprisingly well, but a 2015 MacBook Pro with 8GB of RAM, while still a perfectly usable machine, would struggle.

But even then, Mighty is a monthly subscription mostly competing with the idea of just buying better hardware. The M2 Macs are coming later this year, and if the initial run of M1 machines is anything to go by, a lot of the "my computer gets bogged down and battery drains because of too many tabs running web apps" complaints are on the verge of ending for many people! (And they never existed for desktop users, or those already using devices at the tippy-top of the specs pyramid.)

I think it's safe to say that the kind of people who would be reading this article, or would even know what Mighty is, are those least likely to need it in the near future.

In tandem with my answer to the first question of whether Mighty actually solves a problem, which I think is a "yes, sort of, probably, for some very specific subset of web professionals in certain circumstances, and even then it's maybe not economical," the next question is, given that, does Mighty make more sense as a feature or a product?

Here, I land firmly in the "Mighty makes way more sense as a feature" camp. A Chrome feature, to be exact.

One big thing is scaling. Mighty is a startup and can only scale so fast (not fast at all)! They apparently have some kind of proprietary backend and have to be careful not to frontload tons of server hardware before the demand exists! Mighty's founder admits as much on their Product Hunt page:

"We're kind of this hybrid software and hardware company. We must buy and capacity plan building lots of custom servers (unlike pure software) and must do so across the world to achieve low latency. That means it's tough to scale instantly world-wide without Google-level resources."

Suhail

Google already has the scale! I often feel like my 2018 $2,500 MacBook Pro with 16GB of RAM doesn't handle my web app multitasking very well in terms of battery and performance, and one of my very first thoughts when Stadia launched was "Why doesn't Google let me run a Chrome tab in this?"

I know it's not popular to root for Google, a tech monolith, and against Mighty, a startup that's clearly put in tons of money and effort trying to solve a real pain point, but I really can't help but do it here.

This is definitely a feature I want, but I just can't see myself (or anyone, really): 1) paying a hefty subscription for this in the long run, 2) trusting a startup with all their web keystrokes, or 3) needing this thing badly enough to choose it over simply upgrading hardware, especially when this service also needs a constant high-speed internet connection.

So if I were to guess the fate of Mighty today, that would be it. It will either be acquired or sherlocked by Google within 12 months. It'll be called Chrome Cloud Tabs, and it'll run on Stadia infrastructure, and it'll be tied to your Google One subscription. It's awesome to imagine I could open a new tab in Chrome that runs on Stadia servers and streams to my desktop for those rare contexts where I'd be OK with the trade-offs.

I'm not going to get into the technicalities and possible reasons this hasn't happened already, but it's probably some combination of the privacy concerns, the economics, and various technical constraints that are keeping Google from doing it the right way. Or maybe Google has done the due diligence and come away with the conclusion that it's not a good long-term bet for one or more of the reasons I outlined.

But whenever the stars do align, if they align, it's certainly a feature I'd use. Not all the time, but I'd use it, and I'd maybe even upgrade my Google One subscription for the luxury.


See the article here:
Opinion: Actually, the new Mighty browser is the Chrome Cloud Tabs feature Ive been waiting for - 9to5Google

Read More..