
Three Key Takeaways From AWS re:Invent 2019 – Forbes

AWS re:Invent 2019, the industry's largest cloud computing conference, concluded last week. With over 65,000 attendees and 2,500 sessions, the scale of the event only grows each year.

Andy Jassy, CEO

It's not just the scale; the pace of innovation at Amazon refuses to slow down. This year's re:Invent witnessed more than 100 new features, products, and services added to the AWS portfolio.

Here are three key themes that emerged from AWS re:Invent 2019:

1. Redefining Edge Computing

Traditionally, the cloud has been considered a highly centralized resource, much like an IBM mainframe. But the advent of edge computing has made the cloud truly distributed. Developers no longer need to work around the latency limitations and performance trade-offs involved in deploying applications in remote data centers.

Edge computing is often discussed either in the context of delivering static content through a CDN or of moving processing closer to IoT devices. But AWS has redefined edge computing in more than one way, building a continuum of compute that amounts to a new form of edge computing.

First, there are AWS Outposts - hardware appliances that customers can rent from Amazon and run within their own data centers. Customers can launch Amazon EC2 instances on AWS Outposts and manage them with the same set of tools they use in the cloud, such as CloudFormation and the AWS Console. Each Outpost deployment is associated with an AWS Region and is capable of running managed services such as Amazon RDS and Amazon S3.
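To make the "same set of tools" point concrete, here is a minimal boto3 sketch of launching an EC2 instance into a subnet that lives on an Outpost; the Region, AMI ID, subnet ID, and instance type are placeholders, and it assumes the Outpost and its subnet have already been provisioned.

```python
# Minimal sketch: launching an EC2 instance onto an Outpost uses the same API
# call as an in-region launch; the subnet's association with the Outpost decides
# where the instance physically runs. All IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # the Region the Outpost is anchored to

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="m5.xlarge",             # must be a type provisioned on the Outpost
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet created on the Outpost
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```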

If AWS Outposts are confined to the data center, AWS Local Zones make the cloud hyper-local by bringing compute, storage, and network services closer to users within a city or metro area. Each AWS Local Zone is an extension of an AWS Region where customers can deploy latency-sensitive applications using AWS services in geographic proximity to end users. Developers can now deploy applications across availability zones that span traditional Regions and Local Zones. This new distributed architecture of the cloud opens up additional avenues for developers and businesses building modern applications for AR/VR experiences, smart cities, connected cars, and more. The Los Angeles AWS Local Zone is generally available by invitation.

Amazon is expanding the footprint of AWS to telecom providers that offer 5G networks. AWS Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within telecommunications providers' data centers at the edge of the 5G network. Wavelength brings the power of the AWS cloud to the network edge to enable latency-sensitive use cases that require near-real-time responses.

From the data center to the metro to the telco infrastructure, AWS is bringing the cloud to new territories. Amazon is certainly redefining edge computing.

2. Investments in Next-Gen Hardware

Amazon is investing in a new breed of chips and hardware that makes AWS infrastructure efficient and cost-effective.

Since the acquisition of Annapurna Labs in 2015, Amazon has been moving software-based heavy lifting to purpose-built hardware. The Nitro System, a collection of hardware accelerators, offloads hypervisor, storage, and network functions to custom chips, freeing up resources on EC2 instances to deliver the best performance.

AWS Graviton2 processors combine Arm-based cores with the Nitro System. The new breed of EC2 instances based on Graviton2 uses the Nitro System, which features the Nitro security chip with dedicated hardware and software for security functions, and supports encrypted EBS storage volumes by default. Compared to the first generation of Graviton processor-based EC2 instances, the current generation delivers 40% better price performance.
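As a rough illustration of how a customer might enumerate the Arm-based (arm64) instance types available in a Region, the hedged boto3 sketch below pages through describe_instance_types and filters the response client-side rather than relying on any particular server-side filter name.

```python
# Hedged sketch: list the instance types that report the arm64 architecture,
# the family that Graviton-based instances fall into. Filtering is done
# client-side on the response to keep the call simple.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

arm_types = []
next_token = None
while True:
    kwargs = {"NextToken": next_token} if next_token else {}
    page = ec2.describe_instance_types(**kwargs)
    for itype in page["InstanceTypes"]:
        if "arm64" in itype["ProcessorInfo"]["SupportedArchitectures"]:
            arm_types.append(itype["InstanceType"])
    next_token = page.get("NextToken")
    if not next_token:
        break

print(sorted(arm_types))
```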

Amazon has also announced the general availability of Amazon EC2 Inf1 instances, a family of instances exclusively built for machine learning inference applications. Combined with 2nd Gen Intel Xeon Scalable processors and 100Gbps networking, these instances deliver unmatched performance for running applications that depend on machine learning inference. Inf1 instances are powered by AWS Inferentia chips that are purpose-built for accelerating inference.

3. Big Bets on Machine Learning and Artificial Intelligence

Amazon continues to expand the AI and ML portfolio through new services and platform offerings. At re:Invent 2019, AWS announced multiple services that take advantage of ML and AI.

Amazon CodeGuru is a managed service that helps developers proactively improve code quality and application performance through AI-driven recommendations. The service comes with a reviewer and profiler that can detect and identify issues in code. Amazon CodeGuru can review and profile Java code targeting the Java Virtual Machine.

Venturing into enterprise search, AWS has launched Amazon Kendra, a managed service that brings intelligent, contextual search to applications. Amazon Kendra can discover and parse documents stored in a variety of sources, including file systems, websites, Box, Dropbox, Salesforce, SharePoint, RDBMS, and Amazon S3.
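For a sense of what using Kendra looks like from an application, here is a minimal sketch of a query call, assuming an index has already been created and populated through one of the connectors above; the index ID and query text are hypothetical.

```python
# Minimal sketch: natural-language query against an existing Amazon Kendra index.
# The index ID is a placeholder; creating the index and attaching data sources
# (S3, SharePoint, and so on) is assumed to have been done separately.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

result = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",  # hypothetical index ID
    QueryText="What is our parental leave policy?",
)

for item in result.get("ResultItems", []):
    title = item.get("DocumentTitle", {}).get("Text", "")
    print(item.get("Type"), "-", title)
```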

Amazon Fraud Detector is a service that can identify potentially fraudulent online activities such as online payment fraud and the creation of fake accounts. Based on Amazon.com's own experience with fraud detection, AWS has built this service to help customers integrate sophisticated fraud detection techniques into their applications.

Amazon SageMaker, the platform-as-a-service (PaaS) offering from AWS, has gained new capabilities that make developers and data scientists more productive. Built on top of Jupyter Notebooks, Amazon SageMaker now has a full-blown development environment. Branded as Amazon SageMaker Studio, the new tooling experience includes debugging, pipeline management, integrated deployment, and model monitoring. The platform is tightly integrated with Git and Jupyter Notebooks to enable collaboration among developers.


Quantum computing will be the smartphone of the 2020s, says Bank of America strategist – MarketWatch

When asked what invention will be as revolutionary in the 2020s as smartphones were in the 2010s, Bank of America strategist Haim Israel said, without hesitation, quantum computing.

At the bank's annual year-ahead event last week in New York, Israel qualified his prediction, arguing in an interview with MarketWatch that the timing of the smartphone's arrival on the scene in the mid-2000s, and its massive impact on the American business landscape in the 2010s, doesn't line up neatly with quantum-computing breakthroughs, which are only now being seen, just a few weeks before the start of the 2020s.

The iPhone had already debuted in 2007, enabling its real impact to be felt in the 2010s, he said, while the first business applications for quantum computing won't be seen until toward the end of the coming decade.

But, Israel argued, when all is said and done, quantum computing could be an even more radical technology in terms of its impact on businesses than the smartphone has been. "This is going to be a revolution," he said.

Quantum computing is a nascent technology based on quantum theory in physics, which explains the behavior of particles at the subatomic level and holds that, until observed, these particles can exist in different places at the same time. While normal computers store information in ones and zeros, quantum computers are not limited by the binary nature of current data processing and so can provide exponentially more computing power.

"Quantum things can be in multiple places at the same time," Chris Monroe, a University of Maryland physicist and founder of IonQ, told the Associated Press. "The rules are very simple; they're just confounding."
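A toy classical simulation helps make the superposition idea concrete: a single qubit is described by two complex amplitudes, and the squared magnitudes give the odds of observing a 0 or a 1. This is only an illustration of the math, not how real quantum hardware is programmed.

```python
# Toy illustration: a qubit state is a 2-element complex vector; measurement
# probabilities are the squared magnitudes of its amplitudes.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

# Equal superposition (what a Hadamard gate produces from |0>)
psi = (ket0 + ket1) / np.sqrt(2)

probabilities = np.abs(psi) ** 2
print(probabilities)   # [0.5 0.5] -- a 50/50 chance of observing 0 or 1
```

With n qubits the state vector holds 2^n amplitudes, which is where the claim of exponentially more computing power comes from.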

In October, Alphabet Inc. (GOOG) subsidiary Google claimed to have achieved a breakthrough by using a quantum computer to complete a calculation in 200 seconds on a 53-qubit quantum computing chip, a task it calculated would take the fastest current supercomputer 10,000 years. Earlier this month, Amazon.com Inc. (AMZN) announced its intention to collaborate with experts to develop quantum computing technologies that can be used in conjunction with its cloud computing services. International Business Machines Corp. (IBM) and Microsoft Corp. (MSFT) are also developing quantum computing technology.
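For a sense of scale, the speedup implied by that claim works out to roughly a billion-fold; the arithmetic below uses only the figures quoted above.

```python
# Back-of-the-envelope: 200 seconds on the quantum chip versus an estimated
# 10,000 years on the fastest classical supercomputer.
seconds_per_year = 365.25 * 24 * 3600
classical_seconds = 10_000 * seconds_per_year
quantum_seconds = 200

speedup = classical_seconds / quantum_seconds
print(f"Implied speedup: ~{speedup:,.0f}x")   # roughly 1.6 billion times faster
```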

Israel argued these tools will revolutionize several industries, including health care, the internet of things, and cybersecurity. He said that pharmaceutical companies are most likely to be the first commercial users of these devices, given the explosion of data created by health care research.

"Pharma companies are right now subject to Moore's law in reverse," he said. They are seeing the cost of drug development double every nine years as the amount of data on the human body becomes ever more onerous to process. Data on genomics doubles every 50 days, he added, arguing that only quantum computers will be able to solve the pharmaceutical industry's big-data problem.
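To put the doubling claim in perspective, a quick back-of-the-envelope calculation shows what "doubles every 50 days" implies over a year and over a decade; this is illustrative arithmetic only, using the figure quoted above.

```python
# If a data set doubles every 50 days, how much does it grow over time?
doubling_period_days = 50

for label, days in [("1 year", 365), ("10 years", 3650)]:
    growth = 2 ** (days / doubling_period_days)
    print(f"{label}: ~{growth:,.0f}x the starting volume")

# Roughly 158x after one year and on the order of 10^22x after a decade --
# the kind of growth Israel argues classical computers cannot keep up with.
```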

Quantum computing will also have a major impact on cybersecurity, an issue that affects nearly every major corporation today. Current cybersecurity relies on cryptographic algorithms, but quantum computing's ability to solve these equations in a fraction of the time a normal computer takes will render current cybersecurity methods obsolete.

"In the future, even robust cryptographic algorithms will be substantially weakened by quantum computing, while others will no longer be secure at all," according to Swaroop Sham, senior product marketing manager at Okta.

For investors, Israel said, it is key to realize that the first one or two companies to develop commercially applicable quantum computing will be richly rewarded with access to untold amounts of data, and that will only make their software services more valuable to potential customers in a virtuous circle.

"What we've learned this decade is that whoever controls the data will win big time," he said.


Global Cloud Computing Stack Layers Market 2019 by Company, Regions, Type and Application, Forecast to 2025 – Global Market News 24

The "Cloud Computing Stack Layers Market" report contains a wide-extending factual assessment for Cloud Computing Stack Layers, which enables the customer to separate the future complicity and estimate the right execution. The advancement rate is evaluated dependent on insightful examination that gives credible information on the worldwide Cloud Computing Stack Layers market. Imperatives and advancement points are merged together after a significant comprehension of the improvement of the Cloud Computing Stack Layers market. The report is all around made by considering its essential information in the overall Cloud Computing Stack Layers market, the essential components in charge of the interest for its products and administrations. Our best analysts have surveyed the Cloud Computing Stack Layers market report with the reference of inventories and data given by the key players (Amazon Web Services, Salesforce, Microsoft Azure, IBM, Google Cloud Platform, SAP, Rackspace, H&P Helion, OVH, Avaya, Oracle), flexible sources and records that help to upgrade cognizance of the related methodological conditions.

Request a sample of the report here: http://www.marketresearchstore.com/report/global-cloud-computing-stack-layers-market-2019-by-499660#RequestSample

The Cloud Computing Stack Layers market report presents a detailed segmentation (Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS)) of the overall market based on development, product type, application, and distinct techniques and systems. The point-by-point explanation of the market's manufacturing process, the use of technology, the conclusions of the world market players, the classification of dealers and shippers, and the specific business data and development plans would help customers plan future courses of action intended to survive in the Cloud Computing Stack Layers market.

The Cloud Computing Stack Layers market report includes the latest technological enhancements and new releases to help customers design, make informed business decisions, and complete their required implementations in the future. It also focuses on current business and present-day developments, future strategy changes, and open opportunities for the Cloud Computing Stack Layers market. Regional development frameworks and projections are among the key segments that explain overall performance, and the report includes key geographical analysis by application (Commercial Use, Public Services, Others).

Inquire about the report here: http://www.marketresearchstore.com/report/global-cloud-computing-stack-layers-market-2019-by-499660#InquiryForBuying

The overall Cloud Computing Stack Layers market assessment is made with the fundamental and direct aim of helping readers exploit the market and participate in business development around important opportunities. Accurate figures and graphical depictions of the market are presented in an illustrated manner. The report shows an examination of potential competition, current market trends, and other essential characteristics around the world.

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions for North America, Europe, Asia-Pacific, South America, and other regions.



Over 750,000 Applications for Copies of US Birth Certificates Left Exposed Online – Security Boulevard

Quick question, were you born in the United States? Have you recently applied for a new copy of your birth certificate? Well, you could be one of the unfortunate people whose birth certificate application was left exposed online.

It has been reported that more than 750,000 applications for copies of U.S. birth certificates were left exposed, without any access control, in a misconfigured Amazon Web Services (AWS) storage bucket.

It is understood that a British security company discovered the data container with no password protection, leaving the door wide open for cybercriminals to steal the information for fraudulent purposes. What's worrying is that the cache is seemingly being updated on a weekly basis, with more applications being added.

The data was being collated by a third-party partner of the U.S. government which provides a service to U.S. citizens who wish to have copies of their birth and death certificates from state governments.

The company at fault has not been named, as it is believed the critical data is still online and currently exposed. The leak exposed traditional sensitive information such as names, dates of birth, home addresses, email addresses, and phone numbers; however, more historical information was also revealed. For example, the server also contained past names of family members, old addresses linked to the applicant, and even the reason the individual was seeking the certificate, which could be as trivial as applying for a new passport or researching their family's history.

Sadly, this is not the first time an unprotected AWS server has resulted in a high-profile data leak: in June 2019, Netflix, Ford, and many other brands all had data exposed in an open Amazon AWS bucket, amounting to 1TB worth of information being left unprotected.

With these incidents occurring so frequently, it begs the question of why these online cloud servers are being left unprotected. Identity theft and fraud are widespread, and these leaks do not give people confidence that companies, governments, and other organizations are doing enough to secure their critical data.
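For AWS specifically, part of the answer is that bucket-level guardrails have historically been opt-in. The hedged boto3 sketch below (the bucket name is a placeholder) shows roughly what auditing and enabling S3 Block Public Access looks like; it addresses the kind of misconfiguration described here, though it is no substitute for the data-centric protection discussed below.

```python
# Hedged sketch: check a bucket's public-access settings and turn on S3
# Block Public Access. The bucket name is a placeholder.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-records-bucket"  # hypothetical bucket name

try:
    config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    print("Current settings:", config)
except ClientError as err:
    # Buckets with no configuration at all raise NoSuchPublicAccessBlockConfiguration
    print("No public access block configured:", err.response["Error"]["Code"])

s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```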

Service providers and processors need to wake up to the reality that data needs to be protected in a data-centric fashion to eliminate the risks of having a lapse or lack of due diligence. Adopting a data-centric protection model ensures that data is protected anywhere it is stored, moved, shared, or used and is the only true firebreak that can quench identity theft.


For Cloud-native App Security, Few Companies Have Embraced DevSecOps – Security Boulevard

A recent study turned up some startling news about the state of security for cloud-native apps. The Security for DevOps Enterprise Survey Report, conducted by the research firm Enterprise Strategy Group on behalf of Data Theorem, found that only 8% of companies are securing 75% or more of their cloud-native applications with DevSecOps practices today.

The survey did provide some hope: the share of companies securing 75% or more of their cloud-native applications with DevSecOps practices is expected to jump from that paltry 8% to 68% within two years.

The study results also revealed that API-related vulnerabilities are the top threat respondents are concerned about, at 63%, when it comes to serverless usage within organizations. Additional study findings include:

The survey found that workloads not only continue to move to public cloud platforms, but that more organizations are also embracing serverless capabilities, and that the general shift to everything consumed as-a-service continues. The report predicts that production workloads will keep shifting to public cloud platforms, as organizations report that more than 40% of their production applications already run on public cloud infrastructure.

Given this affinity for and commitment to public cloud infrastructure, it follows that there is already appreciable use of serverless functions, especially in the enterprise. Specifically, more than half of respondents indicate that their organization's software developers are already using serverless functions to some extent, with another 44% either evaluating or planning to start using serverless within the next two years. Those who are planning or evaluating will need to understand the associated threat model and means of mitigating risks, the report states.

While enterprises are increasingly turning to public infrastructure and serverless, the future will most likely be a mix of workload types. "Containers and serverless are marginally cannibalizing virtual machines and bare metal servers and are expected to coexist with these server types as the underpinnings of both cloud-native and legacy applications," the report states.

"However, while the server type mix for the typical organization is skewed toward VMs and bare metal today, this is expected to shift noticeably in the next 24 months, with containers and serverless platforms supporting, on average, 46% of production applications," it continues.

"This study reveals that while organizations have started, there is more work to be done when it comes to securing their cloud-native apps with the benefits DevSecOps offers," says Doug Cahill, senior analyst and group practice director of cybersecurity for ESG. "Fundamental changes to application architectures and the infrastructure platforms that host them are antiquating existing cybersecurity technologies and challenging traditional approaches to protecting business-critical workloads," he continues.

The report advises organizations to consider newer approaches to securing their cloud-native apps, especially technologies that mitigate the risks associated with API-related vulnerabilities. The report found that API risks topped the minds of respondents.

If organizations are going to ultimately get a handle on cloud, serverless, and, more broadly, API risks, they can't maintain separate security teams for cloud-native apps and other systems, as 82% of organizations claim to do today. It's a good sign that 50% of respondents plan to merge those security efforts.

It's problematic that 32% currently have no such plans.

This study, Security for DevOps Enterprise Survey Report, is based on responses from 371 IT and cybersecurity professionals at organizations in North America responsible for evaluating, purchasing, and managing cloud security technology products and services.


Amping Up The Arm Server Roadmap – The Next Platform

Competition in and of itself does not directly drive innovation - customer needs that might be met by some other product are really what makes suppliers hop to and get the lead out. No matter what you do in this world, there is always a chance that someone else will do it better or quicker, or both.

The nascent Arm server chip market is littered with companies that attempted to break into the server space and compete against the hegemony of the X86 architecture, or that thought they might take a run at it and then, either early on or just before announcing products, decided against it.

Breathe in.

Calxeda launched in 2011 and then famously flamed out for complicated reasons - it didn't have 64-bit processors, and it could not force the hardware stack into datacenters before the software stack was ready. It also ran out of money trying to do tech support for partners. Nvidia launched its Project Denver Arm server effort and then quietly killed it off. Samsung never did make its plans known, and killed them off before admitting anything. AMD jumped in with the K12 Arm server project and its low-end Seattle APUs to try to save face in systems, but then pulled back to concentrate on the Epyc X86 server chips - without a doubt the right thing to do. If the world wants or needs high volume Arm servers at some future date, AMD will be able to create one fairly quickly by global replacing Epyc cores with Arm cores, which is basically what the K12 project was about anyway. Qualcomm was gung-ho with its Centriq line, and then decided, after putting a prototype and a production chip into the field, that this was not going to work out well financially and spiked it. Phytium made a fuss three years ago with its Mars Arm server chip, and was never heard from again, probably because Huawei Technology's HiSilicon Kunpeng 920 looks to be the Arm choice for China. Broadcom put together the very good Vulcan Arm server chip and mapped out a plan to take on Intel in the datacenter, and then, in the middle of trying to buy Qualcomm, decided to jettison the Vulcan effort, which Cavium picked up and turned into its variant of the ThunderX2 chip. In the interim, Marvell bought Cavium and also picked up the Arm server design team from Qualcomm, so Marvell has benefited a few times over from the failures of others, particularly given that it is really the only vendor of Arm server chips that has anything close to production installations. (We just did an in-depth review of the ThunderX roadmap here.) Fujitsu has done a fine job with the HPC-centric A64FX processor, aimed at traditional supercomputing as well as AI workloads, which we have covered at length. And Amazon Web Services has cooked up its own Graviton family of Arm server chips, which it is putting up for sale by the hour on its EC2 compute service. The Graviton chips have the potential to be a higher volume product than the ThunderX line if they take off on the AWS cloud.

Breathe out.

That leaves one more credible maker of Arm server processors, and that is Ampere Computing, the company that was created out of the ashes of the Applied Micro X-Gene Arm server chip business nearly two years ago. It is notable because it has Renee James, former president of Intel, as its chief executive officer, as well as a slew of former Intel chip people on staff and equity backing from The Carlyle Group to boot. Jeff Wittich, senior vice president of products at Ampere, had a chat with The Next Platform about what is coming next for the company as it builds out its roadmap and tries to bring Arm servers into the datacenter among the hyperscaler and public cloud elite.

Wittich is no stranger to these customers, which is why Ampere tapped him for his role in June. Wittich got his bachelor's in electrical engineering at the University of Notre Dame and then went on to get a master's in electrical engineering at the University of California Santa Barbara, where he also worked for two years as a graduate student researcher before joining Intel. He was a process engineer working on etching equipment for a year, and then a senior device engineer in the foundries working on the 45 nanometer high-k metal gate processes that debuted in Intel Xeon server chips in the late 2000s. After that, for a five-year stint, Wittich was a product reliability engineer for Intel's 22 nanometer products, and in 2014 he became senior director of cloud business and platform strategy at the chip giant. By the time he left to join Ampere, that Xeon chip business had grown by 6X in five years - significantly more than the company had expected.

Suffice it to say, James and Wittich know these hyperscaler and cloud builder customers intimately. And that is perhaps more dangerous to Intel than an instruction set and a clever arrangement of transistors on a wafer of silicon.

Here was our starting thesis in the conversation: If you looked at the past eight to ten years from outside the IT sector and you didn't know much about it, you might think that somebody was intentionally benefiting from the end of Dennard scaling and the slowdown in Moore's Law advances in transistors. All of the CPU vendors started stumbling around, in a bit of a daze and not getting important work done on time, and this is coincident with the rise of the hyperscalers and cloud builders and with Intel being able to maintain 50 percent gross margins with its datacenter products because, even with increasing theoretical competition, those alternative Arm chip suppliers from days gone by could not deliver the right chip at high volume at a predictable cadence. It is one thing to make a few hundred or thousand samples; it is quite another to build a few tens of thousands or hundreds of thousands per quarter.

"I completely agree," Wittich tells The Next Platform. "That's one thing that I think is really important, the fact that our whole executive team at Ampere, we've all done this before. So, 500,000 units doesn't sound like much at all to me. I did that for 15 years, and our head architect and our head of engineering, they've all done this for ten or more generations of high volume, server-class CPUs. I think we know what it takes to deliver at scale and at high volume. That's why I think we are particularly well suited to succeed in this space."

Take a look at the executive team at Ampere and you will see that Wittich is not kidding. These are very seasoned people from Intel. And they all believe that the time is right to create that alternative, and that an Arm architecture is the way to go.

"If somebody can come in and establish that they have a reliable cadence of product delivery, can meet the volume requirements, can get through a qual cycle in an efficient and reliable manner, and can provide the customer support that the hyperscalers expect, then there's a big opportunity there," says Wittich. "We are not just trying to go and compete on matching and exceeding the exact same performance metrics or TCO metrics that the broad datacenter market has looked at for the last decade. We are specifically delivering the type of performance that you need in a multitenant cloud, with the type of performance consistency, with the type of security that you need. So it goes beyond just basic performance and basic TCO. It's also about the type of power efficiency that hyperscalers need. It's the type of scalability they're looking for, and it's that foundation of cloud-architected features that provide quality of service, manageability, and security. There's an opportunity to come in and reshape the landscape by doing something that's truly different and truly innovative right now."

One could argue that the eMAG 1 chip, which was based on the Skylark X-Gene 3 chip created by Applied Micro with some tweaks by Ampere, was not that product, although it was a perfectly respectable server chip. The 32-core Skylark chip started shipping in volume in September 2018 and stacked up pretty well against the 16-core Skylake Xeon SP processors from Intel, providing the same number of threads (when Intel turned on HyperThreading). All Ampere chips use real cores without threads to scale up compute, and this is a conscious choice, as it simplifies the pipelines somewhat and, moreover, some workloads do worse rather than better with simultaneous multithreading turned on. And finally, adding threads provides another way that security can be compromised, because threads mean virtualizing and sharing resources like registers and L1 caches, and virtual CPUs (vCPUs) on the public clouds usually have a thread (not a core) as their finest level of granularity. This is, Wittich says, inherently less secure.
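For context on the vCPU point, EC2 already lets a customer ask for one hardware thread per core at launch time via CpuOptions, effectively switching off SMT for that instance; the sketch below shows roughly what that looks like with boto3, with the AMI ID as a placeholder. Ampere's argument is that its cores never expose a second thread in the first place.

```python
# Hedged sketch: launch an x86 instance with one thread per physical core,
# opting out of simultaneous multithreading. The AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5.4xlarge",         # 8 physical cores, 2 threads per core by default
    CpuOptions={"CoreCount": 8, "ThreadsPerCore": 1},  # expose 8 vCPUs, one per core
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["CpuOptions"])
```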

With the next-generation Arm server processor - which is code-named Quicksilver but will not be called the eMAG 2, by the way, and whose official brand has not been unveiled as yet - Ampere will be scaling up and out on a bunch of different vectors. And it is still focusing on multitenant cloud and edge use cases and is not really pursuing legacy enterprise platforms or the HPC sector as other Arm suppliers are trying to do. Ampere will be using the Ares core created by Arm Holdings for its Neoverse N1 platform, with modifications being done by Ampere to optimize performance and to make use of Ampere's own mesh interconnect for on-die communication. The next-generation Ampere chip will scale up to 80 cores on a monolithic die and will be etched in the 7 nanometer processes created by fab partner Taiwan Semiconductor Manufacturing Corp. The 32-core eMAG 1 did alright for a first stab at the market by Ampere, but Wittich says that it did not have enough single-core performance, and this will be fixed in the next-generation chip.

The Ampere Quicksilver chip will have eight memory channels, just like the eMAG 1 did, and Wittich says it will have more memory bandwidth than the eMAG 1 provided; further, it is getting its DRAM memory controllers from a third party, as many Arm server chip makers do. The Quicksilver chip will support the CCIX interface for linking to accelerators like GPUs, and will support two-socket NUMA configurations as well as single-socket implementations, with CCIX serving as the transport for these NUMA links. This stands to reason, since Applied Micro did not really have a hardware-based NUMA technology of its own and was resorting to software-based NUMA over PCI-Express for the X-Gene 2. (Other Arm chip makers are also using CCIX for NUMA links.) The future CPU will also have PCI-Express 4.0 peripheral controllers, but the number of lanes is not yet clear.

The clock speeds on that 7 nanometer Quicksilver chip have not been revealed, but it is hard to imagine that, even with the process shrink from 16 nanometers for the eMAG 1, Ampere can maintain a sustained turbo speed of 3.3 GHz on the cores while boosting the core count to 80. What we do know is that, owing to its edge, hyperscaler, and cloud target markets, the Quicksilver chip will have a much wider compute and thermal range than the Skylark chip had. The Skylark chip had SKUs that ranged from 75 watts to 125 watts, but Quicksilver will range from a low of 45 watts all the way up to 200 watts or more. That implies SKUs ranging from maybe 10 cores all the way up to 80 cores, depending on how much juice the uncore region of the Quicksilver chip burns.

The Quicksilver chip came back from the foundry this week and samples will be shipped out to key partners before the end of the year, according to Wittich. The plan is to ramp up volumes towards the middle of 2020.

As you can see, Ampere has two more chips on the roadmap that it is showing publicly, with a 7 nanometer follow-on in development now and a 5 nanometer kicker to that in the definition stage. This hews more or less to the Arm Holdings Neoverse roadmap, and we expect Ampere to stay more or less in sync with it, picking and choosing technologies as it sees fit from Arm, as it has done with Quicksilver. That implies a more or less annual cadence of chip rollouts.



AWS outperforms rivals in test of cloud capabilities – TechRepublic

The 2020 Cloud Report from Cockroach Labs finds Google Cloud Platform and Microsoft Azure are catching up to Amazon.

Cockroach Labs tested the speed and strength of the three major cloud providers and found that Amazon Web Services holds an edge over Google Cloud Platform and Microsoft Azure.

In the 2020 Cloud Report, Azure did best with the CPU performance test but AWS offered the best network and I/O capabilities. The testers found that GCP made significant improvements over last year's report and had the best showing in network throughput.

Cockroach Labs tested the three providers on a series of microbenchmarks and customer-like workloads. The goal was to understand the performance of each cloud provider overall as well as the strength of each company's machine types.

Cockroach Labs vetted the results with the major cloud providers for a review of the setup of the machines and benchmarks. Cockroach Labs posted the testing process and the results in this public repository. Paul Bardea, Charlotte Dillon, Nathan VanBenschoten, and Andy Woods of Cockroach Labs wrote the 2020 report.

The performance tests and testing tools included:

On the CPU microbenchmark, the best performing Azure machines achieved significantly better results.

The testers found that the "top performing Azure machines use 16 cores with 1 thread per core while the other clouds use hyperthreading across all instances and use 8 cores with 2 threads per core to achieve 16 vCPUs." The authors caution that the effects of avoiding hyperthreading may have inflated the benchmark and may not represent performance on other workloads. They also said that these results are highly correlated with the clock frequency of each instance type.

The reviewers changed the network test setup this year by testing load from multiple clients and observing the results from a single destination server.

The throughput comparison tests found that GCP's network performed much better than AWS or Azure: "Not only do their top performing machines beat each network's top performing machines but so do their bottom performing machines."

The report's authors note that last year AWS outperformed GCP in network tests.

In the latency comparisons, GCP improved over last year's report but AWS won the race again with Azure far behind both competitors: "Even the best machine on Azure is more than five times worse than on AWS or GCP."

Cloud providers offer two types of storage hardware: locally attached storage and network attached storage. Each provider has a different label for these two types:

AWS: instance store volumes (locally attached) and Elastic Block Store volumes (network attached)
Azure: temporary disks (locally attached) and managed disks (network attached)
GCP: local SSDs (locally attached) and persistent disks (network attached)

Cockroach also tested for throughput and latency in this category. The testers used a "configuration of sysbench that simulates small writes with frequent syncs for both write and read performance" and measured read and write capabilities separately.

AWS won the write round with "superior write storage performance with the i3en machine type."

Azure had an advantage over the other two providers in managing threads: AWS and GCP hit a bottleneck at four threads, but Azure continues to increase write IOPS up to 16 threads. The report states that Azure write IOPS excel at managing applications with more threads after falling behind initially at smaller thread counts.


AWS's storage optimized machines live up to their billing as strong choices when optimizing for storage performance. Azure can't reliably outperform AWS on read throughput and the provider's read latency is extremely variable.

The report found that AWS wins the combined storage read comparison across all categories with its i3 machine type.

In the TPC-C benchmark, the testers measured the number of orders processed per minute and the total number of warehouses supported. Testers found that all clouds were within 5% of each other, although AWS came out on top.

The comparison found that "the highest performing machine types from each cloud are also the same machine types which performed the best on the CPU and Network Throughput tests."

Both AWS's c5n.4xlarge and GCP's c2-standard-16 won the CPU, Network Throughput, and Network Latency tests, while Azure's Standard_DS14_v2 won the CPU and Network Throughput tests.

However, the machine types that won the read and write storage tests - AWS's i3.4xlarge and i3en.6xlarge, GCP's n2-standard-16, and Azure's Standard_GS4 - varied in their TPC-C performance.

The authors said this suggests that these tests are less influential in determining OLTP performance and that OLTP workloads like TPC-C are often limited by compute resources.




Hyperscale operators accounted for a third of all data center spending in first three quarters of 2019 – FierceTelecom

It's the hyperscalers' world and we're just living in it. By most any measure, the hyperscale service providers are ascendant in the industry across all levels.

New data from Synergy Research Group (SRG) found that hyperscale operators accounted for 33% of all spending on data center hardware and software in the first three quarters of 2019. That's an increase from 26% in the first three quarters of 2017 and from the 15% posted in the same timeframe in 2014.

Over the same time period, the total market has increased in size by more than 34%, primarily due to increased spending by the hyperscale providers.


By contrast, spending by service providers and enterprises has increased by a measly 6%, according to SRG.

The hyperscale spending is being driven by the continued robust growth in social networking and the strong demand for public cloud services. Enterprise spending has been under pressure due to the ongoing shift in workloads from private networks to the public cloud, according to SRG.

Hyperscale operators are the world's largest providers across various service sectors including infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), software-as-a-service, search engines, social networking and e-commerce.

SRG's data showed that total data center infrastructure equipment revenues, including both cloud and non-cloud, hardware and software, were $38 billion in the recent third quarter. Combined, servers, operating systems, storage, networking and virtualization software accounted for 96% of the data center infrastructure market, with the remainder coming from network security and management software.

According to recent research by Dell'Oro Group, the worldwide server and storage systems market declined 2% in 2019 due to macroeconomic factors and declining commodity costs.

Dell EMC is the leader in both server and storage revenue, according to both SRG and Dell'Oro Group, while Cisco was dominant in the networking sector.

Among the top vendors, SRG said Microsoft and VMware featured heavily in the vendor rankings due to their leadership status in server OS and virtualization applications, respectively.

Below those four, the other leading vendors in the market were HPE, Huawei, Inspur, and Lenovo. Among the major vendors, Inspur and Huawei chalked up the largest growth.

Original design manufacturers (ODMs) were also represented in the rankings due to supplying hardware to the hyperscale providers, according to SRG.

"We are seeing very different scenarios play out in terms of data center spending by hyperscale operators and enterprises," said John Dinsdale, a chief analyst at Synergy Research Group, in a statement. "On the one hand, revenues at the hyperscale operators continue to grow strongly, driving increased demand for data centers and data center hardware. There is an ever-increasing number of hyperscale data centers, many of which continue to be expanded. Those huge data centers are crammed full of servers and other hardware, which are on a frequent refresh cycle."

"On the other hand we see a continued decline in the volume of servers being bought by enterprises. The impact of those declines is balanced by steady increases in server average selling prices, as IT operations demand ever-more sophisticated server configurations, but overall spending by enterprises remains almost flat. These trends will continue into the future.


In October, SRG said hyperscale data centers hit a new high-water mark in the third quarter.

The number of hyperscale data centers increased to 504 at the end of the third quarter, which tripled the total from the beginning of 2013. The total number of data centers increased by 55 over the last four quarters, which marked a bigger increase than was seen in the previous four quarters, according to SRG.

Over the past four quarters, new data centers from the likes of Google, Amazon Web Services, and Alibaba Cloud have been opened in 15 different countries, with the U.S., Hong Kong, Switzerland, and China having the largest number of additions.


IoT news of the week for Dec. 13, 2019 – Stacey on IoT

Here's a security partnership to merge IT and OT: Pulse Secure has teamed up with Nozomi Networks to provide a comprehensive security monitoring service that can track badly behaving devices on IT (information technology) and OT (operational technology) networks. When malicious behavior is detected, Pulse Secure can quarantine the device or segment it from the network. Pulse Secure has several products designed to protect IT networks and Nozomi has the OT expertise. What's interesting about the deal is that security has become an ecosystem play as opposed to something a single company can offer. Prakash Mana, VP of product management with Pulse Secure, says that in the last few years those in the security world have realized that they have to work together to stop threats, which makes partnerships like the one Pulse and Nozomi just signed more important. (Pulse Secure)

UL is building security labels for IoT devices: UL, the standards company that ensures our electrical devices aren't going to cause a fire or blow out our wiring, has been wading into the cybersecurity realm for a while. A few years back, it suggested some security measures for smart home devices, and now it's offering a bit more clarity (and probably tackling the complexity associated with IoT security) with a five-tier labeling scheme. The tiers start with bronze for the devices meeting the least stringent parameters and go all the way up to diamond. In an interview with CNET, a UL spokesperson said that most IoT devices out there today probably wouldn't meet the bronze standard. The interview comes on the heels of a June whitepaper published by UL that lays out the different categories. Of course, with all UL standards, you have to pay to see what they entail. (CNET)

Using IoT to keep an eye on infrastructure: It's no secret that our nation's roads, bridges, dams, and other physical infrastructure components need some TLC. While the government stalls on funding, municipalities are turning to IoT to figure out which of them need the most urgent attention. But this is also a global problem, and companies are trying to build 3-D models of hard infrastructure with drones, cameras, and AI to figure out exactly which bridges, sewers, railways and more are in the most danger. (ZDnet)

Ecosystems are the new oil: Y'all know that I am all about the need for ecosystems and ongoing relationships when it comes to building out comprehensive IoT products. It looks like others are coming around to this thinking, too. Not only did Dr. Irene Petrick from Intel, my podcast guest this week, spend 10 minutes talking about it with me, but the CEO of Packet also wrote an excellent blog post on the topic. He likens our current obsession with data extraction to fossil fuel extraction, pointing out that it can be a short-sighted goal and has the power to cause irreparable harm. Instead, he urges taking an ecosystem approach with efforts to create value from data, whereby companies and players invest in the ecosystem to create value for all of the participants. I wholeheartedly believe this is what all of our investments in digital transformations should be about, and predict that we're on the verge of a shift in how we think about building a business. Read it and be forewarned. (Packet)

Look, a UL-certified in-wall smart outlet: There are a lot of firsts with this product from ConnectSense. The company has made an in-wall outlet that can be connected to the internet and controlled via an app, and the outlet has passed a rigorous certification process from Underwriters Laboratories (UL). The UL certification is rare in the smart outlet category. The outlet comes in a 15- or 20-amp version that costs $79 and $99, respectively. It also works with Siri, Alexa, and Google Assistant. (The Verge)

LORIOT adds three new availability zones for its LoRaWAN network offering: LoRa is a long-range wireless networking technology that operates in unlicensed spectrum. Several companies are using it to build out connectivity services for the IoT. One of them is LORIOT, which makes software for organizations that want to build and operate their own LoRaWAN networks. LORIOT serves up its software from a worldwide network of 16 distributed public servers, three of which were just announced this month. Those servers are in Oregon, Singapore, and The Netherlands, and the new zones will mean lower latencies for folks providing networks in those areas. (LORIOT)

IoT jobs are moving to the cloud: Nutanix, a company that provides software to help build and manage flexible cloud-style computing on-premise, has issued its annual report on trends in cloud computing that includes a small tidbit focused on where applications are running. Surprisingly, while overall more workloads are moving away from the cloud, IoT workloads are moving toward the cloud. The report notes a 3.2% increase in IoT workloads moving to the cloud, whereas workloads happening in a traditional data center have fallen 1.3% and the number of jobs running in private clouds is down 8.1%. Equally surprising is how, even as more IoT workloads move to the cloud, the report cites IoT and edge computing as reasons for the increase in computing jobs leaving the cloud and returning to local servers or traditional data centers. The rationale behind that shift is to reduce latency and also to ensure more security. (Nutanix)

Wearables are taking clinical trials by storm: I've discussed the potential impact that wearable devices can have on the way clinical trials are performed, in particular to make them more accessible and cheaper. And now here's an entire paper discussing the topic in depth, including pointing out that since 2013 the number of digital elements used in clinical trials has skyrocketed. No wonder Apple's so interested in health. (Harvard Business School)

GreenWaves launches a new IoT chip: I profiled GreenWaves, a startup building a RISC-V-based processor for the IoT, last year ahead of the launch of its first chip. Now it has released a second processor designed to process 10x larger neural networks while consuming 5x less power. It also added support for bank-grade encryption and several other features designed to improve how it handles machine learning at the edge. For a deep dive into the specs, click through on the release. (GreenWaves)

