How AI In Edge Computing Drives 5G And The IoT – SemiEngineering

Edge computing, which is the concept of processing and analyzing data in servers closer to the applications they serve, is growing in popularity and opening new markets for established telecom providers, semiconductor startups, and new software ecosystems. It's brilliant how technology has come together over the last several decades to enable this new space, starting with Big Data and the idea that, with lots of information now stored in mega-sized data centers, we can analyze the chaos in the world to provide new value to consumers. Combine this concept with the IoT, and connected everything, from coffee cups to pill dispensers, oil refineries to paper mills, smart goggles to watches, and the value to the consumer could be infinite.

However, many argue the market didn't experience the hockey-stick growth curves expected for the Internet of Things. The connectivity of the IoT simply didn't bring enough consumer value, except for specific niches. Over the past five years, however, technology advancements such as artificial intelligence (AI) have begun to revolutionize industries and to change how much value connectivity can provide to consumers. It's a very exciting time, as the market can see unlimited potential in the combination of big data, IoT, and AI, but we are only at the beginning of a long road. One of the initial developments that helps harness the combination is the concept of edge computing and its impact on future technology roadmaps.

The concept of edge computing may not be revolutionary, but the implementations will be. These implementations will solve many growing issues including reducing energy use by large data centers, improving security of private data, enabling failsafe solutions, reducing information storage and communication costs, and creating new applications via lower latency capabilities.

But what is edge computing? How is it used, and what benefits can it provide to a network? To understand edge computing, we need to understand what is driving its development, the types of edge computing applications, and how companies are building and deploying edge computing SoCs today.

Edge computing, edge cloud, fog computing, enterprise

There are many terms for edge computing, including edge cloud computing and fog computing. Edge computing is typically described as the concept of an application running on a local server in an effort to move cloud processes closer to the end device.

Enterprise computing has traditionally been used in a similar way as edge computing but more accurately describes the networking capabilities and not necessarily the location of the computing. Fog computing, coined by Cisco, is basically the same as edge computing although there are many who delineate the fog either above or below the edge computing space or even as a subset of edge computing.

For reference, end point devices and end points are often referred to as edge devices, not to be confused with edge computing, and this demarcation is important for our discussion. Edge computing can take many forms, including small aggregators, local on-premise servers, or micro data centers. Micro data centers can be regionally distributed in permanent or even movable storage containers that strap onto 18-wheel trucks.

Value of edge computing

Traditionally, sensors, cameras, microphones, and an array of different IoT and mobile devices collect data from their locations and send the data to a centralized data center or cloud.

By 2020, more than 50 billion smart devices will be connected worldwide. These devices will generate zettabytes (ZB) of data annually, growing to more than 150 ZB by 2025.

The backbone of the Internet was built to reliably connect devices to each other and to the cloud, helping ensure that the packets get to their destination.

However, sending all this data to the cloud poses several immense problems. First, the 150 ZB of data will create capacity issues. Second, it is costly to transmit that much data from its location of origin to centralized data centers in terms of energy, bandwidth, and compute power. Estimates project that only 12% of current data is even analyzed by the companies that own it, and only 3% of that data contributes to any meaningful outcomes (that's 97% of collected and transmitted data wasted, for us environmental mathematicians). This clearly outlines operational efficiency issues that need to be addressed. Third, the power consumption of storing, transmitting, and analyzing data is enormous, and finding an effective way to reduce that cost and waste is clearly needed. Introducing edge computing to store data locally reduces transmission costs; however, efficiency techniques are also required to remove data waste, and the predominant method today is to look to AI capabilities. Therefore, most local servers across all applications are adding AI capabilities, and the predominant infrastructure now being installed is new, low-power edge computing server CPUs with connectivity to AI acceleration SoCs, in the form of GPUs and ASICs or an array of these chips.
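As a rough illustration of the edge-side filtering described above, the sketch below shows how a local server might forward only the small fraction of readings that look meaningful instead of streaming everything to the cloud. The thresholds, field names, and anomaly rule are illustrative assumptions, not taken from any particular vendor's system.

```python
# Illustrative sketch: filter sensor readings at an edge server so that only
# "interesting" data is forwarded upstream. Names and thresholds are hypothetical.
import random
from statistics import mean, pstdev

def select_for_upload(readings, z_threshold=3.0):
    """Return only the readings that deviate strongly from the local baseline."""
    values = [r["value"] for r in readings]
    baseline = mean(values)
    spread = pstdev(values) or 1.0  # avoid division by zero on perfectly flat data
    return [r for r in readings if abs(r["value"] - baseline) / spread > z_threshold]

if __name__ == "__main__":
    random.seed(0)
    # Simulated local sensor data: mostly routine values around 21 C, plus one fault.
    readings = [{"sensor": "temp-01", "value": random.gauss(21.0, 0.2)} for _ in range(1000)]
    readings[500]["value"] = 95.0  # hypothetical fault worth reporting upstream
    to_cloud = select_for_upload(readings)
    print(f"captured {len(readings)} readings, forwarding {len(to_cloud)} to the cloud")
```

In this toy example, only the single anomalous reading leaves the site, which is the kind of reduction in transmitted and stored data the paragraph above points to.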

In addition to addressing capacity, energy, and cost problems, edge computing also enables network reliability as applications can continue to function during widespread network outages. And security is potentially improved by eliminating some threat profiles such as global data center denial of service (DoS) attacks.

Finally, one of the most important aspects of edge computing is the ability to provide low latency for real-time use cases such as virtual reality arcades and mobile device video caching. Cutting latency will generate new services, enabling devices to provide many innovative applications in autonomous vehicles, gaming platforms, or challenging, fast-paced manufacturing environments.

By processing incoming data at the edge, less information needs to be sent to the cloud and back. This also significantly reduces processing latency. A good analogy would be a popular pizza restaurant that opens smaller branches in more neighborhoods, since a pie baked at the main location would get cold on its way to a distant customer.

Michael Clegg | Vice President and General Manager of IoT and Embedded | Supermicro

Applications driving edge computing

One of the most vocal drivers of edge computing is 5G infrastructure. 5G telecom providers see an opportunity to provide services on top of their infrastructure. In addition to traditional data and voice connectivity, 5G telecom providers are building the ecosystem to host unique, local applications. By putting servers next to all of their base stations, cellular providers can open up their networks for third parties to host applications, thereby improving both bandwidth and latency.

Streaming services like Netflix, through their Netflix Open Connect program, have worked for years with local ISPs to host high-traffic content closer to users. With 5G's Multi-Access Edge Compute (MEC) initiatives, telecom providers see an opportunity to deliver similar services for streaming content, gaming, and future new applications. The telecom providers believe they can open this capability to everyone as a paid service, enabling anyone who needs lower latency to pay a premium for locating applications at the edge rather than in the cloud.

Credence Research believes that by 2026 the overall edge computing market will be around $9.6B. By comparison, the Research and Markets analysis sees the Mobile Edge Computing market growing from a few hundred million dollars today to over $2.77B by 2026. Although telecoms are the most vocal and likely the fastest growth engines, they are estimated to make up only about one-third of the total market for edge computing. This is because web-scale, industrial, and enterprise conglomerates will also provide edge computing hardware, software, and services for their traditional markets, and they expect edge computing to open opportunities for new applications as well.

Popular fast food restaurants are moving towards more automated kitchens to ensure food quality, reduce employee training, increase operational efficiencies, and ensure customer experiences meet expectations. Chick-fil-A is a fast food chain that successfully uses on-premise servers to aggregate hundreds of sensors and controls with relatively inexpensive equipment that runs locally to protect against any network outages. This was outlined in a 2018 Chick-fil-A blog claiming that "by making smarter kitchen equipment we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business." The blog went on to outline that many restaurants can now handle three times the amount of business that was originally planned, with the help of edge computing.

Overall, a successful edge computing infrastructure requires a combination of local server compute capabilities, AI compute capabilities, and connectivity to mobile/automotive/IoT computing systems (Figure 1).

Figure 1: Edge computing moves cloud processes closer to end devices by using micro data centers to analyze and process data.

As the Internet of Things (IoT) connects more and more devices, networks are transitioning from being primarily highways to and from a central location to something akin to a spider's web of interconnected, intermediate storage and processing devices. Edge computing is the practice of capturing, storing, processing and analyzing data near the client, where the data is generated, instead of in a centralized data-processing warehouse. Hence, the data is stored at intermediate points at the edge of the network, rather than always at the central server or data center.

Dr. James Stanger | Chief Technology Evangelist | CompTIA

Use case for edge computing: Microsoft HoloLens

To understand the latency benefits of using edge computing, Rutgers University and Inria analyzed the scalability and performance of edge computing (or, as they call it, edge cloud) using the Microsoft HoloLens.

In the use case, the HoloLens read a barcode and then used scene segmentation in a building to navigate the user to a specific room with arrows displayed on the HoloLens. The process used both small data packets of mapping coordinates and larger packets of continuous video to verify the latency improvements of edge computing vs. traditional cloud computing. The HoloLens initially read a QR code and sent the mapping coordinate data to the edge server, which used 4 bytes plus the header and took 1.2 milliseconds (ms). The server found the coordinates and notified the user of the location, for a total of 16.22 ms. If the same packet of data were sent to the cloud, it would take approximately 80 ms (Figure 2).

Figure 2: Comparing latency for edge device to cloud server vs edge device to edge cloud server.

Similarly, they tested the latency when using OpenCV to do scene segmentation to navigate the user of the HoloLens to an appropriate location. The HoloLens streamed video at 30 fps, with the images processed in the edge compute server on an Intel i7 CPU at 3.33 GHz with 15 GB of RAM. Streaming the data to the edge compute server took 4.9 ms. Processing the OpenCV images took an additional 37 ms, for a total of 47.7 ms. The same process on a cloud server took closer to 115 ms, showing a clear benefit of edge computing for reduced latency.

This case study shows the significant benefit in latency for edge computing, but there is so much new technology that will better enable low latency in the future.

5G outlines use cases with less than 1 ms latency today (Figure 3), and 6G is already discussing reducing that to 10s of microseconds (µs). 5G and Wi-Fi 6 are increasing the bandwidth for connectivity: 5G intends to increase bandwidth up to 10 Gbps, and Wi-Fi 6 already supports 2 Gbps. AI accelerators claim scene segmentation in less than 20 µs, which is a significant improvement over the quoted Intel i7 CPU processing each frame in about 20 ms in the technical paper described above.

Figure 3: Bandwidth improvements up to 10 Gbps, compared to 10s and 100s of Mbps in Figure 2, from HoloLens to router and router to edge server, combined with AI processing improvements (20 ms to 20 µs), enable roundtrip latency of less than 1 ms.
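To make the figure's arithmetic concrete, here is a rough, hedged latency-budget calculation showing how faster links plus microsecond-scale AI inference can bring the round trip under 1 ms. The frame size and effective link rates are illustrative assumptions, not measurements from the paper.

```python
# Rough round-trip latency budget for an edge inference request.
# All inputs are illustrative assumptions, not measured values.

def transfer_ms(payload_bytes: float, link_gbps: float) -> float:
    """Serialization time for a payload over a link, in milliseconds."""
    return payload_bytes * 8 / (link_gbps * 1e9) * 1e3

frame_bytes = 150_000  # assumed compressed video frame (~150 KB)

# Older setup: ~100 Mbps effective link, ~20 ms per-frame CPU inference.
old_total = 2 * transfer_ms(frame_bytes, 0.1) + 20.0

# Newer setup: ~10 Gbps 5G/Wi-Fi 6 class link, ~0.02 ms accelerator inference.
new_total = 2 * transfer_ms(frame_bytes, 10.0) + 0.02

print(f"older link + CPU inference  : {old_total:.2f} ms round trip")
print(f"faster link + AI accelerator: {new_total:.3f} ms round trip")
```

Under these assumptions the older combination lands around 44 ms, while the faster link and accelerator bring the same round trip to roughly a quarter of a millisecond, consistent with the sub-1 ms claim in the figure.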

Clearly, if edge computing shows benefits over cloud computing, wouldn't moving computing all the way into the edge devices be the optimal solution? Unfortunately, not for all applications today (Figure 4). In the HoloLens case study, the data uses an SQL database that would be too large to store in the headset. Today's edge devices, especially devices that are physically worn, don't have enough compute power to process large datasets. In addition to the compute power, software in the cloud or on edge servers is less expensive to develop than software for edge devices, because cloud/edge software does not need to be compressed into smaller memory and compute resources.

Figure 4: Comparing cloud and edge computing with endpoint devices.

Because certain applications run best based on the compute capabilities, storage capabilities, memory availability, and latency capabilities of different locations in our infrastructure, be it in the cloud, in an edge server, or in an edge device, there is a trend to support future hybrid computing capabilities (Figure 5). Edge computing is the initial establishment of a hybrid computing infrastructure throughout the world.

Figure 5: AI installed in the HoloLens, at the edge server, and in the cloud enables hybrid computing architectures that optimize compute, memory, and storage resources based on application needs.

Understanding edge computing segments

Edge computing is about computing locations closer to the application than the cloud. However, is that 300 miles, 3 miles or 300 feet? In the world of computing, the cloud theoretically has infinite memory and infinite compute power. At the device, there is theoretically just enough compute and memory to capture and send data to the cloud. Both theoreticals are a bit beyond reality, but let's use this as a method to describe the different levels of edge compute. As cloud computing resources get closer to the end point device or application, the storage, memory and computing resources theoretically become less and less. The power that is consumed by these resources is also lowered. Moving closer not only lowers the power but also lowers the latency and increases the efficiency.

Three basic edge computing architectures are starting to emerge within the space (Figure 6). First and closest to traditional data centers are regional data centers that are miniature versions of cloud compute farms placed strategically to reduce latency but maintain as much of the compute, storage and memory needed. Many companies and startups address this space but SoCs designed specifically to address regional data centers do little to differentiate from classic cloud computing solutions today, which focus on high-performance computing (HPC).

Local servers and on-premise servers, the second edge computing segment, are where many SoC solutions specifically address the power consumption and connectivity needs of edge computing. There is also substantial commercial software development in this space today, in particular with the adoption of more flexible platforms that enable containers, such as Docker and Kubernetes. Kubernetes is used in the Chick-fil-A example described earlier. The most interesting piece of the on-premise server segment with respect to semiconductor vendors is the advent of a chipset adjacent to the server SoC to handle the AI acceleration needed. Clearly an AI accelerator is located in the compute farms in the cloud, but a slightly different class of AI accelerator is built for the edge servers, because this is where the market is expected to grow and there is an opportunity to capture a foothold in this promising space.

A third segment for edge computing includes aggregators and gateways that are intended to perform limited functions, maybe only running one or a few applications with the lowest latency possible and with minimal power consumption.

Each of these three segments has been defined to support real-world applications. For instance, McKinsey has identified over 107 use cases in its analysis of edge computing. ETSI, via its Group Specification MEC 002 v2.1.1, has defined over 35 use cases for 5G MEC, including gaming, service level agreements, video caching, virtual reality, traffic deduplication, and much more. Each of these applications has some predefined latency requirement based on where in the infrastructure the edge servers may exist. The OpenStack Foundation is another organization that has incorporated edge computing into its efforts, with Central Office Re-architected as a Datacenter (CORD) latency expectations, where traditional telecom offices distributed throughout networks now host edge cloud servers.

The 5G market expects use cases with latency as low as 1 ms roundtrip, from the edge device, to the edge server, and back to the edge device. The only way to achieve this is through a local gateway or aggregator, as going all the way to the cloud typically takes 100 ms. The 6G initiative, which was introduced in the fall of 2019, announced the goal of 10s of µs latency.

Each of the edge computing systems supports a similar architecture of SoCs that includes a networking SoC, some storage, a server SoC, and now an AI accelerator or an array of AI accelerators. Each type of system offers its own levels of latency, power consumption, and performance. General guidelines for these systems are described in Figure 6. The market is changing, and these numbers will likely move quickly as the technology advances.

Figure 6: Comparing the three main SoC architectures for edge computing: Regional data centers/edge cloud; on-premise servers/local servers; and aggregators/gateways/access.

How is edge computing impacting server system SoCs?

The primary goal of many edge computing applications is to enable new services related to lower latency. To support lower latency, many new systems are adopting some of the latest industry interface standards, including PCIe 5.0, LPDDR5, DDR5, HBM2e, USB 3.2, CXL, PCIe-based NVMe, and other next-generation standards-based technologies. Each of these technologies provides lower latency via bandwidth improvements when compared to previous generations.

Even more pronounced than the drive to reduce latency is the addition of AI acceleration to all of these edge computing systems. AI acceleration is provided by some server chips with new instructions such as the x86 extension AVX-512 Vector Neural Network Instructions (AVX-512 VNNI). Many times, this additional instruction set is not enough to provide the low-latency and low-power implementations needed for anticipated tasks, so custom AI accelerators are added to most new systems. These chips commonly adopt the highest-bandwidth host-to-accelerator connectivity possible. For example, use of PCIe 5.0 is rapidly expanding today due to these bandwidth requirements, which directly impact latency, most commonly in some sort of switching configuration with multiple AI accelerators.

CXL is another interface that is gaining momentum, as it was built specifically to lower latency and provide cache coherency. Cache coherency can be important due to the heterogeneous compute needs and extensive memory requirements of AI algorithms.

Beyond the local gateways and aggregator server systems, a single AI accelerator typically does not provide enough performance, so these accelerators must be scaled with very high bandwidth chip-to-chip SerDes PHYs. The latest released PHYs support 56G and 112G connections. Chip-to-chip requirements to support scaling of AI have seen many different implementations. Ethernet may be one option to scale in a standards-based implementation, and a few solutions are offered today with this concept. However, many implementations today leverage the highest-bandwidth SerDes possible with proprietary controllers. The differing architectures may push future server system SoC architectures to incorporate the networking, server, AI, and storage components in more integrated SoCs, versus the four distinct SoCs being implemented today.

Figure 7: Common server SoC found at the edge, with the number of processors, Ethernet throughput, and storage capability varying based on the number of tasks, power, latency, and other needs.

The AI algorithms are pushing the limits with respect to memory bandwidth requirements. To give an example, the latest BERT and GPT-2 models require 345M and 1.5B parameters, respectively. Clearly, high-capacity memory is needed to host these as well as the many complex applications that are intended to run in the edge cloud. To support this capacity, designers are adopting DDR5 for new chipsets. In addition to the capacity challenges, the AI algorithms' coefficients need to be accessed for the massive number of multiply-accumulate calculations done in parallel, in non-linear sequences. Therefore, HBM2e is one of the latest technologies seeing rapid adoption, with many instantiations per die.

Figure 8: Common AI SoC with high speed, high bandwidth, memory, host to accelerator, and high-speed die-to-die interfaces for scaling multiple AI accelerators.
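For a rough sense of the memory pressure those parameter counts create, here is a hedged back-of-the-envelope sketch of the raw storage needed just to hold the model weights. The parameter counts come from the text above; the precision choices are illustrative assumptions.

```python
# Back-of-the-envelope memory footprint for holding model weights.
# Parameter counts come from the article; precisions are illustrative assumptions.

MODELS = {"BERT": 345e6, "GPT-2": 1.5e9}
BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "INT8": 1}

for name, params in MODELS.items():
    for precision, nbytes in BYTES_PER_PARAM.items():
        gib = params * nbytes / 2**30
        print(f"{name:6s} @ {precision:9s}: {gib:6.2f} GiB of weights")
```

Weights are only part of the story; activations, batching, and any on-chip buffering add more on top, which is part of why designers reach for both DDR5 capacity and HBM2e bandwidth.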

The moving targets and the segmentation of edge computing

If we take a closer look at the different types of needs for edge computing, we see that regional data centers, local servers, and aggregation gateways have different compute, latency, and power needs. Future requirements are clearly focused on lowering the latency of the round-trip response, lowering the power of the specific edge application, and ensuring there is enough processing capability to handle the specific tasks.

The power consumed by the server SoCs differs based on the latency and processing requirements. Next-generation solutions will not only lower latency and power, but also include AI capabilities, in particular AI accelerators. The performance of these AI accelerators also changes based on the scaling of these needs.

It is evident, however, that AI and edge computing requirements are rapidly changing; many of the solutions we see today have progressed multiple times over the past two years and will continue to do so. Today's performance can be categorized, but the numbers will continue to move, increasing performance, decreasing power, and lowering overall latency.

Figure 9: The next generation of server SoCs and the addition of AI accelerators will make edge computing even faster.

Conclusion

Edge computing is a very important aspect of enabling faster connectivity. It will bring cloud services closer to the edge devices. It will lower latency and provide new applications and services to consumers. It will proliferate AI capabilities, moving them out of the cloud. And it will be the basic technology that enables future hybrid computing, where computing decisions can be made in real time locally, in the cloud, or at the device, based on latency, power, and overall storage and performance needs.

Continued here:
How AI In Edge Computing Drives 5G And The IoT - SemiEngineering

Online voting takes another hit – GCN.com

The Voatz blockchain-secured mobile voting app took a shellacking from researchers at MIT, who reported they uncovered several security vulnerabilities.

The MIT researchers said their security analysis pointed to weaknesses that would allow hackers to "alter, stop, or expose how an individual user has voted," found the app poses "potential privacy issues for users," and noted its limited transparency, which limits security researchers' ability to assure the app's integrity.

"Our findings serve as a concrete illustration of the common wisdom against Internet voting, and of the importance of transparency to the legitimacy of elections," they wrote in a paper describing their analysis of the Voatz system.

For their analysis, the MIT researchers reverse engineered the app and created a model of the Voatz server. They said the company's "minimal available documentation of the system" prevented them from running tests on the actual voting process, so their study presents "an analysis of the election process as visible from the app itself."

Before releasing the paper, the MIT team took its findings to the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, whose Hunt and Incident Response Team (HIRT) investigated whether there was any evidence of current or previous malicious activity in the Voatz network environment.

In the week-long evaluation, conducted in September 2019 and focused on Voatz's corporate and cloud networks, CISA found no evidence of active threats, according to a report by CoinDesk. In the HIRT report, investigators said they uncovered some issues that could pose future concerns, but overall they commended the company for its "proactive measures in the use of canaries, bug bounties, Shodan alerts, and active internal scanning and red teaming."

HIRT did not assess the security of the app itself.

In a blog post titled "Voatz Response to Researchers' Flawed Report," the company detailed three "fundamental" flaws with the research.

First, company officials said, the MIT team used an Android version of the Voatz app that was "at least 27 versions old at the time of their disclosure and not used in an election." Second, the app never connected to the Voatz servers, which are hosted in Amazon Web Services and Microsoft Azure clouds, making the researchers unable to register with the app, verify their identity or receive or cast a ballot. Third, the company said that rather than accessing the Voatz servers, the researchers "fabricated an imagined version" of the servers, hypothesized as to how they worked and made assumptions "that are simply false."

Addressing the researchers' complaints about the company's lack of transparency, Voatz said it works with "qualified, collaborative researchers." It also emphasized that in all the elections that have used the Voatz app, which have involved fewer than 600 voters, no issues have been reported.

"The reality is that continuing our mobile voting pilots holds the best promise to improve accessibility, security and resilience when compared to any of the existing options available to those whose circumstances make it difficult to vote," the blog said.

The Voatz app has been used most extensively in West Virginia. Secretary of State Mac Warner first tested the option for qualified overseas military service members to cast absentee ballots in county primary elections in May 2018. It was also used in the state's November 2018 election, where 144 voters in 30 different countries were able to cast their ballots. In February, the app will be made available to absentee voters with physical disabilities.

Users download the app to their smartphones and verify their identities by providing a photo of their driver's license, state ID, or passport that is matched to a selfie. Once voters' identities are confirmed, they receive a mobile ballot based on the one they would receive in their local precinct. The distributed ledger technology ensures the votes cannot be tampered with once they've been recorded. The app has also been used in Colorado and Utah.

One Voatz advocate contacted by CoinDesk said the accessibility benefits of the app far outweigh any security risks. Amelia Powers Gardner, an election auditor in Utah County, Utah, who supervised her county's use of the Voatz system for disabled voters and service members deployed overseas, said the Voatz system is a much better option than email ballots for otherwise disenfranchised voting groups.

"While these concerns around mobile voting can be valid, they don't rise to a level of security that causes me to even question the use of the mobile app," she told CoinDesk.

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG's Computerworld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia's Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at [emailprotected] or @sjaymiller.

Originally posted here:
Online voting takes another hit - GCN.com

Security Researchers Find Flaws in Online Voting System Tested in Five States – Mother Jones

An online voting technology that has been tested in five states can be hacked to alter, block, or expose voters' ballots, according to research published Thursday by a trio of MIT researchers.

Voatz, a Boston-based company, claims its app allows for widely accessible and secure voting from smartphones by relying on security features built into the phones themselves. It has run pilots in several states including West Virginia, where the technology was used during the 2018 midterms to facilitate online voting for Americans living overseas, including military personnel. The app has also been used in various elections in Denver, Oregon, and Utah. In 2016, the Massachusetts Democratic Convention and Utah Republican Convention relied on this technology. This year, thousands more people in West Virginia were set to use the app under expanded access laws in the state designed to help absentee voters with disabilities, but now officials there are reconsidering their options.

The MIT researchers, graduate students Michael Specter and James Koppel and their adviser Daniel Weitzner, claim in their new paper that they found the vulnerabilities and disclosed them to the Department of Homeland Security in order to alert election administrators in the jurisdictions using the app.

Voatz is not a stranger to national headlines. In October 2019, then-CNN reporter Kevin Collier reported that a student from the University of Michigan had been referred to the FBI for investigation after the company claimed the student tried to break into its systems during the 2018 election. Last week, information security journalist Yael Grauer took a deeper look at the case, reporting how the company may have changed the terms of its bug bounty program, which offers rewards to researchers who find and report vulnerabilities, after the news broke, suggesting it may have sought to deter research on its tech.

Last November, Sen. Ron Wyden (D-Ore.) called for the Department of Defense and the NSA to audit Voatz, after complaining the company wouldn't release security audits and wouldn't identify the security researchers it claimed to be working with.

"I raised questions about Voatz months ago, because cybersecurity experts have made it clear that internet voting isn't safe," Wyden said in a statement Thursday. "Now MIT researchers say this app is deeply insecure and could allow hackers to change votes. Americans need confidence in our election system. It is long past time for Republicans to end their election security embargo and let Congress pass mandatory security standards for the entire election system."

In a response posted to its blog, "Chronicles of an Audacious Experiment," Voatz called the MIT report flawed. The company claimed the researchers tested the company's Android app that was at least 27 versions old. And it said the outdated app was never connected to the company's servers but rather to simulated servers, and the researchers therefore made false assumptions about how the back end of the system works. In short, the company said, making claims about a backend server without any evidence or connection to the server negates any degree of credibility on behalf of the researchers.

The company claimed that past elections using its technology had run smoothly, and it attacked the MIT researchers for seeking media attention, contending their true aim was "to deliberately disrupt the election process, to sow doubt in the security of our election infrastructure, and to spread fear and confusion."

Alex Halderman, an election security expert at the University of Michigan, tweeted Thursday that the findings show there's a much greater risk than there should be that a network-based attacker, like a malicious WiFi router or ISP, could access Voatz's private key, impersonate the Voatz API server, and then intercept and change votes. He said it was shocking how primitive the app is and that no responsible jurisdiction should use Voatz in real elections any time soon.

Of Voatz's rebuttal to the MIT report, Halderman said: "The Voatz response doesn't seem to dispute any of the specific technical claims in the MIT paper. That's very telling, in my view. If any of it is wrong, Voatz should say what, specifically, that is. They don't seem to even say the more recent version of the app works differently."

The researchers claim that their analysis shows the app could allow an adversary to see a user's vote or disrupt the transmission of voting data. An attacker could control their vote, the researchers claim, and someone who controls the back-end server would have full power to observe, alter, and add votes as they please. This table outlines the researchers' summary findings based on the level of access the adversary gains.

A summary of potential attacks a hacker could launch against the Voatz app, according to the MIT researchers.

Michael Specter, James Koppel, Daniel Weitzner

The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) worked with the MIT researchers to alert election officials, a CISA spokesperson told Mother Jones, and shared relevant information with Voatz as well. The election officials were able to speak with the researchers and CISA to understand and manage risks to their systems, the spokesperson said, adding that there is no known exploitation of the vulnerabilities in the bring-your-own-device mobile voting system described in the research.

Donald Kersey, general counsel for West Virginia Secretary of State Mac Warner, said in a statement provided to Mother Jones that the state appreciates the responsible and ethical reporting of this research through the Department of Homeland Security by the research team at MIT, and that Warner hasn't decided which technology to use for the May 12 primary election or the general election in November. Warner's office also provided a copy of a declassified DHS assessment of the Voatz network. The audit, conducted at Voatz headquarters last fall, found some security gaps but did not identify any threat actor activity within Voatz's network environment.

The report doesn't examine the app directly, but it does cover the cloud servers used to support it. While the team saw no evidence of malicious activity, it did determine that some server settings could unintentionally lead to a reduced security posture. Voatz reported to DHS that those concerns had been addressed.

Read the original:
Security Researchers Find Flaws in Online Voting System Tested in Five States - Mother Jones

Five cloud-based tools your business needs – IT PRO

Cloud-based subscription services are the key components of the modern business toolbox, embodying the screwdrivers and spanners necessary to construct a digital workspace. As such, they should be viewed as central to any digital transformation strategy.

Microsoft's cloud offering Office 365 hit 200 million users in FY20 Q1, dwarfing its main competitor, G Suite. However, while Google's cloud suite is but a drop in Office 365's ocean, G Suite is rapidly snapping up market share, and not necessarily to Office 365's detriment.

That's because the market's growth is incremental. Year upon year, demand for cloud-based subscription services intensifies. In the past decade, AWS has emerged as a rival to Microsoft's throne, while other applications, such as Salesforce, have firmly embedded themselves within the enterprise, evidencing a trend which shows no signs of slowing.

Building a future workspace begins with the deployment of cloud-based services, each offering a particular tool or set of tools which support a workflow. The best are those which are easily integrated with existing and additional applications; better still are single cloud-based, enterprise-wide services that provide a single-pane-of-glass approach, delivering a unified experience for workers and customers alike.

Read on to learn which cloud-based tools are needed to deliver an optimised digital workspace for your business.

Centralised collaboration tools are quickly becoming the heart of the digital workplace, providing a platform which often acts as the focal point of otherwise disparate cloud applications; all bridges - should they stem from email, analytics, or storage - lead to the workflow hub.

For example, Microsoft Teams is able to host the Office 365 toolset, facilitating a more collaborative, productive, and efficient way for users, teams and businesses to work; instead of jumping between apps, tools are accessed from one simple-to-operate platform, easing usability and boosting productivity.

Microsoft Teams is jockeying for market share with Slack. Though Slack predates Teams, recently the scales have tipped in Teams' favour. Workplace from Facebook is the new kid on the block, offering similar file-sharing, storage, and communication functionalities.

The aforementioned workflow hub equips users with file-sharing abilities through its instant communication channels; however, often - as is the case with Slack, for example - shared files are downloaded straight onto servers.

Having a cloud-based file hosting tool allows employees to share documents and collaborate online, with files being downloaded securely and directly to the cloud.

Microsoft offers OneDrive as a core element of Office 365, a tool able to securely store files that can then be accessed by remote workers, regardless of their physical location. Documents uploaded to OneDrive can then be distributed by SharePoint, Office 365s document management and storage system that integrates smoothly with the wider Microsoft Office suite.

Google does things a little differently. Sheets instead of Excel, Docs replace Word, and files are uploaded to Google Drive. Interestingly enough, Google has announced plans to add Microsoft Office file format support to its range of apps, adding an element of versatility to its suite of collaboration tools.

Email is obviously nothing new, but the advantages of embedding your system within a cloud application can transform a lethargic communication medium into a management tool, one that includes helpful additions such as a calendar, a task manager, and a web browser.

Hosting email systems in the cloud also brings additional backup and security features, while also bringing about a reduction in maintenance costs by rendering physical servers obsolete.

Cloud-based tools don't only allow employees to make better, faster decisions by smoothing communication channels. Business intelligence and analytics tools can be employed which use an organisation's data to help employees make informed decisions.

Microsoft's Power BI, part of the Office 365 suite, transforms data into a more visual form, making analysis easy, while additionally allowing users to create bespoke reports and dashboards.

Cloud-based business intelligence is quickly becoming an integral part of digital transformation strategies, with an all-time high of 48% of organisations stating cloud business intelligence and analytics was important to their operations in 2019.

The digital transformation process is overseeing the migration en masse of applications to the cloud, and there's no denying that this surfaces problems, cementing the role of reporting tools within the enterprise.

Reporting tools such as JIRA provide a centralised dashboard which employees navigate to post and resolve tickets, typically related to internal IT infrastructure issues. Whilst cloud versions of popular reporting tools may come with caveats such as limited capability, the general advantages of cloud-based applications apply, from the lower costs of cheaper and easier maintenance without physical servers to manage, to easily implemented backup solutions.

View post:
Five cloud-based tools your business needs - IT PRO

DDoS report reveals that the complexity and volume of attacks continues to grow – Continuity Central

Published: Wednesday, 12 February 2020 09:22

Link11 has released findings from its annual DDoS Report, which revealed a rising number of multivector and cloud computing attacks during 2019.

The latest Link11 DDoS report is based on data from repelled attacks on web pages and servers protected by Link11's Security Operations Center (LSOC).

Key findings from the annual report include:

The data showed that the frequency of DDoS attacks depends on the day of the week and time of the day, with most attacks concentrated around weekends and evenings. More attacks were registered on Saturdays, and between 4pm and midnight on weekdays.

There were also a number of new amplification vectors registered by the LSOC last year, including WS-Discovery, Apple Remote Management Service, and TCP amplification, with registered attacks for the latter doubling compared to the first six months of the year. The LSOC also saw an increase in carpet bombing attacks in the latter part of 2019, which involve a flood of individual attacks that simultaneously target an entire subnet or CIDR block with thousands of hosts. This popular method spreads manipulated data traffic across multiple attacks and IPs. The data volume of each is so small that it stays under the radar, and yet the combined bandwidth has the capacity of a large DDoS attack.
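To illustrate the carpet-bombing arithmetic, here is a small hedged sketch showing how many small flows, each staying below a per-IP alerting threshold, can still add up to a large aggregate attack. The per-target rate, detection threshold, and subnet size are invented for illustration, not taken from the report.

```python
# Illustrative carpet-bombing arithmetic: many small flows, one big aggregate.
# All numbers below are hypothetical assumptions, not figures from the report.

hosts_in_subnet = 254        # e.g. one /24 block of target addresses
per_host_mbps = 40           # traffic aimed at each individual host
per_host_alert_mbps = 100    # assumed per-IP detection threshold

aggregate_gbps = hosts_in_subnet * per_host_mbps / 1000

status = "below" if per_host_mbps < per_host_alert_mbps else "above"
print(f"per-host traffic    : {per_host_mbps} Mbit/s ({status} the per-IP alert threshold)")
print(f"aggregate across /24: {aggregate_gbps:.1f} Gbit/s hitting the shared uplink")
```

Each individual stream looks unremarkable to per-IP monitoring, yet the shared upstream link absorbs roughly 10 Gbit/s, which is the effect the report describes.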

Continued here:
DDoS report reveals that the complexity and volume of attacks continues to grow - Continuity Central

How To Fill Your Data Lakes And Not Lose Control Of The Data – Forbes

Data lakes are everywhere now that cloud services make it so easy to launch one. Secure cloud data lakes store all the data you need to become a data-driven enterprise. And data lakes break down the canonical data structures of enterprise data warehouses, enabling users to describe their data better, gain better insights and make better decisions.

Data lake users are data-driven. They demand historical, real-time and streaming data in huge quantities. They browse data catalogs, prefer text search, and use advanced analytics, machine learning (ML) and artificial intelligence (AI) to drive digital transformation into the business. But where exactly does all the data come from?

The complexity of compliance and governance

Filling data lakes is a complex process that must be done properly to avoid costly data preparation and compliance breakdowns. Data is collected from everywhere, and ingestion involves high volumes of data from IoT, social media, file servers, and structured and unstructured databases. Such large-scale data exchange poses significant data availability and data governance challenges.

Big data governance shares the same disciplines as traditional information governance, including data integration, metadata management, data privacy and data retention. But one important challenge is how to achieve centralized compliance and control over the vast amounts of data traversing multicloud networks of distributed data lakes.

And there is a sense of urgency. As digital transformation becomes a priority, data governance, data security and compliance must always be in place. Recently passed laws, specifically GDPR and CCPA, require robust data privacy controls, including the right to be forgotten. For many organizations, such compliance is a real challenge, even when it comes to answering the seemingly simple question: "Do you know where your data is?"

Federated Data Governance

One solution is a federated data governance model. Federated data governance solves the centralized versus decentralized dilemma. By establishing compliance controls at the point of data ingestion, information life cycle management (ILM) policies may be applied to classify and govern data throughout its life cycle. As high volumes of data move from databases and file servers and transform into cloud-based object storage, policy-driven compliance controls are needed like never before.

As a best practice to set up federated data governance, compliance policies and procedures should be standardized across the enterprise. Proper data governance involves business rules that are followed hard and fast. "Comply or explain" systems lead to distrust by audit authorities and require rigorous follow-up to ensure proper remedies are consistently applied. Once noncompliant data is released to the network, recall may not be possible.

Enterprise Data Lakes

An enterprise data lake is the centerpiece of the interconnected data fabric. Enterprise data lakes ingest data, prepare it for processing and provide a federated data governance framework to manage the data throughout its life cycle. Centralized, policy-driven data governance controls ensure compliant data is available for decentralized data lake operations.

Enterprise data lakes also speed up data ingestion. Centralized connections to import data from structured, semi-structured, unstructured and siloed S3 object stores simplify compliance control. Whether the data arrives as a simple "copy" or more complicated "move" function (for archiving), centralized ingestion enables data to be cataloged, labeled, transformed and governed with ILM and retention plans. As data is classified during ingestion, centralized security management and access control become possible as well.
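As a hedged illustration of what policy-driven classification at ingestion might look like, the sketch below tags each incoming record with a classification and a retention deadline before it lands in the lake. The classification rule, retention periods, and record fields are invented for the example, not drawn from any specific product.

```python
# Illustrative sketch of applying ILM policy at ingestion time.
# Classification rules and retention periods are hypothetical assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class GovernedRecord:
    payload: dict
    classification: str
    delete_after: date

RETENTION = {"personal": timedelta(days=365 * 2), "operational": timedelta(days=365 * 7)}

def classify(record: dict) -> str:
    # Assumed rule: anything carrying an email address is treated as personal data.
    return "personal" if "email" in record else "operational"

def ingest(record: dict) -> GovernedRecord:
    label = classify(record)
    return GovernedRecord(
        payload=record,
        classification=label,
        delete_after=date.today() + RETENTION[label],
    )

if __name__ == "__main__":
    governed = ingest({"email": "user@example.com", "last_order": "2020-02-01"})
    print(governed.classification, governed.delete_after)
```

Because the label and deadline travel with the record from the moment of ingestion, downstream data lakes can honor deletion requests and retention plans without having to rediscover what the data is.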

The decision to move versus copy data is important. For many organizations, data growth is reaching crisis proportions. Response times suffer when datasets are too large. Batch processes may fail to complete in time, upending schedules. Downtime windows required for system upgrades may require extension. Storage costs increase, and disaster recovery processes become even more challenging. A move process purges data at the source, relieving performance pressure on production systems, whereas a copy process increases infrastructure requirements by doubling the amount of data to process.

Conclusion

So, as data lakes roll out within your organization, remember that filling them may be the hardest part. An enterprise data lake with a federated big data governance model establishes a more reliable system of centralized compliance and enables decentralized data lakes to flourish.

Original post:
How To Fill Your Data Lakes And Not Lose Control Of The Data - Forbes

The Biometric Threat by Jayati Ghosh – Project Syndicate

As with so many other convenient technologies, the world is underestimating the risks associated with biometric identification systems. India has learned about those risks the hard way and should serve as a cautionary tale to the governments and corporations seeking to expand the use of these technologies.

NEW DELHI - Around the world, governments are succumbing to the allure of biometric identification systems. To some extent, this may be inevitable, given the burden of demands and expectations placed on modern states. But no one should underestimate the risks these technologies pose.

Biometric identification systems use individuals' unique intrinsic physical characteristics (fingerprints or handprints, facial patterns, voices, irises, vein maps, or even brain waves) to verify their identity. Governments have applied the technology to verify passports and visas, identify and track security threats, and, more recently, to ensure that public benefits are correctly distributed.

Private companies, too, have embraced biometric identification systems. Smartphones use fingerprints and facial recognition to determine when to unlock. Rather than entering different passwords for different services, including financial services, users simply place their finger on a button on their phone or gaze into its camera lens.

It is certainly convenient. And, at first glance, it might seem more secure: someone might be able to find out your password, but how could they replicate your essential biological features?

But, as with so many other convenient technologies, we tend to underestimate the risks associated with biometric identification systems. India has learned about them the hard way, as it has expanded its scheme to issue residents a unique identification number, or Aadhaar, linked to their biometrics.

Originally, the Aadhaar program's primary goal was to manage government benefits and eliminate "ghost" beneficiaries of public subsidies. But it has now been expanded to many spheres: everything from opening a bank account to enrolling children in school to gaining admission to a hospital now requires an Aadhaar. More than 90% of India's population has enrolled in the program.

But serious vulnerabilities have emerged. Biometric verification may seem like the ultimate tech solution, but human error creates significant risks, especially when data-collection procedures are not adequately established or implemented. In India, the government wanted to enroll a lot of people quickly in the Aadhaar program, so data collection was outsourced to small service providers with mobile machines.

If a fingerprint or iris scan is even slightly tilted or otherwise wrongly positioned, it may not match future verification scans. Moreover, bodies can change over time (for example, daily manual labor may alter fingerprints), creating discrepancies with the recorded data. And that does not even cover the most basic of mistakes, like misspelling names or addresses.
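A toy sketch helps show why a badly positioned scan fails verification: matching is a similarity score against an enrolled template, compared to a fixed threshold. The feature vectors, similarity measure, and threshold below are invented for illustration; real systems use minutiae or learned embeddings rather than hand-written numbers.

```python
# Toy illustration of threshold-based biometric matching.
# Vectors and threshold are hypothetical assumptions, not a real algorithm.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

MATCH_THRESHOLD = 0.98  # assumed operating point of the verifier

enrolled = [0.9, 0.1, 0.4, 0.7, 0.2]        # template captured at enrollment
clean    = [0.88, 0.12, 0.41, 0.69, 0.21]   # well-positioned re-scan, same finger
tilted   = [0.4, 0.7, 0.9, 0.1, 0.2]        # badly positioned re-scan, same finger

for label, probe in [("clean re-scan", clean), ("tilted re-scan", tilted)]:
    score = cosine_similarity(enrolled, probe)
    verdict = "accepted" if score >= MATCH_THRESHOLD else "rejected"
    print(f"{label}: similarity {score:.3f} -> {verdict}")
```

The same person is accepted or rejected purely on how the captured features line up with the stored template, which is why poor capture at enrollment or verification can lock a legitimate beneficiary out.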

Correcting such errors can be a complicated, drawn-out process. That is a serious problem when one's ability to collect benefits or carry out financial transactions depends on it. India has had multiple cases of lost entitlements, whether food rations or wages for public-works programs, as a result of biometric mismatches.

If honest mistakes can do that much harm, imagine the damage that can be caused by outright fraud. Police in Gujarat, India, recently found more than 1,100 casts of beneficiary fingerprints made on a silicone-like material, which were used for illicit withdrawals of food rations from the public distribution system. Because we leave fingerprints on everything we touch, we are all vulnerable to such replication.

And manual replication is just the tip of the iceberg. Researchers have created synthetic MasterPrints that enabled them to achieve a frighteningly high number of imposter matches.

Further risks arise during the transmission and storage of biometric data. Once collected, biometric data are usually moved to a central database for storage. They have to be encrypted while in transit, but the encryption can be (and has been) hacked. Nor are the data necessarily safe once they arrive in local, foreign, or cloud servers.

In India, one of the web systems used to record government employees work attendance was left without a password, allowing anyone access to the names, job titles, and partial phone numbers of 166,000 workers. Three official Gujarat-based websites were found to be disclosing beneficiaries Aadhaar numbers. And the Ministry of Rural Development accidentally exposed nearly 16 million Aadhaar numbers.

Moreover, an anonymous French security researcher accused two government websites of leaking thousands of IDs, including Aadhaar cards. That leak has now reportedly been plugged. But, given how many public and private agencies have access to the Aadhaar database, such episodes underscore how risky a supposedly secure system can be.

Of course, such vulnerabilities exist with all personal data. But exposure of someones biometric information is far more dangerous than exposure of, say, a password or credit card number, because it cannot be undone. We cannot, after all, simply get new irises.

The risk is compounded by efforts to use collected biometric data for monitoring and surveillance, as is occurring in China and elsewhere. In this sense, the large-scale collection and storage of peoples biometric data pose an unprecedented threat to privacy. And few countries have anything close to adequate laws to protect their residents.

In India, revelations of the Aadhaar program's weaknesses have largely been met with official denials, rather than serious efforts to protect users. Worse, other developing countries, such as Brazil, now risk replicating these mistakes as they rush to adopt biometric technology. And, given the large-scale data breaches that have occurred in the developed world, these countries' citizens are not safe, either.

Biometric identification systems are permeating every facet of our lives. Unless and until citizens and policymakers recognize and address the complex security risks they entail, no one should feel safe.

Follow this link:
The Biometric Threat by Jayati Ghosh - Project Syndicate

Throwing Down The Gauntlet To CPU Incumbents – The Next Platform

The server processor market has gotten a lot more crowded in the past several years, which is great for customers and which has made it both better and tougher for those that are trying to compete with industry juggernaut Intel. And it looks like it is going to get a little more crowded still, with several startups joining the potential feeding frenzy on Intel's Xeon profits.

We will be looking at a bunch of these server CPU upstarts in detail, starting with Nuvia, which uncloaked itself from stealth mode last fall and has said precious little about what it can do to differentiate in the server space with its processor designs. But Jon Carvill, vice president of marketing with long experience in the tech industry, gave The Next Platform a little more insight into the company's aspirations as it prepares to break into the glass house.

Before we even get into who is behind Nuvia and what it might be up to, its very existence begs the obvious question: Why would anyone found a company in early 2019 that thinks there is room for another player in the server CPU market?

And this is a particularly intriguing question given the increasing competition from AMD and the Arm collective (led by Ampere and Marvell) and ongoing competition from IBM against Intel, which commands roughly 99 percent of server CPU shipments and probably close to 90 percent of server revenue share. We have watched more than three decades of consolidation in this industry, going from north of three dozen different architectures and almost as many suppliers of operating systems to Intel dominating almost all of the shipments with its Xeons, almost all of the server CPU revenue, and Windows Server and Linux splitting most of the operating system installations and money.

Why now, indeed.

Or, even more precisely, why haven't the hyperscalers, who own their own workloads, as distinct from the big public cloud providers, who run mostly Windows Server and Linux code on X86 servers on behalf of customers with zero interest in changing the applications, much less the processor instruction set, just thrown in the towel and created their own CPUs? It always comes down to economics, specifically performance per watt and dollars per performance and the confluence of the two. And that is why the founders of Nuvia think they have a chance when others have tried and not precisely succeeded, even if they have not failed. To be sure, AMD is getting a second good run at Intel with the Epyc processors after a pretty good run with the Opterons more than a decade ago. But up until this point, Intel has done more damage to itself, with manufacturing delays, unaggressive roadmaps, and premium pricing, than AMD has done to it.

Clearly the co-founders of Nuvia see an opportunity, and they are seeing it from inside the hyperscalers. Gerard Williams, who is the company's president and chief executive officer, had a brief stint after college at Intel, designed the TMS470 microcontroller at Texas Instruments back in the mid-1990s, and was the CPU architect lead for the Cortex-A8 and Cortex-A15 designs that breathed new life into the Arm processor business and landed it inside smartphones and tablets. Williams went on to be a Fellow at Arm, and in 2010, when Apple no longer wanted to buy its chips from Samsung, it tapped Williams to be the CPU chief architect for a slew of Arm-based processors used in its iPhone and iPad devices: the Cyclone A7, the Typhoon A8, the Twister A9, the Hurricane and Zephyr A10 variants, the Monsoon and Mistral A11 variants, and the Vortex and Tempest A12 variants. Williams was also the SoC chief architect for unreleased products, and that can have a bunch of interesting meanings.

The two other co-founders, Manu Gulati, vice president of SoC engineering at Nuvia, and John Bruno, vice president of system engineering, both most recently hail from hyperscaler and cloud builder Google. Gulati cut his CPU teeth back in the mid-1990s at AMD, doing CPU verification and designing the floating point unit for the K7 chip and the HyperTransport and northbridge chipset for the K8 chip. Gulati then jumped to SiByte, a designer of MIPS cores, in 2000, and before the year was out Broadcom acquired the company; he spent the next nine years there working on dual-core and quad-core SoCs. Gulati then moved to Apple and was the lead SoC architect for the company's A5X, A7, A9, A9X, A11, and A12 SoCs. (Not just the CPU cores that Williams focused on, but all the stuff that wraps around them.) Between 2017 and 2019, Gulati was chief SoC architect for the processors used in Google's various consumer products.

Bruno has a similar but slightly different resume, landing as an ASIC designer at GPU maker ATI Technologies after college, notably as the lead on the design of several of ATI's mobile GPUs prior to its acquisition by AMD in 2006 and of the Trinity Fusion APUs from AMD, which combine CPU and GPU compute on the same die. Bruno then spent nearly six years at Apple as the system architect on the iPhone generations 5s through X and, like Gulati, moved to Google in 2017, in this case to be a system architect.

Both Gulati and Bruno left Google in March last year to join Williams as co-founders of Nuvia, which is not a skin product or a medicine, but a server CPU upstart. Carvill joined Nuvia last November soon after it uncloaked, and so did Jon Masters, formerly chief Arm software architect for Linux distributor Red Hat.

What do these people, looking out to the datacenter from their smartphones and tablets, see as not only an opportunity in servers, but as a chance to school server CPU architects on how to create a new architecture that leads in every metric that matters to datacenters: performance, energy efficiency, compute density, scalability, and total cost of ownership?

"This is a situation where Gerard, Manu, and John obviously had a pretty substantial role to play at a certain company in Cupertino in building a series of processors that were really designed to establish a step function improvement in performance, and also either a decrease in or, at a minimum, a consistent TDP," Carvill tells The Next Platform. "And that has essentially redefined the performance level that people expect out of mobile phones. And now you have a scenario where those phones are performing very close to, if not in some cases exceeding, what you get out of a client PC, and they are within striking distance of a server. Now, if you look at the servers, by contrast, a similar problem is beginning to manifest, especially at the hyperscalers: their datacenters have thermal envelopes that are becoming more and more constrained. They have not seen any meaningful improvement in IPC in CPU performance in some time. If you look at the last five years, they have largely had the same architectures. They have had incremental improvements in basic CPU performance. There have been some new workloads on the scene and there have been a lot of improvements in areas like AI and some other corner cases, for sure. But if you look at the core CPU, can you think of the last time you have seen a big meaningful difference or change in the datacenter?"

We have seen some big instructions per clock (IPC) jumps: think of the big jump with the initial Zen cores from AMD used in the Naples Epyc chips, or the Armv8 cores designed by Arm Holdings moving from its Cosmos to Ares reference chips. Even IBM has relatively big jumps in IPC between Power generations, but it takes more than three years for a generation to come to market. And when these big IPC jumps do happen, they are often one-off jumps because the architectures had been lagging for years. Speaking very generally, IPC gains have been stuck at somewhere around 5 percent, sometimes 10 percent, and rarely more per generation. But here's the kicker. As IPC goes up, the clock speed comes down, because the core count is also going up, and that is the only way to keep heat dissipation from increasing even more than it already is. Over the past decade, server CPUs have been getting hotter and hotter, and top-bin parts running full bore will soon be as searing as a GPU or FPGA accelerator.
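A rough way to see why the clock has to come down as cores go up is to hold the thermal envelope fixed and note that throughput scales roughly as cores times clock times IPC, while dynamic power scales with core count and superlinearly with frequency. The toy model below, with invented coefficients that describe no real processor, is only meant to illustrate that reasoning.

# Illustrative only: throughput ~ cores * clock * IPC, with a toy power model
# power ~ cores * k * clock^2 (dynamic power rises superlinearly with clock).
# All coefficients are invented; this models no real processor.

def max_clock_for_tdp(cores, tdp_watts, k=2.0):
    # Highest clock (GHz) that keeps cores * k * clock^2 within the TDP.
    return (tdp_watts / (cores * k)) ** 0.5

def relative_throughput(cores, clock_ghz, ipc=1.0):
    return cores * clock_ghz * ipc

TDP = 200.0  # fixed socket power budget in watts (hypothetical)
for cores in (16, 32, 64):
    clock = max_clock_for_tdp(cores, TDP)
    print(f"{cores} cores: max clock {clock:.2f} GHz, "
          f"throughput {relative_throughput(cores, clock):.0f}")

Under this toy model the 64-core part runs at half the clock of the 16-core part yet delivers roughly twice the aggregate throughput, which is exactly the trade described above: more cores, lower clocks, and a socket that stays pinned at its thermal limit.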

We agree this is undesirable, but we're under the impression that it was also mostly unavoidable if you wanted to maintain absolute compatibility of processors from today back through a very long range of time, which the IT industry very clearly does want to do.

The trick with Nuvia is that it is not trying to build a server CPU for the entire industry, but rather one that is focused on the specific and thankfully more limited needs of the hyperscalers.

"This is a server-class CPU, with an SoC surrounding it, and it is designed to be the clear-cut winner in each of those categories and in totality," says Carvill, throwing down the gauntlet to all of the remaining CPU players, who each have their own ideas about how to take on Intel's hegemony. "And we are not talking about the incremental performance improvements that we have come to expect over the past five years. We are talking about really meaningful, significant, double-digit performance improvements over what anyone has seen before. It will be designed for the hyperscale world; we are not going after everybody. We are not going after the entire enterprise, we are starting with the hyperscalers, and we are doing that very deliberately because that's an area where you can take a lot of the legacy that you have had to support in the past and push that aside to some degree and design a processor for modern workloads from the ground up. What we are doing is custom, and we will not be using off-the-shelf, licensed cores. We are going to use an Arm ISA, but we are doing it as a clean-sheet architecture from the ground up that is built for the hyperscaler world."

So that begs the question of what you can throw out and what you can add without breaking the licensing requirements to stick to the compatibility of the Arm ISA. We don't have an answer as to what this might be, but certainly this is precisely what Applied Micro (reborn as Ampere) was trying to do with its X-Gene Arm server chips and what Broadcom and then Cavium and then Marvell were doing with the Vulcan ThunderX2 chips; others, like Qualcomm, would claim that they did the same thing. So we are very intrigued about what portion of the Arm ISA the hyperscalers need and what parts they can throw away, as well as any other interesting bits for acceleration that Nuvia might come up with. For the moment, the Nuvia team is not saying much about what that is, except that numerous hyperscalers are privy to what the company is doing and have given input from their workloads to help the architects come up with the design.

What is also obvious is that this is for hyperscalers, not cloud builders, at least in the initial implementation of the Nuvia chip. By definition, the raw infrastructure services of public clouds run mostly X86 code on either Linux or Windows Server operating systems, and this chip certainly won't support Windows Server externally on any public cloud, although there is always a chance that Microsoft will run Nuvia Arm chips on internal workloads in its Azure cloud. Microsoft has made no secret of its desire to have half of its Azure compute capacity running on Arm architecture chips, and the other hyperscalers, notably Google and Facebook, are presumably watching as well. So is Apple, which is not quite a hyperscaler but is probably interested in what the Nuvia team is up to, since it probably has millions of its own servers and would no doubt love to have a single architecture spanning its entire Apple stack if it could happen. We could even see Apple get back into the server business with Nuvia chips at the end of this adventure, which would be interesting indeed, but perhaps only for its own internal consumption, working with a bunch of ODMs or perhaps through the Open Compute Project.

John and Manu were the founders who really had the initial idea, because they were working at Google with a lot of the internal teams, looking at the limitations and challenges in their datacenter architecture and infrastructure, and they thought they could build something a lot better for what Google needs to scale this thing forward. But they needed a CPU architect who came with the pedigree and legacy to go build something custom that had been successful at scale. And that's when they got Gerard.

The point is this: Google, Apple, and Facebook do not have to design a hyperscale-class CPU because they can get Nuvia to do it and spread the cost across Silicon Valley venture capitalists instead of spending their own dough.

There is precedent for this kind of tight focus on hyperscalers, and it comes from none other than Broadcom. Its Trident Ethernet switch ASICs were aimed at the enterprise and frankly did not have as good a cost, thermal, and performance profile as the hyperscalers, in this case Google and Microsoft, wanted. So they worked with Broadcom and Mellanox Technologies to cook up the 25G Ethernet standard, whether or not the IEEE standards committee would endorse it. Broadcom, for its part, rolled out the Tomahawk line, with trimmed-down Ethernet protocols and more routing functions as well as better bang for the buck and better thermals per port. Innovium, another upstart switch ASIC maker, just went straight to making an Ethernet switch ASIC aimed at the hyperscalers.

There are not a lot of details about what Nuvia will do beyond what the company has outlined so far.

All of this work is being supported by an initial $53 million Series A investment from Capricorn Investment Group, Dell Technologies Capital, Mayfield, WRVI Capital, and Nepenthe.

As soon as we learn more, we will tell you.

Original post:
Throwing Down The Gauntlet To CPU Incumbents - The Next Platform


China retreats online to weather coronavirus storm – The Jakarta Post – Jakarta Post

Virus-phobia has sent hundreds of millions of Chinese flocking to online working options, with schools, businesses, government departments, medical facilities, even museums and zoos, wrapping themselves in the digital cloud for protection.

China remains in crisis mode weeks after the epidemic exploded, with much of the country shut down and the government pushing work-from-home policies to prevent people gathering together.

That has been a boon for telecommuting platforms developed by Chinese tech giants such as Alibaba, Tencent and Huawei, which have suddenly leapt to the ranks of China's most-downloaded apps, leaving them scrambling to cope with the increased demand.

Tencent said its office collaboration app WeChat Work has seen a year-on-year tenfold increase in service volume since February 10, when much of the country officially came back from a virus-extended Lunar New Year holiday.

Alibaba's DingTalk has observed the highest traffic in its five-year existence, company officials told state media, with around 200 million people using it to work from home.

Huawei said its WeLink platform is experiencing a fiftyfold increase, with more than one million new daily users coming on board.

Eric Yang, chief executive of Shanghai-based iTutorGroup, which operates a range of online courses, said his company's business has surged 215 percent.

"We just helped an art education school open online painting classes, and are also helping another music school to open virtual classes," Yang said.

"More kids in third- and fourth-tier cities are increasingly taking our online courses because of the outbreak. In the past, most users came from first-tier cities [such as Beijing and Shanghai]."

The online migration received an implicit endorsement from President Xi Jinping, who on Monday was shown on the nightly state television news broadcast, watched by tens of millions, giving a pep talk to medical staff in the contagion epicenter city of Wuhan via Huawei WeLink.

The virus, which has killed more than 1,100 people and infected nearly 45,000, has shuttered factories across the country and is forecast to cut Chinese economic growth.

But China's highly developed online sector and population of more than 850 million mobile internet consumers may soften the blow.

The similar Severe Acute Respiratory Syndrome (SARS) outbreak of 2003 is widely credited with helping to kickstart e-commerce development in China, and the coronavirus is also expected to "further the long-term structural shift" to an online economy, said S&P Global Ratings.

Hospitals, overwhelmed by people seeking a virus test at the first sign of sniffles, have pivoted to online telemedicine to help sort through the patients, with tens of millions of consultations taking place, state media said.

Countless museums and cultural sites have been closed, but many, including Beijing's Forbidden City and the terracotta warriors in Xi'an, have put exhibits online or created new virtual tours, and animal lovers can watch the Beijing Zoo's pandas on social media.

Even China's foreign ministry briefing, the government's primary daily interface with the outside world, has been converted into an online Q&A.

With schools nationwide shut until March, online learning has received a particular jolt. Institutions are scrambling to comply with an Education Ministry order to "stop classes, but don't stop learning."

Grace Wu, whose nine-year-old daughter Charlotte attends the now-shuttered Shanghai American School, had faced the prospect of a lengthy learning break with the family "self-quarantining" at home.

"It's like kind of a double worry. We worry first about the virus... the second worry is about learning," Wu said.

But the school last week re-launched lessons online until normality returns.

Charlotte and her classmates have embraced the situation, even organizing a virtual birthday party on video-conferencing platform Zoom.

"It's a birthday party in the cloud," said Wu, a 37-year-old blogger.

Alibaba said that as of Monday, schools in more than 300 cities across 30 provinces were utilizing a classroom function, with participating students totaling 50 million.

It has not all been smooth.

Users across the country complained last week that major Chinese platforms were glitch-prone or crashed frequently due to heavy traffic, sending providers scrambling to shore up their networks.

Alibaba told state media it had installed more than 10,000 new cloud servers in response.

Some providers were creating new features such as allowing users to blur their backgrounds to avoid looking "unprofessional" by logging in from their living rooms.

Chinese already are deeply connected to their mobile phones, going online to shop, order meals, find partners, pay bills and express themselves.

Wang Guanxin, an instructor with iTutorGroup, said this would only grow as a result of the virus.

Speaking after a video-conference training session he gave to a wall-length bank of 36 Chinese-language instructors on screen at the company's Shanghai offices, including one woman who lay in bed in red pajamas, Wang said the virus was a "turning point" for his industry.

"Objectively speaking, it will allow people who didn't really trust or rely on online learning to change their views," he said.

Read this article:
China retreats online to weather coronavirus storm - The Jakarta Post - Jakarta Post


Global IT Security Market Size, Share, Growth Rate and Gross Margin, Industry Chain Analysis, Development Trends & Industry Forecast Report 2025 -…

The Global IT Security Market Research Report 2020 provides an in-depth analysis of the industry along with important statistics and facts. With the help of this information, investors can plan their business strategies.

The report covers the global IT Security market status, future forecast, growth opportunity, key market and key players. The study objectives are to present the IT Security development in the United States, Europe and China.

IT security is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information. To standardize this discipline, academics and professionals collaborate and seek to set basic guidance, policies, and industry standards on password, antivirus software, firewall, encryption software, legal liability and user/administrator training standards. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, and transferred.

The increasing use of mobile devices and cloud servers to store sensitive data and the subsequent rise in technologically sophisticated cyber criminals threatening to steal that data have accelerated growth in the IT Security Consulting industry. This industry offers managed IT security services, such as firewalls, intrusion prevention, security threat analysis, proactive security vulnerability and penetration testing and incident preparation and response, which includes IT forensics.

In 2018, the global IT Security market size was xx million US$ and it is expected to reach xx million US$ by the end of 2025, with a CAGR of xx% during 2019-2025.
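For reference, CAGR (compound annual growth rate) is the constant yearly rate that carries a starting market size to an ending size over the forecast period. Since the report's own figures are elided as "xx," the sketch below uses placeholder numbers purely to show how the calculation works.

# CAGR: the constant yearly growth rate from a starting value to an ending
# value over n years. The figures below are placeholders, not the report's
# (elided) numbers.

def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1.0 / years) - 1.0

market_2018 = 100.0  # hypothetical market size, million US$
market_2025 = 180.0  # hypothetical market size, million US$
print(f"CAGR 2018-2025: {cagr(market_2018, market_2025, 7) * 100:.1f}%")  # ~8.8%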

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/3221137

The key players covered in this study

Blue Coat

Cisco

IBM

Intel Security

Symantec

Alert Logic

Barracuda Networks

BT Global Services

CA Technologies

CenturyLink

CGI Group

CheckPoint Software Technologies

CipherCloud

Computer Sciences

CYREN

FishNet Security

Fortinet

HP

Microsoft

NTT Com Security

Panda Security

Proofpoint

Radware

Trend Micro

Trustwave

Zscaler

Market segment by Type, the product can be split into

Internet security

Endpoint security

Wireless security

Network security

Cloud security

Market segment by Application, split into

Commercial

Industrial

Military and Defense

Enquire before buying this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/3221137

Market segment by Regions/Countries, this report covers: United States, Europe, China, Japan, Southeast Asia, India, Central & South America

The study objectives of this report are: To analyze the global IT Security status, future forecast, growth opportunity, key market and key players. To present the IT Security development in the United States, Europe and China. To strategically profile the key players and comprehensively analyze their development plan and strategies. To define, describe and forecast the market by product type, market and key regions.

About Us: Orbis Research (orbisresearch.com) is a single point aid for all your market research requirements. We have a vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients to map their needs, and we produce the perfect required market research study for our clients.

Contact Us: Hector Costello, Senior Manager, Client Engagements, 4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: +1 (214) 884-6817; +91 2064101019. Email ID: [emailprotected]

Original post:
Global IT Security Market Size, Share, Growth Rate and Gross Margin, Industry Chain Analysis, Development Trends & Industry Forecast Report 2025 -...
