
Data Sovereignty: The imperative for action – DatacenterDynamics

In this frenetic landscape, in which huge amounts of data are harvested, stored and analyzed 24 hours a day, governments have moved with uncommon swiftness to provide statutory instruments that seek to regulate the flow of information.

This has included the assertion of data sovereignty, in which governments enforce their own privacy laws on data stored within their jurisdictions. It is a rebuff of sorts to the global economy, a reimposition of sovereign interest.

For businesses, this has created a raft of compliance obligations and strategic imperatives, as well as the need for informed decisions about where their data is stored, how that data is managed and protected when shared across borders, and how IT systems are set up.

The rapid take-up of cloud-based data storage exposes companies to issues of data sovereignty, and with the rising popularity of cloud computing those issues have become a greater focus for companies concerned about threats to the integrity and security of their data.

Data sovereignty becomes an issue when a company's data servers are located outside the country in which the business is domiciled, and governments insist that this data is subject to the laws of the country in which it is collected or processed.

Businesses need to have a robust and comprehensive data security strategy and vigorous internal procedures to protect and secure data. The onus is on businesses to understand how their data is stored, who owns it and how it moves.

Businesses also need to:

Data gravity is a metaphor introduced into the IT lexicon by San Francisco software engineer Dave McCrory in 2010. The idea is that data and applications are attracted to each other, similar to the attraction between objects that is explained by the law of gravity. As data sets grow larger and larger, they become more difficult to move. So the data stays put, and applications and processing power move to where the data resides.

Barriers become even more challenging if you want to run analytics in the cloud on data stored in the enterprise, or vice versa. These new realities of a world of ever-growing data sets suggest the need to design enterprise IT architectures in a manner that reflects data gravity. Alternatively, companies could consolidate their data in a cloud platform where the analytics capabilities reside (and which includes data sovereignty guarantees).
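To put a rough number on data gravity, a back-of-the-envelope calculation shows why large data sets tend to stay put; the dataset size and link speed below are illustrative assumptions, not figures from the article.

```python
# Rough illustration of data gravity: time to move a large dataset
# over a dedicated network link, ignoring protocol overhead and retries.

def transfer_days(dataset_petabytes: float, link_gbps: float) -> float:
    """Return the number of days needed to move the dataset at full line rate."""
    bits_to_move = dataset_petabytes * 1e15 * 8   # PB -> bits
    seconds = bits_to_move / (link_gbps * 1e9)    # bits / (bits per second)
    return seconds / 86_400                       # seconds -> days

# Example: a 1 PB analytics data set over a fully dedicated 10 Gb/sec link
print(f"{transfer_days(1, 10):.1f} days")         # roughly 9.3 days, best case
```

At those timescales it is usually cheaper to move the processing to the data than the data to the processing, which is the essence of the metaphor.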

General Data Protection Regulation (GDPR)

The European Union's GDPR covers data protection for EU citizens. The GDPR also addresses the transfer of personal data outside the EU and European Economic Area (EEA). It supersedes the Data Protection Directive.

With the advent of the GDPR, organizations have reviewed their data sovereignty requirements and capabilities.

Brexit: in or out?

All countries in the EU benefit from what might be called the free movement of data. This currently applies to the UK in the same way that it does to the other 27 members.

However, when the UK leaves the EU, it may or may not still be included in this free market in data. Current EU data protection legislation states that special precautions need to be taken when personal data is transferred to countries outside the European Economic Area that do not provide EU-standard data protection.

If data sovereignty isn't included in any finalized Brexit deal, or if the no-deal scenario eventuates, then UK businesses could be directly affected. Post-Brexit, the UK would no longer be covered by data agreements between the EU and other countries, such as the EU-US Privacy Shield Framework.

If the EU does not grant equivalency to the UK post-Brexit, the safest thing to do when it comes to data sovereignty issues is to make sure that data is migrated to UK-based data centers.

In the digital economy, organizations are information-rich. They have never possessed such extensive reserves of personal data nor have they been closer to their customers as a result. Digital consumers have benefited from customized product and service offerings, enhanced customer experiences and the ability to intimately engage with their favorite brands across multiple platforms.

But with the ability of organizations to collect unprecedented amounts of data across multiple technology platforms comes great responsibility, and challenges - not least compliance obligations and strategic imperatives, as well as the need for informed decisions about where their data is stored, how that data is managed and protected, and how vendors are chosen.

How well organizations deal with the risks posed by data sovereignty is the latest challenge in the digital transformation of the economy.


Google Transfer Service makes moving your data to the cloud easier than ever – ITProPortal

After building services that allow businesses to physically move their data from local data centres and on-premises systems to Google, the search and cloud giant has now built another, complementary solution to make the process even smoother.

Transfer Service is a new part of Google Cloud designed to move the data digitally. It is aimed first and foremost at businesses with billions of files and petabytes of data, and it bears the brunt of the work, validating the integrity of the data as it moves it to the cloud.

The service will use as much bandwidth as it has at its disposal to make sure transfer times are as short as they can be. Any potential failures are handled automatically by the agent.

Google promises a relatively painless process. All the business needs to do is install the agent on the local server and select the directories that need moving. The rest is handled by the service itself. Obviously, the business can monitor and manage the transfer through the Google Cloud console.
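The article does not describe how the agent validates data internally, so the following is only a minimal sketch of the kind of checksum comparison such an agent might perform; the function names and the use of MD5 are illustrative assumptions, not Google's API or implementation.

```python
# Hypothetical sketch: checksum local files and compare against the digests
# recorded at the destination, the sort of integrity validation a transfer
# agent performs while migrating directories to object storage.
import hashlib
from pathlib import Path

def file_md5(path: Path) -> str:
    """Compute an MD5 digest of a local file in streaming fashion."""
    digest = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(local_dir: str, remote_md5_lookup) -> list[str]:
    """Return files whose remote checksum does not match the local one.

    remote_md5_lookup is a placeholder callable that returns the checksum
    recorded by the destination store for a given relative path.
    """
    mismatches = []
    root = Path(local_dir)
    for path in root.rglob("*"):
        if path.is_file():
            if remote_md5_lookup(str(path.relative_to(root))) != file_md5(path):
                mismatches.append(str(path))
    return mismatches
```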

Even though the main benefits and key selling points appear to be archiving and disaster recovery, Google also wants to onboard organisations looking to shift workloads and use machine learning to analyse data.

I see enterprises default to making their own custom solutions, which is a slippery slope as they can't anticipate the costs and long-term resourcing, Senior Analyst Scott Sinclair says in a Google blog post.

With Transfer Service for on-premises data (beta), enterprises can optimize for TCO and reduce the friction that often comes with data transfers. This solution is a great fit for enterprises moving data for business-critical use cases like archive and disaster recovery, lift and shift, and analytics and machine learning.


Why Middle East startups are choosing the cloud – Khaleej Times

The cloud has revolutionised the way businesses operate, especially startups. It's uncommon now to find a startup that isn't cloud native; most chose to adopt a cloud infrastructure from the beginning. Businesses across the Middle East, such as Careem, Anghami, Boutiqaat, Mrsool and many more, have been able to grow and innovate quickly, seamlessly underpinned by their highly secure, agile and flexible cloud infrastructure. Startups approach cloud and, more importantly, security with a different viewpoint when compared to larger, established organisations that are still struggling to marry new capabilities with legacy systems.

Smart investments

When starting a business, managing costs is critical, so investments need to deliver the highest possible value and return on investment; with the cloud, startups only pay for the services they use. This approach enables them to avoid the large upfront expense of owned infrastructure and to manage their IT at a lower cost than an on-premises environment. However, low cost does not mean low functionality. On the contrary, a startup operating on cloud infrastructure has access to the same services and capabilities as the largest enterprise or government customers. This investment includes entire teams dedicated to security that satisfy the security and compliance needs of the most risk-sensitive organisations. This allows startups to compete on an even playing field, innovate quickly and bring products to market, all with the knowledge that they have world-class security in place to protect against the most prevalent threats.

Scalability

Startups are ambitious, tenacious and hungry to expand, so choosing to build and scale their business on the cloud is a natural choice. Simply by embracing the cloud, they can scale rapidly, giving them the ability to add or remove resources to meet evolving business demands as required. Instead of investing in data centres, servers and service level agreements, cloud technology allows startups to react faster and more flexibly, to experiment, innovate and better serve customers.

Speed and agility

The cloud provides an opportunity for startups to optimise existing IT systems and to increase operational efficiencies, while driving business agility and growth. This is achieved by allowing companies to significantly decrease the time it takes to provision and de-provision IT infrastructure. While a physical server could take weeks or months to procure and provision, a cloud server takes minutes.
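To illustrate that point, the hedged sketch below launches a virtual server with a few API calls using boto3; the AMI ID, region and instance type are placeholders, and valid AWS credentials and permissions are assumed.

```python
# Minimal sketch: provision a cloud server programmatically in minutes
# rather than procuring physical hardware over weeks.
import boto3

ec2 = boto3.client("ec2", region_name="me-south-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Scaling back down is just as fast:
# ec2.terminate_instances(InstanceIds=[instance_id])
```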

Security

Startups must make security a top priority, regardless of size. A security breach can hurt a startup's reputation and customer base, and can have repercussions for the larger organisations these businesses work with. Startups need to bake in security from the ground up to make sure they are not the weak link in a supply chain.

Security automation

Time is precious for startups, and automating security tasks enables them to be more secure by reducing human configuration errors and giving teams more time to work on other tasks critical to the business. Automation can also offer a smarter approach to detecting potential threats through its ability to monitor patterns of behaviour. Being able to identify changes in behaviour means potential attacks can be identified and dealt with immediately.
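As a toy illustration of the behaviour monitoring just described, the sketch below flags a day's failed logins when they drift well above a rolling baseline; the numbers and threshold are invented for the example, not drawn from any particular security product.

```python
# Toy behavioural check: flag an unusual spike in failed logins against a
# rolling baseline, the kind of rule a security automation pipeline might
# evaluate before opening an alert.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Return True if today's count sits more than `threshold` standard
    deviations above the historical mean."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today > baseline
    return (today - baseline) / spread > threshold

failed_logins_last_30_days = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4] * 3  # sample data
print(is_anomalous(failed_logins_last_30_days, today=42))         # True -> alert
```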

Vinod Krishnan is head of the Mena region at Amazon Web Services. Views expressed are his own and do not reflect the newspaper's policy.


The 10 Hottest New Business And Enterprise Servers Of 2019 – CRN: Technology news for channel partners and solution providers

The demand for enterprise computing over the past two years has been at an historic high, as businesses seek next-generation server performance to meet their digital transformation needs.

In 2019, the worldwide server market is projected to exceed $100 billion in revenue with each quarter generating on average more than $20 billion in server sales, according to IT market research firm IDC. Although market leaders like Dell Technologies and Hewlett Packard Enterprise witnessed a fall in server sales this year, the global server market is expected to pick back up in 2020.

Hardware server innovation is being led by the likes of Dell, HPE, Inspur and Lenovo, which are accelerating computing performance, storage-class memory and next-generation input/output (I/O) for workloads around artificial intelligence, cloud, virtual desktop infrastructure (VDI) and edge computing.

CRN breaks down the ten hottest business and enterprise servers that led the way in 2019.



Pulseway Introduces the All New, Integrated Cloud Backup Solution – PR Web


DUBLIN (PRWEB) December 12, 2019

Pulseway, a leading provider of mobile-first, cloud-first remote monitoring and management (RMM) software, is excited to close 2019 with the launch of a brand new cloud backup product that is built directly into both the web-based platform and the mobile application, allowing users to back up their files regardless of their location. Pulseway Cloud Backup delivers a clean and easy-to-use interface built to follow a coherent structure in line with the rest of the platform, allowing users to check backup statuses, calculate a backup health score, schedule backup jobs, and recover data for physical and virtual servers, workstations and documents. This new feature helps organizations ensure that their data is always protected.

"Pulseway Cloud Backup has been in the works for some time now and we are extremely proud of the end result that enables our customers to easily and securely backup their data," said Marius Mihalec, Founder and CEO of Pulseway. "I am thrilled to end 2019 with the launch of Pulseway Cloud Backup. Simple, efficient and flexible - that's what lies at the heart of our product vision and Pulseway Cloud Backup is no exception."

Pulseway Cloud Backup is fully integrated with the Pulseway ecosystem. Users can configure it from the WebApp under the cloud backup section and can additionally perform critical tasks from the mobile app. The native mobile application gives users the ability to back up already enrolled systems with the click of a button and allows time-critical operations, such as restoring deleted files and folders, to be performed on the go from anywhere, using the device closest to them.

"I am extremely excited about the new Pulseway Cloud Backup launch, it's built directly into the platform and enables me and my team to manage all of our backup needs from one unified dashboard," said Phil Law, Managing Director of Spicy Support. "The UI and the experience are very seamless and the flow correlates to the IT management functionality, which saves us a lot of time and eliminates the need for multiple portals and platforms."

About Pulseway

MMSOFT Design, Ltd. is the maker of Pulseway, mobile-first IT management software that helps busy IT administrators look after their IT infrastructure on the go. Pulseway is used by over 5,800 businesses worldwide, including DELL, Louis Vuitton, Canon and Siemens.



Edge predictions for 2020: From SD-WAN and cloud interconnection to security – Small Business


By 2023, half of enterprise-generated data will be created and processed outside the data centre or cloud


Few areas of the enterprise face as much churn as the edge of the network. Experts say a variety of challenges drive this change: from increased SD-WAN access demand to cloud-interconnected resources and IoT, the traditional perimeter of the enterprise is shifting radically and will continue to do so throughout 2020.

One indicator: Gartner research that says by 2023, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud, up from less than 10% in 2019.

Hand-in-hand with that change is a shift in what technologies are supported at the edge of the network and that means information processing, content collection and delivery are placed closer to the sources, repositories and consumers of this information. Edge networking tries to keep the traffic and processing local to reduce latency, exploit the capabilities of the edge and enable greater autonomy at the edge, Gartner says.

The scope of enterprise WAN networks is broadening. No longer is it only from a branch edge to a data-centre edge. Now the boundaries have shifted across the LAN from individual clients and devices on the one end and across the WAN to individual containers in data centres or clouds on the other, said Sanjay Uppal, vice president and general manager of VMware's VeloCloud Business Unit. This broadening of the WAN scope is a direct consequence of the democratisation of data generation and the need to secure that data. So, we end up with edges at clients, servers, devices, branches, private data centres, public data centres, telco POP, RAN and the list goes on. Additionally, with IoT and mobility taking hold at the enterprise, the edge is moving out from the traditional branch to the individual clients and devices.

The evolution of business applications from monolithic constructs to flexible containerised workloads necessitates the evolution of the edge itself to move closer to the application data, Uppal said. This, in turn, requires the enterprise network to adjust to meet and exceed the requirements of the modern enterprise.

Such changes will ultimately make defining what constitutes the edge of the network more difficult.

With increased adoption of cloud-delivered services, unmanaged mobile and IoT devices, and integration of networks outside the enterprise (particularly partners), the edge is more difficult to define. Each of these paradigms extends the boundaries of today's organisations, said Martin Kuppinger, principal analyst with KuppingerCole Analysts AG. On the other hand, there is a common perception that there is no such perimeter anymore, with statements such as the device is the perimeter or identity is the new perimeter. To some extent, all of this is true and wrong. There still might be perimeters in defined micro-segments. But there is not that one, large perimeter anymore.

The enterprise is not the only arena that will see continued change in 2020; there are big changes afoot on the WAN as well.

Analysts from IDC wrote earlier this year that traditional enterprise WANs are increasingly not meeting the needs of digital businesses, especially as it relates to supporting SaaS apps and multi- and hybrid-cloud usage. Enterprises are interested in easier management of multiple connection types across their WAN to improve application performance and end-user experience, hence the growth of SD-WAN technologies.

The market for branch-office WAN-edge functionality continues to shift from dedicated routing, security and WAN optimisation appliances to feature-rich software-defined WAN and, to a lesser extent, [universal customer-premises equipment] platforms, Gartner wrote. SD-WAN is replacing routing and adding application-aware path selection among multiple links, centralised orchestration and native security, as well as other functions. Consequently, it includes incumbent and emerging vendors from multiple markets (namely routing, security, WAN optimisation and SD-WAN), each bringing its own differentiators and limitations.

One of the biggest changes for 2020 could come around SD-WAN. One of the drivers stems from the relationships that networking vendors such as Cisco, VMware, Juniper, Arista and others have with the likes of Amazon Web Services, Microsoft Azure, Google Anthos and IBM Red Hat.

An indicator of those changes came this month when AWS announced a slew of services for its cloud offering that included new integration technologies such as AWS Transit Gateway, which lets customers connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. Aruba, Aviatrix, Cisco, Citrix Systems, Silver Peak and Versa have already announced support for the technology, which promises to simplify and enhance the performance of SD-WAN integration with AWS cloud resources.
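For readers who want to see what that looks like in code, below is a hedged boto3 sketch that creates a Transit Gateway and attaches a VPC to it; the VPC and subnet IDs are placeholders, and a production design would also attach VPN or Direct Connect links back to the on-premises network.

```python
# Sketch: create an AWS Transit Gateway and attach an existing VPC to it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(
    Description="Hub for SD-WAN and VPC interconnection",
    Options={"DefaultRouteTableAssociation": "enable",
             "DefaultRouteTablePropagation": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one VPC; repeat for each VPC that should reach the hub.
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC ID
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet ID
)
print(attachment["TransitGatewayVpcAttachment"]["State"])
```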

The ecosystem around this type of cloud interconnection is likely one of the hottest areas of growth for 2020, experts say.

SD-WAN is critical for businesses adopting cloud services, acting as a connective tissue between the campus, branch, IoT, data centre and cloud, said Sachin Gupta, senior vice president, product management, with Cisco Enterprise Networking in a recent Network World article. It brings all the network domains together and delivers the outcomes business requires.

It must align user and device policies, and provide assurance to meet application service-level agreements. It must deliver robust security to every device and every cloud that the enterprises data touches. The AWS Transit Gateway will let IT teams implement consistent network and data security rules, he said.

All of these edge transformations will most certainly bring security challenges. Kuppinger noted a few including:

Each of these situations is beyond the traditional edge and can increase your enterprise attack surface and risk, Kuppinger said. Once identified, enterprises must figure out how to secure the edges and get more complete visibility into all risks and mitigations. New tools may be needed. Some organisations may choose to engage more managed security services.

The perimeter needs to be everywhere, and hence the advent of the zero-trust architecture, VMware's Uppal said. This requires an end-to-end view where posture is checked at the edge and, based on that assessment, network traffic is segmented both to reduce the attack surface and to shrink the blast radius, i.e. first reduce the likelihood that something is going to go wrong, and if it does, then minimise the impact, Uppal said.

As traffic traverses the network, security services, letting through the good while blocking the bad, are inserted based on policy. Here again, the network of cloud services that dynamically sequences security based on business policy is critical, Uppal said.

Going forward, enterprise organisations might need to focus less on the network itself. Protect the services, protect the communication between devices and services, protect the devices and the identities of the users accessing these devices. This is very much what the zero-trust paradigm has in mind; notably, this is not primarily about zero-trust networks, but zero trust at all levels, Kuppinger said.
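A toy sketch of that posture-plus-identity decision is below; the attributes, roles and segments are invented for the example rather than taken from any vendor's product.

```python
# Toy zero-trust access decision: identity and device posture are both
# evaluated at the edge before a request is allowed onto a segment.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    device_patched: bool
    device_managed: bool
    target_segment: str

# Hypothetical policy: which roles may reach which micro-segments.
SEGMENT_POLICY = {"finance-db": {"finance"}, "build-servers": {"engineering"}}

def allow(req: Request) -> bool:
    """Allow only healthy, managed devices whose user role matches the segment."""
    posture_ok = req.device_patched and req.device_managed
    role_ok = req.user_role in SEGMENT_POLICY.get(req.target_segment, set())
    return posture_ok and role_ok

print(allow(Request("finance", True, True, "finance-db")))   # True
print(allow(Request("finance", False, True, "finance-db")))  # False: bad posture
```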

The most important learning is: protecting just the network at its edge is not sufficient anymore. If there is a defined network, either physical (such as in OT) or virtual (such as in many data centres), this adds to protection, Kuppinger said.

The mixture of cloud and security services at the edge will lead to another trend in 2020, one that Gartner calls secure access service edge (SASE), which is basically the melding of network and security-as-a-service capabilities into a cloud-delivered package. By 2024, at least 40% of enterprises will have explicit strategies to adopt SASE, up from less than 1% at year-end 2018, Gartner says.

SASE is in the early stages of development, Gartner says. Its evolution and demand are being driven by the needs of digital business transformation due to the adoption of SaaS and other cloud-based services accessed by increasingly distributed and mobile workforces, and to the adoption of edge computing.

Early manifestations of SASE are in the form of SD-WAN vendors adding network security capabilities and cloud-based security vendors offering secure web gateways, zero-trust network access and cloud-access security broker services, Gartner says.

Regardless of what it is called, it is clear the melding of cloud applications, security and new edge WAN services will be increasing in 2020.

We are seeing the rise of microservices in application development, allowing applications to be built based upon a collection of discrete technology elements. Beyond new application architectures, there are demands for new applications to support IoT initiatives and to push compute closer to the user for lower latency and better application performance, VMware's Uppal said. With the maturation of Kubernetes, what is needed is the next set of application development and deployment tools that work cooperatively with the underlying infrastructure, compute, network and storage to serve the needs of that distributed application.

IDG News Service



Broadcom Launches Another Tomahawk Into The Datacenter – The Next Platform

If hyperscalers, cloud builders, HPC centers, enterprises, and both OEMs and ODMs like one thing, it is a steady drumbeat of technology enhancements to drive their datacenters forward. It is hard to reckon which is more important: the technology or the drumbeat, but it is painfully obvious when both fail, and it is a thing of beauty to watch when both are humming along together.

Broadcom, a company that was founded in the early 1990s as a supplier of chips for cable modems and set-top boxes, started down the road to datacenter switching and routing in January 1999 with its $104 million acquisition of Maverick Networks, and followed that up in September 2002 with a $533 million acquisition of Altima Communications. Broadcom was already designing its own ASICs for datacenter networking gear, but these were for fairly simple Layer 2 Ethernet switches, and Maverick was working on higher-end, beefier ASICs that combined Layer 2 switching and Layer 3 routing functions on the same device. Altima made networking chips that ended up in networking devices sold mostly to SMBs, but gave Broadcom more networking customers and a broader engineering and patent portfolio to pull from.

Broadcom got serious about switching when blade servers took off in the datacenter in the early 2000s, when the hyperscalers were not even really megascale yet and when the public cloud was still just a bunch of talk about utility computing and grid computing. It unveiled its first mass-produced collection of chips for building 10 Gb/sec Ethernet switches (which did not even have codenames, apparently) out of nine chips. In 2007, the Scorpion chip provided 24 ports running at 10 Gb/sec or 40 Gb/sec and 1 Tb/sec of aggregate bandwidth, and the merchant silicon business was off to the races as the hyperscalers were exploding and Amazon had just launched its public cloud a year earlier. The $178 million deal in December 2009 to take control of Dune Networks, which still carries on as the Jericho StrataDNX line of deep buffer switches, was pivotal for the company's merchant silicon aspirations and coincides with the rise of the hyperscalers and cloud builders and their particular needs on their network backbones.

The Trident family, which really ramped up merchant capabilities compared to the captive chips made by the networking incumbents such as Cisco Systems, Juniper Networks, Hewlett Packard (3Com), and Dell (Force10 Networks), came out in 2010, aimed mostly at enterprises that needed more features and bandwidth than the Jericho line could provide but that did not need the deep buffers. The Tomahawk line, which debuted in 2014, stripped out features that hyperscalers and cloud builders did not need (such as protocols they had no intention of using) but included more routing functions and larger tables, and lower power consumption made possible by 25 GHz lane speeds that Google and Microsoft drove the IEEE to accept when it really wasn't in the mood to do that initially.

Broadcom has been advancing all three families of silicon with a pretty steady cadence. The Jericho 2 chip, rated at 9.6 Tb/sec of aggregate bandwidth and driving 24 ports at 400 Gb/sec with deep buffers based on HBM stacked memory, was announced in March 2018 and started shipping in production in February of this year. With the Trident 4 ASIC unveiled in June of this year, Broadcom supported up to 12.8 Tb/sec of aggregate bandwidth, using PAM-4 encoding to drive 25 GHz lanes on the SERDES to an effective speed of 50 Gb/sec per lane, and was able to drive 128 ports at 100 Gb/sec or 32 ports at 400 Gb/sec. The Trident 4 chip weighed in at 21 billion transistors and is a monolithic device etched in 7 nanometer processes from fab partner Taiwan Semiconductor Manufacturing Corp.

Believe it or not, the Trident 4, which was the fattest chip in terms of transistor count we had ever heard of when it was unveiled this year, was not up against the reticle limit of chip making gear. But we suspect that the Tomahawk 4 announced this week is pushing up against the reticle limits, with over 31 billion transistors etched using the same 7 nanometer processes. The Trident 4 and the Tomahawk 3 chip from January 2018 were pin compatible, since they had an equal number of SERDES. With the doubling up of SERDES in the Tomahawk 4, there was no way to keep Tomahawk 4 pin compatible with these two prior chips. But there is hope for Trident 5...

The Tomahawk line has come a long way in its five years, as you can see:

The original Tomahawk 1 chip from 2014 was etched using 28 nanometer processes from TSMC and had a mere 7 billion transistors supporting its 128 long-reach SERDES running at 25 GHz using non-return-to-zero (NRZ) encoding, which uses two signal levels to encode one bit per symbol. The Tomahawk 1 delivered 3.2 Tb/sec of aggregate bandwidth, which was top of the line five years ago. With the PAM-4 encoding added in recent switch ASICs, you can have four signal levels per lane and encode two bits of data, driving up the effective bandwidth without increasing the clock speed above 25 GHz. This is how the Tomahawk 3, Trident 4, and Tomahawk 4 have been growing their bandwidth. The SERDES count on the die has also been going up as processes have shrunk, with the Tomahawk 4 doubling up to 512 of the Blackhawk SERDES (the Tomahawk 3 had 256, implemented in 16 nanometers), thus delivering a doubling of aggregate bandwidth across the Tomahawk 4 ASIC to 25.6 Tb/sec.
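The aggregate bandwidth figures follow directly from the lane counts and encoding, as this small worked calculation (using the numbers quoted above) shows:

```python
# Aggregate switch bandwidth = SERDES lanes x symbol rate x bits per symbol.
def aggregate_tbps(lanes: int, symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    return lanes * symbol_rate_gbaud * bits_per_symbol / 1000  # Gb/sec -> Tb/sec

print(aggregate_tbps(128, 25, 1))  # Tomahawk 1: NRZ, 1 bit/symbol   -> 3.2 Tb/sec
print(aggregate_tbps(256, 25, 2))  # Tomahawk 3: PAM-4, 2 bits/symbol -> 12.8 Tb/sec
print(aggregate_tbps(512, 25, 2))  # Tomahawk 4: PAM-4, 2 bits/symbol -> 25.6 Tb/sec
```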

The Tomahawk 4 is a monolithic chip, like prior generations of Broadcom StrataXGS and StrataDNX chips, and Broadcom seems intent on staying monolithic as long as it can without resorting to the complexity of chiplets. Even if smaller chips tend to increase yields, adding two, four, or eight chiplets to a package creates assembly and yield issues of their own. Some CPU suppliers (like AMD and IBM) have gone with chiplets, but others are staying monolithic (Intel, Ampere, Marvell, HiSilicon, Fujitsu, and IBM with some Power9 chips), and there are reasons for both.

When it comes to networking, says Peter Del Vecchio, the product line manager for the Tomahawk and Trident lines at Broadcom, monolithic is the way to go.

We have seen some of our competition move to multi-die implementations just to get to 12.8 Tb/sec, Del Vecchio tells The Next Platform, and the obvious one there is the Tofino2 chip from Intel (formerly Barefoot Networks). Just for the benefits of power and performance, if you can keep all of the traces on a single piece of silicon, that definitely provides benefits. And that is why we wanted to stay with a monolithic design for this device.

Having a fatter device means eliminating hops on the network, too, and also eliminating the cost of those chips and the networking gear that has to wrap around them. If you wanted to build a switch with 25.6 Tb/sec of aggregate networking bandwidth using the prior generation of Tomahawk 3 ASICs, you would need six such devices, as shown below:

It takes six devices to connect 256 ports using the current Tomahawk 3 chip, assuming that half of the bandwidth on each ASIC (6.4 Tb/sec) is used for server downlinks running at 100 Gb/sec (64 ports) and half the bandwidth is aggregated and used as uplinks to the next level up in the modular switch (we presume it would be 16 ports running at 400 Gb/sec). It takes four of those first-level Tomahawk 3 ASICs to create 256 100 Gb/sec downlinks, plus two more to cross connect the four chips together in a non-blocking fashion with two paths across the pair of additional Tomahawk 3 ASICs. This architecture adds two more hops to three-quarters of the port-to-port paths (some of them stay within a single switch ASIC), so the latency is not always higher than with a single chip, but the odds favor it. If you cut down on the number of second-level ASICs, then you might get congestion, which would increase latency.

Now, shift to a single Tomahawk 4 ASIC, and you can have 256 100 Gb/sec ports all hanging off the same device, which in the case of the prototype platform built by Broadcom is a 2U form factor switch with 64 ports running at 400 Gb/sec and four-way cable splitters breaking each port down into 100 Gb/sec ports. Every port is a single hop away from any other port across those 256 ports, and according to Del Vecchio, the cost at the switch level and the power consumption will both go down by 75 percent.
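Here is a quick check of that port arithmetic, using only the assumptions stated above (half of each Tomahawk 3's 12.8 Tb/sec reserved for 100 Gb/sec downlinks, the other half for uplinks):

```python
# Port math for the two-tier Tomahawk 3 design versus a single Tomahawk 4.
TH3_TBPS, TH4_TBPS, PORT_GBPS = 12.8, 25.6, 100

# Tomahawk 3 leaf chips: half the bandwidth (6.4 Tb/sec) feeds server downlinks.
downlinks_per_leaf = (TH3_TBPS / 2) * 1000 / PORT_GBPS   # 64 ports per chip
leaf_chips = 256 / downlinks_per_leaf                    # 4 leaf chips
spine_chips = 2                                          # to cross-connect the leaves
print(int(leaf_chips + spine_chips))                     # 6 devices in total

# Tomahawk 4: all the ports hang off one chip.
print(int(TH4_TBPS * 1000 / PORT_GBPS))                  # 256 ports, 1 device
```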

Broadcom is not providing specific pricing for its chips, and it is an incorrect assumption that Broadcom will charge the same price for the Tomahawk 4 as it did for the Tomahawk 3. On the contrary, with these improvements, we expect that Broadcom will be able to charge more for the ASIC (but far less than 2X of course) probably on the order of 25 percent to 30 percent more for that 2X increase in throughput and reduction in latency.

Speaking of latency, here is another chart that Del Vecchio shared that puts some numbers on the latency decrease, using servers chatting with an external NVM-Express flash cluster:

In this case, the flash cluster gets twice as many endpoints running at 100 Gb/sec and the latency between the servers and the disaggregated NVM-Express flash servers drops by 60 percent. (Exact latency numbers were not given by Broadcom, and neither were price, watts per port, or die size for any of its ASICs.)

Let's think about this for a second. The CPU business has been lucky to double the number of cores every two to three years, and in many cases has not really done this. (Intel's Cascade Lake-AP doubled-up processors sort of count, but not really, given the wattages.) So that means you can get a little less than 2X the performance in the same two-socket machine every two to three years. There will be exceptions, when a lot of vendors can double up within one year, but this will not hold for the long term.

What Broadcom is doing here is cutting the number of chips it needs to provide a port at a given speed by a factor of 6X every two years. Not 2X every two to three years, but 6X every two years, like clockwork. Even if every successive chip gives you 30 percent more money, you need to sell 4.X more chips to get the same revenue stream, which means that your customer base has to be more than doubling its port count at a given speed, or doubling up its port speed, or a mix of both, every year for the money to work out for a chip maker like Broadcom. This is a much rougher business in this regard than the CPU business for servers. But clearly, the demand for bandwidth is not abating, and despite intense competition, Broadcom still has the dominant share of the bandwidth sold into datacenter networks, as it has had for the better part of a decade.
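That 4.X figure can be reproduced from the assumptions in the paragraph above: one new chip replaces six old ones while fetching roughly 30 percent more money.

```python
# If one new chip replaces six old ones and sells for ~30 percent more,
# how much more capacity must customers buy to keep the vendor's revenue flat?
chip_reduction = 6          # one Tomahawk 4 does the work of six Tomahawk 3s
price_uplift = 1.30         # ~30 percent higher ASIC price, per the article

revenue_per_capacity = price_uplift / chip_reduction   # ~0.217 of the old revenue
extra_volume_needed = 1 / revenue_per_capacity
print(round(extra_volume_needed, 1))                   # ~4.6x more chips to sell
```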

That 25.6 Tb/sec of aggregate bandwidth on the Tomahawk 4 chip can be carved up in a number of ways, including 64 ports at 400 Gb/sec, 128 ports at 200 Gb/sec, and 256 ports at 100 Gb/sec. It takes cable splitters to chop it down by a factor of two or four, and you might be thinking: Why stop there? Why not 512 ports running at 50 Gb/sec or even 1,024 ports running at 25 Gb/sec and really push the radix to the limits, and also create a massive muscle of network cables coming off each port? The answer is you can't, because to keep the chip size manageable, Broadcom had to limit the queues and other features to a maximum of 256 ports. The cutting down of physical ports with splitters is not free. So, for instance, supporting 100 Gb/sec ports requires more queues and buffering. Which is why you don't see ports split all the way down to 10 Gb/sec natively on the chip, although you can get a 100 Gb/sec port to negotiate down to 40 Gb/sec or 10 Gb/sec and throw the extra bandwidth out the window.

In a certain sense, a modern CPU, whether it is monolithic or composed of chiplets, is really a baby NUMA server crammed down into a socket, and it takes fewer and fewer servers in a distributed computing cluster to reach a certain number of cores, the unit of compute performance more or less. Similarly, with every new generation of switch ASICs, vendors like Broadcom are able to eliminate whole layers of the network by constantly doubling up the number of SERDES on the device, assuming of course you want a non-blocking network, as hyperscalers and cloud builders do. And as we have shown above, the increasing bandwidth and radix of each generation of device allows each network cluster (for that is what a modular switch and a full-blown Clos network spanning a hyperscale datacenter is, after all) to have fewer and fewer nodes for a given port count.

The architecture of the Tomahawk 4 chip is very similar to that of the Tomahawk 3, and while you might not be aware of it, there are 1 GHz Arm processor cores on both switch ASICs to run some firmware and do telemetry processing, plus some other Arm cores on the SERDES to run their own firmware. (A switch chip is a hybrid computing device these days, too, just like an FPGA or DSP generally is.) The Trident 4 and Tomahawk 4 ASICs have four of these Arm cores for running the telemetry and instrumentation, twice that of their respective predecessors.

The buffer size on the Tomahawk 3 was 64 MB, and we presume that it is at least double this on the Tomahawk 4, but Broadcom is not saying.

The thing to remember about the hyperscalers is that their packet processing pipelines are not that complicated, but their need to have a lot of telemetry and instrumentation from their networks is vital because with 100,000 devices on a datacenter-scale network, understanding and then shaping traffic is the key to consistent performance.

So the Broadcom networking software stack includes in-band telemetry, real-time SERDES link quality meters, a way to see into all packet drops, flow and queue tracking, and detection of microbursts and elephant flows.
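As a rough illustration of the last item on that list, elephant-flow detection boils down to spotting the few flows that carry a disproportionate share of the bytes in a telemetry interval; the threshold and sample numbers below are invented for the example and have nothing to do with Broadcom's implementation.

```python
# Toy elephant-flow detector: flag flows that carry more than a set share
# of the bytes seen in a telemetry interval.
def elephant_flows(byte_counts: dict[str, int], share: float = 0.10) -> list[str]:
    """Return flow IDs whose byte count exceeds `share` of the interval total."""
    total = sum(byte_counts.values())
    return [flow for flow, count in byte_counts.items() if count > share * total]

sample_interval = {"flow-a": 9_200_000_000, "flow-b": 310_000, "flow-c": 540_000}
print(elephant_flows(sample_interval))   # ['flow-a']
```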

Perhaps equally importantly for hyperscaler and cloud builder customers, Broadcom is documenting and opening up each and every API used with the Tomahawk, Trident, and Jericho families of chips. Among other things, this will help these companies, which by and large create their own network operating systems, better support them, but it will also allow for open NOS initiatives (such as ArcOS from Arrcus) to more easily port their code and support it on Broadcom chips. The OpenNSA API documentation is a superset of the OpenNSL API library that maps to the Broadcom SDK, which was previously available. It is the whole shebang, as they say.

The Tomahawk 4 chip is sampling now and production will be ramping fast, with Del Vecchio expecting a ramp as fast as, or faster than, those of the Tomahawk 3 and the Trident 4. So expect Tomahawk 4 devices next summer.


Four Technologies That Are Changing The Way We Do Business – TechShout!

Most technology investors keenly monitor the progress of companies in these sectors. Governments also typically grant R&D tax credits to them because of the immense potential they have for the economy. Here are the top four technologies that are changing business as we know it:

Probably the most useful innovation on the internet, cloud computing has almost wholly revolutionised how businesses run. Cloud computing allows individuals and organisations to create, share, process and store files and programs on internet servers.

Cloud computing services are broadly divided into three categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

There is a massive drop in overhead cost for corporations that embrace cloud computing. However, that's not all; businesses that have taken advantage of cloud computing services have reported productivity increases of almost 40%. They've also reported an increase in flexibility and collaboration. Companies can outsource work and monitor it effectively, mainly due to cloud computing.

IoT is a term used to describe a complex system of interrelations between multiple devices and users that communicate with one another without human intervention. The advent of cloud computing is what has made IoT possible.

With IoT, large businesses can now monitor their operations in real-time while ensuring efficiency.

Another change IoT has made to businesses is making them data-centric. Data collected from multiple sources, including user interactions, are now being used to inform corporate decisions. The long-term benefit of this is that organisations can more accurately predict what customers want and produce it, a win-win for both parties.

One of the great innovations of modern-day technology is teaching machines to learn. A by-product of IoT, AI is a field that has taken on a life of its own. Artificial Intelligence allows machines to perform complex functions without the help of humans.

AI, through the help of machine learning, allows organisations to go beyond the automation of processes. Algorithms are designed to pick up trends from IoT or other data sources and analyse them to make decisions.

One of the more common examples of how AI is influencing businesses is the chatbots on e-commerce websites. These bots are trained to respond to consumer inquiries and help them find products that they are looking for. As it is with most AI applications, they are cheaper and more efficient than hiring manual assistants.
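A bare-bones sketch of how such a bot maps an enquiry to a canned response is below; real e-commerce bots use trained language models rather than the keyword rules shown here.

```python
# Minimal keyword-based chatbot: match a customer enquiry to a canned intent.
import re

INTENTS = {
    "order_status": (["order", "track", "delivery"], "Your order is on its way."),
    "returns": (["return", "refund", "exchange"], "You can start a return online."),
}

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in INTENTS.values():
        if words & set(keywords):
            return answer
    return "Let me connect you with a human assistant."

print(reply("Where is my order?"))      # Your order is on its way.
print(reply("I want a refund please"))  # You can start a return online.
```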

Although blockchain is almost synonymous with cryptocurrencies, they are two completely different notions. Blockchain is a secure digital ledger that is used to keep track of things, including financial transactions. The distributed nature of its storage also means that it allows for records to be inputted at an incredible rate.
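The ledger idea itself is simple enough to sketch: each record commits to a hash of the previous one, so tampering with any earlier entry breaks the chain. The following is a deliberately minimal illustration and omits the distributed consensus that real blockchain networks layer on top.

```python
# Minimal hash-chained ledger: each block commits to the previous block's hash,
# so altering any historical record invalidates everything after it.
import hashlib, json

def block_hash(record: dict, previous_hash: str) -> str:
    payload = json.dumps({"record": record, "previous_hash": previous_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(record: dict, previous_hash: str) -> dict:
    return {"record": record, "previous_hash": previous_hash,
            "hash": block_hash(record, previous_hash)}

def chain_is_valid(chain: list[dict]) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["record"], block["previous_hash"]):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"note": "genesis"}, "0" * 64)
payment = make_block({"from": "alice", "to": "bob", "amount": 5}, genesis["hash"])
ledger = [genesis, payment]
print(chain_is_valid(ledger))             # True
ledger[0]["record"]["note"] = "tampered"  # rewrite history
print(chain_is_valid(ledger))             # False: the stored hash no longer matches
```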

Unsurprisingly, one of the most significant applications of blockchain technology is in finance. The differing currencies and regulations across borders make sending and receiving money complex for multinational organisations. With blockchain technology, payments can now be made instantly across continents through instruments like bitcoin.

Other applications of blockchain technology in recent times include supply chain technology and food safety. According to some estimations, at least 30% of manufacturing organisations with over $5bn in assets will have implemented blockchain technology by 2023.

So, while the hype around cryptos might have reduced to a whimper, blockchain in itself is growing by leaps and bounds.

Business growth is one of the drivers of technology development. In the race to make services and operations more efficient, technology will always be at the fore. At the end of the day, everyone is better off; whether they are producers or consumers.


Investors who lost $190m demand exhumation of cryptocurrency mogul – The Guardian

Lawyers for customers of an insolvent cryptocurrency exchange have asked police to exhume the body of the company's founder, amid efforts to recover about $190m in Bitcoin which was locked in an online black hole after his death.

Miller Thomson LLP sent a letter to the Royal Canadian Mounted Police on Friday, requesting authorities conduct an exhumation and postmortem autopsy on the body of Gerald Cotten, founder of QuadrigaCX, citing what the firm called the questionable circumstances around his death earlier this year.

Citing decomposition concerns, lawyers requested the exhumation be completed no later than spring 2020.

Gerald Cotten, 30, died abruptly in December 2018 of complications relating to Crohn's disease while on honeymoon in Jaipur, India, with his wife, Jennifer Robertson. His body was repatriated to Canada and a funeral was held in Halifax, Nova Scotia.

Soon after his death, however, reports surfaced that nearly 80,000 users of QuadrigaCX, at the time Canada's largest cryptocurrency exchange, were unable to access funds totalling more than $190m.

Cotten was the only one with access to the necessary permissions. While Robertson has possession of the laptop containing the necessary passwords, she remains locked out.

The laptop computer from which Gerry carried out the companies' business is encrypted and I do not know the password or recovery key. Despite repeated and diligent searches, I have not been able to find them written down anywhere, she said in court filings.

Uncertainty about the missing funds has fueled speculation that Cotten may still be alive. In their letter to the RCMP the law firm underlined the need for certainty around the question of whether Mr Cotten is in fact deceased.

The accounting firm Ernst & Young, tasked with auditing the company as it undergoes bankruptcy proceedings, discovered numerous money-losing trades executed by Cotten using customers' funds.

They also found a substantial amount of money was used to fund a lavish lifestyle for the couple, including the use of private jets and luxury vehicles. Ernst & Young was able to recover $24m in cash and $9m in assets held by Robertson.

Both Canada's tax authorities and the FBI are also investigating the company.


This little-known cryptocurrency is heading towards the top ten – Decrypt

Cosmos (ATOM), a cryptocurrency used in an ecosystem of different blockchains, is slowly sneaking towards the top ten coins by market cap. The price of cosmos gained 7% today to reach $3.92, while bitcoin and the vast majority of other cryptocurrencies see losses of between 1% and 3%.

Overall, cosmos has been in an uptrend for the past three months and is currently up more than 26% since September 12, 2019. Meanwhile, bitcoin, ether (ETH) and XRP are down 31%, 20.5% and 13.7% respectively across the same time period.

Its most recent rally is likely attributable to news that Binance users will be able to stake their ATOMs on the Binance staking platform, allowing them to earn a passive income in the form of regular staking rewards.

Staking is a hot topic at the moment and announcements of exchanges and wallets supporting staking of Tezos have caused its price to surge in recent months.

As a result, cosmos continues to make headway towards breaking into the top ten coins by market cap. Cosmos has climbed from rank 20 to its current position at rank 16 in the last three months.

Part of the reason behind cosmos's meteoric growth throughout the latter half of 2019 could be the result of gradually improving trade volumes. Since July, the average daily trade volume of ATOM has doubled and now regularly exceeds $200 million traded per day.


Likewise, cosmos has also seen its adoption improve in recent months, after being listed on several major cryptocurrency trading platforms, including Kraken, Crypto.com, Poloniex, Huobi Global and most recently, Binance.US.
