These Three Overlooked Altcoins Are Set To Erupt, According to Top Analyst – The Daily Hodl

A closely followed crypto analyst and trader is naming three overlooked altcoins that he says are gearing up to break out of consolidation and ignite the next phase of their bull cycles.

In a series of tweets, Cantering Clark tells his 47,500 followers that he's keeping a close watch on SRM, the native token of the decentralized exchange (DEX) Serum, as he says the crypto asset is getting ready to go on a tear upwards.

Looks like I will be spending a good amount of time inside today now.

Free Alpha: SRM looks like it wants to run.

Another under-the-radar altcoin the analyst has his eye on is Orakuru (ORK), an oracle protocol built on the Binance Smart Chain (BSC).

Looks like ORK is turning the corner after the pre-salers exit for a quick buck.

1st BSC audited oracle

I am holding.

The very next day, as per the trader's call, ORK exploded, posting a 64% rally from $3.16 to $5.20, according to CoinMarketCap. At time of writing, ORK is trading at $5.08.

Cantering Clark also has an ultra-bullish outlook on Cosmos (ATOM). The trader draws out a chart suggesting that the coin is gearing up to double its value after a long consolidation.

Is it me or does ATOM look like it is about to do something like this?

Weekly and Monthly VWAP (volume weighted average price) holding as support and a really balanced structure dating back to Feb.

Looks ready for someone to create some imbalance in that to the upside.
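
For reference, VWAP is the running ratio of cumulative traded value to cumulative volume, so price holding above it means the average buyer over the period is still in profit. A minimal sketch of the computation, using made-up price and volume figures rather than real ATOM data:

    # Running VWAP: cumulative (price * volume) divided by cumulative volume.
    # The figures below are illustrative, not real market data.
    prices = [21.4, 21.9, 21.6, 22.3]    # typical trade price per period
    volumes = [1200, 800, 1500, 900]     # units traded per period

    cum_pv = 0.0
    cum_v = 0.0
    for price, volume in zip(prices, volumes):
        cum_pv += price * volume
        cum_v += volume
        vwap = cum_pv / cum_v            # updated after each period
    print(round(vwap, 2))                # final VWAP over the window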

Often acting as a signal for the entire altcoin market, Ethereum is on the analyst's watchlist as well. According to him, something about ETH's price action is hinting at a clean break of $3,000 and a subsequent leap up to $3,850.

I don't know, something about ETH right now makes me think it sees above $3,000 really soon.

Featured Image: Shutterstock/PHOTOCREO Michal Bednarek/Dario Lo Presti

Go here to see the original:
These Three Overlooked Altcoins Are Set To Erupt, According to Top Analyst - The Daily Hodl

Read More..

Early Bitcoin and Ethereum Whale Says He's Bullish on This Obscure Altcoin – The Daily Hodl

A long-time Bitcoin and Ethereum investor is sharing his strategy for finding promising new projects while shining a spotlight on one asset that meets his rigorous standards.

The pseudonymous trader, known in the industry as Tetranode, says the small-cap project Olympus Finance (OHM) has checked all his boxes and captured a small portion of his crypto portfolio.

In an interview with UpOnly, the crypto whale explains why he is so bullish on the platform and adds that he is also advising the Olympus team to help them avoid exploits and to improve their tokenomics.

Olympus Finance, it's the new experiment in algo stablecoins 2.0, so unlike Fei Protocol (FEI) I don't believe it's going for partial collateralization... It's going for a fully backed reserve. So every single OHM is backed by at least one Dai, so there is a floor that you can count on.

The OHM token is supposed to be sort of a floating, quote-unquote stablecoin, where you can count on the value staying relatively stable compared to ETH. So I have a small investment in there, so I'm giving the founder some input into some things like their tokenomics and certain exploits they might be vulnerable to.
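
To make that floor concrete: if every OHM is backed by at least one Dai in reserve, the minimum defensible price is simply the reserve balance divided by the circulating supply. A toy check of that invariant, with invented reserve and supply figures:

    # Toy backing-floor check; the reserve and supply figures are invented.
    dai_reserve = 1_450_000      # Dai held in the protocol treasury
    ohm_supply = 1_400_000       # OHM tokens outstanding

    floor_price = dai_reserve / ohm_supply
    assert floor_price >= 1.0    # "every single OHM is backed by at least one Dai"
    print(f"backing floor: {floor_price:.2f} Dai per OHM")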

Currently, OHM has a market cap of roughly $53 million and is trading at around $668. If the project is successful, holders who stake and provide liquidity for the token could reap the rewards.

Tetranode notes that when evaluating newer projects, he steers away from those with large venture capital involvement and looks for projects with a strong network of supporters.

I would say non-VC involvement is the first thing [I look for in a project], like grassroots involvement starting out. Just relatively good relations with the community and the ability to listen and the ability to evolve. It's hard to quantify, but I know it when I see it, because I have a rock-solid foundation on how the entire [process] works, all the way down to the consensus assumptions on the blockchain. So it's really easy for me to cut through the marketing and get through to the heart of the matter.

The veteran trader says that in order to gain a broad and in-depth understanding of each project, he investigates its use case and from there breaks down how the technology functions at a molecular level. Tetranode uses this careful process to determine how he will participate in various projects, from Bitcoin to more novel projects like Olympus.

Always approach things from the principle. Always approach it from the atomic level, understand it from the basics. So you go in and you ask, What's Bitcoin? Bitcoin is internet money which cannot be changed and cannot be reversed. So you have to ask yourself, How can it not be reversed? It cannot be reversed because there's a consensus system working. How is it irreversible? And so you keep diving down until you grasp the first principle, and then you can view everything with a very clear eye, and that's how you make rock-solid decisions without any doubts in your mind.

Featured Image: Shutterstock/Anne Powell

Go here to see the original:
Early Bitcoin and Ethereum Whale Says He's Bullish on This Obscure Altcoin - The Daily Hodl

Read More..

Not a cheaper Bitcoin: Bitcoin Gold soars 173% as traders may be buying the wrong BTG – Cointelegraph

A long-forgotten Bitcoin (BTC) hard fork has surged almost 200% in a week simply because traders are buying the wrong altcoin.

Bitcoin Gold (BTG), a relic of the 2017 forking season, traded at $113 on Friday, its highest price since February 2018, after 30% daily gains as trading volumes topped $250 million.

A curious contrast to an otherwise lackluster cryptocurrency market, Bitcoin Gold's success appears to be not wholly genuine, but it is also not the fault of its holders or developers.

As Cointelegraph reported on Tuesday, a new Bitcoin fund from Brazilian investment bank BTG Pactual moved a step closer to launching this week after teaming up with the Winklevoss twins' Gemini exchange for custody management.

While it has nothing to do with Bitcoin Gold, the biggest Brazilian investment bank does share the altcoin's ticker, and the coincidence was good enough for hungry buyers.

After the Gemini news broke, Bitcoin Gold saw a flurry of demand, which, at the time of writing, shows no sign of abating. This, reactions argued, showed that the cryptocurrency industry was still very much in its nascent phase.

"We're so early that people buy Bitcoin Gold (BTG) because there is a new fund called BTG Pactual," podcast host Anita Posch warned Twitter users.

By contrast, Bitcoin Gold announcing an actual partnership in late March did hardly anything for its price action.

Bitcoin Gold came about as one of multiple hard forks of Bitcoin in 2017, the year that also spawned Bitcoin Cash (BCH), Bitcoin Diamond (BCD) and others.

Unlike their parent, the forks have achieved only limited success. As Cointelegraph recently reported, despite the broad uptick sweeping through altcoins, BCH, for example, continues to lose value in BTC terms.

Some of their proponents have adopted an aggressive marketing stance that positions the fork as Bitcoin itself, a misleading move that has likely also contributed to demand.

Cointelegraph reiterates that buying Bitcoin Gold, Bitcoin Cash or any cryptocurrency with a ticker other than BTC means that one holds an altcoin, not Bitcoin itself.

See more here:
Not a cheaper Bitcoin: Bitcoin Gold soars 173% as traders may be buying the wrong BTG - Cointelegraph

Read More..

Dell, Nvidia power new "cloud-native supercomputer" in the UK – ZDNet

Researchers at the University of Cambridge in the UK are expanding their HPC systems with servers from Dell and resources from Nvidia. Delivering multi-tenant, bare-metal high-performance computing, the systems amount to "cloud-native supercomputers," Nvidia says.

The university is adding hundreds of new Dell EMC PowerEdge servers to the Cambridge Service for Data Driven Discovery (CSD3) system, which is a UK National Research Cloud. Specifically, it's deploying more than 400 PowerEdge C6520 servers with the newly announced 3rd Gen Intel Xeon Scalable processors.

It's also adding more than 80 PowerEdge XE8545 servers with 3rd Gen AMD Epyc processors, as well as Nvidia A100 Tensor Core GPUs with NVLink. These systems are also using Nvidia InfiniBand networking and Nvidia BlueField-2 DPUs to offload infrastructure management, while isolating and accelerating workloads.

Meanwhile, the University of Cambridge, working with StackHPC and with funding from the DiRAC HPC Facility and the IRIS Facility, developed a cloud HPC software stack called Scientific OpenStack.

All told, the CSD3 system is expected to deliver four petaflops of application performance, enabling research in a range of areas, including astrophysics, nuclear fusion power generation and clinical medicine applications.

Another UK research institution, Durham University's Institute for Computational Cosmology (ICC), is slated to fully deploy the COSMA8 supercomputer this October. The COSMA8 will launch with more than 90 Dell EMC PowerEdge C6525 servers.

The system uses 2nd and 3rd Gen AMD Epyc processors, direct liquid cooling and Nvidia HDR InfiniBand networking. It powers research into subjects like dark energy, black holes and how the universe was formed.

The university plans to expand the system to more than 600 compute nodes over the next year.

Go here to read the rest:
Dell, Nvidia power new "cloud-native supercomputer" in the UK - ZDNet

Read More..

How the DoD’s future war-fighting needs are shaping cloud vendors’ products – C4ISRNet

The U.S. Defense Department's expectation that future wars will be fought across dispersed, disconnected environments is driving changes to its cloud needs. Industry is preparing for that reality.

With the nascent concept of connecting the best sensor from any location with the best shooter in any service, known as Joint All-Domain Command and Control, the defense industrial base is seeing a shift in the Pentagon's need for tools that people can access from any location.

Cloud computing, which allows users to store data more cheaply and access it remotely, is a core principle of the department's digital modernization strategy. With distributed war fighting on the horizon, the department will need tactical cloud capabilities available in remote places.

At IBM, customers ask for cloud environments that will allow users to access data across security classifications.

"We're seeing much more interest in more mobile environments [including] more distributed, mobile environments that can operate at multiple [classification] levels," said Terry Halvorsen, former Defense Department chief information officer and IBM general manager for public sector client development.

The department's cloud computing needs are expected to grow from an estimated $2.1 billion in fiscal 2020 to about $2.7 billion in 2023, an increase of about 29 percent, according to an analysis by Bloomberg Government.
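
That figure is straightforward percentage growth:

    (2.7 - 2.1) / 2.1 ≈ 0.286

or about 29 percent across the three fiscal years.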

Several efforts will drive much of that growth: increased use of cloud-native applications and remote collaboration tools, continued migration of legacy systems, and the department's artificial intelligence push.

The demand for cloud access from any setting is part of a fundamental transition in how the DoD views the technology, said Hillery Hunter, chief technology officer of IBM Cloud.

"Cloud is no longer a conversation about being in one place," she said. "In the next three to five years it really is a conversation about cloud being a consistent platform that spans all the way from the original data center out to the edge."

In the future, the military wants to process data, such as drone footage or vehicle-mounted sensor data, in the tactical environment rather than transporting it back to data centers thousands of miles away, a process that consumes precious bandwidth and takes too much time.
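
The bandwidth cost is easy to quantify with a hypothetical link: moving one terabyte of footage over a 10 Mbps satellite connection takes

    (8 x 10^12 bits) / (10^7 bits per second) = 8 x 10^5 seconds,

roughly nine days of continuous transmission, which is why processing the data where it is collected is so attractive.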

That need is driving investment by major cloud providers in smaller servers and processing devices for war fighters in remote environments. It is evident, for example, in the Air Force's Advanced Battle Management System, which uses cloud services from vendors Microsoft and AWS through indefinite delivery, indefinite quantity contracts.

"We have to create devices that can operate in those austere environments," said Rick Wagner, president of Microsoft Federal for government customers. "Some of that goes with custom design. This is clearly, obviously, not simply pulling something off the shelf that's been developed, so it's driving us into doing more work in creating our own devices, optimizing them to work in those environments and work with our cloud."

At Microsoft, joint war fighting is pushing the company to think about how it can get cloud-computing capabilities into disconnected and challenging environments, such as outfitting an individual ship with cloud-enabled devices or giving every battle group a data center.

"How can we get to the point where we get pieces of the cloud everywhere the DoD is operating, and then you can start tying things together?" Wagner asked. "At Microsoft, that is one of the things we are trying to optimize for: how do we do compute at the edge, where the data sits, reduce the amount of time you've got to move data back and forth, and be able to operate with it?"

Beyond that, the department needs to be able to easily and securely pass data between classified and unclassified environments another requirement that has industry brainstorming new options.

"There is a push to do things more remotely, and so the idea of cross-domain solutions starts to become a big capability," Wagner said. "How do we work from unclassified to secret to top secret and beyond over a consolidated environment where you've got the same tool sets on every environment?"

The Pentagon plans to provide data access at different classification levels through the Joint Enterprise Defense Infrastructure (JEDI) cloud, but its enterprise cloud has been mired in controversy and is potentially on the verge of being scrapped over a court battle. That cloud contract, experts said, is a gaping question for what the DoD's future cloud needs look like.

The JEDI cloud also would provide some tactical edge capabilities. Former DoD CIO Dana Deasy consistently presented JEDI as a solution that would allow soldiers at the battlefront to access data, once noting that on a trip to Afghanistan, soldiers had to use three different systems to identify an adversary, make a decision and find friendly forces.

"Over the next three to five years, it'll really depend on if JEDI gets off the ground in fiscal 2021, which it may possibly do at the very end [of the year] if the legal decision goes the Pentagon's way," said Chris Cornillie, a federal market analyst at Bloomberg Government. "Otherwise, you have the DoD starting from scratch and looking to replace that big general-purpose cloud that JEDI represents with a more federated structure."

The JEDI program is billed as a cloud that will host 80 percent of the DoD's systems, deliver data to the war fighter at the edge and enable artificial intelligence development. In the absence of the cloud, due to court cases and protests, services and other components have had to find other solutions. Cornillie suspects the military branches and fourth-estate agencies will continue with the solutions they adopted to fill the void.

"If JEDI is scrapped, will they try to recompete another big cloud contract? I think that's yet to be determined," Cornillie said. "I don't think we'll have one big cloud contract, and certainly not one destined for a single vendor and a single cloud provider."

See the original post:
How the DoD's future war-fighting needs are shaping cloud vendors' products - C4ISRNet

Read More..

Where the Cloud Can’t Save You – Analytics Insight

Generals will divide their forces to ensure that their army survives a battle. The cloud takes this strategy to the next level: you can achieve the same resilience by replicating your application, being in multiple places at once to keep a service window always open for your users.

However, if the split forces still report to a single commander and he gets taken captive, it doesn't matter how many places you are located in; none of them will be able to function. The army as a whole is vulnerable at a single point of failure.

A distributed application can operate in several locations throughout a cloud network, but if every instance of that application is connected to the same database, you have the same problem. Your entire system is vulnerable at the point of its database, even on the cloud.

In order for an army to split up and continue on its mission, local commanders must possess the same information as the general, along with the authority to issue commands in his name.

In order for a database to give the same agility to the entire application, it must be distributed: several copies of the database exist across multiple nodes, each holding the same information and able to operate independently.

Relational databases were first used in the 1970s, long before the cloud or any type of distributed network. They are designed to work on a single server and to provide all your data needs from one location.

Their size and speed enable applications worldwide to serve their users with accuracy and speed.

Their challenge is their perpetually monolithic structure. Given the complexity of how a relational model puts data together, running on multiple nodes while constantly replicating to itself raises that complexity to unsustainable levels.

Relational models restrict their applications to a single point of failure. Even with above-average availability, cloud platforms do have outages. If such an outage were to hit a relational database, the entire application relying on that database would be disabled until a new server could be found.

Nonrelational databases were developed alongside the cloud a little over a decade ago. Like cloud platforms, they were developed to be distributed from scratch.

Without the need for multiple tables or even schemas, the most common type of nonrelational database, the document database, is a natural fit for a distributed system, where multiple copies of your database can sit at the backend of multiple copies of your application.

The best type of database has a master-master structure, where each copy can perform both reads and writes to your data. If you have a database cluster of three nodes and one goes down, two databases still have the full ability and authority to keep working. Even if the majority of your nodes go down, you can still provide service to your users.
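
A minimal sketch of what that failover looks like from the client's side, assuming a hypothetical master-master driver; the node names and helper functions below are illustrative, not any specific database's API:

    # Hypothetical master-master client: try each node until one answers.
    NODES = ["db-a.example.com", "db-b.example.com", "db-c.example.com"]
    DOWN = {"db-a.example.com"}               # pretend this node just failed

    def run_on(node, operation):
        if node in DOWN:
            raise ConnectionError(node)       # simulate an unreachable node
        return f"{operation!r} handled by {node}"

    def execute(operation):
        for node in NODES:                    # every node accepts reads AND writes
            try:
                return run_on(node, operation)
            except ConnectionError:
                continue                      # node down; the next master takes over
        raise RuntimeError("all nodes unreachable")

    print(execute("PUT users/1"))             # served by db-b while db-a is down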

Provisioning servers on the cloud closest to where your users are reduces the distance from their device to your application, increasing performance. It also reduces the load on any one database at any one time.

It's like being on the longest of three lines at the supermarket when an announcement is made: Will the people at the end of the longest line please move to the front of the shortest? How awesome is that?

Like our universe, data is in a state of constant expansion. There is always more traffic, more volume, even the amount of information you can store in one unit of data is rising.

To manage that, while keeping performance robust in the face of ever more information to administer, you need a distributed system. If your application relies on one massive database, the cloud can't save you.

The best way to attack the data of tomorrow is with the finest tools of today.

Oren Eini is the CEO of RavenDB, a NoSQL distributed database, and RavenDB Cloud, its managed cloud service (DBaaS). Oren is a Microsoft MVP and a DZone Hall of Famer with over 3.5 million views across ten years of writing about NoSQL database technology, the .NET ecosystem, and software development. He has been blogging for more than 15 years under his alias Ayende Rahien.

Read this article:
Where the Cloud Can't Save You - Analytics Insight

Read More..

Server Microprocessor Market Detailed Analysis of Business Overview, Statistics and Forecasts to 2026| IBM, Oracle KSU | The Sentinel Newspaper – KSU…

The server microprocessor market was valued at USD 15.19 billion in 2020 and is expected to reach USD 17.89 billion by 2026, at a CAGR of 2% over the forecast period 2021-2026.
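
As a sanity check, the compound annual growth rate implied by those endpoint values follows the standard formula:

    CAGR = (17.89 / 15.19)^(1/6) - 1 ≈ 0.028

that is, roughly 2.8% per year over the six-year window, slightly above the rounded 2% figure quoted.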

Increasing data center footprints and demand from cloud service providers prompt the growth of the server microprocessor market. Dominated by the duopoly of Intel and AMD, the market for server microprocessors is undergoing product innovation. Companies are recognizing the performance needs of modern workloads, such as data analytics, machine learning, and artificial intelligence, and are improving their designs accordingly.

The expansion of mobile broadband, growth in big data analytics, and cloud computing are driving the demand for new data center infrastructure. As of 2017, the US held almost 45% of the world's cloud and internet data centers, according to CNNIC. North America alone (the US and Canada) housed around 2,854 data centers as of 2017, making it a prominent market for server microprocessors.

In line with global cloud penetration, cloud service providers such as Google are expanding their data center footprints across regions to keep up with the demand for high-performance computing. Such trends drive demand for server microprocessors.

Request Sample Copy of this report at (Special Offers: Get 25% Discount)

https://www.marketinsightsreports.com/reports/02082592750/server-microprocessor-market-growth-trends-covid-19-impact-and-forecasts-2021-2026/inquiry?Mode=132

Competitive Landscape

The server microprocessor market has been dominated by Intel for the past few years, with AMD as the closest competitor and other low-volume players such as IBM and Oracle. The market demands strong investment in R&D and technology partnerships to address the needs of servers and data centers. AMD is likely to cut into a slight share of Intel's position in the server microprocessor market, considering its new product rollouts and competitive pricing strategy. Vendors in the market have been launching new microprocessors for next-generation data centers. The following are recent developments in the market:

May 2018: Cavium announced the general availability of ThunderX2, its second generation of Armv8-A SoC processors for next-generation data center, cloud, and high-performance computing applications. The company, since acquired by Marvell, worked with over 60 different partners (including OEMs, ODMs, and independent software and hardware vendors) to enable the deployment of ThunderX2-based platforms and finally reached the commercialization phase.

June 2019: Marvell (the parent company of Cavium) announced a broader strategic partnership with Arm to accelerate the design and development of next-generation Marvell ThunderX server processor technology. Under the new agreement, Arm will support Marvell's R&D in the server processor technology area for at least three more years (until 2022). With this partnership, Marvell aims to expand its Arm-based server roadmap to enable the next generation of cloud and data center infrastructure.

Inquire here to avail discount on this report:

https://www.marketinsightsreports.com/reports/02082592750/server-microprocessor-market-growth-trends-covid-19-impact-and-forecasts-2021-2026/discount?Mode=132

Key Market Trends

Open Instruction Set Architecture (ISA) to Gain Traction Amid US-China Trade War

The trade war between China and the United States may have a certain impact on the market, considering that US companies (such as Intel), as well as ARM, have pledged to cut off Huawei's access to critical semiconductor components, such as SoCs and CPUs. This might trigger increased dependency on open-source instruction set architectures (ISAs) such as MIPS and RISC-V. For instance, in July 2019, Alibaba Group unveiled its first self-designed microprocessor, which marks a key step in China's efforts to promote chip self-sufficiency. The launch reflects the efforts of Chinese technology giants to address the trade clashes with the US over access to technology. Alibaba's chip (not produced by Alibaba itself, but by a Chinese foundry such as Semiconductor Manufacturing International Corp.) has been designed to power consumer devices such as smart speakers, self-driving cars, and other internet-connected equipment requiring high-performance computing.

Browse Full Report at:

https://www.marketinsightsreports.com/reports/02082592750/server-microprocessor-market-growth-trends-covid-19-impact-and-forecasts-2021-2026?Mode=132

Europe Making Efforts to Develop Key Competence in Microprocessors

Europe is estimated to provide significant scope for servers powered by microprocessors as compute-intensive applications and cloud adoption increase. Cloud computing is one of the strategic digital technologies promoted by the European Union for enhancing productivity and enabling better services by enterprises. These initiatives generate significant demand for data centers in the region, thus driving the market.

Although North America has been a larger source of demand for server microprocessors, regional initiatives such as the European Processor Initiative (EPI) are likely to have an impact on the market.

The European Commission, in December 2018, announced the selection of the European Processor Initiative (EPI) consortium. The aim of the initiative is to develop, co-design, and introduce a low-power microprocessor to the European market, thus retaining a significant part of that technology in Europe.

The EPI consortium proposes to create a long-term economic model by delivering a family of processors for the following markets: high-performance computing, data centers and servers, and autonomous vehicles.

About Us:

MarketInsightsReports provides syndicated market research on industry verticals including Healthcare, Information, and Communication Technology (ICT), Technology and Media, Chemicals, Materials, Energy, Heavy Industry, etc. MarketInsightsReports provides global and regional market intelligence coverage, a 360-degree market view which includes statistical forecasts, competitive landscape, detailed segmentation, key trends, and strategic recommendations.

Contact Us

Irfan Tamboli (Head of Sales) Market Insights Reports

Phone: +1 704 266 3234 | Mob: +91-750-707-8687

sales@marketinsightsreports.com

irfan@marketinsightsreports.com

See the original post here:
Server Microprocessor Market Detailed Analysis of Business Overview, Statistics and Forecasts to 2026| IBM, Oracle KSU | The Sentinel Newspaper - KSU...

Read More..

Global Server Shipment for 2021 Projected to Grow by More than 5% YoY, with Successive QoQ Increases in Demand for ODM Direct Servers, Says TrendForce…

Enterprise demand for cloud services has been rising steadily in the past two years owing to the rapidly changing global markets and uncertainties brought about by the COVID-19 pandemic. TrendForce's investigations find that most enterprises have been prioritizing cloud service adoption across applications, ranging from AI to other emerging technologies, as cloud services have relatively flexible costs. Case in point: demand from clients in the hyperscale data center segment constituted more than 40% of total demand for servers in 4Q20, and this figure may approach 45% for 2021. For 2021, TrendForce expects global server shipments to increase by more than 5% YoY and ODM Direct server shipments to increase by more than 15% YoY.

Global server shipments for 2Q21 are expected to increase by 20% QoQ and remain unaffected by material shortages

Thanks to the accelerating pace of enterprise cloud migration and the long queue of server orders left unfulfilled last year as a result of the pandemic, server ODMs will likely receive an increasing number of client orders throughout each quarter this year. For instance, ODM vendors saw a 1% QoQ growth in L6 server barebones orders from their clients in 1Q21, but this growth is expected to reach 15-18% in 2Q21. TrendForce's analysis indicates that, apart from server ODMs maintaining strong momentum, server OEMs (or server brands) will also be able to significantly raise their unit shipments in 2Q21. Total quarterly shipments from server OEMs for 2Q21 are currently projected to increase by 20% compared with 1Q21, the traditional off-season. The COVID-19 pandemic is a major contributor to shipment growth because it has caused a paradigm shift in corporate work practices and spurred companies to accelerate their cloud migrations. The effects of the pandemic have also provided a window of opportunity for traditional server OEMs, including HPE and Dell, to develop new business models, such as hybrid cloud solutions or colocation services that allow their customers to pay as they go, in addition to their existing sales of whole servers.

It should be pointed out that not only is the shortage of materials within the server supply chain as yet unresolved, but the long lead times for certain key components are also showing no signs of abating. However, in response to the pandemic's impact on the industry last year, server manufacturers have transitioned to a more flexible procurement strategy by sourcing from two or three suppliers instead of a single supplier for a single component, as this diversification allows them to mitigate the risk of potential supply chain disruptions. TrendForce therefore believes that the current supply of key components, including BMCs and PMICs, is sufficient for server manufacturers, without any noticeable risk of supply chain disruptions in the short run.

Huawei and Inspur maintain brisk server shipments due to favorable domestic governmental policies and demand from cloud service providers

China's server demand, which accounted for about 27.2% of the global total in 1Q21, continues to grow annually. Favorable policies and support from domestic cloud service providers are the main demand drivers in the country. Shipments from domestic server OEMs have remained fairly robust in China on account of the build-out of hyperscale data centers across the country. Another reason is that Chinese telecom companies procure servers mostly from domestic manufacturers. Taken together, these factors directly contributed to the server shipments of Inspur and Huawei in 1Q21.

Huawei's server shipments are relatively unaffected by the US-China dispute, even though the sanctions enforced by the US government constrained Huawei's component supply. The demand for Huawei servers has been boosted by telecom tenders and procurement from domestic enterprise clients. A QoQ growth rate of roughly 10% is projected for 2Q21 on account of a new round of government tenders. As for 2021 as a whole, Huawei's annual shipments are still forecast to register a YoY growth rate of about 5%. Thanks to infrastructure programs and rising orders from data centers, Inspur is expected to capture around 30% of China's total server demand in 2021. On the matter of product strategy, Inspur already has a sizable ODM business with tier-1 Chinese cloud service providers (i.e., Baidu, ByteDance, Alibaba, and Tencent). The volume of incoming orders for the first half of this year will also be quite massive, because tier-2 cloud service providers and e-commerce platforms such as JD.com, Kuaishou, and Meituan will be injecting significant demand.

For more information on reports and market data from TrendForce's Department of Semiconductor Research, please click here, or email Ms. Latte Chung from the Sales Department at lattechung@trendforce.com

Original post:
Global Server Shipment for 2021 Projected to Grow by More than 5% YoY, with Successive QoQ Increases in Demand for ODM Direct Servers, Says TrendForce...

Read More..

Introducing the Cloud-Native Supercomputing Architecture – HPCwire

Historically, supercomputers were designed to run a single application and were confined to a small set of well-controlled users. With AI and HPC becoming primary compute environments for wide commercial use, supercomputers now need to serve a broad population of users and to host a more diverse software ecosystem, delivering non-stop services dynamically. New supercomputers must be architected to deliver bare-metal performance in a multi-tenancy environment.

The design of a supercomputer focuses on its most important mission: maximum performance with the lowest overhead. The goal of the cloud-native supercomputer architecture is to maintain these performance characteristics while meeting cloud service requirements: least-privilege security policies and isolation, data protection, and instant, on-demand AI and HPC services.

The data processing unit, or DPU, is an infrastructure platform that's architected and designed to deliver infrastructure services for supercomputing applications while maintaining their native performance. The DPU handles all provisioning and management of hardware and virtualization of services: computing, networking, storage, and security. It improves the overall performance of multi-user supercomputers by optimizing the placement of applications as well as network traffic and storage performance, while assuring quality of service.

DPUs also support protected data computing, making it possible to use supercomputing services to process highly confidential data. The DPU architecture securely transfers data between client storage and the cloud supercomputer, executing data encryption on behalf of the user.

The NVIDIA BlueField DPU consists of the industry-leading NVIDIA ConnectX network adapter, combined with an array of Arm cores; purpose-built, high-performance-computing hardware acceleration engines with full data-center-infrastructure-on-a-chip programmability; and a PCIe subsystem. The combination of the acceleration engines and the programmable cores enables migrating the complex infrastructure management and user isolation and protection from the host to the DPU, simplifying and eliminating overheads associated with them, as well as accelerating high-performance communication and storage frameworks.

By migrating the infrastructure management, user isolation and security, and communication and storage frameworks from the untrusted host to the trusted infrastructure control plane that the DPU is a part of, truly cloud-native supercomputing is possible for the first time. CPUs or GPUs can increase their compute availability to the applications and operate in a more synchronous way for higher overall performance and scalability.

The BlueField DPU enables a zero-trust supercomputing domain at the edge of every node, providing bare-metal performance with full isolation and protection in a multi-tenancy supercomputing infrastructure.

The BlueField DPU can host untrusted multi-node tenants and ensure that supercomputing resources used by one tenant will be handed over clean to a new tenant. As part of this process, the BlueField DPU protects the integrity of the nodes, reprovisions resources as needed, clears states left behind, provides a clean boot image for a newly scheduled tenant, and more.

HPC and AI communication frameworks such as Unified Communication X (UCX), Unified Collective Communication (UCC), the Message Passing Interface (MPI), and Symmetric Hierarchical Memory (SHMEM) provide programming models for exchanging data between cooperating parallel processes. These libraries include point-to-point and collective communication semantics (with or without data) for synchronization, data collection, or reduction purposes. They are latency- and bandwidth-sensitive and play a critical role in determining application performance. Offloading the communication libraries from the host to the DPU enables progress during both communication and computation periods (that is, overlapping) and reduces the negative effect of system noise.

BlueField DPUs include dedicated hardware acceleration engines (for example, NVIDIA In-Network Computing engines) that accelerate parts of the communication frameworks, such as data-reduction-based collective communications and tag matching. The other parts of the communication frameworks can be offloaded to the DPU Arm cores, enabling asynchronous progress of the communication semantics. One example is leveraging BlueField for MPI non-blocking All-to-All collective communication. The MVAPICH team at Ohio State University (OSU) and the X-ScaleSolutions team have migrated this MPI collective operation onto the DPU Arm cores with the OSU MVAPICH MPI and have demonstrated 100 percent overlap of communication and computation, 99 percent higher than when using the host CPU for this operation.
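
The overlap pattern itself is simple to express. Below is a minimal sketch using mpi4py's non-blocking all-to-all; the offload described above happens inside the MPI library and its progress engine, not in user code, and the buffer sizes here are arbitrary:

    # Overlapping computation with a non-blocking MPI All-to-All (mpi4py).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    block = 4                                   # arbitrary elements per peer

    sendbuf = np.full(size * block, comm.Get_rank(), dtype=np.float64)
    recvbuf = np.empty(size * block, dtype=np.float64)

    req = comm.Ialltoall(sendbuf, recvbuf)      # exchange starts in the background
    local = np.square(sendbuf)                  # useful work overlapped with comms
    req.Wait()                                  # block only when results are needed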

Parallel Three-Dimensional Fast Fourier Transforms (P3DFFT) is a library used for large-scale computer simulations in a wide range of fields, including studies of turbulence, climatology, astrophysics, and materials science. P3DFFT is written in Fortran90 and is optimized for parallel performance. It uses MPI for interprocess communication and depends greatly on the performance of MPI All-to-All. Leveraging the OSU MVAPICH MPI over BlueField, the OSU and X-ScaleSolutions teams have demonstrated a 1.4x performance acceleration for P3DFFT.

[1] The performance tests were conducted by Ohio State University on the HPC-AI Advisory Council's Cluster Center, with the following system configuration: 32 servers with dual-socket, 16-core Intel Xeon E5-2697A V4 CPUs @ 2.60GHz (a total of 32 cores per node), 256GB of DDR4 2400MHz RDIMM memory, and a 1TB 7.2K RPM SATA 2.5" hard drive per node. The servers were connected with NVIDIA BlueField-2 InfiniBand HDR100 DPUs and an NVIDIA Quantum QM8700 40-port HDR 200Gb/s InfiniBand switch.

Extracting the highest possible performance from supercomputing systems while achieving efficient utilization has traditionally been incompatible with the secured, multi-tenant architecture of modern cloud computing. A cloud-native supercomputing platform provides the best of both worlds for the first time, combining peak performance and cluster efficiency with a modern zero-trust model for security isolation and multi-tenancy.

Learn more about the NVIDIA Cloud-Native Supercomputing Platform.

2021 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, ConnectX, DOCA, and Magnum IO are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. All other trademarks are property of their respective owners.

ARM, AMBA and ARM Powered are registered trademarks of ARM Limited. Cortex, MPCore and Mali are trademarks of ARM Limited. ARM is used to represent ARM Holdings plc; its operating company ARM Limited; and the regional subsidiaries ARM Inc.; ARM KK; ARM Korea Limited.; ARM Taiwan Limited; ARM France SAS; ARM Consulting (Shanghai) Co. Ltd.; ARM Germany GmbH; ARM Embedded Technologies Pvt. Ltd.; ARM Norway, AS and ARM Sweden AB.

More:
Introducing the Cloud-Native Supercomputing Architecture - HPCwire

Read More..

NSI to sponsor British Security Awards' Apprentice of the Year Award | Security News – SourceSecurity.com

Physical security and the cloud: why one can't work without the other

Human beings have a long-standing relationship with privacy and security. For centuries, we've locked our doors, held close our most precious possessions, and been wary of the threats posed by thieves. As time has gone on, our relationship with security has become more complicated, as we now have much more to be protective of. As technological advancements in security have got smarter and stronger, so have those looking to compromise it.

Cybersecurity

Cybersecurity, however, is still incredibly new to humans when we look at the long relationship that we have with security in general. As much as we understand the basics, such as keeping our passwords secure and storing data in safe places, our understanding of cybersecurity as a whole is complicated, and so is our understanding of the threats that it protects against.

However, physical security and cybersecurity are often interlinked. Business leaders may find themselves weighing up the different risks to the physical security of their business. As a result, they install CCTV in the office space, and alarms are placed on doors to help repel intruders.

Importance of cybersecurity

But what happens when the data collected by such security devices is also at risk of being stolen, and you don't have to break through the front door of an office to get it? The answer is that your physical security can lose its power to keep your business safe if your cybersecurity is weak.

As a result, cybersecurity is incredibly important in empowering your physical security. We've seen the risks posed by cybersecurity hacks in recent news. Video security company Verkada recently suffered a security breach in which malicious attackers obtained access to the contents of many of its live camera feeds, and a recent report by the UK government says two in five UK firms experienced cyberattacks in 2020.

Cloud computing: the solution

Cloud computing offers a solution. The cloud stores your information in data centres located anywhere in the world, maintained by a third party such as Claranet. As the data sits on hosted servers, it's easily accessible while not being at risk of being stolen through your physical device. Here's why cloud computing can help to ensure that your physical security, and the data it holds, aren't compromised.

Cloud anxiety

It's completely normal to wonder whether your data is safe when it's stored within a cloud infrastructure. As we are effectively outsourcing our security by storing our important files on servers we have no control over (and, in some cases, limited understanding of), it's natural to worry about how vulnerable this is to cyberattacks.

The reality is that the data you save on the cloud is likely to be a lot safer than that which you store on your device. Cyber hackers can try to trick you into clicking on links that deploy malware, or pose as a help desk trying to fix your machine. As a result, they can access your device, and if this is where you're storing important security data, then it is vulnerable.

Cloud service providers

Cloud service providers offer security that is a lot stronger than the software that is likely in place on your personal computer. Hyperscalers such as Microsoft and Amazon Web Services (AWS) are able to hire countless more security experts than any individual company, save the corporate behemoth, could afford.

These major platform owners have responsibility for thousands of customers on their cloud and are constantly working to enhance the security of their platforms. The security provided by cloud service providers such as Claranet is an extension of these capabilities.

Cloud resistance

Cloud servers are located in remote locations that workers don't have access to. They are also encrypted; encryption is the process of converting information or data into code to prevent unauthorised access.

Additionally, cloud infrastructure providers like ourselves look to regularly update your security to protect against viruses and malware, leaving you free to get on with your work without any niggling worries about your data being at risk from hackers.

Data centres

Cloud providers are also able to provide sophisticated security measures and solutions in the form of firewalls and artificial intelligence, as well as data redundancy, where the same piece of data is held within several separate data centres. This is effectively super-strong backup and recovery: if a server goes down, you can access your files from a backup server.

Empowering physical security with cybersecurity

By storing the data gathered by your physical security in the cloud, you're not just significantly reducing the risk of cyberattacks, but also protecting it from physical threats such as damage in the event of a fire or flood.

Rather than viewing your physical security and cybersecurity as two different entities, treat them as part of one system: if one is compromised, the other is also at risk. They should work in tandem to keep your whole organisation secure.
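
As a concrete illustration of encryption at rest, here is a minimal sketch using Python's cryptography package; real providers keep keys in a managed key store rather than generating them inline, so this is illustrative only:

    # Minimal encryption-at-rest sketch with the 'cryptography' package.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()               # in production, held in a KMS
    cipher = Fernet(key)

    record = b"badge-reader log: door 7 opened 09:41"
    token = cipher.encrypt(record)            # ciphertext is what sits on disk
    assert cipher.decrypt(token) == record    # readable only with the key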

Continue reading here:
NSI to sponsor British Security Awards' Apprentice of the Year Award | Security News - SourceSecurity.com

Read More..