
Edge Computing Is A Red-Hot Tech Trend, Here’s How To Invest in It – TheStreet

You've heard of artificial intelligence, cloud computing, machine learning, 5G, the Internet of Things and a host of other tech phrases that are legitimately disrupting how businesses and societies function.

I have what may be a new one for you, and it helps all those aforementioned technologies work: edge computing.

Almost everything we use on a daily basis has a computer in it, and as the Internet of Things permeates every facet of our lives, that will only accelerate. Not only will those computers multiply, they will collect exponentially rising amounts of data. The Covid-19 shutdowns and work-from-home shift have only sped this up.

Cars, factories, phones, wearable devices and all other gadgets in the "smart" ecosystem receive data inputs and then must analyze all that information, and it's no small task to perform this seamlessly in real time. Typically this data is relayed back to the cloud, the computationally expensive work is performed, and the results are sent back to the device.

However, as data volumes and sophistication needs grow, sending information to cloud servers for processing, analysis, and storage and then sending results back when a command is made is untenable. Latency issues constrain functionality, and all those tech concepts mentioned earlier simply won't be able to perform to their full capacity without the use of edge computing.

Edge computing is, in essence, the ability of smart devices to perform these functions locally, either on the device itself or on a nearby edge server. "Edge" refers to the edge of the network, as close as possible to the physical device being used, with fewer processes running in the cloud. By conducting operations on the edge, systems and networks can perform more reliably, swiftly and efficiently without compromising functionality.

The technology is proliferating rapidly, and while the estimates of its growth and market size vary widely, all are bullish. The edge market was estimated at between $1.7 billion and $7.9 billion in 2017, and is projected to grow to between $16.5 billion and $43.4 billion by 2025 to 2027, producing a compound annual growth rate of between 11% and 37%.
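As a rough sanity check on those projections, the implied compound annual growth rate can be computed directly from the endpoints. The pairing of dollar figures to years below is our own assumption, since the article does not say which estimate corresponds to which horizon:

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by growing from
    `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Figures from the article, in $B (pairings assumed for illustration):
low = cagr(7.9, 16.5, 8)    # $7.9B (2017) -> $16.5B (2025): ~9.7%/yr
high = cagr(1.7, 43.4, 10)  # $1.7B (2017) -> $43.4B (2027): ~38%/yr
print(f"implied CAGR range: {low:.1%} to {high:.1%}")
```

The result is roughly in line with the 11%-37% range the article cites; the small mismatch at the edges suggests the sources paired the estimates and horizons slightly differently.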

So how can you capitalize on this emerging tech? Many of the megacap names you may already own, such as Microsoft (MSFT), Amazon (AMZN) and Alphabet (GOOGL), will be integral to edge computing as they incorporate it into their IoT platforms. Hewlett Packard Enterprise (HPE) has made a $4 billion investment in its edge network, and IBM (IBM) and Cisco (CSCO) are emphasizing it as well.

However, these aren't very direct plays. If you want more explicit exposure to the edge, there are two smaller names that have had parabolic moves as of late that deserve your attention.

Cloudflare (NET), which went public last September, is a content delivery network (CDN), web security, and infrastructure company. Its presence in the edge is found in its CDN edge servers and its Cloudflare Workers service, which allows developers to capitalize on the proximity benefits of edge computing. It has also recently partnered with private edge computing companies to continue expanding its services. The stock is up 110% YTD.

And the hottest name in all of tech lately, not just the edge, is Fastly (FSLY). It recently supplanted Zoom as the best-performing tech company of the coronavirus pandemic, with a staggering 278% YTD return, on the heels of a torrid 60% move over just the last seven trading days. Fastly's edge cloud platform provides CDNs, load balancing, internet security, and streaming services. It has grown revenues 39% and 38% year over year in each of the last two quarters, and is targeting 42% for the year.

Shopify (SHOP) is one of its largest clients, and part of Fastly's huge run has been growth expectations around the recent Shopify/Facebook (FB) and Shopify/Walmart (WMT) partnerships. The added online activity this drives toward Shopify will increase demand for Fastly's services.

Now that edge computing is part of your vocabulary, be sure to keep an eye on this space.

Microsoft, Amazon and Alphabet are holdings in Jim Cramer's Action Alerts PLUS Charitable Trust Portfolio. Want to be alerted before Cramer buys or sells these stocks? Learn more now.

See the article here:
Edge Computing Is A Red-Hot Tech Trend, Here's How To Invest in It - TheStreet

Read More..

Docker servers infected with DDoS malware in extremely rare attacks – ZDNet

Up until recently, misconfigured Docker servers left exposed online were primarily targeted with cryptocurrency-mining malware, which has helped criminal groups generate huge profits by hijacking someone else's cloud resources.

However, in a report published this week, security researchers from Trend Micro have discovered what appears to be the first organized and persistent series of attacks against Docker servers that infect misconfigured clusters with DDoS malware.

According to Trend Micro, the two botnets are running versions of the XORDDoS and the Kaiji malware strains. Both malware operations have a long and well-documented history, especially XORDDoS, which has been seen in the wild for many years.

However, the two DDoS botnets had usually targeted routers and smart devices, and never complex cloud setups, such as Docker clusters.

"XORDDoS and Kaiji have been known to leverage telnet and SSH for spreading before, so I see Docker as a new vector which increases the potential of the botnet, a green field full of fresh fruit to pick with no immediate competitors," Pascal Geenens, cybersecurity evangelist at Radware, told ZDNet via email earlier this week.

"Docker containers will typically provide more resources compared to IoT devices, but they typically run in a more secured environment, and it might be hard to impossible for the container to perform DDoS attacks," Geenens added.

"The unique perspective of IoT devices such as routers and IP cameras is that they have unrestricted access to the internet, but typically with less bandwidth and less horsepower compared to containers in a compromised environment," the Radware researcher told ZDNet.

"Containers, on the other hand, typically have access to way more resources in terms of memory, CPU, and network, but the network resources might be limited to only one or a few protocols, resulting in a smaller arsenal of DDoS attack vectors supported by those 'super' bots."

However, these limitations don't usually impact crypto-mining botnets, which only need an open HTTPS channel to the outside world, Geenens said.

But despite the limits on how a DDoS gang could abuse hacked Docker clusters, Geenens says this won't stop hackers from attacking this "green field full of fresh fruit to pick": there are very few vulnerable IoT devices left uninfected, which is what pushed hackers toward Docker servers in the first place.

And on a side note, Geenens also told ZDNet that he suspects DDoS operators are already quite familiar with Docker systems.

While this is the first time they're hacking Docker clusters, Geenens believes hackers often use Docker to manage their own attack infrastructure.

"I have no immediate proof, but I'm pretty sure that in the same way as legitimate applications benefit from [Docker's] automation and agility (DevOps), so will illegal applications."

The most common source of Docker hacks is the management interface (API) being left exposed online without authentication or being protected by a firewall. For readers looking to secure their servers, that would be a good first thing to check.
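As a first-pass check along those lines, you can test whether a host even accepts connections on the Docker Remote API's conventional unencrypted port (2375). This is only a reachability probe, a sketch of our own, not a method from the article; a successful connection shows the port is open, not that the API is unauthenticated:

```python
import socket

def docker_api_reachable(host: str, port: int = 2375, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the given host/port succeeds.

    Port 2375 is the conventional unencrypted Docker Remote API port;
    if this returns True for an internet-facing host, investigate
    whether the daemon is exposed without TLS or authentication.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the port is reachable, the proper fixes are to bind the daemon to a local socket only, require TLS client certificates, or put a firewall in front of it.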

In its report, Trend Micro also recommends that server administrators secure their Docker deployments by following a series of basic steps, detailed here.


Nebulon emerges with software-defined storage, but from the cloud – ComputerWeekly.com

Cloud-defined storage. It's a new departure, in which a substantial chunk of storage controller management operations is offloaded to the cloud. But not input/output (I/O), which is handled by local PCIe-based cards that connect to form pools of storage.

The proposed benefit is that such an arrangement allows for improved management of storage at scale while cutting IT budget spend on costly controller-based hardware in the datacentre by 50%, it is claimed.

That's what's on offer from Nebulon, a Silicon Valley startup that plans general availability for its products from September.

The company proposes two things, in essence. First, capacity in partner-supplied servers with relatively cheap flash drives. Each server is equipped with a Nebulon PCIe card that offloads storage processing and which connects to other Nebulon cards in the datacentre.

Second, and here is where it gets more interesting, functionality around monitoring and provisioning that would normally form part of the controller's job is offloaded to the cloud.

The result is pools of block storage that are built from relatively cheap components and managed via a cloud interface.

Nebulon is something like hyper-converged (but with no storage overhead subtracted from local hardware) combined with software-defined storage (that runs in the cloud, although there is a local hardware element in the PCIe card).

The hardware component is a full-height, full-length PCIe card that fits in the server's GPU slot, because that's where it will get the power and cooling it needs. These cards are called SPUs (storage processing units) in Nebulon-speak. There is no storage on the card, and the card appears to hosts as a SAS HBA storage networking card.

Unlike existing hyper-converged products, Nebulon can present volumes to virtualised or bare-metal environments. As mentioned above, anything that runs locally runs on the SPU, so there's no impact on the server CPU.

Up to 32 SPUs can connect via 10/25Gbps Ethernet to form a pool of storage (an Npod) that can be carved up into provisioned volumes, via the Nebulon ON cloud control plane.

Nebulon ON is where the topology is defined, storage provisioned, telemetry collected and management functions such as updates carried out. Should cloud connectivity be lost, storage continues to work as configured, with the local SPUs acting as controller cache to which configuration settings will have already been pushed.

Storage can be provisioned with application-aware templates that have suggested preset parameters for things such as number and size of volumes, levels of redundancy and protection and snapshot scheduling.

Replication is not set to feature until after general availability.

For now, Nebulon ON runs in the Amazon Web Services (AWS) and Google Cloud Platform (GCP) clouds.

"The cloud control plane in Nebulon brings fleet management and the simplicity that comes with that," said Martin Cooper, solution architecture director at Nebulon. "It brings distributed management at scale, and means you can run applications on the server that you intended to when you bought it."

Nebulon is intended to be run on a per-site basis to start with, although stretched clusters will be offered in forthcoming product revisions.


Ampere’s New 128-Core Altra CPU Targets Intel, AMD In The Cloud – CRN: Technology news for channel partners and solution providers

Ampere, a semiconductor startup founded and led by former Intel executive Renee James, has revealed a new 128-core, Arm-based server processor designed to take on Intel and AMD in the cloud.

The new processor, Altra Max, was unveiled Tuesday, three months after the Santa Clara, Calif.-based chipmaker launched its first product, the 80-core Altra. Like Apple, which revealed plans Monday to use Arm-based processors for Mac computers later this year, Ampere pays license fees to use silicon IP developed by British chip designer Arm to build its own specialized processors.

[Related: Top 500 Supercomputers: New No. 1 Uses Arm-Based Fujitsu CPUs]

Ampere, which is not to be confused with Nvidia's new Ampere GPU architecture, said the new 128-core Altra Max is best suited for scale-out and elastic cloud architectures and provides the "industry's highest socket-level performance and I/O scalability." The processor is expected to start sampling with customers in the fourth quarter.

At 128 cores, Altra Max has double the number of cores of AMD's 64-core EPYC Rome processors but the same number of threads, 128, because Altra Max doesn't support hyperthreading. But even without hyperthreading, the company said it can provide "ideal performance scaling."

Jeff Wittich, who had a 15-year career at Intel and is now senior vice president of products at Ampere, told CRN that the chipmaker was able to design processors that provide better performance scaling than Intel and AMD as the number of cores goes up. Beyond providing predictable high performance and scalability, Ampere also promises optimal power efficiency, he added.

"With x86 today, if you look at EPYC, as you fill up all of the physical cores and start to bring in all of the sibling threads, you don't get that much incremental performance from each of the sibling threads," he said. "And so whether that means that every [core's] performance is going down, or it means the incremental [cores] aren't really getting what they expect, this is not an ideal case for the cloud."

An AMD spokesperson said the chipmaker stands by its "record with cloud providers using EPYC right now across a variety of instance types for a variety of workloads."

"We're in multiple instances from Azure, AWS, Oracle, Google, IBM, Tencent and others doing everything from HPC in the cloud, to memory-bound computing, to general purpose, to burstable," the spokesperson added.

Intel also defended its processors, which are widely used by cloud service providers.

"Customers around the world have developed solutions and services optimized for Intel Xeon Scalable processors because of their proven performance on a wide range of workloads, steadfast reliability, and the broad ecosystem compatible with the Intel platform," an Intel spokesperson said.

Wittich said Altra Max is compatible with Altra's dual-socket server platforms and will share other features, like eight channels of DDR4-3200 memory and 192 lanes of PCIe 4.0 connectivity.

"If you're more in the space of, 'I'm throughput-limited and I want to take advantage of all of that I/O and memory bandwidth and I'm not compute-bottlenecked,' great, Ampere Altra is awesome for you," he said. "If you can scale out to a ton of cores with 128, Ampere Altra Max is going to give you the highest socket performance and the highest overall performance for those applications."

Ampere didn't have any performance benchmarks for Altra Max, but Wittich said the original Altra processor has been shown to provide more than two times the performance of Intel's highest-end Xeon Scalable processor with 28 cores from the Cascade Lake Refresh lineup. In comparison to AMD's highest-end EPYC, Altra provides performance that is a couple of percentage points higher.

"These are 80 cloud-class, data center-class cores, delivering leading performance, more cores than anyone else, more performance at the SOC level than anyone else, as much memory bandwidth as anyone else and more than Intel and more I/O attached than anybody else," he said.

While Altra's flagship processor has 80 cores, the product line actually consists of 11 SKUs, with four 80-core processors ranging in thermal design power (TDP) from 150 to 250 watts and in frequency from 2.6GHz to 3.3GHz. The midrange processor has 72 cores, a 3GHz frequency and a 195-watt TDP. Four additional processors have 64 cores, ranging in TDP from 95 watts to 220 watts and in frequency from 2.4GHz to 3.3GHz. The second-to-last Altra has 48 cores, a 2.2GHz frequency and an 85-watt TDP, while the final SKU has 32 cores, a 1.7GHz frequency and a 45-watt to 58-watt TDP.

Beyond Altra Max, which, like the original Altra, is based on Taiwanese chip foundry TSMC's 7-nanometer process technology, Wittich said the company has already taped out a 5nm processor that is set for volume production in 2022.

"So pretty fast cadence. We'll keep expanding core count, we'll keep expanding performance capabilities," he said. "I think it's a pretty exciting picture."

One of Ampere's early partners in the channel is Phoenics Electronics, a semiconductor and board distributor owned by Avnet, which plans to offer Ampere processors and servers in the coming months.

"Adding the Ampere Altra processor and associated servers to our portfolio will enable Phoenics to provide our customers with key solutions for cloud, storage, edge and other server applications," Peter Rooks, president of Phoenics Electronics, said in a statement.

Other early ecosystem supporters include security and performance services vendor Cloudflare, Android development tool provider Genymobile, GPU maker Nvidia, Equinix-owned bare metal instance provider Packet and cloud computing provider Scaleway.

"We are excited to extend our partnership with Ampere by providing early access to their new Ampere Altra processors," Zac Smith, managing director at Packet, said in a statement. "Our shared passion for pushing the boundaries of performance for cloud-native applications is complemented by an equally deep commitment to engaging with the software ecosystem early and often. With this early access program our users, and the community at large, can experience what's next with silicon, today."


Cloud IT Infrastructure Spending Continued to Grow in Q1 2020 While Spending on Non-Cloud Environments Saw Double-Digit Declines, According to IDC -…

FRAMINGHAM, Mass.--(BUSINESS WIRE)--According to the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of IT infrastructure products (server, enterprise storage, and Ethernet switch) for cloud environments, including public and private cloud, increased 2.2% in the first quarter of 2020 (1Q20) while investments in traditional, non-cloud, infrastructure plunged 16.3% year over year.

The broadening impact of the COVID-19 pandemic was the major factor driving infrastructure spending in the first quarter. Widespread lockdowns across the world and the staged reopening of economies triggered increased demand for cloud-based consumer and business services, driving additional demand for the server, storage, and networking infrastructure utilized by cloud service provider datacenters. As a result, public cloud was the only deployment segment to escape year-over-year declines in 1Q20, reaching $10.1 billion in IT infrastructure spend, up 6.4% year over year. Spending on private cloud infrastructure declined 6.3% year over year in 1Q20 to $4.4 billion.

IDC expects that the pace set in the first quarter will continue through the rest of the year as cloud adoption gets an additional boost from demand for more efficient and resilient infrastructure deployment. For the full year, investments in cloud IT infrastructure will surpass spending on non-cloud infrastructure, reaching $69.5 billion, or 54.2% of overall IT infrastructure spend. Spending on private cloud infrastructure is expected to recover during the year and compensate for the first-quarter declines, leading to 1.1% growth for the full year. Spending on public cloud infrastructure will grow 5.7% to reach $47.7 billion, representing 68.6% of total cloud infrastructure spend.

Disparity in 2020 infrastructure spending dynamics for cloud and non-cloud environments will ripple through all three IT infrastructure domains: Ethernet switches, compute, and storage platforms. Within cloud deployment environments, compute platforms will remain the largest category of spending on cloud IT infrastructure at $36.2 billion, while storage platforms will be the fastest-growing segment, with spending increasing 8.1% to $24.9 billion. The Ethernet switch segment will grow 3.7% year over year.

At the regional level, year-over-year changes in vendor revenues in the cloud IT Infrastructure segment varied significantly during 1Q20, ranging from 21% growth in China to a decline of 12.1% in Western Europe.

Top Companies, Worldwide Cloud IT Infrastructure Vendor Revenue, Market Share, and Year-Over-Year Growth, Q1 2020 (Revenues are in Millions)

Company | 1Q20 Revenue (US$M) | 1Q20 Market Share | 1Q19 Revenue (US$M) | 1Q19 Market Share | 1Q20/1Q19 Revenue Growth
1. Dell Technologies | $2,535 | 17.4% | $2,509 | 17.6% | 1.0%
2. HPE/New H3C Group** | $1,495 | 10.3% | $1,695 | 11.9% | -11.8%
3T. Inspur/Inspur Power Systems* *** | $868 | 6.0% | $636 | 4.5% | 36.4%
3T. Cisco* | $847 | 5.8% | $1,038 | 7.3% | -18.4%
5. Lenovo | $674 | 4.6% | $670 | 4.7% | 0.5%
ODM Direct | $4,726 | 32.5% | $4,422 | 31.1% | 6.9%
Others | $3,390 | 23.3% | $3,258 | 22.9% | 4.1%
Total | $14,535 | 100.0% | $14,228 | 100.0% | 2.2%

IDC's Quarterly Cloud IT Infrastructure Tracker, Q1 2020

Notes:

* IDC declares a statistical tie in the worldwide cloud IT infrastructure market when there is a difference of one percent or less in the vendor revenue shares among two or more vendors.

** Due to the existing joint venture between HPE and the New H3C Group, IDC reports external market share on a global level for HPE as "HPE/New H3C Group" starting from Q2 2016 and going forward.

*** Due to the existing joint venture between IBM and Inspur, IDC reports external market share on a global level for Inspur and Inspur Power Systems as "Inspur/Inspur Power Systems" starting from 3Q 2018.

In addition to the table above, a graphic illustrating the worldwide market share of the top 5 cloud IT infrastructure companies in 1Q20 and 1Q19 is available by viewing this press release on IDC.com.

Long term, IDC expects spending on cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 9.6%, reaching $105.6 billion in 2024 and accounting for 62.8% of total IT infrastructure spend. Public cloud datacenters will account for 67.4% of this amount, growing at a 9.5% CAGR. Spending on private cloud infrastructure will grow at a CAGR of 9.8%. Spending on non-cloud IT infrastructure will rebound somewhat in 2020 but will continue declining with a five-year CAGR of -1.6%.

A graphic illustrating IDC's worldwide cloud IT infrastructure market forecast by deployment type (public cloud, private cloud, and traditional IT) is available by viewing this press release on IDC.com.

IDC's Worldwide Quarterly Cloud IT Infrastructure Tracker is designed to provide clients with a better understanding of what portion of the server, disk storage systems, and networking hardware markets are being deployed in cloud environments. This tracker breaks out each vendor's revenue by hardware technology market into public and private cloud environments for historical data and provides a five-year forecast by technology market. This Tracker is part of the Worldwide Quarterly Enterprise Infrastructure Tracker, which provides a holistic total addressable market view of the key enabling infrastructure technologies for the datacenter (servers, external enterprise storage systems, purpose-built appliances: HCI and PBBA, and datacenter switches).

Taxonomy Notes

IDC defines cloud services more formally through a checklist of key attributes that an offering must manifest to end users of the service. Public cloud services are shared among unrelated enterprises and consumers; open to a largely unrestricted universe of potential users; and designed for a market, not a single enterprise. The public cloud market includes a variety of services designed to extend or, in some cases, replace IT infrastructure deployed in corporate datacenters. It also includes content services delivered by a group of suppliers IDC calls Value Added Content Providers (VACP). Private cloud services are shared within a single enterprise or an extended enterprise, with restrictions on access and the level of resource dedication defined and controlled by the enterprise (beyond the control available in public cloud offerings); can be onsite or offsite; and can be managed by a third party or in-house staff. In private cloud that is managed by in-house staff, "vendors (cloud service providers)" are equivalent to the IT departments/shared service departments within enterprises/groups. In this utilization model, where standardized services are jointly used within the enterprise/group, business departments, offices, and employees are the "service users."

IDC defines Compute Platforms as compute-intensive servers. Storage Platforms includes storage-intensive servers as well as external storage and storage expansion (JBOD) systems. Storage-intensive servers are defined based on high storage media density; servers with low storage density are defined as compute-intensive systems. Storage Platforms does not include internal storage media from compute-intensive servers. There is no overlap in revenue between Compute Platforms and Storage Platforms, in contrast with IDC's Server Tracker and Enterprise Storage Systems Tracker, which include overlaps in portions of revenue associated with server-based storage.

For more information about IDC's Quarterly Cloud IT Infrastructure Tracker, please contact Lidice Fernandez at lfernandez@idc.com.

About IDC Trackers

IDC Tracker products provide accurate and timely market size, vendor share, and forecasts for hundreds of technology markets from more than 100 countries around the globe. Using proprietary tools and research processes, IDC's Trackers are updated on a semiannual, quarterly, and monthly basis. Tracker results are delivered to clients in user-friendly Excel deliverables and online query tools.

Click here to learn about IDC's full suite of data products and how you can leverage them to grow your business.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,100 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly-owned subsidiary of International Data Group (IDG), the world's leading tech media, data and marketing services company. To learn more about IDC, please visit http://www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn. Subscribe to the IDC Blog for industry news and insights: http://bit.ly/IDCBlog_Subscribe.

All product and company names may be trademarks or registered trademarks of their respective holders.


Empowering Edge Cloud in the 5G & IoT Hyper-Connected Era – insideHPC

Sponsored Post

Introduction

It is well documented that the amount of data being produced on a daily, monthly and yearly basis is growing at astronomical rates. IDC (Reference 1) has estimated that 175 zettabytes of data will be created each year by 2025, and that volume will continue to grow. The data will come in both structured and unstructured forms, and there will be major logistical challenges in moving it from the devices that create it to where it is acted upon and decisions are made.

Promoting Edge With Private Cloud

Edge computing refers to the computing paradigm that moves computation and storage closer to where data is generated or needed. In this model, the bandwidth required from the edge of the infrastructure to longer-term storage or higher-powered analytics is reduced, and response times improve (lower latencies) in certain instances. Appropriately sized computing and storage power, optimized and balanced, is needed at or near the edge so as not to waste space and energy.

In a highly distributed and optimized data environment, the data that is generated will be aggregated and then acted upon closer to where decisions based on the data need to be made. While a certain percentage of the data can be passed to the cloud, latency and security sensitive data needs to be processed locally. This is also the case of large amounts of information that can be discarded after initial ingest and analytics. For example, with the Covid-19 pandemic, we have seen how critical the management of hospital resources is to quickly react to unexpected threats. Being able to provide a real-time response to fast changing situations can save lives. (Reference 2)

New Technologies 5G, AI & IoT

In addition to the massive amounts of data already being generated, new technologies in the early stages of deployment will surely increase data volumes further as new applications come online. This includes the new 5G communication standard, which increases network bandwidth with lower latencies. The Internet of Things (IoT) finds a perfect growth partner in 5G, which provides support for the increasing number of connected devices.

However, it is not as simple as stating that some data that is generated will remain at the edge and some data will be transmitted to more centralized and larger data centers (both cloud and on-premises). For example, the increasing use of Artificial Intelligence (AI) in applications such as Factory 4.0 or Smart City will need to process gigabytes of data locally but will also need to communicate bi-directionally with both data centers and other edge devices to enable intelligent solutions such as machine vision, autonomous driving or advanced security. Optimized edge hardware will be critical to the smooth integration of edge technologies to and from the back end data centers.

Advantech Solutions

Advantech is a leader in providing a wide range of servers that facilitate efficient computing and storage at the edge. For example, by using the SKY-7221 software-defined storage server, organizations can easily process large volumes of data with very low latencies. Data can be filtered, aggregated and analyzed close to where it is generated, reducing the network congestion caused by moving unneeded data. The SKY-7221 server also acts as a powerful computing engine for analytics through its dual Intel 2nd Gen Xeon Scalable Processors, which can also host powerful accelerators. As mentioned before, AI at the edge is a novel infrastructure concept that can speed up decision making in real time. Besides powerful compute and flexible storage, a server sitting at the edge must have powerful and fast networking capabilities. The SKY-7221 is a high-bandwidth, low-latency system that easily integrates with applications that require 5G capabilities. More information about the Advantech SKY-7221 server can be found here. (Reference 3)

In addition to superior hardware designs, Advantech offers an optimized edge-to-cloud software platform for IoT. By working with key customers, Advantech was able to develop and fine-tune a software stack that includes security features and user defined access pathways that ensure the privacy of data as the data moves from the edge to more centralized servers. The software stack integrates with Advantech WISE-STACK Private Cloud solution (Reference 4) which has been tested in IoT environments and with software solutions that require AI capabilities. The advantage when using WISE-STACK is that applications that require high availability, elastic expansion and advanced security can now have all these capabilities concurrently and integrated without having to piece together various software packages. With optimization of WISE-STACK to work with the SKY-7221 hardware, new and innovative applications can easily be developed that take advantage of cutting-edge hardware, robust storage and secure and fast networks. More information about the WISE-STACK can be found here.

Conclusion

Advantech has developed highly optimized servers that let new applications combine the power of edge computing with the back-end processing and storage of hybrid clouds. With the growth of the Industrial Internet of Things (IIoT), the integration of edge data capture and analytics with fast networking will open up a new category of applications that can adapt to changing data requirements. IIoT devices provide valuable data that needs to be analyzed, but not necessarily transmitted beyond the edge. As IIoT becomes more ubiquitous, IT suppliers that can provide integrated hardware and software will become an obvious choice of partner for offering more intelligent services. Visit http://www.advantech.com.

References

[1] https://www.forbes.com/sites/tomcoughlin/2018/11/27/175-zettabytes-by-2025/#52c8e8835459

[2] https://www.advantech.com/resources/industry-focus/smart-healthcare-command-centers-wise-paas-data-qualily-efficiency

[3] https://www.advantech.com/products/9fe67b0b-26e8-493e-9e73-2ab4eed2b38e/sky-7221/mod_55f02993-4fe6-4479-81c5-622801f85941

[4] https://www.advantech.com/srp/wise-stack-private-cloud

See the original post here:
Empowering Edge Cloud in the 5G & IoT Hyper-Connected Era - insideHPC

Read More..

This Ransomware Campaign is Being Orchestrated from the Cloud – Computer Business Review


Malware hosted on Pastebin, delivered by CloudFront

Amazon's CloudFront is being used to host Command & Control (C&C) infrastructure for a ransomware campaign that has successfully hit at least two multinational companies in the food and services sectors, according to a report by security firm Symantec.

"Both [victims were] large, multi-site organizations that were likely capable of paying a large ransom," Symantec said, adding that the attackers were using the Cobalt Strike commodity malware to deliver Sodinokibi ransomware payloads.

The CloudFront content delivery network (CDN) is described by Amazon as a way to give businesses and web application developers "an easy and cost effective way to distribute content with low latency and high data transfer speeds."

Users can register S3 buckets for static content and EC2 instances for dynamic content, then use an API call to return a CloudFront.net domain name that can be used to distribute content from origin servers via the Amazon CloudFront service. (In this case, the malicious domain was d2zblloliromfu.cloudfront.net.)
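The registration flow described above amounts to building a distribution configuration that names both origin types. The sketch below constructs an illustrative subset of such a config in plain Python; the field names follow the general shape of the CloudFront API, but the domains and values are hypothetical, and in practice the dict would be submitted via an SDK call such as boto3's `create_distribution`, whose response includes the generated *.cloudfront.net domain name:

```python
def distribution_config(s3_bucket_domain, ec2_domain, ref):
    """Minimal CloudFront-style DistributionConfig with two origins:
    an S3 bucket for static content and an EC2-hosted custom origin
    for dynamic content. Illustrative subset, not a complete config."""
    return {
        "CallerReference": ref,
        "Origins": {"Quantity": 2, "Items": [
            {"Id": "static", "DomainName": s3_bucket_domain,
             "S3OriginConfig": {"OriginAccessIdentity": ""}},
            {"Id": "dynamic", "DomainName": ec2_domain,
             "CustomOriginConfig": {"HTTPPort": 80, "HTTPSPort": 443,
                                    "OriginProtocolPolicy": "https-only"}},
        ]},
        "DefaultCacheBehavior": {"TargetOriginId": "static",
                                 "ViewerProtocolPolicy": "redirect-to-https"},
        "Comment": "demo distribution",
        "Enabled": True,
    }

# Hypothetical origins for illustration only.
cfg = distribution_config("assets.example.s3.amazonaws.com",
                          "app.example.com", "demo-001")
print(cfg["Origins"]["Quantity"])
```

The same low-friction API is what makes the service easy for legitimate developers and for the attackers described in the report.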

Like any large-scale, easily accessible online service it is no stranger to being abused by bad actors: similar campaigns have been spotted in the past.

Malware was being delivered using legitimate remote admin client tools, Symantec said, including one from NetSupport Ltd, while another attack used a copy of the AnyDesk remote access tool to deliver the payload.

The attackers also, unusually, scanned for exposed Point of Sales (PoS) systems as part of the campaign, Symantec noted. The ransom they demanded was significant.

The attackers requested that the ransom be paid in the Monero cryptocurrency, which is favored for its privacy because, unlike Bitcoin, its transactions cannot necessarily be tracked. For this reason we do not know whether any of the victims paid the ransom, which was $50,000 if paid in the first three hours, rising to $100,000 after that.

Indicators of Compromise (IoCs)/bad domains etc. can be found here.

With ransomware predicted by Cybersecurity Ventures to hit a business every 11 seconds this year, businesses should ensure that they have robust backups.

As Jasmit Sagoo from security firm Veritas puts it: "Companies have to take their data back-up and protection more seriously as a source of recovery."

"The 3-2-1 rule is the best approach to take."

"This entails each organisation having three copies of its data, two of which are on different storage media and one of which is air-gapped in an offsite location. With an offsite data backup solution, businesses have the option of simply restoring their data if they are ever locked out of it by criminals exploiting weaknesses in systems. Realistically, in today's world, there's no excuse for not being prepared."
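The 3-2-1 rule quoted above lends itself to a simple automated check. A minimal sketch, assuming a made-up backup-record format:

```python
def satisfies_3_2_1(copies):
    """Check a list of backup records against the 3-2-1 rule:
    at least 3 copies, on at least 2 different media,
    with at least 1 copy stored offsite."""
    media = {c["medium"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

backups = [
    {"medium": "local-disk", "offsite": False},  # working copy
    {"medium": "nas",        "offsite": False},  # second medium, same site
    {"medium": "tape",       "offsite": True},   # air-gapped, offsite
]
print(satisfies_3_2_1(backups))       # True
print(satisfies_3_2_1(backups[:2]))   # False: two copies, none offsite
```

Three copies on one medium, or three on-site copies, would fail the check, which is exactly the kind of gap ransomware exploits.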

See the original post:
This Ransomware Campaign is Being Orchestrated from the Cloud - Computer Business Review


Ampere’s 128-Core Processor Challenges Intel and AMD in a Cloud-Based Processor Showdown – News – All About Circuits

Ampere recently announced that its 128-core cloud-native processor will be leading the way for addressing the demanding workloads in data centers.

Because workloads in data centers are rough on servers, processor speed and adaptability are critical to prevent bottlenecks. These workloads simultaneously include analytics, high-capacity management, and application test verification. Ampere's Altra processors drive efficiency for data center infrastructure workloads through a specialized design methodology and precise EDA tools.

Earlier this year, Ampere designed the world's first cloud-native processor, a system built for cloud computing with a modern 64-bit Arm server-based architecture. Ampere's Altra processor family aims to keep giving users the freedom to accelerate the delivery of all cloud-computing applications. Ampere recently shared news of its Altra Max 7 nm, 128-core processor, launching at the end of this year, an effort to relieve data centers.

Ampere initially launched an 80-core processor called Altra earlier this year, which is said to address infrastructure workloads found in data centers. Ampere's Altra Max is the expansion of the newly released Altra family.

The Altra Max processor will be useful for applications that take advantage of scale-out and elastic cloud architectures. The highly anticipated 128-core processor will also be compatible with its predecessor's robust rack servers.

There is a big lingering question to answer: how will Ampere's new processors stand against competition such as Intel or AMD?

Intel's Xeon Gold 6238R processor provides 28 cores and 56 threads; when these cores and threads are combined, the device provides performance similar to an 84-core processor. A processor's threads act as a hardware support line: if a workload running on a core stalls due to a memory access issue, a thread can start executing on the free core with minor setbacks.

Ampere Altra avoids hyper-threading, the method Intel uses to compensate for a lower core count by splitting physical cores into virtual ones to increase performance. However, when the user needs to improve cloud workloads and meet the requirements found in data centers, there is little to no room for the errors or setbacks that can come from relying on threads more than physical cores.

Intel has its 3rd generation of processors available, built specifically to run complex artificial intelligence (AI) workloads on the same hardware as existing workloads, stepping up embedded hardware performance. Ampere's family of Altra products claims to improve cloud workloads by utilizing all 128 high-performing cores and high memory bandwidth, while power management works toward low consumption.

AMD's EPYC 7662 is a second-gen 64-core processor coupled with 128 threads, the real competition in addressing data center workloads.

AMD's EPYC processors offer a consistent set of features across the product line, allowing you to optimize the number of cores required for the workload without sacrificing features like memory channels, memory capacity, or I/O lanes.

Ampere's Chief Executive Jeff Wittich states, "If you can scale out to a ton of cores with 128, Ampere Altra Max is going to give you the highest socket performance and the highest overall performance for those applications." Regardless of the number of physical cores per socket, AMD and Ampere surpass Intel in per-core performance.

AMD holds a slight advantage in cache memory, which keeps copies of data close to the processor's cores: AMD's EPYC offers 4 MB at its level 1 cache, while Ampere's Altra offers 64 KB. There are three levels of cache memory; each level gets slower as it grows in size but backs up the performance of the levels before it. AMD's EPYC thus has more cache space to store frequently opened programs than Ampere's Altra.

Ampere's Altra addresses many data center workloads including data analytics, artificial intelligence (AI), database storage, edge computing, and web hosting. It is an apt choice for data centers since it avoids relying on threads by offering a higher number of physical cores.

However, for AI-based workloads, Intel's Xeon Platinum and 3rd-gen scalable processors provide accelerated inference performance for these deep learning workloads.

Ampere, AMD, and Intel all have processors pushing the boundaries to provide clients with dependable, responsive, high-performing parts. Each manufacturer has processors designed for high-performance computing workloads with supporting eight-channel DDR4-3200 MHz memory. But for addressing the demanding workloads of data centers, weighing memory storage, speed, and per-core performance, Ampere and AMD may be the most fruitful options.

View original post here:
Ampere's 128-Core Processor Challenges Intel and AMD in a Cloud-Based Processor Showdown - News - All About Circuits


Ad industry spots money in the cloud | Industry Trends | IBC – IBC365

It's no secret that the advertising industry is under pressure.

Global ad spend is set to fall by almost $50 billion, or 8.1%, this year as businesses in all sectors cancel or postpone media buys due to coronavirus, according to the World Advertising Research Center (WARC).

Against such a challenging background, agencies, as well as commercial broadcasters, ad-funded streamers and pay-TV operators, are urgently looking for ways to cut costs, to streamline commercials production and delivery processes, and to make spot advertising more attractive to brands.

For many, the solution to their problems could lie in the cloud.

The cloud is, of course, transforming the way we all work, with the Covid-19 pandemic leading to a growth in the use of cloud-based services as employees work from home.

However, adoption of the cloud in the advertising and commercials industry varies widely between organisations. "While some may be a few steps ahead, true end-to-end cloud workflows are not yet commonplace," says Rowan De Pomerai, head of delivery and growth at business network the DPP, which has been working for the past few months on bringing its work on the Interoperable Master Format (IMF) to bear on advertising workflows.

The advertising workflow is complex and time-consuming, and involves many parties accessing or operating on a piece of content, from the brand and their agency to the broadcaster or online platform, plus those involved in post-production, clearing, compliance, and more. Advertisers are also delivering commercials to more platforms than ever, and in many different formats. And this is often happening on a global level.

Cloud-led workflows have huge potential to simplify this collaboration and workflow.

IMF, in particular, enables a dramatic improvement in the efficiency and flexibility of content mastering, versioning, and localisation. "Yet its use in advertising is highly nascent," says De Pomerai. The DPP is working with companies like Adstream, Peach, Clearcast, Google, and ITV to explore how IMF might improve the efficiency of producing, storing, clearing, and distributing multivariate commercials.

"True end-to-end cloud workflows are not yet commonplace," Rowan De Pomerai, DPP

Increasing ROI

One firm offering IMF capabilities is Ownzones, a cloud-based video supply chain firm. Its Ownzones Connect product, says CEO Dan Goman, can deliver higher efficiency, lower costs and increased ROI for advertiser users, specifically around versioning.

"During the course of a year, large advertising companies can produce over 100 million versions of content for their customers," says Goman. "The process is highly inefficient. Each time there is a new version, they send a link in an email, the customer downloads the content, they view it, and somehow provide feedback. This process takes a long time."

With Ownzones Connect, the advertiser creates a master IMF package in Connect, their customer can see the product through an app and provide input through the app, and the input shows up on a timeline in Connect. The ad company's editor sees it, makes changes on the timeline, and then a supplemental package is produced.

The process is quick and efficient, says Goman, who explains that because IMF is a component-based media format, users can easily swap media elements in and out of a package without having to re-transcode the entire master file. "In an industry like advertising, having maximum flexibility when it comes to localising ads for several different languages and different platform requirements is of the utmost importance." Doing so in the cloud, he adds, means that users can get high processing power from anywhere in the world, scaling up or down depending on volumes and urgency.
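Goman's point about component-based swapping can be illustrated with a toy model of a package: replace one component and the rest are reused by reference. This is a deliberate simplification, not the actual IMF packaging format (real IMF uses MXF track files and composition playlists), and all file names are invented:

```python
def localise(package, language, new_audio_uri):
    """Return a supplemental version of a component-based package in which
    only the audio track is replaced; the video component is reused by
    reference, so nothing else needs to be re-transcoded."""
    version = dict(package)  # shallow copy: untouched components stay shared
    version["audio"] = {"lang": language, "uri": new_audio_uri}
    return version

master = {
    "video": {"uri": "ad_master_video.mxf"},
    "audio": {"lang": "en", "uri": "ad_audio_en.mxf"},
}
french = localise(master, "fr", "ad_audio_fr.mxf")
print(french["video"] is master["video"])  # True: video reused untouched
print(french["audio"]["lang"])             # fr
```

Producing a hundred million versions then costs one small supplemental component per version, not a hundred million full re-renders, which is the efficiency gain the article describes.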

The cloud is also powering the ability for brands to deliver personalised, addressable advertising to consumers. Addressable television advertising, in particular, is helping broadcasters to finally go head-to-head with the likes of Facebook and Google by letting them target viewers better.

This, of course, is leading to growing complexity in the advertising workflow and is an area where the cloud can make processes more efficient.

Global tech firm MediaKind's cloud-based Prisma product is one solution designed to make addressable ad delivery easier.

Prisma is pitched mainly at broadcasters and pay-TV operators, giving them the ability to offer addressable TV advertising on content offered through digital platforms such as connected TVs, tablets, phones and set-top boxes.

MediaKind director of market development Paul O'Donovan says Prisma can act as an interface between pay-TV operators and broadcasters on the one hand and advertisers on the other. "At a very simple level, it could be that they know I'm 39, in Cambridge and I'm accessing on an iPad. Based on that, they can pull down an advert for, say, a BMW. Or, if you know someone is a Manchester United fan, and their team is winning, you might want to serve them a very different advert than if their team is losing at half time."

"In an industry like advertising, having maximum flexibility when it comes to localising ads for several different languages and different platform requirements is of the utmost importance," Dan Goman, Ownzones

O'Donovan says consumers tend to watch more adverts if these are more engaging and relevant to them, and so the ability to deliver addressable ads can lead to a hugely increased revenue stream for broadcasters and operators.

"What this does is give the advertising industry the ability to really bring the digital advertising ecosystem into the TV space," says O'Donovan.

MediaKind, he says, is currently deploying Prisma with two tier-one pay-TV operators in Europe, and with two large public service broadcasters in Europe too.

"The tier one operators and large public broadcasters are all starting to wake up to this because they are starting to see where the revenue can come from," says O'Donovan. He points to IAB research published in May, titled "Covid's Impact on Ad Pricing", which shows that connected TVs have been the most resilient in terms of holding on to ad spend during Covid-19, registering a 6% decline in advertising rates (CPMs) compared to desktop (-27%), smartphone (-28%) and tablet (-29%). One of the reasons is the higher degree of trust consumers have in TV advertising.

The cloud, he adds, is the enabler for the Prisma technology. "The cloud gives us the ability for a faster time to market, and means that we don't need to wait for hardware to arrive on premise. It means we can deploy remotely."

Advertising insertion

The cloud is also powering Verizon Media's platform, which delivers linear TV channels to OTT devices and has broadcasters such as ESPN, Discovery, Fox and Viacom as clients in North America.

Advertising insertion is one of the core offerings of Verizon Media's platform. It can insert targeted ads and dynamically switch ads if users are moving through different regions. "Server-side ad insertion is the underpinning of our advertising offering," says Darren Lepke, head of video product management at Verizon Media.

Personalised ads are processed in advance and inserted server-side to help deliver a seamless TV-like experience from content to ads. When the viewer arrives at a commercial break, "the commercial just starts," says Lepke. "There's no web-style rebuffering or new player that has to load, or a spinning wheel. If you're watching a stream at 4K, the ad doesn't start at 720p and then readjust. It's a very seamless experience that people have come to expect from television and satellite services everywhere."

Lepke explains that server-side insertion is in vogue not just because it improves the user experience, but also because it is effective at defeating ad blockers. These can recognise the signals from a client device and know when to turn off the ads. "When you're stitching the ads up in the cloud, there's no opportunity for the ad blockers to actually understand what is happening on the device."
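Server-side stitching can be sketched with a toy playlist: the ad segments are merged into the stream manifest before it ever reaches the player, so the client fetches them like ordinary content segments. The playlist format below is a simplified HLS-like stand-in, the break marker and segment names are invented, and real SSAI systems also rewrite durations and discontinuity tags:

```python
def stitch_ads(playlist_lines, ad_segments, marker="#EXT-X-CUE-OUT"):
    """Splice ad segment URIs into a playlist wherever a break
    marker appears. Done server-side, per viewer, so an ad blocker
    watching the player sees nothing distinguishable from content."""
    out = []
    for line in playlist_lines:
        out.append(line)
        if line.startswith(marker):
            out.extend(ad_segments)  # personalised break filled in the cloud
    return out

content = ["#EXTM3U", "seg1.ts", "#EXT-X-CUE-OUT:30", "seg2.ts"]
stitched = stitch_ads(content, ["ad_bmw_1.ts", "ad_bmw_2.ts"])
print(stitched)
```

Because the break is already filled when the manifest is delivered, there is no separate ad request for a blocker to intercept and no second player to load, which is the seamlessness Lepke describes.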

Delivering a seamless ad experience, adds Lepke, results in much higher viewer completion rates for commercials. "So a seamless experience definitely results in higher revenue due to higher completion rates."

Many within the advertising industry think the possibility of greater efficiency and higher revenues will drive wider adoption of cloud-based technologies.

Perhaps the biggest barrier to cloud adoption, says Ownzones' Dan Goman, is the reliance on legacy workflows, which usually consist of tons of emails, no centralized repositories, and a lack of effective asset management. The process of migrating a content library and workflows to the cloud can seem daunting, he adds.

"A seamless experience definitely results in higher revenue due to higher completion rates," Darren Lepke, Verizon Media

That said, Goman notes that cutting-edge technology can help differentiate companies from the competition. "Once companies transition to working on cloud-based platforms, the opportunities for advanced workflow automation, shorter delivery timelines, more flexible collaboration, and increased monetisation are endless."

Says Goman: "Technology is, in many cases, the deciding factor for potential customers."

The coronavirus pandemic is also likely to have an impact. The advertising industry has, like all others, needed to increase remote working during the pandemic. "And that's doubtless led to a growth in the use of cloud-based services," says the DPP's De Pomerai.

However, he thinks the dramatic downturn in advertising revenue this year is having a corresponding effect on the budgets which might be spent on technical transformation in the advertising supply chain.

"So while interest is certainly there, it's fair to say we see a slowing of investment in the short term," says De Pomerai, who adds that this is likely to rebound as the world builds back from the pandemic. "As with most media companies, advertisers and agencies will have a renewed appreciation for the benefits of the kind of flexible, scalable, resilient, location-independent infrastructure that the cloud can provide."

View post:
Ad industry spots money in the cloud | Industry Trends | IBC - IBC365


The Winston-Salem Symphony Announces Newly Elected Directors – Yes! Weekly

WINSTON-SALEM, N.C. (June 25, 2020) The Winston-Salem Symphony is proud to announce newly elected Directors for the Class of 2020. Elected Directors serve three-year terms. The Class of 2020 includes four new and 15 renewing Directors. The new Directors are Jonathan Allen, Dawnielle Grace, Esq., Christopher Gyves, Esq., and Katie Hall.

Jonathan Allen is a client development manager at Inmar, where in May of 2018 he moved into the company's Client Development organization. Prior to this move, he was in the Technology organization at Inmar for 12 years. The new role allows him to combine technical skills with meeting the needs of Inmar's clients. Allen takes pride in the design, deployment, and management of technology systems, ranging from frontend hardware such as laptops and mobile devices to backend systems such as cloud servers and network routers.

Apart from his career at Inmar, Allen is an active servant at St. Paul United Methodist Church in Winston-Salem, where he serves as a musician, media and technology team lead, and a leader of high school youth. Other organizations he works with include United Way's Young Leaders United, LEADGirls NC, and Love Out Loud. In 2017, Allen was awarded one of the Winston Under 40 Leadership Awards, presented annually to 20 leaders under the age of 40 in Winston-Salem. In 2020, he was also recognized as one of the Most Influential African Americans Under 40 in the Piedmont Triad by Black Business Ink. Allen holds a Bachelor of Science in Electrical Engineering from Hampton University (2003) and a Master of Business Administration from Wake Forest University (2017). He is a member of several boards in the city of Winston-Salem, including Greater Winston-Salem, Inc. (formerly the W-S Chamber of Commerce), The Royal Curtain Drama Guild, and HandsOn Northwest NC.

Dawnielle Grace, Esq. is the founder and owner of enlign counsel+compliance. Her primary areas of practice are corporate and employment law, as well as regulatory healthcare compliance. She advises clients on FMLA, ADA, HR, and corporate governance issues, conducts workplace investigation trainings, and focuses on legal and compliance issues for churches and religious organizations.

Prior to starting enlign, Grace was an associate at Spilman Thomas & Battle, PLLC. Before joining the firm, she served as vice president, corporate secretary, general counsel, and compliance officer for Computer Credit, Inc., a North Carolina medical collections agency and hospital extended business office. Before her career in law, she served as a teacher and project manager for four years. Grace has both a Master of Business Administration from Benedictine University and a law degree from Wake Forest; she is also proficient in Japanese from her undergraduate studies at Purdue and Ochanomizu Universities.

Christopher Gyves, Esq. is a seasoned corporate and securities lawyer at Womble Bond Dickinson who helps public companies overcome their most significant legal issues. He is a partner in the Corporate & Securities Practice Group and chair of the firm's Public Company Advisors Team. In the boardroom, Gyves counsels boards of directors on leading-edge and highly confidential transactional and governance matters and special committee investigations. He is experienced in public and private mergers and acquisitions, takeover preparedness, shareholder activism, and executive compensation. Gyves helps his clients succeed by understanding their businesses and focusing the myriad legal issues through a lens of practical, strategic legal advice. He has been recognized by Chambers USA (Mergers & Acquisitions) and received a Client Choice Award from the Association of Corporate Counsel and Lexology in recognition of his commitment to outstanding client service.

In addition to his legal practice, Gyves speaks and writes on areas relating to his professional interests. He has served as a corporate governance panelist alongside current and former vice chancellors of the Delaware Court of Chancery and moderated panels on shareholder activism and mergers and acquisitions for the National Association of Corporate Directors (Carolinas Chapter). Gyves contributed two chapters to the Directors Handbook: A Field Guide to 101 Situations Commonly Encountered in the Boardroom and is co-author of several publications with noted governance scholar Charles Elson. Gyves is an adjunct law professor at the Wake Forest University School of Law, where he teaches mergers and acquisitions and corporate finance courses, and has guest-lectured on mergers and acquisitions and private equity at the Duke University School of Law and the Wake Forest University Babcock Graduate School of Management.

Katie Hall joined Vela Agency after experience in communications, client relations, and business development at IFB Solutions, Bethesda Center for the Homeless, Hege Financial Group, and M Creative. Graduating summa cum laude from Salem College, she learned how to challenge the status quo and implement strategies that promote marginalized groups of people. Over the past three years, Hall has been an integral part of alumnae efforts to help reenergize Salem College's strategic direction and raise $14M for the Step Up for Salem campaign. Her duties have included serving as a member and committee chair of the Salem College Alumnae Association, class president and fundraising committee member of the Class of 2014, and a member of the Friends of the Salem College School of Music. In April 2019, her efforts were recognized at the Salem College Alumnae Association Annual Meeting when the Class of 2014 received the Young Alumna Award for outstanding service to Salem College.

In July 2019, Hall was appointed to the Winston-Salem Local Governance Study Commission by Mayor Allen Joines to evaluate the City of Winston-Salem's local governance structure. Prior to these volunteer experiences, Hall served Authoring Action as a board member and marketing committee member. In September 2019, she graduated from Wake Forest University's Master of Business Administration (MBA) program; during her time at Wake Forest, she spent countless hours serving as an ambassador for the School of Business and co-chairing the Women of Wake group dedicated to connecting female MBA students with opportunities to network and support one another. As business development manager at Vela Agency, Hall has had the privilege of working alongside more than 30 Winston-Salem arts organizations and helping elevate BTHVN Rocks Winston-Salem to a new scale, thanks to the generous support of Mercedes-Benz Winston-Salem. Hall is a native of Midway, North Carolina.

Renewing Directors are: Betsy Annese; James M. (Jim) Apple; Pam Cash; William F. (Bill) Clingman; James (Jim) Dossinger; Steve Holland; Martin L. (Mark) Holton, III, Esq.; Francis (Frank) M. James, M.D.; Steve Koelsch; Stephen I. Kramer, M.D.; Jeffery T. Lindsay; John E. Pueschel, Esq.; Myra Denise Robinson; Deborah Debbie Wesley-Farrington, RN, BSN, CCRC, CCA; and Erna Womble, Esq.

For 2020-21, the Officers of the Board of Directors of the Winston-Salem Symphony are: Board Chair, Ann Fritchman-Merkel; Treasurer, Thomas Bornemann; and Secretary, Pam Cash.

About the Winston-Salem Symphony

Under the baton of Music Director Timothy Redmond, the Winston-Salem Symphony is one of the Southeast's most highly regarded regional orchestras. Now in its 74th season, the Symphony also hosts four youth orchestra ensembles and a multitude of educational and community engagement programs, including the P.L.A.Y. (Piedmont Learning Academy for Youth) Music program, which provides instrumental music instruction and more, primarily to under-served youth. The Symphony is supported by Season Presenting Sponsors BB&T Wealth and Bell, Davis & Pitt, P.A.; Music Director Season Sponsor Betty Myers Howell; Symphony Unbound Sponsors Chris and Mike Morykwas; as well as generous funding from the Arts Council of Winston-Salem/Forsyth County, the North Carolina Arts Council, and other dedicated sponsors. For more information, visit wssymphony.org.

The rest is here:
The Winston-Salem Symphony Announces Newly Elected Directors - Yes! Weekly
