
SA's internet access not quite the world's worst, but far from the best – The Citizen


South Africa's digital wellbeing is in ICU due to lax security, a low number of users, and slow broadband speed growth, according to the latest Digital Quality of Life Index.

The country's worst criteria rankings were for cybersecurity (95th place), number of internet users (91st) and broadband speed growth (80th).

Surfshark, an internet security firm, indexed 110 countries covering 90% of the global population in the third edition of the index, which ranks countries' digital wellbeing according to five pillars: internet quality, internet affordability, e-infrastructure, e-government, and e-security. An additional 25 countries were added to the index this time.

Internet quality measures the stability, speed and year-on-year growth of online connections, while internet affordability measures working hours required to pay for broadband and mobile internet. E-infrastructure measures the percentage of internet users per country and its network readiness, while e-security measures the ability to tackle cybercrime and the status of data protection laws, and e-government measures the roll-out of online government services and AI readiness.
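The five pillars described above feed into a single composite score per country. As a rough illustration of how such a composite can be assembled, the sketch below averages pillar scores; the pillar names come from the article, but the values and the equal weighting are invented for illustration, and Surfshark's actual methodology may differ.

```python
# Hypothetical pillar scores on a 0-1 scale; the values and the equal
# weighting are assumptions, not Surfshark's actual data or method.
pillars = {
    "internet quality": 0.62,
    "internet affordability": 0.71,
    "e-infrastructure": 0.55,
    "e-government": 0.58,
    "e-security": 0.40,
}

# Equal-weight composite: the mean of the five pillar scores.
composite = sum(pillars.values()) / len(pillars)
print(round(composite, 3))  # 0.572
```

A country's rank on the index then follows from sorting countries by their composite scores.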


South Africa ranked 68th globally out of 110 countries, a drop of nine places compared to 2020, but placed first among the 18 African countries surveyed. The country's best criteria rankings were seventh place for mobile internet stability, 21st for mobile affordability, and 24th for broadband internet affordability.


Denmark took the top position in the index for the second year in a row, and the new overall top 10 has changed considerably, with a new entrant, South Korea, taking the second spot, ahead of Finland (3) and Israel (4). The United States jumped to the 5th position from the 22nd spot the year before, with significant improvements in internet quality and e-infrastructure.

Singapore was in sixth place, France in seventh, Switzerland eighth, Germany ninth, and Britain in 10th place.

"Digital opportunities have proved to be more important than ever during the COVID-19 crisis, stressing the importance for every country to ensure fully remote operational capacities for their economies," says Vytautas Kaziukonis, CEO of Surfshark.

"That is why, for the third year in a row, we continue the Digital Quality of Life research, which provides a robust global outlook into how countries excel digitally. The index sets the basis for meaningful discussions about how digital advancement impacts a country's prosperity and where improvements can be made."



Cybercrime is hitting communities of color at higher rates, study finds – CyberScoop

Written by Tonya Riley Sep 27, 2021 | CYBERSCOOP

Black people, Indigenous people, and people of color (BIPOC) are more likely to suffer from identity theft and financial impact from the fallout, according to survey data collected by internet security company Malwarebytes with the nonprofits Digitunity and the Cybercrime Support Network.

The survey found, for instance, that just 47% of BIPOC respondents were able to avoid a financial impact due to identity theft, compared to 59% of overall respondents. Compared to overall respondents, BIPOC on average reported roughly $200 more in financial losses.

"Forty-seven percent sounds like, okay, well, that's not so bad, it's like 50-50 whether you're losing money, right? But 47% is compared to 59% of all respondents," said David Ruiz, an online privacy advocate at Malwarebytes. "That means that everyone else has a better chance at not being financially hit, everyone else has a better chance of skirting by kind of unscathed."
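Ruiz's point can be made concrete with a little arithmetic on the two survey figures: if 47% of BIPOC respondents avoided a financial impact versus 59% overall, the implied rates of suffering a financial hit are 53% and 41%.

```python
# Share of respondents who avoided any financial impact (survey figures).
bipoc_avoided = 0.47
overall_avoided = 0.59

# Implied share who DID take a financial hit.
bipoc_hit = 1 - bipoc_avoided        # about 0.53
overall_hit = 1 - overall_avoided    # about 0.41

# Relative difference: BIPOC respondents were roughly 29% more likely
# than respondents overall to suffer a financial impact.
print(round(bipoc_hit / overall_hit, 2))  # 1.29
```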

Ruiz says the report's findings on cybercrime should be considered within the wider context of the unequal ways communities experience the internet. For instance, the Pew Research Center reports that significantly larger numbers of women and Black and Hispanic Americans have reported online harassment compared to white men.

"This survey, for me at least, really showed that the internet is not an equal experience for everyone," said Ruiz. "And people are telling us that loud and clear. There are groups who feel less private, there are groups who feel less safe."

The survey, which looks at the demographics of cybercrime, polled 5,000 people across the United States, the United Kingdom and Germany. While the three nations have very different privacy regulations, Ruiz said there was not a substantial difference when looking at the data by country.

Malwarebytes' study also reflects the interconnectedness of online and offline harms, Ruiz noted. Women were twice as likely as men to attribute credit card information fraud to a physical attack or theft. Similarly, Ruiz offered the example of how online attacks such as doxing can lead to physical attacks against a person.

Malwarebytes' numbers generally align with data collected by the U.S. government in recent years. A 2016 Federal Trade Commission study provided to Congress found that African American and Latino consumers were more likely to become fraud victims than non-Hispanic whites. The study was part of the agency's outreach initiative to help reduce fraud-related crime against minority communities. Prior to 2016, the agency had not generally collected demographic information about fraud victims. However, the survey relied on a relatively small sample size of 3,700 individuals.

Federal data has also been limited by self-selection. "Heavily Black and heavily Hispanic communities register far fewer complaints to the agency than non-minority communities compared to their level of victimization," FTC economist Devesh Raval wrote in the journal Marketing Science.

Nonprofits that work with cybercrime victims have also seen higher rates of minority victims.

While the Identity Theft Resource Center only collects the demographic data of U.S. identity crime victims that reach out for help, the organization still sees a higher percentage of victims who self-identify as African American compared to the overall U.S. population, said James Lee, chief operating officer.


Quad involvement expected to boost coast guard's role – Taipei Times

By Wu Su-wei and William Hetherington / Staff reporter, with staff writer

Taiwan could play a pivotal role in coast guard activities and cybersecurity in the Asia-Pacific region, Taiwanese academics said on Saturday.

They made the remarks following reports that Taiwan might participate in activities of the Quadrilateral Security Dialogue, a security grouping between Australia, India, Japan and the US also known as the Quad.

Leaders of the four nations issued a joint statement after a meeting in Washington on Friday saying that they were committed to promoting "the free, open, rules-based order, rooted in international law and undaunted by coercion."

"We stand for the rule of law, freedom of navigation and overflight, peaceful resolution of disputes, democratic values and territorial integrity of states," they said.

The formation of AUKUS, a trilateral security alliance between Australia, the UK and the US, as well as an earlier joint statement by Quad members, showed that the US seeks to expand the grouping, said Kuo Yu-jen, a political science professor at National Sun Yat-sen University.

Although the latest joint statement did not specifically mention Taiwan, the issue of Taiwan's possible role in an expanded "Quad Plus" grouping, and the nation's cooperation with the coast guards of the US and Japan, were discussed at the meeting in Washington, he said.

"AUKUS was formed specifically as a military alliance, so it's likely that the US intends the Quad to be something different, more of a mechanism for the four member countries to cooperate on a variety of issues," he said.

Aside from cooperation on coast guard affairs, Taiwan could work with the Quad on the detection of submarines, Internet security and logistics affairs, he said.

Lai I-chung, a consultant at the Taiwan Thinktank, said that the nation is likely to play a key role in international cooperation on technology and medicine involving the Quad members, as Taiwan excels in the two sectors.

Lai said that 5G mobile networks and other technology, supply chains, and vaccines were mentioned in Friday's Quad statement, which indicates that Taiwan could also play a role in these areas.

"As the formation of AUKUS frees up Quad nations' capacities, it can now focus its efforts on other areas that will strengthen the freedoms of other regional countries," he said. "This will attract more countries to participate in a Quad Plus."

Asked whether Taiwan could participate in military drills with Quad countries, Lai said that such maneuvers would not make use of Taiwan's strengths.

However, he said that by improving its defensive capabilities, Taiwan would be contributing to regional stability.

Under AUKUS, Australia would likely commit its troops to helping Taiwan should a war break out in the Taiwan Strait, he said, adding that Japan would likely also commit its military to the cause.

Taiwan just needs to focus on its asymmetrical warfare capabilities, strengthen its defenses and work out how to coordinate its defenses with the US, Japan and Australia, he said.




Jordan Peterson: The collapse of our values is a greater threat than climate change – The Global Herald

The Telegraph published this video item, entitled "Jordan Peterson: The collapse of our values is a greater threat than climate change"; below is their description.

When society forgets its moral values, nihilism and terror reign. The internationally best-selling author and clinical psychologist Dr Jordan Peterson joins Steven Edginton to discuss the moral crisis facing the West, how people become radicalised, and what is filling the void religion once held within society. Watch the full interview above or listen on your podcast app.



Restore Data in Cloud Computing: The Best Option for You – MarylandReporter.com

If you are planning to invest in cloud computing or are already a cloud customer, you probably know the benefits of data backup, such as efficiency, high availability, high accessibility, and elasticity. Cloud computing can also mitigate data loss caused by natural disasters such as earthquakes, floods, and fires. The threats are not limited to storms, lightning, and other external factors, either: damage caused by viruses, malware, and hacker attacks can likewise be addressed through cloud backup. Therefore, you should also consider the benefit of data security in cloud computing.

When data is lost, you can call in experienced professionals from cloud computing services to retrieve it. With cloud security measures in place, you can rest assured that your data will be protected even if disaster strikes again. Cloud-based services keep your data safe by implementing multiple layers of security and protection procedures. With these advantages, cloud computing is becoming more popular.

You can easily make use of cloud storage for your data, especially when disaster strikes. Another appeal of cloud computing is its cost-effectiveness: you can obtain a large amount of storage space for your data backup at comparatively little cost, enjoy access to various applications, and run your business without hitches.

Another advantage of the cloud is that it can provide better application security than a traditional hosting environment, so you need not worry about securing the hardware yourself. In a traditional site, you may want to install strong physical security measures such as firewalls and security cameras; with SaaS there is no on-premises hardware for you to protect, because the provider secures the underlying infrastructure.

You also do not have to worry about data reliability. Disaster recovery from the cloud can be done successfully, and you do not have to wait for a disaster to test it. As long as you have an active internet connection, you can restore data from the cloud fairly quickly.
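One practical habit when restoring from any backup, cloud or otherwise, is to verify the restored data's integrity before trusting it. The sketch below is a minimal illustration in plain Python, using local files to stand in for a cloud object store; the file names are invented and no particular cloud API is assumed. It records a SHA-256 checksum at backup time and checks it after restore.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    original = Path(tmp) / "orders.csv"
    original.write_text("id,total\n1,9.99\n")

    # At backup time: keep a checksum alongside the backup copy.
    backup = Path(tmp) / "orders.csv.bak"
    shutil.copyfile(original, backup)
    expected = sha256_of(original)

    # At restore time: copy back, then verify before trusting the data.
    restored = Path(tmp) / "orders_restored.csv"
    shutil.copyfile(backup, restored)
    assert sha256_of(restored) == expected, "restored data is corrupt"
    print("restore verified")
```

The same pattern works with any object store: upload the checksum with the backup, download both, and compare after the restore completes.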

You also gain several other benefits, such as improved user accessibility, easier collaboration, reduced IT costs, reduced vendor lock-in, and enhanced functionality. These are just some of the benefits you stand to enjoy when you adopt the community cloud. The biggest benefit is, of course, cost savings. When you use the public cloud, you pay only for what you use. You don't have to buy any hardware, manage any servers or application servers, or pay any licensing fees. All of these costs are eliminated when you use the community cloud.

With the mainframe server offering you a choice, you need to decide whether to convert your physical server into an on-premises virtual machine or into a cloud computing virtual machine. Both solutions have their pros and cons. Converting the physical server to a virtual machine offers greater flexibility but at a higher price. Converting it to a cloud computing virtual machine offers ease of use with reduced costs, greater capacity, and better performance.


This self-sustainable cloud server is powered by the energy of growing tomatoes indoor! – Yanko Design

Picture a post-apocalyptic future where human beings don't have the liberty of depending on power stations. Self-sustainable systems are the norm, and utilizing every ounce of available energy is vital for survival. A dystopian, tech-infused future where computing systems don't have any external source of abundant energy. Straight out of that sci-fi scenario is the Warm Earth server system by Ilja Schamle, a Design Academy Eindhoven graduate.

The DIY cloud server system embodies the symbiotic relationship between technology and nature. The project is all about using renewable energy extracted from tomato vines to run the cloud server, while the heat dissipated by the server is cyclically used to maintain the optimum temperature for the vegetables to grow. As concept-like as this might seem, the project was part of the Missed Your Call graduate exhibition at Milan design week.

The DIY project houses the tomato plants within the server racks, with the server mounted on the exterior of the rig. A ventilation shaft equipped with fans channels the hot air into the interior of the cabinet, essentially turning it into a greenhouse. The tomatoes power the server courtesy of plant-microbial fuel cell technology developed by researchers at Wageningen University in the Netherlands. This literally turns vegetables into batteries!

Nothing goes to waste: the plants perform photosynthesis, turning sunlight into chemical energy and storing sugars and proteins. The excess nutrition is excreted via the roots as waste, where bacteria break it down to release energy, which is then harvested as electricity. Since the servers are indoors, solar-powered grow lamps act as the source of sunlight. The electrons released by the microbes are attracted to an iron and activated-carbon grid, which functions as a conductor and is placed at the bottom of the pot. For now, the system can produce enough energy to sustain a single website, but with more research and development it could grow into a much larger system.

Warm Earth is a self-sustaining, geeky mashup that few could have imagined before. According to Schamle, the amount of content consumed now and in the future is destined to rise, and the energy required to run such systems will be colossal. The artificial ecosystem will change the perception of data centers as mere dungeons for hosting servers. They will become an important part of future homes, where they aren't kept hidden from sight!

Designer: Ilja Schamle


What IBM i Shops Want From Cloud, And How To Do It Right – IT Jungle

September 27, 2021Timothy Prickett Morgan

It is no secret to readers of The Four Hundred that we are big proponents of so-called cloud computing, which doesn't just include access to slices of servers but also storage to keep their data and networking to link them to the world and, if multiple slices share work, to link them to storage and to each other.

We never liked the term cloud, because it connotes a fuzzy kind of infrastructure when quite the opposite is true. We still don't like calling it cloud computing, but language is created by consensus, not by fiat, so sometimes we have to yield. But there was a better metaphor, and one we might want to revive if this term can shake off some of its own bad connotations.

Way back in the dawn of time in 2003, when Big Blue launched its Supercomputing On Demand service and standards for what the academics were calling grid computing were evolving to allow computing centers to interoperate and share work, the term we came up with to describe what was happening was the obvious and far more accurate utility computing. And as we pointed out at the time, almost two decades ago, it was not entirely obvious how this On Demand model being espoused by the major IT platform providers was different from the Application Service Provider (ASP) wave that started as the client/server revolution of the late 1980s and early 1990s merged with the Internet software stack of the mid-to-late 1990s and for the first time allowed for companies to use applications remotely and under a subscription model that looked like electricity service, telephone service, or cable service. This has evolved over the ensuing time into what we now know as Software as a Service, or SaaS, which is all well and good for those companies who can get by using code designed for some kind of class average across industries and sizes.

But as AS/400 and IBM i shops know perhaps more than any other base, true differentiation in the market comes from crafting applications that specifically match the needs of the business. There was never a question that IT matters, which was a tempest in a teacup when Nicholas Carr wrote "IT Doesn't Matter" for the Harvard Business Review around the same time that IBM started its On Demand effort under new chief executive officer Sam Palmisano. A few months later, after online retailer Amazon.com had noticed that when it opened up APIs on its online store so people could build rudimentary applications on top of it, Andy Jassy, now chief executive officer at Amazon, took control of what would become Amazon Web Services, today the world's largest, most complex, most complete, and arguably most expensive public cloud, which has managed to attain millions of unique customers.

It is not lost on us that many of the attributes of the original AS/400 platform and integrated stack of operating systems, databases, file systems, and programming runtimes all running on highly available, distributed computing hardware are embodied by the AWS cloud and its followers, such as Microsoft Azure and Google Cloud. In fact, in 2012, we quipped that it should be called AWS/400, and at that time, only six years after it had been launched, had about the same revenue stream and the same customer count as the original AS/400 base at its peak, which by the way it took IBM 29 years to reach after the launch of the System/3 in 1969.

Despite the success of AWS and its imitators, and the realization of something that looks like the utility model that we and others conceived of two decades ago (a kind of return, with a new twist, to the early days of the shared computing, service bureau model that IBM started off with mainframes in the 1960s), we are simultaneously perplexed that cloud has not taken off in the IBM i base and also not surprised, because the cloud, as it is currently delivered by the many excellent providers in the market, is missing a few vital things.

The first thing to remember is that cloud is a consumption model for a highly scalable platform that has utility pricing and a shared service bureau to bring the price down (well, down more than it might otherwise be, but it still ain't cheap). But cloud is not a panacea. The world's largest clouds have very sophisticated and scalable infrastructure, and it can be made to run some of the biggest distributed computing jobs on the planet. While this is intellectually interesting, it just doesn't matter to a lot of companies, which is why there are still many tens of millions of companies that are still buying their own infrastructure and installing it in their own data closets and datacenters.

Most IBM i shops have persistent databases with fairly consistent workloads. Yes, they have processing peaks during key buying seasons and they also have peaks at the end of the week, the end of the month, the end of the quarter, and the end of the year, too. But there are ways of buying utility-style capacity on a temporary basis with the Capacity Upgrade On Demand (CUoD) features of IBM's Power Systems to deal with this, or just simply overprovisioning the server from the get-go to deal with peaks. This may not be the most efficient way to use capital, but it works, and firing up cloud capacity 24/7 for the five, six, or seven years that many IBM i on Power Systems shops make use of their machine is far more expensive.
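The back-of-the-envelope math behind that claim is easy to sketch. All the prices below are invented for illustration, not quotes from IBM or any cloud provider; the point is only that an instance billed hourly and left running 24/7 for a typical machine lifetime adds up quickly against a one-time purchase.

```python
# Hypothetical figures for illustration only; not real IBM or cloud pricing.
cloud_hourly = 2.00   # assumed USD/hour for a comparably sized cloud instance
years = 6             # a typical IBM i machine lifetime per the article
hours = 24 * 365 * years

cloud_total = cloud_hourly * hours   # running 24/7, billed hourly
onprem_total = 60_000.0              # assumed purchase + maintenance, same period

print(f"cloud:   ${cloud_total:,.0f}")   # cloud:   $105,120
print(f"on-prem: ${onprem_total:,.0f}")  # on-prem: $60,000
```

The comparison flips, of course, for workloads that can actually be switched off most of the time, which is exactly the distinction the article draws.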

Moreover, IBM i shops have long since figured out how to make use of that excess capacity when it is not needed for running online transaction processing (OLTP) workloads, supporting partitions with other infrastructure workloads like file serving or Web serving or even analytics and batch processing. And at some point, we suspect that future Power Systems machines will be running machine learning training models by night and applying machine learning inference by day, embedded in the applications themselves.

The point is, while the cloud utility model is attractive from an intellectual standpoint, and being able to scale workloads up and down and to turn them off and therefore not pay for them when you are not using them is truly evolutionary, it just isn't all that valuable for IBM i shops. And as evidence, all we need to do is talk to the big clouds. IBM has 125 customers on its Power Systems Virtual Server cloud instances, and the other true cloud providers have several dozens to hundreds of their own. There are even more companies that have what are really hosted IBM i instances, which are not utility as we have defined it, where you can turn it on and turn it off at will. Call it 500 to 1,000 true cloud customers and maybe several thousand hosted customers, against an IBM i base that numbers somewhere between 120,000 and 150,000 unique customers, depending on who you ask.

This is after a decade and a half of pushing very hard by many companies, many of which are listed in the Related Stories section below. And while many of these companies have been successful, it is hard to say that cloud has taken the IBM i base by storm in the way that it has for other customers. We are beginning to think that IBM i shops need something that feels like cloud in terms of the operational expense pricing model, but is really a combination of hosting plus managed services layered on top that solves real problems.

Think about it. The public clouds are successful because developers needed a cheap place to try out new ideas and new services to make new kinds of applications, and when their companies were successful (think of Netflix running on AWS), they needed to scale like crazy as well as increase their application scope to try to make some money. The big clouds solved the infrastructure problems of millions of developers and for several thousand, and now several tens of thousands, of enterprises. While there are some companies who have gone all in with AWS and other clouds, this is a lot more rare than anyone wants to talk about. IBM is right that the hybrid cloud model, mixing on premises and cloud infrastructure, is the future for most companies.

IBM i shops are not fearful, but they are conservative. There is a lot of talk about how IBM i shops are afraid of change, afraid of loss of control, and afraid of the lack of security out there on the cloud. They aren't afraid of change: most IT managers, system administrators, and programmers in the IBM i space have seen so much change in their many decades that it would make your head spin if you were born after 1990. They are not believers in change for the sake of change, no question about that. So let's just put to bed the idea that IBM i shops are afraid of anything.

They surely are skeptical of some of the claims people make about cloud being cheaper than on premises infrastructure, and from the survey data that we have seen, they are indeed worried about security and performance on what is in essence a shared utility. They have data sovereignty issues, many of them compelled by law in financial services, insurance, healthcare, and other industries. They rightly worry about connectivity between their users and the systems running in the cloud, and because of the pricing complexity of cloud services, they worry how they can budget the costs.

There is a lot to worry about, and no one wants to go first to find out about the differences between on premises and the cloud the hard way. And even though they pay a premium for their IBM i on Power Systems iron, they can't get nickel-and-dimed to death on a cloud (or dollared, or ten-dollared, for that matter). They want to bring order to the financing of IT, but they don't want to lose control of IT. That is taking it too far, and that is why we are seeing so many datacenter repatriations after a wave of all-in cloud customer stories.

But we think the issue of resistance to the cloud among the vast majority of IBM i customers is even larger than all of this. After watching this for years, we have come to the conclusion that IBM i shops want a full, vertically integrated experience out of their infrastructure provider. This is the ideology that the AS/400 represented and that the IBM i platform continues. And we think they want to throw back all the way to what IBM originally delivered with the System/360 mainframe, when capacity on the machines was rented, often located in a service bureau because few companies could afford to buy mainframes, and Big Blue provided all kinds of training and programming services to help customers get the full use of the capacity they bought. The capacity was expensive and the help was free.

These days, the capacity is nearly free thanks to Moore's Law, and the help that IBM i shops, with an aging population and a large technical debt, desperately need is too expensive. Something has to give, and someone needs to provide a vertically integrated set of hardware, software, and services that helps customers modernize their platforms and the applications that run on them. Updating the hardware is necessary, but not sufficient. We need a utility model for application programming and modernization as much as we need a utility model for hardware capacity and technical support. And anyone who can bring these all together will probably be able to get IBM i shops excited about what will still very likely be called the cloud. But we will all know it is more than that.

Related Stories

Public Cloud Dreams Becoming A Reality for IBM i Users

Comarch's PowerCloud Gives IBM, Microsoft, And Google A Run For The Money

Thoroughly Modern: Clearing Up Some Cloud And IBM i Computing Myths

Skytap To Expand IBM i Cloud Offering

IBM i on Google Cloud Appears To Be Stuck in Alpha

Skytap Offers Deals and Discounts in IBM, Azure Clouds

IBM i Headed To Azure By Way Of Skytap

Microsoft Wants to Migrate Your IBM i Code to Azure

IBM i Clouds Proliferating At Rapid Clip

It's Getting Cloud-i In Here

Big Blue Finally Brings IBM i To Its Own Public Cloud

Public Cloud Dreaming For IBM i

A Better Way To Skin The IBM i Cloud Cat

Blue Chip Builds Out 1.5 Million CPW IBM i Cloud

Key Info Unlocks Its Cloud

Deconstructing IBM i Cloud Migration Myths

Steady Growth For The Connectria Cloud

Sirius Considers Expanding Its Power Cloud Capacity

Mobile, Modernization, And Cloud See The Money In 2013

Abacus Wants You To Run In Its Cloud, And For Your Health

IBM Buys SoftLayer To Build Out Hosting, Cloud Businesses

Corus360 Builds Power Systems Cloud In Atlanta

Infor and Abacus Launch System i Cloud

One MSP's Clear View Of The Future Of Cloud ERP

I, Cloud-i-us

IBM's Power-Based SmartClouds on the Horizon

Wanted: Cloud-i i-nfrastructure

See the article here:
What IBM i Shops Want From Cloud, And How To Do It Right - IT Jungle


Server market size to reach $145.31 billion by 2028 – Help Net Security

The global server market size is expected to reach $145.31 billion by 2028, according to ResearchAndMarkets. It is expected to expand at a CAGR of 7.8% from 2021 to 2028.
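As a sanity check on those figures, a 7.8% CAGR compounding over the seven periods from 2021 to 2028 implies a 2021 starting market of roughly $85.9 billion. A minimal sketch (the implied 2021 base is derived here, not reported by ResearchAndMarkets):

```python
# Back out the implied 2021 base from the 2028 forecast and the CAGR.
cagr = 0.078                 # 7.8% compound annual growth rate
years = 2028 - 2021          # seven compounding periods
value_2028 = 145.31          # forecast market size, billions USD

implied_2021 = value_2028 / (1 + cagr) ** years
print(f"Implied 2021 market size: ${implied_2021:.1f}B")  # ≈ $85.9B
```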

The demand for servers is anticipated to grow considerably over the forecast period owing to the growing focus on the timely update of IT infrastructure worldwide. The rising adoption of data analytics among enterprises to understand consumer trends has resulted in the growing adoption of IT networking equipment.

Furthermore, the rollout of 5G networks and technologies such as the Internet of Things (IoT), cloud computing, and virtualization is expected to fuel the demand for high-performance computing servers.

The rising preference for contactless payments and remote working amid the COVID-19 pandemic is expected to drive the need for high-speed data processing and storage capacity across various industry verticals.

Advanced technologies have paved the way for connected appliances and autonomous vehicles, which has prompted IT infrastructure companies to opt for the latest, advanced storage solutions, including flash memory and solid-state drives (SSD), for storing crucial business data.

Meanwhile, the demanding and changing configurations required by cloud service providers are driving the demand for servers. For instance, in May 2020, Facebook released its third generation Yosemite scalable server, which is equipped with Cooper Lake CPU and six memory modules. Such developments are expected to cause an increase in the average selling prices of servers, which is expected to subsequently benefit the market growth.

Several enterprises are shifting to managed data center services from colocation data centers owing to the cost advantages offered by managed data center services.

Managed data centers allow enterprises to adopt virtual servers by renting the networking equipment, connecting devices and peripherals, and cloud space. The cloud server space can be private or shared, which again allows the enterprises to reduce the total cost of ownership.

The market is witnessing increasing competition between OEMs and original design manufacturers (ODMs). OEMs manufacture servers and sell them through resellers and distributors, while ODMs design and manufacture similar servers and sell them directly to customers.

In addition, ODMs cater to demand for servers customized to the user's configuration. The increasing demand for customization is expected to drive server sales through ODMs.

The market is characterized by intense competition among established market players. Key market players are focused on product innovation and the introduction of new technologies to their server portfolios. For instance, in September 2019, Dell EMC introduced new products in its PowerEdge server portfolio.

These new servers are equipped with 2nd Gen AMD EPYC processors, which help to easily manage the platform and offer superior performance to the user. The new servers are built specifically for modern data centers for multi-cloud approaches.

View original post here:
Server market size to reach $145.31 billion by 2028 - Help Net Security


Arm Neoverse: Powering the Next-Generation of High-Performance Computing – Eetasia.com

Article By : Stephen Las Marias

Arm's Neoverse platform and ecosystem can help foster innovation and growth with successful deployment in the hyperscale and enterprise cloud data centers.

India's digital economy is in a stage of exciting growth. With over a billion mobile phones in use in the country and around 700 million internet subscribers, the opportunities for an ecosystem powered by digitalization are endless.

In fact, India is now one of the leaders in data consumption and generation worldwide. The outbreak of the COVID-19 pandemic in 2020 further accelerated the adoption of cloud computing in the country as enterprises sent employees to work from home and schools turned to online education. Add to this the demand for online services driven by video streaming and gaming as people stay at home amid lockdowns and movement-control orders, by social media platforms, and by increasing e-commerce activity.

All of these trends are fueling the growth of the country's data center infrastructure industry. According to JLL India, India's data center industry is expected to reach 1,007 MW by 2023, more than double its existing capacity of 447 MW.

"The growth of the digital economy is going to lead to the growth of India's data center industry over the next few years," said Eddie Ramirez, Sr. Director of Marketing, Infrastructure, at Arm. "The Ministry of Electronics and IT (MeitY) published a report last year saying that by 2025, there will be $4.9 billion spent on data centers within the country."

Ramirez leads the go-to-market and ecosystem team for Arm's infrastructure line of business. "For us, infrastructure is everything in the data center, including the networks, such as 5G, that power the data that goes across the world," he said. "We are the group that's looking at how to improve compute power for the infrastructure."

In a recent webinar titled "Disrupting Cloud Data Centers with Arm Neoverse," Ramirez discussed how Arm's Neoverse platform and ecosystem can help foster innovation and growth, with successful deployments in hyperscale and enterprise cloud data centers. He also highlighted the comprehensive hardware and software ecosystem that enables and optimizes customers' application development and deployment on Arm-based infrastructure.

The Neoverse Platform

In conceptualizing the Neoverse platform, Ramirez said they started with a simple question: "How do we build a platform that can get you more compute from the same power output?"

"If all these data centers are going to be built over the next five years in India, how do we scale the compute to use that space most efficiently? Every data center has a certain power footprint that it has to operate within," he said.

That was the fundamental question that Arm addressed with the Neoverse platform. Designed specifically for the infrastructure and cloud computing segments, Arm's Neoverse platform, starting with the N1 and E1 released in 2019 and followed by the N2 and V1 released this year, is the foundation for next-generation cloud-to-edge infrastructure, delivering high-performance, secure, and scalable computing solutions along with a robust hardware and software ecosystem.

Since 2018 when Arm first announced Neoverse, the company has seen a wave of adoption throughout cloud-to-edge infrastructure. The rich diversity of hardware and software solutions that have come to market enabled by Neoverse-based compute are now deployed in cloud data centers, HPC systems, 5G networks, and out to edge gateways, providing cost savings, power efficiency, and compute performance gains.

"We are now seeing cloud service providers like AWS [Amazon Web Services] and Oracle adopting Arm and offering compute instances that are both high performing and offer cost advantages," said Ramirez.

Designed by its Annapurna Labs team, AWS's own server CPU, Graviton2, delivers 64 Arm Neoverse N1 cores on 7nm manufacturing technology. With Neoverse, AWS was able to demonstrate 40% better price performance running on Graviton2-based compute instances than what it had before with legacy architectures.
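A "price performance" claim compresses neatly into a cost ratio: under the common reading that performance per dollar improved by 40%, the same workload costs about 1/1.4, roughly 71% of the old bill. A sketch; the 0.40 figure comes from the article above, but this interpretation of it is an assumption:

```python
def relative_cost(price_perf_gain: float) -> float:
    """Cost of the same work relative to baseline, given a fractional
    improvement in performance per dollar (e.g. 0.40 for '40% better')."""
    return 1 / (1 + price_perf_gain)

print(f"{relative_cost(0.40):.2f}")  # 0.71 -> roughly a 29% lower bill
```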

"That's really significant, because not only are they able to build their own processors, but they are also now more in control of their supply chain," said Ramirez. "But to actually be able to pass on these very significant performance and cost savings to their end customers really puts them in a different class of cloud providers."

AWS now has several EC2 compute instances running on Graviton2. Most recently, the company launched new extra-high memory X2gd instances which, in some cases, are providing over 80% better throughput compared to older X1 models.

"We were so excited by the performance benefits that we at Arm are now shifting more of our EDA workflows to Graviton2. And we're happy with the overall performance and TCO benefits we have achieved," said Ramirez.

Another cloud service provider embracing Arm's Neoverse platform is Oracle. Oracle is known for its database software, but it also has quite a significant presence with Oracle Cloud. The company launched its Arm-based cloud instances using two-socket servers equipped with Ampere Computing's Altra 80-core CPUs, for a total of 160 Arm Neoverse N1 cores per server. The systems include 1TB of memory and 250 Gbps networking. "This powerful server allows customers the flexibility to right-size compute and memory to support their needs," explained Ramirez.

He said Oracle was the first to announce a penny-per-core-hour price that customers can use, bringing the cost of compute down significantly for customers that are using the public cloud.
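At the quoted penny per core-hour, the compute cost of such a server is easy to estimate. A rough sketch; the 730-hour average month is an assumption, and a real cloud bill would add memory, storage, and network charges on top:

```python
cores = 160                   # two 80-core Ampere Altra sockets, per the article
price_per_core_hour = 0.01    # the "penny per core-hour" quoted above
hours_per_month = 730         # assumed average month length (8,760 h / 12)

monthly = cores * price_per_core_hour * hours_per_month
print(f"Compute-only cost for a full server: ${monthly:,.0f}/month")  # $1,168/month
```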

Enabling the Next Generation of HPC

"One of the things that differentiates Neoverse from some of the x86-type architectures out there is that we focus our designs on single-threaded performance, versus using this concept of multithreading, where different threads share the same core," said Ramirez.

This brings more predictable performance, according to him. "If you are using a public cloud on an Arm Neoverse core, you can be sure that your virtual machine is accessing the full core on its own; you are not time-sharing with other customers," Ramirez explained.

This also provides benefits from a security standpoint because you are isolated to that single core.
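The sharing claim above is observable from inside a Linux guest: sysfs lists, for each CPU, the hardware threads that share its physical core. On a two-way SMT host the list has two entries (e.g. "0,64"), while on a one-thread-per-core design like Neoverse N1 it contains only the CPU itself. A minimal sketch (Linux-only; returns None where the sysfs topology files are unavailable):

```python
from pathlib import Path

def thread_siblings(cpu: int = 0):
    """Hardware threads sharing this CPU's physical core, per Linux sysfs.

    Returns e.g. '0' (no SMT sharing) or '0,64' (two-way SMT), or None
    if the sysfs topology information is not available."""
    p = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list")
    return p.read_text().strip() if p.exists() else None

print(thread_siblings())
```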

And then it is not just about the cores, but the interconnects. "Our Neoverse CPU cores combined with our Coherent Mesh Interconnect products enable superior performance for high-core-count systems," said Ramirez.

Last but not least, it is also about generational uplift. "The other thing that we look at is how we ensure that we deliver generation-to-generation performance improvements. With our newer roadmap on the N series and V series, we are now able to achieve 40% to 50% performance uplift. That's really been kind of unheard of. And that level of performance improvement from one generation to the next is very unique to Arm," said Ramirez.

The future CPU designs that will be powered by Arm Neoverse will enable continued scaling in data center performance.

"We are already seeing traction with our new Neoverse platforms. One example: MeitY in India has decided to license the Neoverse V1 platform for its exascale HPC CPU design. It joins other exascale HPC initiatives, in Europe via SiPearl and in Asia through ETRI, which have also announced adoption of Neoverse V1," said Ramirez.

Enabling an Ecosystem

"At Arm, we are working every day to ensure that software can easily be developed and deployed on Arm platforms," said Ramirez. "We see a future where all of the world's shared digital data will, at some point in its lifetime, be processed on Arm. Executing this vision requires significant investment in software and in support for the developers who write the code."

He noted that developers are also rapidly adopting cloud-native software. "We have a significant footprint of OSS projects and independent software vendors already supporting the Arm 64-bit architecture," Ramirez said. "We were really excited to learn from Docker that there are now over 100,000 containers written for Arm processors on their site today."

He added that the other part of cloud native is deploying CI/CD (continuous integration/continuous delivery) tools to ensure that anything developers change or modify, and any features they add to their software, get tested daily.

"One of the things that we have done to help spur that is the Works on Arm program, where we are offering CPU cycles, whether virtualized or on bare-metal servers, that developers can take advantage of for free as part of their co-development process," said Ramirez. "The ecosystem has come a long way on Arm, and it also helps that we have partners, like AWS, who are contributing to the ecosystem, as well as several independent software vendors that have made the effort to port and optimize on Arm."

There are now several OEMs and ODMs offering Arm-based servers in India. "Companies like Foxconn, Wiwynn, and Gigabyte have deployed multiple SKUs of Ampere Altra-based servers," said Ramirez. "We continue to see more OEMs engaging us every day. And we are also excited to work with local vendors in India who may be interested in supporting Arm-based servers as well."

Innovations in the Pipeline

Arm is a company that focuses on relentless IP innovation, and one of the things it introduced earlier this year is the Armv9 architecture, its first major upgrade in a decade.

According to Ramirez, one of the improvements in v9 is security, enabling things like confidential compute. "This is where you can ensure that a customer's user data is effectively protected not only within the processor, but even within the virtual machine or within the container application that runs on that processor," he explained. "We are also introducing enhancements to performance. Part of v9 is our scalable vector technology. Vectors, essentially one-dimensional arrays of data, have been around since the first supercomputers. With Armv9 and the SVE2 upgrade, chip designers now have a lot more flexibility in the vector lengths that they want to deploy. This will all help with delivering higher performance for workloads like genomics, computer vision, VR, and even machine learning on CPU."

India and Beyond

Guru Ganesan, President of Arm India, sees wide adoption of Arm technology in the Indian cloud computing and telecom space in the coming years.

"Public cloud end-user spending in India is forecast to be over $4 billion in 2021. Large enterprises, medium businesses, and start-ups in India will see significant performance and cost benefits by moving to higher-performance, power-efficient Arm-based CPUs in the cloud. Additionally, as companies become more conscious of their environmental impact, it is important to consider the energy efficiency of Arm-based computing. Our engagement with the Indian government on the HPC front is progressing well, with MeitY starting to develop an HPC processor based on Arm Neoverse technology," said Ganesan.

"We have done a few supercomputing projects, most notably the Fugaku supercomputer in Japan, where we helped enable RIKEN to build the most powerful supercomputer in the world using Fujitsu's A64FX CPU," said Ramirez. "That has delivered almost 7.6 million cores of processing power, so we are very excited to see what we can do with entities like MeitY in India. Not only for the cloud space, but for the HPC space and academics, or companies using supercomputing power, we are hoping that we bring such high-performance solutions to the India market."

Arm is also working with other countries beyond India, in projects including 5G network deployments.

"India's telecom ecosystem, consisting of the network operators as well as OEMs, is actively pursuing development of modular and interoperable, best-in-class hardware and software elements to build state-of-the-art, scalable, and manageable 5G networks," said Ganesan. "Arm-based products, built on the concept of heterogeneous compute, offer a complete set of solutions all the way from the radio unit to the core, to enable deployment of high-performance networks with the lowest TCO."

"We've been working very closely with different countries in Southeast Asia on how to enable, for example, O-RAN initiatives. We've been a big part of O-RAN, and this will have a big impact in Southeast Asia, as they are now looking at deploying 5G networks in different countries," said Ramirez. "That has been a big initiative for us, participating in those standards groups, to drive this open architecture for 5G networks."

"With respect to Southeast Asia, opportunities abound both with AWS, as Eddie mentioned, and with Oracle, as it expands its Arm instance presence in all regions globally," said Amaresh Iyer, Senior Manager, Segment Marketing, Infrastructure BU at Arm. "And the pricing they offer on Arm instances in OCI is a golden opportunity for a lot of developers, especially in countries like India and Southeast Asia, to take advantage of when it comes to testing, porting, and recategorizing their workloads on Arm, at a very low cost."

Iyer noted that another big factor offered by Arm is its sustainable, power-efficient processors. "In countries where power is a big issue, having power-efficient data center infrastructure is very important. That's very attractive to markets like India and countries in Southeast Asia," he explained. "We see a lot of developments happening in 5G and the Internet of Things; all of those market segments we also play in as part of the Arm IP infrastructure. And we have a global view, from edge to cloud, and Arm has an IP offering in each of those segments. These are extensive technology offerings that are secure, scalable, power efficient, and high performance, and suitable for many different markets worldwide."


Visit link:
Arm Neoverse: Powering the Next-Generation of High-Performance Computing - Eetasia.com
