
Connected Car Market to Expand at a Healthy CAGR of 22.70% from 2021 to 2031: TMR – Business – Inter Press Service

According to the report, the global connected car market surpassed US$ 24.07 Bn in 2020, expanding at a CAGR of 22.7% during the forecast period. Connected cars use Internet services such as WLAN (Wireless Local Area Network), 4G, LTE, and 5G to connect with local and cloud servers and with other vehicles. Connected cars also use advanced technologies to record vehicle and user behavior and to recommend particular actions in the future.

Increasing production and sales of IC engine, electric, and hybrid vehicles across the globe, driven by rising demand for cars, vehicle emission laws, government incentives, and environmental concerns, are expected to propel the growth of the connected car market. Rising disposable income is enhancing consumers' purchasing power, which further leads to a surge in vehicle sales. Moreover, cars sold in developed regions are equipped with advanced features and improved safety and security solutions, which also drives growth in the market. Furthermore, the adoption of luxury and premium cars in most countries is anticipated to boost market growth during the forecast period.

Request Sample- https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=1617

Due to the onset of the COVID-19 pandemic, production and sales of vehicles were hampered, as manufacturing units and sales showrooms were closed. Lockdowns restricted the public from traveling, which further restrained the growth of the connected car market. Supply chains and transportation channels were disrupted, decreasing the supply of raw materials for vehicle production.

Expansion of Connected Car Market

Governments are increasingly regulating and mandating the integration of advanced solutions in cars to improve consumer safety and reduce road accidents. Countries such as the U.S., Germany, the U.K., France, Italy, and Spain have adopted autonomous and advanced technologies such as artificial intelligence (AI), machine learning (ML), and IoT in vehicles, as these provide real-time data from the vehicle as well as surveillance and monitoring features for governments.

Enquiry Before Buying- https://www.transparencymarketresearch.com/sample/sample.php?flag=EB&rep_id=1617

Moreover, connected cars have embedded dashboard cameras, GPS tracking, SOS services, and telematics-based insurance, which record and store data that is prone to cyber-attacks. With rising government mandates for data security, service providers and manufacturers offer multi-layer security for vehicle data protection, which is further expected to boost the growth of the connected car market.

Based on car type, the sedan and SUV segments held a prominent share of the connected car market, in terms of revenue, in 2020. Consumers in developed countries prefer sedans and SUVs, as these cars have more safety and security features than hatchbacks. Moreover, hatchbacks are designed for high-traffic regions, whereas traffic in North America and European countries is much lighter than in developing regions, which is another factor behind the adoption of sedans and SUVs.

Request Brochure- https://www.transparencymarketresearch.com/sample/sample.php?flag=B&rep_id=1617

In terms of connectivity, the dedicated short range communication (DSRC) segment held the highest share of the connected car market, in terms of revenue, in 2020. DSRC is a wireless communication technology that helps intelligent transportation systems (ITS) connect with other vehicles and with cloud services for tracking and surveillance purposes. Moreover, wireless connectivity is expected to witness the fastest growth in the future.

Regional Analysis of Global Connected Car Market

In terms of region, the global connected car market has been segmented into North America, Europe, Asia Pacific, Middle East & Africa, and Latin America. Europe and North America held major shares of the global connected car market in terms of revenue in 2020. In these developed regions, consumers tend to favor technological features in their cars and opt for more safety and security functions. Moreover, many cities in Europe and North America are smart cities, and hence they have better connectivity on highways and elsewhere, which allows connected cars to operate with high efficiency and performance. In addition, due to rising government mandates and support for integrating technology with the automotive industry, manufacturers are focusing on developing advanced and more autonomous cars, which will further propel the growth of the connected car market.

Get COVID-19 Analysis- https://www.transparencymarketresearch.com/sample/sample.php?flag=covid19&rep_id=1617

Global Connected Car Market Players

Prominent players operating in the global connected car market include Qualcomm Technologies, Inc., Robert Bosch GmbH, Continental AG, Airbiquity Inc., CloudMade, Daimler AG, Intellias Ltd., Sierra Wireless, Tesla, LUXOFT, Ford Motor Company, Audi AG, BMW Group, TomTom International, AT&T Inc., etc.

More Trending Reports by Transparency Market Research

Retreaded Tire Market: The increasing demand for effectively managing automotive tire waste by recycling and reusing is a major concern expressed in developed and developing countries worldwide. Government agencies, along with tire makers are focusing on effective tire waste management and recycling to overcome these challenges. Retreading is a potential solution for effective recycling of worn-out tires and saves considerable energy consumed in the manufacture of new tires.

Hypercars Market: Performance-enhancing technology, such as the use of hydrogen gas rather than conventional fuels and carbon fiber as a lightweight substance, is increasingly being incorporated in hypercars. Such technological integrations are expected to drive sales opportunities in the global hypercars market. In addition, prominent hypercar manufacturers are focused on the development of completely electric hypercars with superior technology, which is projected to drive the demand for electric hypercars during the forecast period.

About Us

Transparency Market Research is a global market intelligence company, providing global business information reports and services. Our exclusive blend of quantitative forecasting and trends analysis provides forward-looking insight for thousands of decision makers. Our experienced team of analysts, researchers, and consultants, uses proprietary data sources and various tools and techniques to gather and analyze information.

Our data repository is continuously updated and revised by a team of research experts, so that it always reflects the latest trends and information. With a broad research and analysis capability, Transparency Market Research employs rigorous primary and secondary research techniques in developing distinctive data sets and research material for business reports.

Contact

Transparency Market Research,

90 State Street, Suite 700,

Albany, NY 12207

Tel: +1-518-618-1030

USA Canada Toll Free: 866-552-3453

Email: sales@transparencymarketresearch.com

Website: https://www.transparencymarketresearch.com/


Sustainability in the creative sector goes far beyond cloud computing – Prolific North

Scott Riley, Director at Cloud Nexus, argues that creative agencies can still do more when it comes to environmental responsibility - and it means more than just moving to the cloud and giving up the office.

Sustainability in business has been huge for a while now, but since COP26, it's been thrust into the spotlight and actually become a deciding factor for some as to whether a work partnership can be formed.

There's no denying that every business should be making sustainability a top priority in 2022, but it's not uncommon to get it wrong, believing that moving to the cloud and giving up the office outweighs the overall impact of everyday activities. From email marketing to digital advertising, the impact when added up can be significant.

I'm not by any means suggesting that the UK's creative industries don't have a valuable impact on society - in fact, quite the opposite.

According to The Guardian, they generate £36 billion for the economy, employ 1.5 million people and bring so much to the world, from design to theatre. And in a post-pandemic era where having a digital presence has never been more important, they play a vital role in keeping businesses relevant and profitable.

But creative agencies in particular have an opportunity and responsibility to shape the world and the health of the planet by becoming more aware of how their operations impact the planet, looking beyond cloud computing.

Sustainability conversations to date focus on fast fashion, recycling and food waste, but less on our own personal computing habits, which are having more of an impact than people recognise.

Cloud computing has become mainstream in recent years and in the creative sector, it's seen as a big move towards more sustainable personal computing - much of the narrative is around how we can use cloud computing to minimise use of physical products.

But simply putting everything on the cloud and thinking we've ticked the sustainability box is not enough. There needs to be closer attention to our overall digital carbon footprint and how this can be reduced company-wide.

There's no doubt about it, cloud computing does have environmental benefits; but like many things in life, the journey to sustainability is about understanding the true impact of your actions - the misuse of cloud computing could, in fact, push back your sustainability efforts.

While hugely beneficial, cloud computing doesn't automatically give a business a sustainability stamp.

Most businesses are taking a step in the right direction by migrating their data to the cloud, but often don't envisage where their data is really going. I don't want to be the one to tell you this, but it's not actually floating around freely and without harm.

Even though businesses have got rid of their physical hard disks, their data is still stored in a physical location, only out of sight. These are called data centres, or often server farms.

As you can imagine, these centres take an enormous amount of power to run. Luckily, many data centres and cloud providers have begun using renewable energy to source their power, as well as using smart facilities that use free-air cooling methods and energy monitor systems for the devices within the data centres.

An effective way to ensure you are practicing true sustainability is to ensure your data centre has BREEAM certification, a sustainability assessment for projects, infrastructure, and buildings. Certification means they're not using dirty energy to store your data.

While it's easy to lobby for billboards and prints to use recycled papers and non-toxic ink, the sustainability of online creativity is much harder to understand.

But it's something that the creative sector needs to address. Carbon footprints need to be measured effectively in order to make real strides towards sustainability. Research shows that the internet represents 4% of global emissions, and whilst that doesn't sound like much, it's almost double that of the aviation industry.

Take online advertising. It's not just the advert itself which is racking up the carbon usage. The graphics that the consumer sees are made up of pixels, which need a lot of data to render.

What's more, third-party programmes that track the uptake of the ad and collect the data of the consumer to create targeted profiles are often forgotten when it comes to sustainability tracking - but these kinds of outputs are what's dampening the creative sector's sustainability efforts.

Creative agencies should be conscious of choosing green hosts for their websites and of how much carbon is racked up through third-party programmes. It's worthwhile to get a true calculation of how much carbon you are producing as an individual or creative agency.

Whether it's emails, social posts, landing pages, or impressions, using a carbon footprint calculator can give you a more accurate understanding of the impact of your campaigns.

I'm sure I'm not the only one who can admit that their inbox has been filled with unwanted emails in the past. The thing is, an email equates to roughly 4 grams of CO2, with an average business user creating a carbon footprint of around 135kg just from sending, reading and storing emails. This raises the question - does that email really need to be sent?

OVO Energy research suggests that if every adult in the UK sent one less thank you email a day, it could save 16,433 tonnes of carbon a year. For perspective, that's the same as taking over 3,000 diesel cars off the road.
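
As a rough cross-check of those figures (and nothing more), the arithmetic below simply combines the numbers quoted above. The only assumptions added are that the 135kg figure is per year, which is how it is usually quoted, and that the diesel-car comparison implies an annual per-car figure.

```python
# Back-of-envelope check of the emissions figures quoted above.
# Assumptions (not stated in the article): the 135 kg figure is annual,
# and the diesel-car comparison is per car per year.

GRAMS_PER_EMAIL = 4              # ~4 g CO2 per average email
ANNUAL_EMAIL_FOOTPRINT_KG = 135  # per business user, per year (assumed annual)

emails_per_year = ANNUAL_EMAIL_FOOTPRINT_KG * 1000 / GRAMS_PER_EMAIL
print(f"Implied email volume: {emails_per_year:,.0f} emails/year "
      f"(~{emails_per_year / 365:.0f} a day sent, read or stored)")

# OVO Energy's headline saving, spread over the cars it is compared to
SAVING_TONNES = 16_433
DIESEL_CARS = 3_000
print(f"Implied emissions per diesel car: {SAVING_TONNES / DIESEL_CARS:.1f} tonnes/year")
```

The implied numbers, roughly 92 emails a day per business user and about 5.5 tonnes of CO2 per diesel car per year, are both in a plausible range, so the quoted figures hang together.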

The problem is that communication between stakeholders is key. It's imperative to have that contact in order to produce the work, but if we are deeming means of communication as unsustainable, how do we proceed?

It's all about circumstance and seeing the bigger picture. The truth is, an SMS text generates significantly less CO2 than an email, a phone call generates only slightly more than a text, while a video call produces more than any of the above.

Sustainability is about perspective, though. If you jump on a video call with a global client, with whom you generally communicate via email, you're of course going to be much more sustainable than flying to meet in person.

We have to think about what works within the context of our businesses to keep the creative sector thriving, but with a priority of sustainable practices. Awareness is key. If we know the impact of the tools we're using, we can be more conscious of how much they are utilised in creative practice.

Creative agencies have a reliance upon a vast range of tools and platforms to operate, from creating graphics and writing copy to analytics reports and CRM tools. We've already raised that there is an environmental cost to the tools needed in the creative sector, but are agencies using an unnecessary amount?

Research by the Winterberry Group shows that marketers on average use 12 different platforms regularly, with around 10% saying they often need 31 or more different tools, platforms or systems to be able to do their job. Marketing is just one industry within the creative sector - the number of tools used across the industry is unimaginable.

The key to maintaining control over the amount of tool usage and its impact on sustainability is to opt for tools and platforms that offer integrated services and packages, giving you multiple capabilities under one umbrella.

As a simple example, Office 365 offers built-in, collaborative features under one integrated experience. Being able to access apps such as Word, Excel and PowerPoint, as well as online file sharing and video conferencing, allows you to use multiple different tools from one overarching system, meaning you're using less power to run the tools you need.

Do you really need to be storing files in Google, Box, Dropbox, Sharefile and Microsoft? All those files being actively available online need servers and data centres, and that all adds to the problem.

The fact that the creative sector is looking for new ways to continue its professional practice in a more sustainable way is a step in the right direction. But a lot of improvements still need to be made.

Cloud computing is a great step towards sustainability, but you still must be conscious of how it works, and mindful it does not grant you sustainable immunity.


Get lifetime VPN protection and 10TB of cloud storage for $77 this Valentine’s Day – ZDNet


If you're worried about your data, whether it's on your computer or via your browsing habits, you should consider a couple of failsafes that will make your life more convenient. And right now, you can do both in one fell swoop with the Lifetime Backup & Security Subscription Bundle. You'll never have to worry about your privacy or the security of your data while online, losing important files, or running out of space on your computer again.

We've all gotten used to downloading big files, whether videos, music, software, high-quality photos, or something else. Eventually, they will add up to the point that you'll have to start purging some to make room on your hard drive. Not only is that a pain to deal with, but you will also have a lot at risk from malware, accidents, malfunctions, and more.

With a Degoo Premium Lifetime Backup Plan, you get 10TB of storage and automatic backups that replicate themselves every time they occur. Degoo can be installed on an unlimited number of devices, so you're always covered. So if you're a photographer who needs plenty of space for project files, Degoo has your back. It's no wonder it's rated 4.5 out of 5 stars on the App Store.

Meanwhile, with a lifetime subscription to KeepSolid VPN Unlimited, you can keep five devices protected online with military-grade encryption, a zero-logging policy, a kill switch, and more. Plus, VPN Unlimited offers more than 400 speedy servers in over 80 locations. That means you can enjoy all of your favorite content no matter where you are with no latency, buffering, or geographic restrictions. You also get the convenience of features such as Favorite Servers, Trusted Networks, Ping Tests, and much more.

Don't pass up this chance to have a lifetime of maximum VPN protection, 10TB storage, and backups. This Valentine's Day, get the Lifetime Backup & Security Subscription Bundle for only $76.49 by using coupon code VDAY2022 at checkout.


Calamu scores funding for self-healing data protection tech – Blocks and Files

Data-Protection-as-a-Service startup Calamu has just gained funding to develop double-encrypted, multi-site, multi-cloud, fragmented data protection technology that is self-healing, defeats ransomware, and meets compliance rules.

The company was started in 2019 by CEO Paul Lewis, incorporated in 2020, and took in two rounds of seed funding in 2021; $2.4m in April and an undisclosed amount from Dell Technologies Capital in June. It has just raised $16.5m in an A-round led by Insight Partners with Dell Technologies Capital putting in money again. Total funding is now more than $18.9m.

Lewis's funding announcement statement said: "With billions spent on cyber resilience, the frequency of cyber attacks continues to escalate. Instead of trying to prevent an attack, Calamu is capable of absorbing an attack without impacting the business."

How does this Calamu Protect technology work?

A file or other piece of data to be protected is first optionally compressed and then encrypted. Then it is split into redundant fragments and the fragments are re-encrypted using different keys. The fragments are then scattered across multiple geo-dispersed sites, such as AWS, Azure, GCP, and on-premises storage (NAS and drives), as a Calamu diagram indicates.

The string of sites is called a data harbour and forms a redundant virtual storage environment.

The result is that if any one site's copy of the data is accessed, all you as a malware actor or hacker get is a double-encrypted fragment of data with no key and no indication where the rest of the data fragments are located. It's a string of meaningless bytes.

If a particular cloud or on-premises site goes down or is corrupted, Calamu Protect's monitoring function will detect it, rebuild the lost data from the fragments in the other sites, then store the rebuilt encrypted fragment in another location. In other words, it is self-healing.
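
Calamu has not published the implementation details behind Calamu Protect, but the general flow described above (compress, encrypt, fragment, re-encrypt each fragment with its own key, then scatter) can be sketched in a few lines. The code below is a hypothetical illustration only, using the Python cryptography library and naive chunking into local directories; a real system of this kind would use erasure coding for the redundant fragments and cloud storage SDKs for the scattering.

```python
# Hypothetical sketch of the protect-and-scatter flow described above.
# This is NOT Calamu's code: real fragments would be erasure-coded for
# redundancy and written to cloud object stores, not local folders.
import os
import zlib
from cryptography.fernet import Fernet

def protect(data: bytes, sites, fragments: int = 3):
    """Compress, encrypt, fragment, re-encrypt, and scatter one object."""
    outer_key = Fernet.generate_key()
    blob = Fernet(outer_key).encrypt(zlib.compress(data))

    # Split the encrypted blob into equal-ish chunks (a stand-in for the
    # redundant fragmentation a real implementation would use).
    size = -(-len(blob) // fragments)  # ceiling division
    chunks = [blob[i:i + size] for i in range(0, len(blob), size)]

    manifest = {"outer_key": outer_key, "fragments": []}
    for idx, (chunk, site) in enumerate(zip(chunks, sites)):
        inner_key = Fernet.generate_key()            # one key per fragment
        os.makedirs(site, exist_ok=True)
        path = os.path.join(site, f"frag_{idx}.bin")
        with open(path, "wb") as f:
            f.write(Fernet(inner_key).encrypt(chunk))
        manifest["fragments"].append({"path": path, "key": inner_key})
    return manifest  # held by the owner; any single site sees only noise

# Example: scatter a payload across three "sites" (here, local directories)
manifest = protect(b"business-critical payload", ["siteA", "siteB", "siteC"])
```

The point of the sketch is the property described above: any single location holds only an encrypted fragment, and only the holder of the manifest can reassemble and decrypt the original.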

Because the data in its entirety is not stored in any one location and none of the storing locations can disclose the actual data they are storing, this technology can meet certain compliance regulations concerning data jurisdictions.

Lewis said: "Calamu enables data to be stored with jurisdiction-independence. This intrinsically provides compliance with regulations such as GDPR and HIPAA, and opens the door for many businesses who wish to migrate to the cloud but are fearful of non-compliance."

Calamu Protect writes and reads data to the scattered locations using parallel engines so that access time for the data is typically faster than downloading the equivalent file in its entirety from any one single public cloud location; up to 30 per cent faster.

Users access a Calamu Console as a central pane of glass to create data harbours or add new storage locations. Calamu Drive operates with Windows File Explorer or macOS Finder to present items stored in Calamu Protect as familiar files in folders.

Data is written to/read from the multiple sites using Calamu Connectors, which support Amazon S3 buckets, local servers or database snapshots. They support a Docker or Kubernetes containerised architecture.

A Calamu document by chief software architect Rob Pledger discusses its technology. As the company only emerged from stealth last year and has just had an A-round, it is now developing its first real product version.

We found a downloadable patent, US9405926B2, "Systems and methods for jurisdiction independent data storage in a multi-vendor cloud environment," assigned to Calamu's Paul Lewis. It states: "This invention relates to data protection and security in a cloud-based environment. In particular, the invention allows data to be stored in a cloud environment whereby it is inaccessible to any third party and not subject to data privacy laws of any given jurisdiction."


Thoroughly Modern: Taking Charge of Your Hardware Refresh in 2022 – IT Jungle

February 14, 2022, by Peder Lundblad

IBM i shops are well known for paying a premium for their systems because they are worth it, but they are also just as well known for hanging onto aging hardware a bit past its prime. That has its own economic costs: higher maintenance and operational costs, the risk of a failure in some key hardware component, and the risk of running an older operating system release that does not have a key security vulnerability or bug patched.

IBM does a very good job of keeping hardware technology cycles regular, and so there is really no excuse for this. Continuing to rely on aging, unsupported hardware can eat away at your IT budget in unproductive ways, which lowers the ROI of the IT organization and can stifle innovation as well as leave your systems exposed to security risks.

A hardware refresh, meaning buying a new system or upgrading to one from an older system or systems, is one of the best ways to improve functionality and performance. Newer hardware configurations both help reduce the system's footprint in your data center and secure your business-critical data better.

When the average lifespan of your system is approaching three to five years, it is time to consider a hardware refresh. As with everything else, a planned approach gives you the best results; out-of-cycle upgrades might be a band-aid fix, such as adding some incremental memory or storage to boost performance, or activating a few extra cores to add compute capacity if you happen to have a Power Systems machine that has this capability (not all of them do) and the spare capacity on hand to do so. But such measures do not add any lasting value.

An effective hardware refresh is about finding the right balance of hardware, software, and services that can scale with changing business needs. This also includes evaluating the most efficient consumption models to run your workloads, which means truly considering cloud and hybrid cloud options instead of just buying another on-premises system. With IBM getting ready to launch Power10-based entry and midrange servers this spring and the high-end Power E1080 server already out and shipping in volume, one way to think about a Power10 upgrade is to make a lot of the headache of setting up a new machine someone else's problem out there in the cloud. Then all you have to worry about is application certification and testing on the new system's software and the hardware underneath, and you shift from making a large capital outlay for the new system and amortizing it over time to just paying from the operational budget, utility style, for the machine out in the cloud. And even if you do decide to stay on-premises with a new Power10 machine, or even a new Power9 machine (as some customers will do to save money by sacrificing some of the performance and features of the Power10 machines), you still need to have a cloud strategy.

In 2022, your hardware refresh should take into account the new and continuing strategies of the business and IT, as well as a push toward moving into a cloud environment appropriate for your business: studies indicate that companies can reduce their infrastructure costs by 30 percent to 40 percent simply by moving to the cloud.

As you look to optimize your infrastructure in 2022, here are a few things you'll want to think about:

Taking a long, hard look to see if refreshing your system is really the best way forward is worth the time: does it make more sense to move some of the workload to the cloud or adopt a hybrid approach?

Some other questions you should ask as part of your assessment:

Hardware Management Consoles (HMCs) are critical for functionalities like remote system management and Capacity on Demand (CoD) handling. You might require an HMC for your environment if you have multiple LPARs and systems.

Ultimately, if you find yourself needing an additional server(s), consider switching to the Infrastructure-as-a-Service (IaaS) model from a cloud provider. With the right fit, you will be able to reduce total cost of ownership, achieve better data redundancy and security, while also improving server scalability to meet future needs. And get your IBM i people focused on doing what they need to do, which is to modernize their applications.

If you are unsure of what you need or would like to explore options, then it's worth your while to work with an expert who can help you build a plan and right-size your IBM i environment.

In order to size your capacity needs accurately, it's important to look at all aspects of your data storage needs, including people, technology and future business goals.

From a technical standpoint, there are many different approaches and even drive types to choose from (ranging from inexpensive HDD to high performance FCMs). Consider your everyday business requirements to determine which type is best for you.

Additionally, these are some of the advanced features that you can take advantage of:

Considering your next steps is an important part of closing the loop on your current hardware refresh cycle. This conversation is primarily driven by whether your team of engineers/administrators/operators has the capacity to efficiently manage your on-prem/enterprise systems today. Additionally, do they have the capacity to scale to meet future needs?

Explore engagements with experts to optimize the process, thereby freeing up your resources to work on things that add value to the business. An increasing number of companies leverage IaaS and MSPs to configure, migrate and run 24x7x365 on purpose-designed infrastructure that can be scaled up (and down) as required.

These questions (and more) are part of future proofing the business, so take the time to evaluate options and map out a long-term plan.

There are many reasons and benefits to updating your hardware and optimizing your infrastructure, namely better security, workload efficiencies, opportunities for cloud, and streamlined operations. It might be tempting to put off a refresh until it becomes absolutely necessary, but waiting may put your business at risk and cost more in the long run. By taking a bird's eye view of the project, you can ensure that your hardware is able to keep pace with your business as it grows, while also keeping costs in check and interruptions minimal.

It can be intimidating to approach or commit to a long-term plan without accounting for all the moving parts. Our experience with organizations across industries over the last couple of decades has proven time and again that there is no one-size-fits-all solution. At Abacus Solutions (part of the Fresche Solutions family), we have the right solutions to help you position your business for continued success.

Whether you are mapping out your refresh plan or dealing with hardware that needs imminent help, reach out to me to determine the right configuration for you and to explore options, pricing and deployment models. Additionally, look out for a special session I'm hosting with my colleague Patrick Schutz on March 3 on how to right-size your hardware, lower TCO, and find efficient ways to run your workloads.

Coming up the ranks on the IBM pre-owned/refurbished systems side of the business in the late 1980s, Peder Lundblad began to work on, build, upgrade and perform field upgrades and installations on early-generation AS/400s, and continued growing as a technical consultant and hands-on engineer in the field. In the early 2000s, with an opportunity to work on the managed hosting side of the AS/400 industry, he found a passion for architecting and delivering IBM i solutions for partners and end users who want to embrace the platform beyond supporting and managing it internally. Almost 30 years later, he continues to be an advocate for the IBM i platform and the IBM Storage and Power Systems it runs on, evangelizing the best platform on our little planet.

This content was sponsored by Fresche Solutions.

Thoroughly Modern: Building Organizational Resilience in the Digital Age

Thoroughly Modern: Time To Develop Your IBM i HA/DR Plan For 2022

Thoroughly Modern: Infrastructure Challenges And Easing Into The Cloud

Thoroughly Modern: Talking IBM i System Management With Abacus

Fresche Buys Abacus To Integrate From IBM i To Cloud To Code

What IBM i Shops Want From Cloud, And How To Do It Right

A Chat With Steve Woodard, The New CEO At Fresche Solutions

Thoroughly Modern: Making The Case For Code And Database Transformation

Thoroughly Modern: Making Quick Wins Part Of Your Modernization Strategy

Thoroughly Modern: Augmenting Your Programming Today, Solving Staffing Issues Tomorrow

Thoroughly Modern: Clearing Up Some Cloud And IBM i Computing Myths

Thoroughly Modern: IBM i Web Development Trends To Watch In the Second Half

Thoroughly Modern: Innovative And Realistic Approaches To IBM i Modernization

Thoroughly Modern: Running CA 2E Applications? It's Time To Modernize The UI

Thoroughly Modern: Understanding Your IBM i Web Application Needs With Application Discovery

Thoroughly Modern: What's New With PHP On IBM i?

Thoroughly Modern: A Wealth Of Funding Options Makes It Easier To Take On Modernization

Thoroughly Modern: Speed Up Application Development With Automated Testing

Thoroughly Modern: The Smart Approach to Modernization Know Before You Go!

Thoroughly Modern: Strategic Things to Consider With APIs and IBM i

Thoroughly Modern: Why You Need An IT Strategy And Roadmap

Thoroughly Modern: Top Five Reasons To Go Paperless With IBM i Forms

Thoroughly Modern: Quick Digital Transformation Wins With Web And Mobile IBM i Apps

Thoroughly Modern: Digital Modernization, But Not At Any Cost

Thoroughly Modern: Digital Transformation Is More Important Than Ever

Thoroughly Modern: Giving IBM i Developers A Helping Hand

Thoroughly Modern: Resizing Application Fields Presents Big Challenges

Thoroughly Modern: Taking The Pulse Of IBM i Developers

Thoroughly Modern: More Than Just A Pretty Face

Thoroughly Modern: Driving Your Synon Applications Forward

Thoroughly Modern: What To Pack For The Digital Transformation Journey

Talking Digital Transformation With The New And Prior CEO


Telefónica S.A.: Addressing the technological challenges of SMEs, better in the cloud – marketscreener.com

Adapting to the changing times isn't optional. Entrepreneurs are taking control of the future of their businesses and shifting towards digitisation and connectivity so as to move forward and grow. This is why moving to the cloud should be on their agendas.

We already know that the pandemic has demonstrated the ability of the new technologies to make organisations more resilient. What no one could have imagined ended up occurring. And, in the same way, society adapted to keep on moving.

Large companies with financial muscle, developed infrastructures and a more evolved digital culture adapted quickly, as they were coming from an ongoing or highly advanced technological transition.

However, SMEs had to speed up or even jump into the void, without knowing exactly what was waiting for them below.

Small and medium-sized enterprises should view the adoption of digital technologies as an opportunity to grow, become more resilient, increase the efficiency of their business processes and optimise their budgets. It's not just a matter of acquiring the latest generation of devices.

The major deployment of 5G and the changes in routines are factors that are making this commitment a real need in responding to the mobility of workers, the creation of new multimedia services and the development of new technologies such as the Internet of Things. Connectivity and the use of data are now a reality.

According to Gartner, unstructured data account for 95% of the information that is available. Unstructured information is anything not contained in a database; it may be found in videos, images, audio, texts generated via mobiles, sensors, medical devices, cameras and so on. Hence the need for the efficient storage of this huge amount of data.

Thus, the hosting of email services, the storage of files and databases and the hosting of software to manage teams and projects were the most frequent uses made by companies in the cloud, based on Eurostat data.

The cost minimisation brought by cloud services is one of the major (and best) reasons for adopting them. The cloud enables SMEs to reduce their expenditure on large servers that are usually never fully optimised, as it's estimated that they aren't used 90% of the time, while companies pay only for what they use with cloud applications and services. And, because they're flexible services, they can cope with any eventuality caused by a peak in demand.

The hardware at these data centers is also more efficient, as it uses up 60%-70% less energy than a data center installed by an SME. These organisations can thus meet their carbon emission reduction targets. Moreover, the systems and software used in the cloud are always updated, secure and fast, making them easy for companies and their employees to use.

Solutions like these enable small and medium-sized businesses to accelerate growth with a return on investment of more than 200% in just three years, according to an IDC study.


Supermicro at MWC Barcelona 2022: Digital Transformation, Metaverse, AI and 5G to Drive Server Demand in 2022 – The Fast Mode

In conjunction with MWC Barcelona 2022, which will be taking place from the 28th of February to the 3rd of March, Tara Neal, Executive Editor of The Fast Mode, spoke briefly to Michael Clegg, VP & GM, 5G, Embedded/IoT at Supermicro, on the company's plans and showcases for this year's event.

Tara: What do you think MWC 2022 will be all about?

Michael: The mobile industry is very broad-based, so we can expect multiple themes. There will likely be a lot of positioning around the metaverse in the applications space. Almost every company has an xVerse of some sort they will use as a messaging tool. Metaverse covers many technologies, including 5G, telco cloud, edge computing, IoT, security, amongst others. Notably, it includes AI/ML, AR/VR, and Digital Twins, which are technologies that will complement, enhance and gain benefits from 5G. We can expect more discussion on Private 5G, Open RAN, plus new features coming out of 3GPP Release 17 and 18, with an early lead-up to 6G. Finally, we think that there will be a greater focus on sustainability.

Tara: And what are your MWC 2022 plans?

Michael: Supermicro will demonstrate a wide range of solutions with several software vendors featuring the latest CPU processors. These include:

Tara: How has the pandemic impacted the operational and business landscape of the tech segment you are operating in?

Michael: The pandemic drove an acceleration in digital transformation and remote access, which has been positive for Supermicro. The need for more compute was commensurate with connectivity. We also built an HPC cluster for a US national lab in a short timeframe to aid in Covid research. As for everyone else, supply chain shortages have been a challenge, and software virtualization has helped to reduce the dependency on custom appliances. The main impact has been an acceleration of adoption timeframes.

Tara: What emerging trends/technologies have you observed in your tech segment in recent months?

Michael: Open RAN and private 5G continue to influence the industry, and deployment of Massive-MIMO is underway. As Standalone 5G deployment ramps up, we see the focus shifting to cloud-native and containers. Many applications utilize video and AI/ML, especially at the edge. Liquid cooling has become more prevalent, and we see this increasing in the future. Server performance and Ethernet speeds will increase significantly, aiding virtualization and digital worlds.

Tara: How do you see those trends/technologies affecting your business or that of your customers?

Michael: As a server supplier, these trends are all positive for Supermicro. With digital transformation, we foresee continued growth of the server market across data centers and edge, with rapid growth in AI. Liquid cooling will enable higher-performance compute, meaning more workloads per server, which benefits everyone. Private 5G will enable digital transformation across all industry verticals. 5G Open RAN and Cloud Native core will spur adoption of COTS servers in 5G networks and, with scale plus automation, enable a reduction in OPEX.

Michael Clegg has secured market leadership for Supermicro in product categories covering 5G, cloud, edge and IIoT server infrastructure. He also advises start-ups pioneering and commercializing Internet products and services. Before Supermicro he served as VP of Global Sales at Plume Design; COO at Morro Data, a provider of Cached Cloud File Storage-as-a-Service; and SVP & GM for NETGEAR's Service Provider business unit growing the business tenfold globally and achieving #1 market share in Wi-Fi gateways.


How ManageCore is automating SAP operations with Avantra – Diginomica

(Image by Gerd Altmann from Pixabay)

Managecore sells itself as 'a different kind of company' in the SAP managed services market. Key to how it sets itself apart from other MSPs is its operational transparency, enabling customers to see exactly what Managecore sees, and automating SAP environments so that customers can focus on investing in innovation, rather than maintenance.

Both of these differentiators are underpinned by Managecore's use of Avantra, a platform that specializes in automating SAP operations. diginomica has previously spoken with Avantra CEO John Appleby about how the company aims to help its customers shift the cost balance of their SAP environments from maintenance to investment.

Nick Miletich, Chief Technology Officer at Managecore, explains that the company's symbiotic relationship with Avantra is helping to deliver value to customers that are grappling with complex SAP landscapes. Miletich says:

We always want to be not only as efficient as possible, but to give our clients the best type of service we can. Over the years we've used many, many toolsets to monitor different aspects of SAP, the servers and the infrastructure that they run on.

And there was never a solution that was really great at SAP. They often try to be everything to everyone and Avantra is different. They focused on SAP, they started there, and then expanded the other way. They expanded outward, but they started with SAP as their core.

We want our consultants to really be consultants with these accounts, with these clients. Our clients expect a high level of service and they expect us to be providing value to them at every step of the way. And the value that we provide doesn't come in checking for really mundane trivial issues with the system. The value is: how do you use SAP to your best ability? How is your company or organization using SAP now and where should it go tomorrow?

Managecore says that it is using Avantra to take care of the trivial checks that customers would normally have to undertake manually, using the vendor's automation engine to carry out tedious work that operations managers would normally have to endure. Miletich says:

We use its core capacity, but then we've really built upon it, to allow us to automate certain aspects of what we do.

It allows us to be more responsive to issues and really prevent them. The way Avantra works, it allows us to really be preventative and really have that proactive support that everyone's looking for, versus being reactive.

Miletich does say that any monitoring or automation solution is only as good as you set it up to be, so Managecore has spent a lot of time configuring the platform for its clients' needs. Some examples of how it works for customers include:

For example, if an alert says X, spin up another server. Or if a security issue comes up for SAP, we're able to quickly, across our whole organization, create a check to look for that. We can see, are these clients impacted by it?

You don't have to spend hours and hours and hours trying to look through that and figure it out. We can actually check within five minutes. And then Avantra has automation tools to do the fix for that, which happens to be a kernel patch. We have automation tools to do that, so Avantra can patch kernels automatically, instead of having to do every single system manually.

But as noted above, transparency is also key to Managecore's success, and it is also making use of Avantra's dashboards to give clients customized insight into their SAP environments. Miletich explains:

Each client is different - some may have multiple areas within their business function units - they may have an analytics practice, then they have the core ECC practice and then they may have finance.

All those entities use SAP differently. So what we tend to do is create dashboards that are relevant to them. So you may see just ECC on a dashboard, you may see just production. The IT staff at that organization may see it all - but on a dashboard showing red, green, yellow - versus parsing through every single system trying to figure out what's going on. So we use dashboards heavily to really customize the experience that each individual client gets with us.

Managecore utilizes the public cloud for its clients, particularly Google Cloud, and its use of Avantra is helping it bring public cloud capabilities to SAP. Miletich says that SAP hasn't traditionally lent itself to some of the benefits of this technology, which Managecore sees as an opportunity for its business. Miletich explains:

It's the limitation of the SAP software itself. SAP doesn't run in Kubernetes. It's not an environment that, from an IT administration standpoint, is as nimble as an environment you could put in Kubernetes, where one server goes down and everything still continues to operate.

So what we've done for some of the automation is what you'd almost call pre-planning. We're big in the public cloud space, so we use a lot of automation connectivity between the environments to make that magic happen. As an example, you can correlate a CPU utilization metric, and if it's 90% utilized for over 30 minutes, we can spin up another application server for SAP. Those are the types of automation that we tend to gravitate towards.

It's tying Avantra to SAP and it's tying the public cloud all together to make simple troubleshooting a non-issue.
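
The CPU rule Miletich describes, 90 percent utilization sustained for 30 minutes triggering another SAP application server, is easy to express as a generic automation check. The sketch below is a hypothetical illustration, not Avantra's or Managecore's code; get_avg_cpu_utilization and provision_app_server are placeholder names for whatever monitoring checks and cloud provisioning calls would actually be wired in.

```python
# Hypothetical sketch of the scale-out rule described above; the two
# helpers are placeholders, not Avantra or Google Cloud API calls.
import time

CPU_THRESHOLD = 0.90          # 90% utilization...
SUSTAINED_SECONDS = 30 * 60   # ...sustained for 30 minutes
POLL_INTERVAL = 60            # check once a minute

def get_avg_cpu_utilization(host: str) -> float:
    """Placeholder: return current CPU utilization (0.0-1.0) for host."""
    raise NotImplementedError

def provision_app_server(landscape: str) -> None:
    """Placeholder: spin up another SAP application server instance."""
    raise NotImplementedError

def watch(host: str, landscape: str) -> None:
    breach_started = None
    while True:
        if get_avg_cpu_utilization(host) >= CPU_THRESHOLD:
            breach_started = breach_started or time.monotonic()
            if time.monotonic() - breach_started >= SUSTAINED_SECONDS:
                provision_app_server(landscape)   # scale out
                breach_started = None             # reset after acting
        else:
            breach_started = None                 # breach ended, reset
        time.sleep(POLL_INTERVAL)
```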

Avantra recently announced its latest Enterprise Edition, where it has rebuilt its automation engine for more sophisticated and complex use cases. Managecore is particularly interested in Avantra's autonomous patching capabilities and some of the AI/ML opportunities it has identified with SAP Graph.

But underpinning all of Managecore's use of Avantra is focusing on how it can ultimately shift its customers' time balance away from maintenance, towards new opportunities with SAP. Miletich says:

I would say at its foundation, Avantra allows you to be more flexible and nimble, but also allows you to be in that innovative type of role and environment, rather than a day to day troubleshooting role.

I'll give you an example from one of our clients. We moved them to Google Cloud and migrated them from one MSP to us. They've been with us since then, for four years. And this client, one of their key quotes was: "By moving to ManageCore as our managed services provider, using this product, it allows us to focus on innovation".

They're not on calls asking why their system is down, or asking for more memory, all the things that go into managing an environment. They now can do additional net new innovative deployments that are correlated to SAP to better their business. And I think that's the key takeaway from a product like that.


The Next And More Profitable 10 Percent Of Server Share For AMD – The Next Platform

When this is all said and done, Intel will deserve some kind of award for keeping its 14 nanometer processes moving along enough as it gets its 10 nanometer and 7 nanometer processes knocked together to still, somehow, manage to retain dominant market share in the server space.

Or, maybe Intel just got lucky that AMD's supply can't even come close to meeting the potential demand it might otherwise have if there were no limits on the fab capacity at Taiwan Semiconductor Manufacturing Co, which etches the core complexes in its Epyc server chips. (Globalfoundries, the spinout of AMD's own foundry mashed up with IBM Microelectronics and Chartered Semiconductor, still makes the memory and I/O hub portions of the Rome Epyc 7002 and Milan Epyc 7003 processors from AMD.)

This time last year, when we took a look at how AMD's share of processor shipments and revenues had grown since its re-entry into the server space back in 2017, the company had just broken through the 10 percent shipment barrier and looked to be on an Opteron-class fast path to 20 percent or maybe even 25 percent. And then the 10 nanometer Ice Lake Xeon SPs were launched, and say what you will about how the Rome chips beat them and the Milan chips hammer them, you go to the datacenter with the server chip that you have, paraphrasing former Secretary of Defense Donald Rumsfeld, who, when under fire for the sorry state of the US Army's equipment in the Iraq War, quipped with a certain amount of pique that you go to war with the army that you have.

If you look at the data from Mercury Research, which does a fabulous job of watching the competition between AMD and Intel, you will see that AMD has a big jump in server CPU shipment share and then it levels off a bit or even declines some and then once it gets its footing, it blasts up a few points to a new level and repeats this entallening sawtooth shape again and again.

And so, as the fourth quarter came to a close, according to Mercury Research, AMD shipped 1.13 million server CPUs, an increase of 82.9 percent over the year-ago period, which is great. But the overall market for server CPUs (pushed into the channel, not consumed by the customers on the other side of the channel) rose by 29.9 percent to 8.85 million units. And thus, Intel was still able to grow at almost the same pace as the market at large, rising 24.6 percent to 7.71 million units. And thus, Intel was able to have a record quarter for server CPU shipments, which drove server revenues to new highs as well, despite a 5 percent decline in sales to hyperscalers and cloud builders and thanks in large part to a return to spending by enterprises and telco and communications service providers. We suspect that there has been a little channel stuffing on the part of Intel, and a whole lot of component bundling (which is price cutting that doesn't affect the Data Center Group revenue line but does cut into the revenues and profits of the Data Center Group adjacencies such as switch ASICs, network interface cards, silicon photonics (mostly Ethernet optical transceivers), and Optane storage). Such pricing tactics will all come home to roost unless server demand keeps on growing and Intel does a good job with the forthcoming 10 nanometer Sapphire Rapids Xeon SPs due by the second quarter.

To stick with the war metaphor, Intel's 14 nanometer infantry with some 10 nanometer tanks was able to hold the line against AMD's tactical assault teams and sharpshooters. And there are some 10 nanometer gunships on their way as AMD brings a new class of weapons to bear with the Genoa Epyc 7004s. If AMD had more capacity, it would be eating more share. There is no question about that. But we think that both Intel and AMD are happy to manage capacity close enough to demand to be able to still have shortages that cause prices to hold or even increase, particularly with high-end SKUs in their CPU lines. And they will stretch out product launches to the breaking point, and can always let the hyperscalers and cloud builders have these parts on the sly and charge a premium for that, too.

In other words, server buyers, none of you are in the driver's seat. TSMC and Intel Foundry Services are, and they are calling the tune on pricing and setting the pace on shipments, and if you need a server, you are going to the datacenter with the CPUs that you have.

Anyway, back to statistics. When you do the math, AMD had 12.9 percent shipment share in the fourth quarter of 2021, according to Mercury Research, and that is barely three-tenths of a point higher than the share it had in Q3, mimicking the same tepid growth in share it had at the same time in the prior year.

It is hard to say what will happen as Genoa Epyc 7004s hit the market and possibly more revenue is recognized for the Trento custom Epyc processors used in the Frontier supercomputer at Oak Ridge National Laboratory (we think a lot of this was done in Q2, Q3, and Q4 of 2021, but AMD has not said how this works).

What we will observe is that AMD's revenue share has been outpacing its shipment share since Q2 2021, which is when that revenue recognition for Frontier might have started. And given that the major supercomputing centers usually pay less than half of list price for CPUs and GPUs, based on estimates we have done, this revenue recognition would actually have hurt AMD's revenue share. And so if AMD's share of the X86 server chip market revenues is higher, that means Intel's average revenue per chip is trending up more slowly than AMD's is. (Both are trending up as customers increasingly buy up the stack.) We will be watching this revenue expansion rate carefully. In the fourth quarter, for instance, AMD captured 14.4 percent share of the $7.48 billion in X86 server CPU sales, and it had the same share of the $6.65 billion in X86 server CPU sales in Q3 2021, according to Mercury Research.
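
For anyone who wants to reproduce the shares from the rounded figures quoted above, the arithmetic is trivial; note that because the published unit counts are rounded, the computed shipment share comes out a shade under the 12.9 percent that Mercury Research reports.

```python
# Recomputing Q4 2021 shares from the (rounded) figures cited above.
amd_units, intel_units, total_units = 1.13e6, 7.71e6, 8.85e6
amd_rev_share, total_rev = 0.144, 7.48e9   # 14.4% of $7.48 billion

print(f"AMD unit share:   {amd_units / total_units:.1%}")    # ~12.8%, vs the reported 12.9%
print(f"Intel unit share: {intel_units / total_units:.1%}")  # ~87.1%
print(f"AMD X86 server CPU revenue: ${amd_rev_share * total_rev / 1e9:.2f} billion")
```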

It is even harder to say what will happen out towards the end of 2022 and through 2023 and into 2024. By that time, the server market could be consuming somewhere close to 10 million X86 server CPUs a quarter, and maybe even 500,000 or more Arm server CPUs (why not?), with the X86 servers generating somewhere close to $9 billion in revenues per quarter. We have a long way to go to get to that point, but when it is done, if current trends persist, it is not hard to see AMD having somewhere north of 20 percent shipment share and close to 25 percent revenue share of the X86 market, which is going to continue to grow even if at a much slower pace than it is doing now.

This will be fun to watch unfold, quarter by quarter, shot by shot.


Kubernetes on Bare Metal vs. VMs: It's Not Just Performance – The New Stack – thenewstack.io

Too often, the debate about running Kubernetes on bare metal versus virtual machines is overly simplistic. There's more to it than a trade-off between the relative ease of management you get with VMs and the performance advantage of bare metal. (The latter, in fact, isn't huge nowadays, as I'll explain below.)

I'm going to attempt to walk through the considerations at play. As you will see, while I tend to believe that Kubernetes on bare metal is the way to go for most use cases, there's no simple answer.

Off the bat, let's address the performance vs. ease-of-use question.

Andy Holtzmann

Andy is a site reliability engineer at Equinix and has been running Kubernetes on bare metal since v1.9.3 (2018). He has run production environments with up to 55 bare-metal clusters, orchestrated Kubernetes installs on Ubuntu, CentOS and Flatcar Linux, and recently helped accelerate the bring-up of Equinix Metals Kubernetes platform to under one hour per new greenfield facility. Andy joined Equinix after working in senior software engineer roles at Twilio and SendGrid.

Yes, VMs are easier to provision and manage, at least in some ways. You don't need to be concerned with details of the underlying server hardware when you can set up nodes as VMs and orchestrate them using the VM vendor's orchestration tooling. You also get to leverage things like golden images to simplify VM provisioning.

On the other hand, if you take a hypervisor out of the picture, you don't spend hardware resources running virtualization software or guest operating systems. All of your physical CPU and memory can be allocated to business workloads.

But it's important not to overstate this performance advantage. Modern hypervisors are pretty efficient. VMware reports hypervisor overhead rates of just 2 percent compared to bare metal, for example. You have to add the overhead cost of running guest operating systems on top of that number, but still, the raw performance difference between VMs and bare metal can be negligible, at least when you're not trying to squeeze every last bit of compute power from your infrastructure. (There are cases where that 2% difference is meaningful.)

When it's all said and done, virtualization is going to reduce total resource availability for your pods by about 10% to 20%.
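
How a roughly 2 percent hypervisor tax turns into a 10 to 20 percent loss at the pod level is easier to see with concrete numbers. The figures in the sketch below are illustrative assumptions, not measurements: the host loses a slice to the hypervisor, each guest loses a slice to its operating system, and each VM is given a safety margin so it does not run hot.

```python
# Illustrative arithmetic only; the per-guest numbers are assumptions.
node_gib = 256                 # physical RAM on the host
hypervisor_tax = 0.02          # ~2% overhead reported for modern hypervisors
vms = 4                        # nodes carved out of this host
guest_os_gib = 3               # RAM each guest kernel/OS stack consumes
headroom_per_vm = 0.05         # 5% per-VM safety margin left unscheduled

after_hypervisor = node_gib * (1 - hypervisor_tax)
per_vm = after_hypervisor / vms
usable_per_vm = (per_vm - guest_os_gib) * (1 - headroom_per_vm)
usable_total = usable_per_vm * vms

loss = 1 - usable_total / node_gib
print(f"Memory actually available to pods: {usable_total:.0f} GiB "
      f"of {node_gib} GiB ({loss:.0%} lost to virtualization layers)")
```

With these particular assumptions the loss works out to a little over 10 percent; heavier guest operating systems or bigger per-VM margins push it toward the top of the range quoted above.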

Now, lets get into all the other considerations for running Kubernetes on bare metal versus Kubernetes on VMs. First, the orchestration element. When you run your nodes as VMs, you need to orchestrate those VMs in addition to orchestrating your containers. As a result, a VM-based Kubernetes cluster has two independent orchestration layers to manage.

Obviously, each layer is orchestrating a different thing, so, in theory, this shouldn't cause problems. In practice, it often does. For example, imagine you have a failed node and both the VM-level orchestrator and the Kubernetes orchestrator are trying to recover from the failure at the same time. This can lead to your orchestrators working at cross purposes because the VM orchestrator is trying to stand up a server that crashed, while Kubernetes is trying to move pods to different nodes.

Similarly, if Kubernetes reports that a node has failed but that node is a VM, you have to figure out whether the VM actually failed or the VM orchestrator simply removed it for some reason. This adds operational complexity, as you have more variables to work through.

You don't have these issues with Kubernetes on bare metal server nodes. Your nodes are either fully up or they're not, and there are no orchestrators competing for the nodes' attention.

Another key advantage of running Kubernetes on bare metal is that you always know exactly what youre getting in a node. You have full visibility into the physical state of the hardware. For example, you can use diagnostics tools like SMART to assess the health of hard disks.

VMs don't give you much insight about the physical infrastructure upon which your Kubernetes clusters depend. You have no idea how old the disk drives are, or even how much physical memory or CPU cores exist on the physical servers. You're only aware of the VMs' virtual resources. This makes it harder to troubleshoot issues, contributing again to operational complexity.

For related reasons, bare metal takes the cake when it comes to capacity planning and rightsizing.

There are a fair number of nuances to consider on this front. Bare metal and virtualized infrastructure support capacity planning differently, and there are various tools and strategies for rightsizing everything.

But at the end of the day, it's easier to get things exactly right when planning bare metal capacity. The reason is simple enough: With bare metal, you can manage resource allocation at the pod level using cgroups in a hyper-efficient, hyper-reliable way. Using tools like the Kubernetes vertical autoscaler, you can divvy up resources down to the millicore based on the total available resources of each physical server.

That's a luxury you don't get with VMs. Instead, you get a much cruder level of capacity planning because the resources that can be allocated to pods are contingent on the resource allocations you make to the VMs. You can still use cgroups, of course, but you'll be doing it within a VM that doesn't know what resources exist on the underlying server. It only knows what it has been allocated.

You end up having to oversize your VMs to account for unpredictable changes in workload demand. As a result, your pods don't use resources as efficiently, and a fair amount of the resources on your physical server will likely end up sitting idle much of the time.
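
To make the contrast concrete, here is a back-of-envelope comparison of how many identical pods fit on one physical node when the scheduler packs requests against the full machine versus when the machine is first carved into oversized VMs. All of the sizes are assumptions chosen for illustration, not benchmarks.

```python
# Illustrative comparison of schedulable capacity; all sizes are assumptions.
node_millicores = 48_000            # 48 physical cores
pod_request_m = 250                 # a typical small pod request (250m)
system_reserved_m = 2_000           # kubelet/OS reservation on bare metal

# Bare metal: one pool, minus the system reservation.
bare_metal_pods = (node_millicores - system_reserved_m) // pod_request_m

# VMs: carve the node into 4 guests, each losing a guest-OS reservation
# and 20% oversizing headroom to absorb unpredictable demand.
vms, guest_reserved_m, oversize = 4, 1_500, 0.20
per_vm = node_millicores / vms
vm_pods_each = int((per_vm - guest_reserved_m) * (1 - oversize)) // pod_request_m
vm_pods_total = vm_pods_each * vms

print(f"Pods schedulable on bare metal: {bare_metal_pods}")
print(f"Pods schedulable across 4 VMs:  {vm_pods_total}")
```

Under these assumed numbers the bare metal node packs 184 such pods while the carved-up node packs 132, with the difference being the guest reservations and the oversizing margin sitting idle most of the time.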

Another factor that should influence your decision to run Kubernetes on bare metal versus VMs is network performance. It's a complex topic, but essentially, bare metal means less abstraction of the network, which leads to better network performance.

To dig a level deeper, consider that with virtual nodes you have two separate kernel networking stacks per node: one for the VMs and another for the physical hosts. There are various techniques for negotiating traffic between the two stacks (packet encapsulation, NAT and so on), and some are more efficient than others (hint: NAT is not efficient at all). But at the end of the day, they each require some kind of performance hit. They also add a great deal of complexity to network management and observability.

Running on bare metal, where you have just one networking stack to worry about, you don't waste resources moving packets between physical and virtual machines, and there are fewer variables to sort through when managing or optimizing the network.

Granted, managing the various networks that exist within Kubernetes (and this partially depends on the container network interface, or CNI, you use) does add some overhead. But it's minor compared to the overhead that comes with full-on virtualization.

As I've already implied, the decision between Kubernetes on bare metal and Kubernetes on VMs affects the engineers who manage your clusters.

Put simply, bare metal makes operations, and hence your engineers' lives, simpler in most ways. Beyond the fact that there are fewer layers and moving parts to worry about, a bare-metal environment reduces the constraints under which your team works. They don't have to remember that VMs only support X, Y and Z configurations or puzzle over whether a particular version of libvirt supports a feature they need.

Instead, they simply deploy the operating system and packages and get to work. It's easier to set up a cluster, and it's much easier to manage operations for it over the long term when you're dealing solely with bare metal.

Let me make clear that I do believe there are situations where running Kubernetes on VMs makes sense.

One scenario is when youre setting up small-scale staging environments, where performance optimization is not super important. Getting the most from every millicore is not usually a priority for this type of use case.

Another situation is when you work in an organization that is already very heavily wedded to virtualized infrastructure or particular virtualization vendors. In this case, running nodes as VMs simply poses less of a bureaucratic headache. Or maybe there are logistical challenges with acquiring and setting up bare metal servers. If you can self-service some VMs in a few minutes, versus taking months to get physical servers, just use the VMs if it suits your timeline better. Your organization may also be wedded to a managed Kubernetes platform offered by a cloud provider that only runs containers on VMs. Anthos, Google Cloud's managed hybrid multicloud Kubernetes offering, supports bare-metal deployments, and so does Red Hat's OpenShift. AWS's EKS Anywhere bare metal support is coming later this year.

In general, you should never let a dependency on VMs stop you from using Kubernetes. It's better to take advantage of cloud native technology than to be stuck in the past because you can't have the optimal infrastructure.

VMs clearly have a place in many Kubernetes clusters, and that will probably never change. But when it comes to questions like performance optimization, streamlining capacity management or reducing operational complexity, Kubernetes on bare metal comes out ahead.

Feature image via Pixabay
