
Africa Telecom Towers and Allied Market Size & Share Analysis – Growth Trends & Forecasts (2023 – 2028) – Yahoo Finance

ReportLinker

The Africa Telecom Towers and Allied Market size in terms of installed base is expected to grow from 199,092 units in 2023 to 249,652 units by 2028, at a CAGR of 4.63% during the forecast period (2023-2028).
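
As a quick sanity check on the headline numbers, the compound annual growth rate can be derived directly from the report's own start and end installed-base figures (five growth years between 2023 and 2028). The snippet below is only that back-of-the-envelope check.

```python
# Minimal check of the reported CAGR from the installed-base figures above.
start_units = 199_092   # installed base in 2023
end_units = 249_652     # projected installed base in 2028
years = 5               # 2023 -> 2028

cagr = (end_units / start_units) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")   # ~4.63%, matching the report
```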

New York, July 05, 2023 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Africa Telecom Towers and Allied Market Size & Share Analysis - Growth Trends & Forecasts (2023 - 2028)" - https://www.reportlinker.com/p06472491/?utm_source=GNW

With the outbreak of COVID-19, the telecom industry has witnessed a significant increase in demand for internet services due to a major chunk of the population staying at home and remote working conditions. The increase in people working from home has led to an increase in demand for downloading, online video viewing, and communication through video conferencing, all of which are leading to increased network traffic and data usage.

Key Highlights

The telecom tower industry has drastically evolved over the past decade. The core towerco proposition and business models have been successfully adapted to match the demands of new markets in Africa. Many towercos are anticipated to hunker down in their core building business over the forecast period, buying and leasing vertical real estate, and such towercos may still see plenty of 5G antennas overlaid onto their towers.

As per the Nigerian Communications Commission, as of January 2021, the number of third and fourth-generation telecom towers deployed in Nigeria has grown by 73.2%. Also, global tower companies are expanding their presence in the region through strategic collaborations, due to the growing number of opportunities presented.

The emergence of KaiOS and its partnerships with operators across Africa is helping overcome the affordability barrier for low-income users. The free resources offered, such as the Life app, also help new users develop digital skills and understand how the internet can be relevant. Such initiatives are expected to significantly boost internet penetration in these countries.

Several initiatives by telecom operators and other organizations, especially in low- and middle-income countries, are expected to spur growth in rural areas as the residents of these areas gain increased access to internet connectivity.

Furthermore, with businesses going mobile and adopting new concepts, like BYOD, to increase employee interaction and ease of use, it has become essential to provide a high-speed, quality network. Organizations have been looking to adopt BYOD aggressively in their operations, fueling market growth over the forecast period. In addition, development in cloud-based services for mobile users and the roll-out of 4G LTE services worldwide have increased carriers' investment in their networks, which drives the demand for telecom towers.

The increasing emphasis on improving internet connectivity in rural areas is one of the major factors stimulating the deployment and improvement of telecom infrastructure in these areas, thereby aiding the market's growth. Smartphone penetration, rising awareness, increasing penetration of digital technologies, and investments from several organizations and governments have been increasing the adoption of internet connections in the region.

Africa Telecom Towers & Allied Market Trends

Optical Fiber Market is Expected to Grow Significantly During the Forecast Period

The telecommunication and networking market witnessed a massive surge in demand in the region. The emergence of IoT in cloud computing and the demand for 5G networks are driving increased usage of optical fibers in a wide variety of applications: business, government, industrial, academic, and cloud servers in public and private networks.

As per CommsUpdate, there was an interval of just three years between the launch of 3G and 4G services in Algeria, leading to issues. While all three major operators - Djezzy, Mobilis, and Ooredoo - have extended their coverage to all 48 provinces, they all received penalties from the regulator in 2020 for the poor quality of their services.

Further, as per the Nigerian Communications Commission, as of January 2021, fiber optic cables have expanded by 16.4% in the last five years. Also, according to the IFC, a total of 1.1 million km of fiber optics has been installed in Africa, of which 50% has been deployed by private mobile network operators (MNOs). Moreover, about 40% of all fiber optic cable in Africa, a staggering 450,000 kilometers, is publicly owned. This includes government networks, state-owned enterprises (SOEs), and utilities.

The region has a developing telecom infrastructure, with growth encouraged by supportive regulatory measures and by government policies aimed at delivering serviceable internet connections across the region. Government-funded efforts, including the Universal Service Telecommunications (UTS) program, continue to ensure that fixed-line infrastructure is extended to underserved areas. Thus, the slow growth in the number of fixed-telephony connections should be maintained during the next few years.

Companies are entering various partnerships to provide better services while controlling operating costs. For instance, at the beginning of 2021, Ooredoo Algeria deployed Nokia's cloud-native Core software to strengthen its network performance and reliability cost-effectively and strategically positioned itself to launch new services to meet customer needs. This deployment is likely to further improve the country's digital ecosystem.

The infrastructure is based on a terrestrial fiber-optic network coupled with undersea cables, offering secure connectivity abroad from West Africa. This investment aims to support the digital ecosystem and meet the region's growing needs for connectivity.

Telecom Tower Market to Grow Significantly during the Forecast Period

The core towerco proposition and business models have been successfully adapted to match the demand from new markets in Africa. Many towercos are anticipated to hunker down in their core business of building, buying, and leasing vertical real estate over the forecast period. Such towercos may still see plenty of 5G antennas overlaid onto their towers.

As per the Nigerian Communications Commission, as of January 2021, the number of third and fourth-generation telecom towers deployed in Nigeria has grown by 73.2%. Also, global tower companies have been expanding their presence in the region through strategic collaborations, due to the growing number of opportunities presented.

In January 2020, American Tower acquired Eaton Towers in a deal that included towers across five African countries. While American Tower already had a presence in Africa, the acquisition was a significant deal, demonstrating the types of investment being made in the region, particularly in the tower market.

According to estimates by TowerXchange, there are roughly 25,767 towers in South Africa, serving 97 million SIMs, making it one of Africa's best-covered markets. Five MNOs operate within the South African market, namely MTN, Vodacom, Telkom, Cell C, and Rain. Cell C is in the process of shutting down its network and switching to a roaming agreement with MTN, having slowly reached a point of bankruptcy since the sale of its portfolio to American Towers.

Furthermore, with businesses going mobile and adopting new concepts like BYOD to increase employee interaction and ease of use, it has become essential to provide a high-speed, quality network. Organizations are looking to adopt BYOD aggressively in their operations, thereby fueling market growth over the forecast period. Further, growth in cloud-based services for mobile users and the roll-out of 4G LTE services worldwide have increased carriers' investment in their networks, which drives the demand for telecom towers.

Also, the increasing emphasis on improving internet connectivity in rural areas is one of the major factors stimulating the deployment and improvement of telecom infrastructure in these areas, thereby aiding the market's growth. Smartphone penetration, rising awareness, increasing penetration of digital technologies, and investments from several organizations and governments have been increasing the adoption of internet connections in the region.

Africa Telecom Towers & Allied Industry Overview

The African Telecom and Allied market is moderately competitive and consists of many global and regional players. These players account for a considerable market share and focus on expanding their client base globally. These players focus on research and development activities, strategic alliances, and other organic & inorganic growth strategies to stay in the market landscape over the forecast period.

March 2022 - Helios Towers, the independent telecommunications infrastructure company, announced the acquisition of Airtel Africa's passive infrastructure company in Malawi, adding 723 sites to its portfolio.

January 2022 - ZESCO Limited and Copperbelt Energy Corporation PLC signed an agreement to make new power supply and transmission arrangements. The negotiations, which started on January 17, 2022, are expected to replace the bulk supply agreement that expired on March 31, 2020.

Additional Benefits:

The market estimate (ME) sheet in Excel format
3 months of analyst support

Read the full report: https://www.reportlinker.com/p06472491/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Disaster recovery and the cloud – IT-Online

Risk assessment, risk planning, and risk mitigation. And then the disaster happens - whether it be unplanned, prolonged load-shedding or a ransomware hack that takes your business offline. This is when disaster recovery and business continuity in the cloud kick in.

By Reshal Seetahal, head of Alibaba cloud business unit at BCX

Moving services, operations and data to the cloud not only ensures security, resilience and recovery but gives businesses an advantage in terms of building efficiencies, maintaining service levels to their customers, and creating new revenue streams. When load-shedding hits and the lights go out, your data is not only backed up and secure, but available and accessible.

It's not just business as usual, but an opportunity to increase productivity, scalability, agility, and performance. The cloud is, quite simply, key to the digital transformation of all entities. It is a strategic tool, providing a platform that can create and develop solutions tailored to your business as it grows and evolves.

It would be remiss not to talk about the potential harm of disasters on businesses. These are the times we live in. A power grid collapse, a fibre cable cut under the ocean, floods, malware - the list is long. Having an on-site, one-server backup for your data and your company's operating services is akin to keeping your cash stashed under your mattress. The only good thing is that you think you know where it is and have a false sense of control over it.

The bad thing? Everything else. It's an easy, vulnerable, one-source target that requires constant levels of scrutiny and security. When disaster hits, it is the first thing to go, the first thing to be targeted, and when it is gone, it is gone forever.

Not everyone has the resources to set up multiple backup servers. There is, surprisingly, still naivety among many businesses about their security just as there is unease and trepidation about moving away from an on-site, lock-and-key situation to a cloud solution.

The solution is often through a layered, nuanced system.

Alibaba Cloud Disaster Recovery (DR) supports warm standby, which acts as an extension of the organisation's on-premises environment. In warm standby, a mirror environment offers a scaled-down version of a fully functional environment that remains running in the cloud. This minimises recovery time and enables mission-critical systems to meet stringent RTO (recovery time objective) and RPO (recovery point objective) targets.
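
RTO and RPO are simple time budgets, so they are easy to reason about programmatically. The sketch below is a generic illustration (not Alibaba Cloud-specific) of how a DR drill report might be checked against those targets; all of the numbers are hypothetical.

```python
from datetime import timedelta

# Hypothetical targets a warm-standby setup might be designed around.
rto = timedelta(minutes=15)   # maximum tolerable downtime
rpo = timedelta(minutes=5)    # maximum tolerable data loss (replication/backup gap)

# Measured results from a (hypothetical) disaster-recovery drill.
downtime = timedelta(minutes=11)   # outage start -> service restored on standby
data_gap = timedelta(minutes=3)    # last replicated transaction -> outage start

print("RTO met:", downtime <= rto)
print("RPO met:", data_gap <= rpo)
```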

Being ready, being able, and being aware of vulnerabilities, and potential disasters is the first step of becoming an entity that can work without fear of the loss of data and service. The cloud mitigates and negates the potential harm of a disaster, and creates an environment not just for recovery, but continuity.


How a midsize American business recovered from a ransomware … – SC Media

At the CyberRisk Leadership Exchange in Cincinnati on June 7, the chief security officer of an Ohio bottling company used his lunchtime keynote address to recount how his company's eight-person IT team detected, remediated and recovered from a ransomware attack within the space of seven hours, without losing any business and without paying a dime to the attackers.

"We never missed an order. We never missed a delivery. Customer data was not compromised," Brian Balzer, Executive VP of Digital Technology & Business Transformation at G&J Pepsi-Cola Drink Bottlers, Inc., told SC Media in an interview. "I'd say probably 95% of the organization had no idea that we were under attack."

Balzer credits G&J's rapid, successful recovery from the ransomware attack to diligent preparation and a move to cloud-based operations, and to strong support from both colleagues and company leaders.

"I cannot stress enough to companies the importance of having such a strong culture where people are willing to jump in and help one another," says Balzer, "and a leadership team that is supportive of the cyber team, your IT team, whoever it might be, your digital team, to be able to put plans in motion."

Founded in 1925, G&J Pepsi serves Ohio, Kentucky and West Virginia and is the largest family-owned independent Pepsi bottler in the U.S., with more than a dozen facilities, 2,000 employees and $650 million in annual revenue.

The company first noticed something was wrong with its systems just before the Labor Day weekend of 2021.

"We had suspected that we might have allowed an intruder, a hacker into our environment," Balzer says. "We spent the better part of about four or five days trying to understand where they got in, where they were, if they were in and how we might be potentially exposed."

The G&J Pepsi team suspected that someone had used Cobalt Strike to install "beacons," or backdoors, into the systems, but their initial searches found nothing. Then a few days after Labor Day, a call came in around 4:30 in the morning.

"We got a call from one of the folks in our plant saying, 'Hey, something's weird, I can't access files,'" recounts Balzer. "And we knew instantly that we were under attack."

The G&J Pepsi team quickly took as many systems offline as it could. Balzer credits support from the very top of the company for that.

"I [had] to call my CEO at five o'clock in the morning and say, 'We're literally bringing all the systems down.'" Balzer says. "He's like, 'All right, I trust you. You just keep me posted.'"

Balzer's team found two potential points of entry. The first was a user who had unknowingly downloaded a corrupted file, a common vector for ransomware infection, but G&J Pepsi's endpoint solution quickly detected and remediated it.

The second point of entry was more serious. Just before the long weekend, Microsoft had released a patch for Exchange Server. But it looked very similar to another Exchange Server patch from two weeks earlier, one that G&J Pepsi had already implemented.

"There was some confusion as to, 'Was this the same patch that they released? Or was this a different patch?'" Balzer recalls. "We just misunderstood. It was probably on us. It was our fault for not getting that clarification quickly."

Instead of implementing the new patch right away, G&J Pepsi decided to wait until the following weekend. That's all the time the ransomware crew, identified as Conti by notes left on infected systems, needed.

"Within four days, they had exploited that particular gap in that Exchange server, and were able to compromise our environment," Balzer says. But, he added, "we don't deal with terrorists."

As a midsize company, G&J Pepsi fit the profile of a prime target for ransomware crews. The fact that the attack happened over a three-day weekend, which gives attackers more time to operate freely, was likely no coincidence.

"Most midsize companies and small companies can't thwart an attack, particularly a Conti ransomware or other sophisticated attacks that are that are taking place," Balzer says. "When we called for support, we [were told] that 'We'll try to help you, but we are absolutely slammed coming out of this three-day weekend because they went haywire on companies across the U.S.'"

Fortunately, because G&J Pepsi had already moved all its systems to the cloud, shutting down company assets and stopping the attackers was less complicated than it might have been for on-premises infrastructure.

"We have nothing on-premise," Balzer says. "Because we're 100% in the cloud, and because we utilize Microsoft Azure Cloud environment, we were able to prevent them from moving laterally across the platforms in our systems."

The virtual nature of G&J Pepsi's systems meant that the company was able to spend the next few hours using its weekly backups to spin up brand-new Azure instances free of ransomware, even as the team continued to investigate the infected systems.

"Within seven hours, we were able to stand up the entire environment again," Balzer told us. "Many of our solutions are SaaS solutions. The things that were impacted were more like file servers we had a couple of other servers that we had developed as IAS solutions in Azure that were at risk. We were able to basically rebuild and recreate that environment."

G&J Pepsi were lucky. None of the company's backups had been affected by the ransomware, and dark-web scans turned up no evidence of company data having been stolen.

"We were very fortunate that we had eyes on it immediately and were able to basically isolate and wall them off and then rebuild our environment," Balzer says.

However, staff PCs left on overnight in the office were infected, as were some ancillary servers. Rebuilding those took a bit more time.

Following the attack, G&J Pepsi brought on Arctic Wolf as a managed detection and response (MDR) provider and changed several company policies.

"We forced all password resets, we changed our policies on backups, we changed our policies on how many admin accounts that we have we limited those and really revamped the security," Balzer says.

Balzer told us that G&J Pepsi had also locked down USB ports on PCs, beefed up identity and access management and automated its systems patching. As it is a U.S.-only company, G&J Pepsi also blocked all system access from outside the country. The company has not had any serious incidents since.

In a separate interview with Microsoft, G&J Pepsi Enterprise Infrastructure Director Eric McKinney says he has learned two things from the company's brush with ransomware.

"If I could go back in time to the months leading up to our ransomware attack, I'd tell myself to strengthen our endpoint policies," McKinney tells Microsoft. "I don't view our recovery as a victory so much as a call to double down on security."

For McKinney, the second lesson was how much there is to be gained from a full cloud migration.

"G&J Pepsi has gotten a wide range of security benefits, such as platform-based backups, cloud-based identities, and multifactor authentication, leveraging native tools that help recommend and identify risk," McKinney says. "It doesn't matter whether you're a huge corporation like PepsiCo, a midsize business like G&J Pepsi, or a mom-and-pop gas station down the road I would make that move to the cloud and make it quickly."

Fielding questions from the audience following his keynote address at the Cincinnati CyberRisk Leadership Exchange, Balzer was struck by how many of his fellow cybersecurity executives wanted to hear about G&J Pepsi's experience.

"I love that the participation was there, that the curiosity was there," he told us. "People wanted to understand what was happening so that they can be aware of what to do if that ever occurs with them."

But Balzer once more stressed how important company culture is to an organization's ability to maintain resilience and quickly recover from an attack.

"The other thing that really stuck out, that we talked about for a brief bit during that [CyberRisk Leadership Exchange] session," Balzer says, "was the importance of having the right culture within your team to be able to come together to thwart an attack, particularly one of that size or even larger.

"We had a plan in place. Unfortunately, we had to use it. But fortunately for us, that plan worked," adds Balzer. "And that worked because we had the right leadership, the most senior leadership and support, and we had the right culture within our team to help support that and thwart that attack."

The next Cybersecurity Collaboration Forum event in Cincinnati will be a CyberRisk CISO Dinner at the end of September.

For more information on the Cybersecurity Collaboration Forum, including how to attend a CyberRisk Leadership event in your area, please visit https://www.cybersecuritycollaboration.com/.

Many thanks to Zack Dethlefs of the Cybersecurity Collaboration Forum.


How AI, big tech, and spyware power Israel’s occupation – The New Arab

Automated Apartheid: How Israel's occupation is powered by big tech, AI, and spyware

In-depth: Israel's military occupation has become a laboratory for advanced surveillance systems, artificial intelligence, and spyware technology developed by Western corporations and Israel's army.

"AI was a force multiplier," boasted Israeli officials after Operation Guardian of the Walls, an 11-day military attack on Gaza in 2021 which displaced over 91,000 Palestinians and left over 260 dead.

Almost two years later, foreign aid, big tech, and new advanced surveillance systems have quite literally laid the groundwork for what Amnesty International calls an Automated Apartheid, one that is powered by Western corporations like Google and Amazon on the outside, and entrenched by spyware and AI on the inside.

A new era: Occupation under automation

AI technology combined with a new far-right government has seen policies of repression in Israel's military occupation escalate at an unprecedented rate over the last few years.

"Autonomous weapon systems rely on sensor processing rather than human input, by selecting and engaging a target," Omar Shakir, the Israel-Palestine Director at Human Rights Watch, told The New Arab. "These technologies make it easier to maintain and further entrench apartheid."

Since the beginning of 2023, the Israeli army has killed over 170 Palestinians, including at least 30 children. More than 290 Palestinian-owned buildings across the West Bank and East Jerusalem have been demolished or forcibly seized, displacing over 400 people and affecting the livelihoods or access to services of over 11,000 others.

In a recent 82-page comprehensive report on the use of technology in Israel's military occupation, Amnesty International detailed how many of these atrocities are made possible by automated weapons, spyware, and unauthorised biometric systems, calling them crimes against humanity.

"Spyware hacks into devices (phones or computers) without alerting the owner. The hackers open the microphone and camera on the device remotely to spy on the surroundings, and download all of the data on the device," Dr Shir Hever, the military embargo coordinator for the Palestinian Boycott, Divestment and Sanctions National Committee (BDSNC), told The New Arab.

Pegasus spyware, the specific system used by the Israeli military, is not only used to breach people's privacy by filing and scanning data but is also utilised to obtain information even from encrypted messaging services and plant false evidence into the device without leaving a trace, Dr Hever added.

Most recently, Israel's military has come under fire for its Wolf Pack facial recognition systems.

Nadim Nashif, the General Director and Co-Founder of 7amleh - The Arab Center for the Advancement of Social Media, explained how Wolf Pack is used to facilitate Israel's occupation.

"It's an extensive predatory surveillance database system that contains profiles of nearly every Palestinian in the occupied West Bank, including photographs, family histories, education, and security ratings," he said.

There are countless variations of the program - Red Wolf, Blue Wolf, and White Wolf - which all take information from Palestinians without consent.

"Blue Wolf has a colour-coded system that instructs soldiers to either arrest the individual or let them pass through. Israeli soldiers compete to capture the highest number of pictures in the app," Nashif explained.

The updated version of Blue Wolf, Red Wolf, is now being used at illegal checkpoints in Hebron. "If the system cannot locate the individual's image, it will register it on the databases, and they will often be denied crossing," Nashif added.

A lesser-known version, White Wolf, is used on Palestinian workers who have jobs in illegal settlements. It has the same tracking, harassment, and biometric features as the other two.

The emergence of Smart Cities in Israel has also allowed these tools to be deployed to track and surveil Palestinians under the guise of tech advancement.

"Places like Jerusalem have Smart City technology that uses cameras, facial recognition, and advanced technological systems at the entries of checkpoints," said Shakir.

With cameras pointing into homes and scanning Palestinians at checkpoints and as they go about their everyday lives, reality under Israeli occupation is becoming increasingly dystopian.

"Surveillance impacts our day-to-day activities and behaviours, adding to existing constraints to freedom of movement. We as Palestinians think twice before logging into the internet, using our phones to call a loved one, or meeting with friends in a public space. We are cautious with every move we make, every word we say," Nashif explained.

Residents in Hebron have become accustomed to the presence of drones flying over the city, he added. Data obtained by facial recognition surveillance technology will be used to supply information to an AI-controlled machine gun equipped with ready-to-fire stun grenades and sponge-tipped bullets, he explained, showing how enforcing the occupation has become easier to sustain via technology.

In some cases, data gathered by surveillance methods is used for Israel's policy of targeted assassinations, which are carried out without legal processes.

"Drones, remote-controlled vehicles in the air (UAV), water, or land, which usually carry surveillance equipment (mostly cameras), are now being used as armed drones to commit assassinations," Dr Hever elaborated.

"It's another form of apartheid. Privacy is only a privilege for Jewish Israeli citizens, but not for the Indigenous population of Palestine," he said.

Western corporations: Buying and selling apartheid

While this technology is developed by the Israeli military internally, the means to do so often comes from foreign aid, notably Western corporations.

"None of the technologies discussed here (drones, facial recognition, databases, etc.) is an Israeli invention," Dr Hever said.

Western or transnational corporations have a long history of being complicit in and profiting off Israel's apartheid, added Apoorva G, the Asia Pacific campaigns coordinator for the BNC.

"From sports companies like Puma, Big Oil corporations like Chevron, and even infrastructure companies, like Siemens and HD Hyundai, they (Big Tech) see oppression of Palestinians as a profitable project, which is related to the economic and environmental damage caused worldwide," Apoorva added.

A recent, more concerning contract between big tech and Israel is Amazon's and Google's Project Nimbus - a $1.2 billion agreement that provides cloud services to the Israeli army.

"Military attacks depend on servers and digital communication, surveillance entirely relies on such technology, databases storing information on Palestinian land records, population databases - they all require cloud servers. All of this is now going to be provided by Google and Amazon. And this project is already underway," Apoorva told The New Arab.

Since 2021, workers at these corporations and human rights organisations have been organising against the contract through the #NoTechForApartheid movement, but their efforts have not led to substantial change.

Sometimes these corporations themselves create weapons and export them to Israeli intelligence, creating a buy-and-sell version of occupation. Sophia Goodfriend, a PhD candidate in Anthropology at Duke University examining the ethics and impact of new surveillance technologies, explains how the tech and defence industries intersect.

"The IDF has a long history of outsourcing this R and D (research and development) to private start-ups, largely staffed by veterans of Israeli intelligence units," she said, citing companies like Oosto (formerly AnyVision), the NSO Group, and Black Cube, which have all been contracted to provide technology and services to Israel's military forces.

Global violence and repression

The fact that these systems are imported, bought, or sold has led to fears among researchers and activists about their global reach and impact on human rights.

"These technologies are promoted by private Israeli arms companies who are selling them around the world, even in violation of military embargos," Dr Hever elaborated. "Just recently it was revealed that Israeli arms companies sell lethal weapons to the junta in Myanmar, despite the international arms embargo over the ethnic cleansing and genocide of the Rohingya people."

"We know this because this is the technology which the Israeli arms companies are putting up for sale with the slogan 'battle-tested'," adds Apoorva.

The development of AI technology surveillance in oppressive regimes will make these situations more volatile, especially when sold to existing military and security hierarchies.

"The more sophisticated the surveillance mechanisms, the greater their impact in terms of violence and repression is likely to be," Nashif said. "The use and abuse of surveillance technologies have led to disproportionate profiling, policing, and the criminalisation of racialised groups worldwide. Palestinians are no exception to these repressive practices."

The global market for autonomous military weapons is also increasing as more and more of these systems get tested on Palestinians. "These are global trends, not just in Israel; countries like India, Russia, the UK, and the US are heavily investing in the military application of AI," Shakir says, noting that Israel is one of the top exporters of such weaponry.

As the world becomes increasingly automated, digital rights are at the forefront of conversations within human rights organisations. "AI technology, which is never neutral, will be fed with/taught past wrong decisions, reinforcing the bias against racialised communities," Nashif said.

Aina Marzia is an independent journalist based in El Paso, Texas. Her work has been seen in The Nation, The Daily Beast, Ms. Magazine, Insider, Teen Vogue, NPR, i-D, and more. When she is not writing, Aina organises with the National Student Press Law Center, ACLU, and the UCLA Center for Storytellers and Scholars.

Follow her on Twitter:@ainamarzia_


PCI-Express Must Match The Cadence Of Compute Engines And Networks – The Next Platform

When system architects sit down to design their next platforms, they start by looking at a bunch of roadmaps from suppliers of CPUs, accelerators, memory, flash, network interface cards and PCI-Express controllers and switches. And the switches are increasingly important in system designs that have a mix of compute and memory types and for clusters that will be sharing components like accelerators and memory.

The trouble is this: The roadmaps are not really aligned well. Most CPU and GPU makers are trying to do major compute engine upgrades every two years, with architectural and process tweaks in the year in between the major launches so they have something new to sell every year. Makers of chips for networking switches and interface cards in the Ethernet and InfiniBand markets tend to be on a two-year cadence as well, and they used to tie their launches very tightly to the Intel Xeon CPU launch cadence back when that was the dominant CPU in the datacenter, but that rhythm has been broken by the constantly redrawn roadmaps from Intel, the re-emergence of AMD as a CPU supplier, and a bunch of other Arm CPU makers, including at least three hyperscalers and cloud builders.

And then there is the PCI-Express bus, which has been all over the place in the past two decades. And while PCI-Express specifications have been released in a more predictable fashion in recent years, PCI-Express controllers have been faithful to the PCI-Express roadmaps but PCI-Express switches are well behind when it comes to product launches from MicroChip and Broadcom.

Sitting here on a quiet July morning, thinking about stuff, we think all of these roadmaps need to be better aligned. And specifically, we think that the PCI-SIG organization, which controls the PCI-Express specification and does so through a broad and deep collaboration with the IT industry, needs to pick up the pace and get on a two-year cadence instead of the average of three it has shown in the past two decades. And while we are thinking about it, we think the industry would be better served with a short-cadence jump to PCI-Express 7.0, which needs to be launched as soon as possible to get I/O bandwidth and lane counts in better alignment with high-throughput compute engines and what we expect will be an increasing use of the PCI-Express bus to handle CXL-based tiered and shared main memory.

Don't get us wrong. We are grateful that the PCI-SIG organization, a collaboration between all kinds of companies in the datacenter and now at the edge, has been able to get the PCI-Express bus on a predictable roadmap since the very late PCI-Express 4.0 spec was delivered in 2017. There were some tough signaling and materials challenges that kept the datacenter stuck at PCI-Express 3.0 for seven years, and we think Intel, which dominated CPUs at the time, dragged its feet a little bit on boosting I/O because it got burned with SATA ports in the chipsets used with the Sandy Bridge Xeon E5s that came out later than expected in March 2012. Rumors have abounded about the difficulties of integrating PCI-Express 4.0 and PCI-Express 5.0 controllers into processors since then.

Generally, a PCI-Express spec is released and then within about a year or so we see controllers embedded in compute engines and network interface chips. So when PCI-Express 4.0 came out in 2017, we saw the first systems using it coming out in 2018 - specifically, IBM's Power9-based Power Systems machines, followed by its use in AMD's Rome Epyc 7002s launched in August 2019. Intel didn't get PCI-Express 4.0 controllers into its Xeon SP processors until the Ice Lake generation in April 2021.

And even with the short two-year jump to the PCI-Express 5.0 spec in 2019, it wasn't until IBM launched the Power10 processor in its high-end Power E1080 machines in 2021 that it became available in a product. AMD didn't get PCI-Express 5.0 into a server chip until the Genoa Epyc 9004s launched in November 2022, and Intel didn't get PCI-Express 5.0 into a server chip until the Sapphire Rapids Xeon SPs launched in January 2023.

So it was really a three-year cadence between PCI-Express 4.0 and 5.0 products, as expressed in the controllers on the CPUs, even if the spec did a two-year short step.

We think that the specs and the products need to get on a shorter two-year cadence so the compute engines and the interconnects can all be lined up together. And that includes PCI-Express switch ASICs as well, which have traditionally lagged pretty far behind the PCI-Express specs in the 3.0, 4.0, and 5.0 generations before they were widely available.

The lag between PCI-Express ports and PCI-Express switches at any given generation is a problem. That delay forces system architects to choose between composability (which ideally uses PCI-Express switches at the pod level) or bandwidth (which is provided through a direct server slot). Systems and clusters need to be designed with both composability and bandwidth, and we would add high radix to the mix as well.

At the moment, there are only two makers of PCI-Express switches, Broadcom (through its PLX Technologies acquisition a number of years ago) and MicroChip. We profiled the MicroChip Switchtec ASICs at the PCI-Express 5.0 level way back in February 2021, which scale from 28 to 100 lanes and from 16 to 52 ports, but as far as we know, they are not shipping in volume. Broadcom unveiled its PCI-Express 5.0 chip portfolio back in February 2022, including the ExpressFabric PEX 89100 switch, which has from 24 to 144 lanes and from 24 to 72 ports. We are confirming if these are shipping as we go to press and have not heard back yet from Broadcom.

Our point is that PCI-Express switches have to be available at the same time that the compute servers, memory servers, and storage servers are all going to be created using chips that support any given level of PCI-Express. On Day One, in fact. You have to be able to embed switches in the servers and not lose bandwidth or ports or sacrifice radix to get bandwidth. We therefore need lots of suppliers in case one of them slips. This is one of the reasons why we were trying to encourage Rambus to get into the PCI-Express switch ASIC racket recently.

All of this is top of mind just as the PCI-SIG has put out the 0.3 release of the PCI-Express 7.0 spec.

Let's take a look at the projections we did for the PCI-Express roadmap a year ago, when the PCI-Express 6.0 spec was wrapped up and PCI-Express 7.0 appeared on the horizon:

The PCI-Express 7.0 spec is not expected to be ratified until 2025, and that means we won't see it appearing in systems until late 2026 or early 2027. We think this wait is far too long. We need PCI-Express 7.0 to provide the kind of bandwidth accelerators need to chew on the enormous amount of data required to run a simulation or train an AI model. We need it matched up with a fully complex CXL 4.0 specification for shared and pooled memory.
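
To put the bandwidth argument in concrete terms, each PCI-Express generation roughly doubles the per-lane signaling rate. The sketch below tabulates the commonly cited raw rates and the approximate one-direction bandwidth of an x16 slot; it ignores encoding and protocol overhead, so treat the figures as rough rather than as spec-exact throughput.

```python
# Raw per-lane transfer rates (GT/s) by PCI-Express generation, as commonly cited.
rates_gt_s = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}

for gen, gt in rates_gt_s.items():
    # Rough one-direction bandwidth of an x16 link, ignoring encoding/protocol overhead.
    x16_gb_s = gt * 16 / 8
    print(f"PCIe {gen}: {gt:>3} GT/s per lane, ~{x16_gb_s:.0f} GB/s per direction at x16")
```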

We understand that it would be hard to accelerate PCI-Express 7.0 controllers and switches to market, and that all manner of products would also have to be accelerated. Compute engine and peripheral makers alike would be hesitant, wanting to squeeze as much return as possible out of their PCI-Express 6.0 product cycles.

Still, as PCI-Express 6.0 is put into products and goes through its rigorous testing - which will be needed because of the new PAM-4 signaling and FLIT low-latency encoding that it makes use of - we think the industry should start accelerating, match up to the CPU and GPU roadmaps as best as possible, and get onto a two-year cadence alongside of them.

Get the components in balance and then move ahead all at once, together.


Samsung reveals Q2 2023 earnings, profit nosedives to 14-year low – SamMobile – Samsung news

Last updated: July 7th, 2023 at 06:11 UTC+02:00

Samsung has been going through some tough times. Over the past year, its semiconductor chip business has seen a massive downturn amid global economic woes. Since most of the company's profits usually come from its semiconductor chip business, it has been hit hard. The South Korean firm unveiled its earnings estimate for Q2 2023, and things appear worrying.

Samsung expects Q2 2023 sales to be around KRW 60 trillion (around $45.91 billion), while its operating profit would be around a paltry KRW 0.6 trillion (around $459 million). That's a whopping 95.74% drop in profit compared to the previous year (Q2 2022), while sales dropped 22.28% from a year ago. This is the second consecutive quarter in which the company is reporting worrying figures. Even its Q1 2023 profit dropped 96% compared to Q1 2022.
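
Working backwards from the two figures quoted above, the implied year-ago operating profit can be recovered with a quick back-of-the-envelope calculation; the snippet below uses only the numbers in the estimate itself.

```python
# Back-of-the-envelope check using the figures quoted above (KRW, trillions).
q2_2023_profit = 0.6
profit_drop = 0.9574          # 95.74% year-on-year decline

implied_q2_2022_profit = q2_2023_profit / (1 - profit_drop)
print(f"Implied Q2 2022 operating profit: ~KRW {implied_q2_2022_profit:.1f} trillion")  # ~14.1
```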

While the company hasn't revealed its complete and final figures, analysts claim that the poor performance is attributable to the semiconductor division (Samsung Device Solutions). This division is expected to have made losses of up to KRW 4 trillion ($3.06 billion). Clients aren't buying enough memory chips for their cloud servers and high-performance computing machines. Samsung said earlier this year that it expects this phase to continue throughout the year.

The company's smartphone division seems to have performed solidly, though. The Galaxy S23 series has been selling really well worldwide. In some countries, the company's high-end phones unveiled earlier this year sold 1.6-1.7x more than their predecessors during the same one-month period. In the home appliances and TV segments, however, it is seeing tough competition and wants these segments to improve their profitability.


Twitter's new CEO finally explains rate limits & what it means – Dexerto

Joel Loynds

Published: 2023-07-05T13:46:15

Updated: 2023-07-05T13:46:24

Twitter's new CEO, Linda Yaccarino, has broken the tension with new reasoning behind the current read rate limits, which was also posted on the company's blog.

After a few days of turmoil behind the scenes at Twitter, Linda Yaccarino, the new CEO of the social media company, has come forward and issued a statement, paired with a blog post regarding the company's plan for rate-limiting users.

The current owner, Elon Musk, announced on July 1 that the platform would limit how many posts users could see before it locked them from seeing anything else.


This led to Twitter essentially DDoS-ing itself, and speculation arose around the $1.5 billion bill it might owe in relation to Google Cloud servers.

Yaccarino has broken her silence on the topic, tweeting her support for the action as a means of strengthening the platform.

In the tweet, Yaccarino said:

When you have a mission like Twitter you need to make big moves to keep strengthening the platform.

This work is meaningful and on-going.

Ben Collins, a reporter for NBC News, questioned the CEO and the official blog post:


If this were true, which seems extremely unlikely, why wouldnt you warn users first?


Either another lie, or wildly irresponsible, or both.

Meanwhile, the short statement made on Twitter's official blog seems to reiterate what Musk claims is the reasoning behind the limit. The main reason given is that people are scraping data from the site without paying for it.

Twitter has recently clamped down on access to its API and began charging for it.

However, recent reports have indicated that since Musk's tenure at Twitter began, even those paying for the platform have found it broken.


Musk has said that the measures are temporary, but we've yet to see the rate limit be alleviated. Meta is currently prepping to launch its competitor, Threads, which launches tomorrow.


Google Quantum Computer Is So Fast It’s Scary – Giant Freakin Robot

By Sean Thiessen

What's faster than a supercomputer? A Google quantum computer, that's what. As reported by Science Alert, scientists trailblazing in the strange world of quantum computing just ran a number-crunching test with the Sycamore quantum computer. It accomplished in seconds what it would take the Frontier, the world's most powerful supercomputer, 47 years to complete.

Google's quantum computer is now the most powerful machine in existence, and it's just getting started.

The test used random circuit sampling, a synthetic benchmark that measures how fast the quantum computer can take readings from random quantum processes. The team estimated how quickly the world's fastest supercomputer could do the same thing, and the difference was staggering.

If you are reading this with a friend and nodding, pretending to understand, you're not alone. The Google quantum computer uses processes related to fields like matrix theory and quantum chaos - not exactly light reading.

The first thing to know about quantum computers is that they have nothing to do with Ant-Man. Machines like the Google quantum computer process information by utilizing processes happening at the quantum level. They operate in the realm of probabilities, under a paradigm totally different from traditional computers.

Instead of operating on a system of bits, quantum computers use qubits, which can simultaneously represent a 1, a 0, or both. If this all sounds finicky, you are right. The Google quantum computer can only operate in specific conditions and is subject to error thanks to quantum noise.
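
For readers who want a concrete picture, a single qubit can be written down classically as a normalized pair of complex amplitudes, with the squared magnitudes giving the probabilities of reading a 0 or a 1. The sketch below is a minimal illustration of that idea, not a representation of how Sycamore itself is programmed.

```python
import numpy as np

# A single qubit as two complex amplitudes: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition of 0 and 1

probabilities = np.abs(state) ** 2
print("P(measure 0) =", probabilities[0])   # 0.5
print("P(measure 1) =", probabilities[1])   # 0.5

# Sampling measurements collapses the superposition to a definite bit each time.
samples = np.random.choice([0, 1], size=10, p=probabilities)
print("Ten measurements:", samples)
```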

However, this unavoidable system frailty is being illuminated as tests continue.

While some argue that the random circuit sampling success of the Google quantum computer is not a fair benchmark because it is a totally impractical application, the folks behind the controls on the machine are asserting that the future is quantum.

The field is young and still a long way from where it needs to be, but researchers are calling the latest milestone in quantum computing the clear indicator that quantum supremacy is imminent, if not already here.

It is not likely that your laptop in 10 years will be a Google quantum computer, but the technology does hold promise for large-scale applications, such as cybersecurity, energy storage, artificial intelligence, weather forecasting, and more.

Google quantum computers will eventually change everything, but that future is still a long way off.

The technology is not even close to being scalable, but the field is moving fast. A computer doing in a few seconds what another would take nearly half a century to achieve is a mind-boggling idea. It is too abstract to truly grasp the gravity of it, but the comparison makes one thing clear: the Google quantum computer is going to change the world.

When that change will arrive is still a mystery. Scientists have debated for years about whether or not quantum computers could ever become a viable alternative to traditional computers. The field is fuzzy and challenging to grasp, but continued experimentation is unraveling the mysteries of quantum mechanics.

It may not be an Ant-Man computer, but the Google quantum computer might be the next step toward a more high-tech future. As quantum technology advances alongside artificial intelligence and alternative fuel, there is no telling what the world will look like in mere decades.

If the researchers at Google have anything to say about it, the future will be quantum.


A quantum Szilard engine that can achieve two-level system hyperpolarization – Phys.org


by Ingrid Fadelli, Phys.org

Quantum computers, machines that perform computations exploiting quantum mechanical phenomena, could eventually outperform classical computers on some tasks, by utilizing quantum mechanical resources such as state superpositions and entanglement. However, the quantum states that they rely on to perform computations are vulnerable to a phenomenon known as decoherence, which entails the loss of quantum coherence and shift to classical mechanics.

Researchers at Karlsruhe Institute of Technology in Germany and Quantum Machines in Israel have recently carried out an experiment aimed at better understanding how environments could be improved to prevent the decoherence of quantum states, thus enhancing the performance of quantum computing hardware. In their paper, published in Nature Physics, they demonstrated the use of a quantum Szilard engine, a mechanism that converts information into energy, to achieve a two-level system hyperpolarization of a qubit environment.

"One of the biggest challenges of quantum superconducting circuits is preserving the coherence of quantum states," Ioan Pop and Martin Spiecker, two of the researchers who carried out the study, told Phys.org. "This is quantified by the energy relaxation time T1 and the dephasing time Tphi. While doing T1 energy relaxation measurements, we noticed that the qubit relaxation was not the same for different initialization sequences, similar to the observations of Gustavsson et al, published in Science in 2016. This motivated us to design and implement the quantum Szilard heat engine sequences presented in the paper."

A Szilard engine resembles the so-called Maxwell daemon, a hypothetical machine or being that can detect and react to the movement of individual particles or molecules. However, instead of operating on classical particles, as a Maxwell daemon would, the quantum Szilard engine operates on an individual quantum bit (i.e., a qubit).

Pop, Spiecker and their colleagues realized that the Szilard engine they created induces a hyperpolarization of a qubit environment. In addition, they were surprised to observe a very slow relaxation time of this environment, consisting of two-level systems (TLSs), which outlive the qubit by orders of magnitude.

"By continuously measuring the qubit and flipping its state in order to stabilize either the state 1 (or 0), the engine essentially uses information acquired from the qubit to heat (or cool) its environment," Pop and Spiecker explained. "By running the engine for sufficiently long, we can prepare the environment of the qubit in a hyperpolarized state, far from thermal equilibrium. Moreover, by monitoring the qubit relaxation we can learn about the nature of the environment and the qubit-environment interaction."

Via their quantum Szilard engine, the researchers were able to reveal the coupling between a superconducting fluxonium qubit and a collection of TLSs, which exhibited an extended energy relaxation time above 50 ms. This system could be cooled down to reduce the qubit population below the 20 mK temperature of the cryostat and heated to create an environment with a qubit population of approximately 80%.
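
The reported populations can be put in perspective with the Boltzmann factor for a two-level system: in thermal equilibrium the excited-state population is 1/(1 + exp(hf/kT)). The snippet below evaluates this for a hypothetical 1 GHz qubit transition (the actual fluxonium frequency is not given in the article), just to show why pushing the population below its cryostat-temperature value is notable.

```python
import numpy as np
from scipy.constants import h, k

def thermal_population(freq_hz, temp_k):
    # Equilibrium excited-state population of a two-level system at temperature T.
    return 1.0 / (1.0 + np.exp(h * freq_hz / (k * temp_k)))

freq = 1e9       # hypothetical 1 GHz qubit transition (assumption, not from the article)
for temp_mk in (10, 20, 50):
    p = thermal_population(freq, temp_mk * 1e-3)
    print(f"T = {temp_mk} mK -> excited-state population ≈ {p:.3f}")
```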

"The before hidden TLS environment turned out to be the main loss mechanism for the qubit, while, almost paradoxically, the TLSs themselves are virtually lossless," Pop and Spiecker said.

"This is a crucial subtlety, because it implies that the qubit T1 is independent on the TLS population, and strategies to improve T1 relaxation times that are based on TLS saturation are not viable. Last, but not least, our experiments uncovered an up-to-now unknown TLS environment, with orders of magnitude longer relaxation times compared to the commonly measured dielectric TLSs."

The recent work by Pop, Spiecker and their colleagues could have valuable practical implications. For instance, their findings highlight the need to include environmental memory effects in superconducting circuit decoherence models. This key insight could help to improve quantum error correction models for superconducting quantum hardware, models that can help to mitigate the adverse impact of noise in quantum processors.

"One of the open questions is the physical nature of these long-lived TLSs, which might be electronic spins, or trapped quasiparticles (broken Cooper pairs) or adsorbed molecules at the surface, or something entirely different," Pop and Spiecker added. "We are currently performing experiments to measure the spectral density of these TLSs and gain some knowledge on their nature. Of course, the ultimate goal is to remove all TLSs from our environment and improve qubit coherence. In our case this would quadruple the qubit T1."

More information: Martin Spiecker et al, Two-level system hyperpolarization using a quantum Szilard engine, Nature Physics (2023). DOI: 10.1038/s41567-023-02082-8

Journal information: Nature Physics

2023 Science X Network


Quantum Computing On A Commodore 64 In 200 Lines Of BASIC – Hackaday

The term quantum computer usually gets tossed around in the context of hyper-advanced, state-of-the-art computing devices. But much as a 19th century mechanical computer, a discrete computer created from individual transistors, and a human being are all computers, the important quantifier is how fast and accurate the system is at the task. This is demonstrated succinctly by [Davide dakk Gessa] with 200 lines of BASIC code on a Commodore 64 (GitHub), implementing a range of quantum gates.

Much like a transistor in classical computing, the qubit forms the core of quantum computing, and we have known for a long time that a qubit can be simulated, even on something as mundane as an 8-bit MPU. Ergo [Davide]'s simulations of various quantum gates on a C64, ranging from Pauli-X, Pauli-Y, Pauli-Z, Hadamard, CNOT and SWAP, all using a two-qubit system running on a machine that first saw the light of day in the early 1980s.
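
As a point of comparison with the BASIC listing, the same gates take only a few lines in a modern array library. The sketch below builds the standard single-qubit matrices and a CNOT and applies them to a two-qubit state vector; it is a generic illustration of the technique, not a transcription of [Davide]'s code.

```python
import numpy as np

# Standard gate matrices.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])                     # Pauli-X (bit flip)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # control = first qubit

# Two-qubit state |00> as a 4-element amplitude vector.
state = np.array([1, 0, 0, 0], dtype=complex)

# Hadamard on the first qubit, then CNOT: produces the Bell state (|00> + |11>)/sqrt(2).
state = np.kron(H, I) @ state
state = CNOT @ state
print(np.round(state, 3))          # [0.707 0. 0. 0.707]
```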

Naturally, the practical use of simulating a two-qubit system on a general-purpose MPU running at a blistering ~1 MHz is quite limited, but as a teaching tool it's incredibly accessible and a fun way to introduce people to the world of quantum computing.
