Category Archives: Cloud Servers
Nobody wins from open RAN except the public clouds – Nobody … – Light Reading
Descending the auditorium steps at the FYUZ event in Madrid this week, attendees were enthusiastically told that a surprise guest would appear for the inaugural session. "Tom Cruise?" one former film studies student hopefully enquired. What no one expected to see was the giant screened face of Fredrik Jejdling, the head of Ericsson's mobile networks unit, hovering above the massed ranks of telecom executives like an alien spaceship. Ericsson is not even a member of the Telecom Infra Project (TIP), the organizer of the show, and it has long been seen as an enemy of open RAN, the topic that consumes TIP's attention.
Jejdling seemed keener on talking about Vonage. Ericsson bought the software business for $6.2 billion last year and yesterday wrote down the value of its investment by $3 billion, blaming the dire performance of Vonage peers on stock markets, among other things. But he was able to muster some faint Swedish enthusiasm for open RAN (the kind you saw when Bjorn Borg won a point at Wimbledon) after his company recently promised to make radios compatible with a certain flavor of the technology.
About 1 million are already in the field, ready for a software upgrade, he told the audience. Operators, then, should expect to see that software by late 2024, posited Santiago Tenorio, the network architecture director of Vodafone and TIP chairman. "I don't want to commit to any particular time," replied Jejdling with a nervous chuckle.
There is predictable skepticism about Ericsson's intentions and commitment. Open RAN's new interfaces could hurt the Swedish vendor by allowing operators to buy only some of its radio access network parts and link those to other suppliers' components. When they shop with Ericsson today, it is all or nothing. But Jejdling's close encounter with open RAN-kind means both Ericsson and Nokia (the other big Nordic vendor) have beamed down and are voicing support for the concept. And each has probably realized it is under little direct threat in this strange new land.
Investment gaps and duplication
RAN challengers are almost nowhere to be seen. Parallel Wireless, a RAN software developer, is not even a reference anymore, Tenorio told Light Reading last week, after it reportedly cut most of its workforce last year. Chinese vendors have been evicted from the most important markets for Ericsson and Nokia, if they were ever there in the first place.
Others have weakened their research-and-development (R&D) focus by expanding into new areas. Ericsson's RAN market share outside China has risen by six percentage points since 2017, it boasts. Nokia's 5G share is up by the same amount in just 18 months, it claims, citing Dell'Oro research.
The scale of the challenge was neatly summed up by Robert Soni, the RAN vice president of AT&T, who estimates that Ericsson and Nokia spend about $5 billion on R&D in this market each year. "Those numbers that are spent by the incumbents are staggering," he said. And about one in three dollars goes just on integrating their own products, Soni estimates. Others will succeed only if they pool resources, in his opinion.
But the industry is awash with overlapping initiatives: government-backed test centers, TIP's facilities, operators' individual efforts. Meanwhile, vendors are foisting their own pre-integrated partner solutions on telcos. "The whole idea was to give the operator the ability to make the choice," said Soni.
The death of general purpose
The virtualization aspect of open RAN is not what it was cracked up to be, either. In recent days, Nokia has attacked the notion that many central processing units (CPUs), supplied largely by Intel, are "general purpose." Instead, it believes they feature a lot of customization to support 5G needs.
Nokia's own cloud RAN products include the same customized silicon from Marvell used in its purpose-built 5G. Hyperscalers now routinely use customized silicon to offload demanding workloads from CPUs, said Joel Brand, Marvell's senior director of product marketing. Why should something with all the specific needs of 5G buck the trend?
Equally suspect is the description of servers including these chips as "common, off-the-shelf" equipment. A server sold to Walmart is not the same as one designed for telcos and compliant with very specific network equipment-building system (NEBS) guidelines, according to Geetha Ram, the head of RAN compute for HPE.
Brand thinks the industry must go even further. "The RAN today is based on custom appliances," he said during a FYUZ panel session. "Those custom appliances are built to operate outdoors, without active cooling, with passive cooling. They are designed for maximum reliability and availability. If we want to succeed to move that to a server environment, we really need to think what a vRAN server looks like." Yet all this would rather undermine some of the original arguments about open RAN economies of scale.
Cloudy economics
None of this means absolutely nothing changes and nobody wins. But open RAN's likely beneficiaries are probably not what the telecom industry intended. Clues lie in the increasing reference to "cloud" rather than virtual RAN and in recent tie-ups between the Nordic vendors and the hyperscalers. Of particular interest is a new deal between Ericsson and Google Cloud.
Under that arrangement, Google would bring its own computing equipment, described as "black boxes" by Ericsson, into a telco's facilities. The public-cloud giant would essentially replace the telco as the investor in that equipment. Ignoring the radios and antennas, the role of Ericsson, previously involved in hardware development, would be limited to contributing the RAN software. Rather than buying appliances and servers, the operator would pay Google on a usage basis.
Dish, a greenfield open RAN operator in the US, already has this kind of relationship with AWS. Decisions about CPU offload to accelerators and other technical features could eventually be taken away from the telcos. AWS, for instance, is working to put RAN workloads on its own Graviton processors, built with the blueprints of Arm.
"We defined an open fronthaul interface so now the processing can be done by a hyperscaler," says Brand. Whether the hardware is Dell, HPE or Graviton, hyperscalers can put their cloud platforms at the site hosting the distributed unit, responsible for baseband processing. "This is not increasing the variety of vendors, because there are only a few of them. Whether it's good for the industry, bad for the industry, will affect Nokia, will affect Ericsson, will affect Verizon, will affect Vodafone, I don't know."
This is not a fait accompli, of course. Hardly any telco workloads run on hyperscaler platforms today. A crop of alternatives (Red Hat, VMware and Wind River) exists for building a single private cloud that can manage all workloads. But the paradigm has changed now that the hyperscalers have entered the private cloud space.
No one seriously thinks Red Hat can move in the opposite direction and establish itself as a public cloud. Nor can it develop chips, build large language models from scratch and do all the other stuff that might attract a telco to a hyperscaler. Nokia insists Red Hat is now its preferred cloud partner, but its anyRAN strategy is also about ensuring its software can work "on any partner's cloud infrastructure." Earlier this year, it showed that its RAN software, split between Marvell and Graviton chips, could run on an AWS platform.
Cloud RAN of this nature would see the public clouds advance even further into the telco sector. It would be especially awkward for supporters of "fair share," the controversial argument that Big Tech should contribute to network costs because demand for their applications is forcing telcos to invest bigger sums. Adding capacity in mobile often means putting in extra compute resources, but these would be rented from Big Tech, not owned by the telcos. Pricing negotiations between landlords and tenants don't often work out well for the latter.
See the original post:
Nobody wins from open RAN except the public clouds - Nobody ... - Light Reading
Unlocking IT Success: Embracing the hybrid cloud revolution – SecurityBrief Australia
The cloud migration revolution has undeniably transformed the IT landscape, touching nearly every business and organisational process over the last decade. The adoption of cloud technology has created greater opportunities for companies to scale and expand both their private and public clouds. Contrary to popular belief, the migration of IT information to the cloud isn't daunting, provided one is backed by an adept IT partner.
Cloud migration encompasses the moving of applications and data from on-premises servers to public or private cloud providers, or even across diverse cloud environments. Typically, this migration strategy is executed in five primary stages: preparation, planning, migration, operation, and optimisation.
Carl Filpo, Director at CMTG, identifies an increase in demand for cloud-based solutions, spawned by the rise in workplaces favouring remote working. "The cloud enables easy access to data and applications from any device with an Internet connection," Filpo explains. This is one key reason for the popularity of cloud-based storage solutions among businesses, as they aim to offer flexibility to their workforce.
However, the ongoing progress in cloud solutions indicates the inefficacy of a one-size-fits-all strategy. Arguably, a hybrid approach of public and private cloud systems may be more fruitful. Some data and information are kept on public cloud systems like Office 365, while others are kept on private clouds or traditional on-premises systems. "The benefit of hybrid clouds is the flexibility, allowing organisations to strategically decide what is kept on-premises and what is migrated to the cloud," Filpo asserts.
Indeed, now is the time to reconsider traditional methods and investigate a hybrid cloud approach. "In this age of hybrid cloud solutions, where public and private clouds can be integrated, the right storage architecture can accelerate your journey," suggests Filpo. Consequently, businesses should contemplate their unique needs and constraints when deciding the right option and tailor strategies that match their IT landscape, needs, and requirements before transitioning to the cloud. A customised solution will be the catalyst for IT success in this dynamic and rapidly evolving era.
CMTG, established in 1998, is an integrated IT infrastructure solutions, service, and consulting company based in WA. The company specialises in high-performance data storage and applications, private cloud systems, and ongoing support. With a state-of-the-art on-site data centre, highly secure dark fibre connection, the latest hardware on the market, and strong aftersales support, CMTG provides industry-leading infrastructure and technology services. The dedicated deployment team ensures high-speed and sustainable IT solutions and safe data migration and handling. Further, the company holds some of the nation's highest security accreditation and certification standards.
See the original post:
Unlocking IT Success: Embracing the hybrid cloud revolution - SecurityBrief Australia
Everyone Is Chasing What Nvidia Already Has – The Next Platform
Transitions in the datacenter take time.
It took Unix servers a decade, from 1985 through 1995, to supplant proprietary minicomputers and a lot of mainframe capacity that would have otherwise been bought. And from 1996 through 2001 or so, Sun Microsystems servers set the pace and reaped the profits, although nothing like what IBM mainframes had commanded before them. And in 2001, IBM brought its Power architecture and aggressive pricing to bear (an aggression funded by its vast profits from proprietary systems, where mission-critical COBOL and RPG applications and their underlying databases and middleware were extremely sticky) and over the next decade and a half wiped Hewlett Packard and Sun from the face of the Unix market. IBM rules Unix systems with its AIX-Power platform, but that business is maybe 25X smaller than it was in the Unix heyday.
Starting in 1993, it took about a decade for X86 servers to evolve into a reasonable and credible alternative to RISC platforms, and the Linux operating system, which was a few years behind Intel's Pentium Pro and Xeon efforts in the datacenter, evolved alongside the X86 platform to supplant Unix. And from there, the X86/Linux platform crushed the RISC/Unix platform and became the first dominant, general purpose compute substrate in the datacenter, actually fueling the rise of the hyperscalers and cloud builders.
The Internet as we know it could never be built on mainframes, or even RISC/Unix machinery. It was just too damned expensive and the vendors providing RISC/Unix gear had no incentive to cut their prices in half and lose money on the operation. New low-cost suppliers of all of the system and software components had to come into being to lower that cost. AMD came into the X86 server market in 2003, but by 2009 had run out of gas and Intel took over from there, exercising what was effectively monopoly power in datacenter compute. And its higher profits and less aggressive server CPU roadmap are the two things that inevitably fostered the competitive threat that Intel did not see until it was way too far along the wrong paths in both chip design and manufacturing and hit the wall with Moore's Law, no matter how much Intel's top brass protests that Moore's Law (where transistors become half as expensive every two years) is not dead.
It's dead, or at least dead enough that Nvidia co-founder and chief executive officer Jensen Huang is correct in his assessment from last September that it was indeed dead. At this point, who you gonna believe: Jensen Huang or Pat Gelsinger, Intel's chief executive officer and a student of the Andy Grove "only the paranoid survive" school of semiconductor design and manufacturing?
With riches, Intel lost the ability to be paranoid, and became complacent. And its wrong moves in manufacturing research and development, the consequential stall in CPU architecture advancement, its stumbles in GPU development, and its tangential software acquisitions all compounded.
So here is to hoping, if you are a system architect or a Wall Street investor, that Huang & Co can remain paranoid this year and into the future.
The Nvidia financial presentation that came to light this week, which had a roadmap for CPUs, GPUs, and networking and which we edited for accuracy and added DPUs to, had some other interesting charts in it, which got us to thinking about the IT market and Nvidia's growing place in it. There was a companion presentation that Colette Kress, Nvidia's chief financial officer, gave at the Citi Global Technology Conference on September 7 that also had some money and market food for thought.
Let's start with total addressable market and serviceable addressable market. The entire IT market, speaking very generally and including hardware, software, and services (including telecom and data services), is projected to be a $4.71 trillion market in 2023, according to the latest numbers from Gartner. The Nvidia TAM does not perfectly overlap this because it not only includes IT, but autonomous machines, which is embedded within a very large industry making cars, trucks, forklifts, robots and other machinery. The forklift market is north of $60 billion this year, the robotic market is somewhere around $40 billion to $75 billion, depending on how you characterize it, and cars and trucks will probably account for somewhere around $1,400 billion if you poke around the Internet and look for stats. So the total addressable market for Nvidia is the compute, visualization, networking, and software part of these markets, which together comprise around $5.7 trillion in spending this year. Within that, and what Nvidia is really carving out in its data, is the much smaller $1 trillion opportunity it is chasing within these industries where compute and visualization are needed. One might properly call this Nvidia's serviceable addressable market, but this is really somewhere between TAM and SAM. Call it TAM, we guess.
Nvidia has spoken a lot about this $1 trillion opportunity, and here is how Kress broke it down in 2023:
Now, if you divide Nvidia's projected revenues for calendar 2023 of somewhere around $50 billion by this TAM of $1 trillion, that is 5 percent of the TAM as we have defined it. That sure doesn't feel like anything close to monopoly power. But if we restricted the actual TAM and SAM comparison to all accelerated computing and its necessary networking, visualization, and graphics, you would find that Nvidia has a dominant share. Our guess is more than 50 percent but less than the 85 percent threshold that antitrust regulators tend to use to determine if a company is a monopoly or not.
And once again, we are not against monopolies, which we think are economic as well as technical situations that arise naturally after a certain amount of consolidation in any industry. We are, however, against unbridled and unregulated monopolies after a certain amount of economic pain from customers and potential competitors is reached. Economic substitution is how we all get around such pain, but it is sometimes not possible to do it at any scale. Like, for instance, when it comes to GPU compute in the datacenter right now. But competitors are ramping up and getting better, and there will be substitution. Fear not.
The $1 trillion referenced above is not the same $1 trillion that Huang & Co talk about when they are discussing the datacenter infrastructure installed base.
Now, this chart says datacenter installed base, but it appears to be plotting out datacenter revenues for all kinds of things. We doubt very much that this chart is gauging the net present value of datacenter gear by adding new gear going into the datacenter, subtracting the value of the machines that are retired, and then plotting then-current server CPU revenues and Nvidia datacenter revenues against that. This had better be all revenues per year for the categories shown.
Nvidia does say that the spending for server CPUs, servers, network infrastructure, and other datacenter infrastructure is on a calendar year and that the Nvidia datacenter spending is on the nearest calendar year. (Nvidia's fiscal 2023 ends in January 2023, and is almost the same thing as calendar 2022, to say it in plain American English.)
If you do the math on that chart above (by measuring the lines with a ruler on the Y axis because there are no dollar or percent figures given), then in fiscal 2024/calendar 2023, Nvidia's share of datacenter spending is somewhere around 15 percent.
You will also note in the chart above that server CPU revenues have taken a downward trend (send AMD thank you notes, as well as the Arm collective) while server revenues are up modestly as AMD has doubled its revenues, more or less. We see here, as we have shown before, how much GPU-accelerated server spending is propping up the server market.
The big story after the GPU craze is networking, of course, and AI has been very, very good to InfiniBand. Like this:
Based on our model, we think networking represented about $410 million in revenues in Q1 F2021 for Nvidia, and that it grew to $1.64 billion in Q2 F2024 ended in July of this year. If you use the 4X for overall networking growth and the 7X for InfiniBand growth, and measure the share on the bars in the chart, that means the share of Ethernet versus other on the Q1 F2021 stacked bar is not right. It has to be almost perfectly evenly split to have InfiniBand grow by 7X and overall networking grow by 4X. This implies that InfiniBand drove $1.41 billion in sales in Q2 F2024 for Nvidia, but Ethernet And Other drove only $232 million, up 1.1X from Q1 F2021. (We wonder where NVSwitch is accounted for here? Hopefully in Other.)
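To make that arithmetic explicit, here is a quick back-of-the-envelope check in Python (a sketch using only the figures quoted above; the implied InfiniBand figure of about $1.41 billion comes from the text and everything else is derived from it):

```python
# Back-of-the-envelope check of the InfiniBand versus Ethernet split,
# using only the figures quoted in the text above.
total_q1_f2021 = 0.410   # total Nvidia networking revenue in Q1 F2021, $B
total_q2_f2024 = 1.64    # total Nvidia networking revenue in Q2 F2024, $B (~4X growth)
infiniband_growth = 7.0  # InfiniBand is said to have grown 7X over the same period
ib_q2 = 1.41             # implied InfiniBand revenue in Q2 F2024, $B

ib_q1 = ib_q2 / infiniband_growth    # ~$0.20B of InfiniBand in Q1 F2021
eth_q1 = total_q1_f2021 - ib_q1      # ~$0.21B of Ethernet and other, i.e. roughly an even split
eth_q2 = total_q2_f2024 - ib_q2      # ~$0.23B in Q2 F2024, i.e. only ~1.1X growth

print(f"Q1 F2021 split: InfiniBand ${ib_q1*1000:.0f}M vs Ethernet/other ${eth_q1*1000:.0f}M")
print(f"Q2 F2024 Ethernet/other: ${eth_q2*1000:.0f}M ({eth_q2/eth_q1:.1f}X growth)")
```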
The only thing growing faster than the datacenter networking business at Nvidia over the same timeframe is the rest of the Nvidia datacenter business, which has grown by a factor of 11.9X since the Mellanox acquisition and which drove $8.68 billion in sales in Q2 F2024.
Nvidia is clearly the leader of accelerated computing, and it probably has another 12 to 18 months of being the undisputed revenue and mindshare leader. But, others are coming with new technologies and lower prices, and this is as unavoidable as the rain that eventually comes to spur on new growth.
Visit link:
Everyone Is Chasing What Nvidia Already Has - The Next Platform
What is the Internet of Things (IoT) – IoT Business News
The Internet of Things (IoT) has become a ubiquitous term in the realm of technology and innovation, promising to reshape the way we interact with our environment and the devices that surround us. This article aims to provide a comprehensive understanding of what IoT is, its core concepts, its impact on various industries, and its implications for our daily lives.
At its core, IoT is a network of interconnected physical objects, devices, vehicles, buildings, and even people that are embedded with sensors, software, and network connectivity. These smart objects can collect and exchange data, facilitating the seamless integration of the physical and digital worlds. The key elements that define IoT are:
1. Connected Devices: These can range from household appliances and wearable fitness trackers to industrial machinery and autonomous vehicles. Any physical object can become a part of the IoT when it's equipped with sensors and connectivity.
2. Sensors: Sensors are the workhorses of IoT. They gather data from the physical world, measuring everything from temperature and humidity to motion and location. These sensors act as the eyes and ears of IoT devices.
3. Connectivity: Data collected by IoT devices is transmitted over networks, which can be wired or wireless. The data is then sent to cloud-based platforms for storage, analysis, and interpretation.
4. Cloud Platforms: Cloud platforms play a vital role in IoT by storing and processing the immense volume of data generated by IoT devices. They also provide a centralized location for data analysis and management.
5. Data Analysis and AI: The data collected from IoT devices is valuable, but to make it truly useful, it needs to be processed and analyzed. AI (Artificial Intelligence) and machine learning are often used to glean insights from this data.
IoT devices work together in a structured ecosystem to collect, transmit, and interpret data. Here's how IoT typically functions (a short device-side sketch follows the list):
1. Data Collection: Sensors in IoT devices gather data from their surroundings. This can include temperature readings, location data, air quality measurements, or any other relevant information.
2. Data Transmission: The collected data is transmitted to a central server or cloud platform through wired or wireless networks. This data is often sent in real-time to ensure up-to-the-minute insights.
3. Data Storage and Analysis: The data is stored on cloud servers and is processed and analyzed using AI algorithms and machine learning. This stage is crucial for transforming raw data into actionable information.
4. User Interface: The processed data can be accessed by users through web applications, mobile apps, or other interfaces. This is where individuals or organizations can view, manage, and make decisions based on the IoT data.
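As a concrete illustration of this flow, the short sketch below shows how a device-side client might perform steps 1 and 2, leaving storage, analysis and the user interface (steps 3 and 4) to the cloud platform. It is a minimal sketch: the ingestion URL, device ID and read_temperature() stub are hypothetical placeholders, and it assumes the requests library is installed.

```python
import json
import random
import time

import requests

INGEST_URL = "https://iot.example.com/v1/telemetry"  # hypothetical cloud ingestion endpoint
DEVICE_ID = "warehouse-7-temp-01"                    # hypothetical device identifier


def read_temperature() -> float:
    """Stand-in for a real sensor driver; returns a simulated reading."""
    return 20.0 + random.uniform(-2.0, 2.0)


while True:
    # Step 1: data collection from the (simulated) sensor.
    reading = {
        "device_id": DEVICE_ID,
        "temperature_c": round(read_temperature(), 2),
        "timestamp": time.time(),
    }
    # Step 2: data transmission to the cloud platform, which then handles storage
    # and analysis (step 3) and exposes results to dashboards and apps (step 4).
    response = requests.post(
        INGEST_URL,
        data=json.dumps(reading),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    time.sleep(60)  # send one reading per minute
```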
IoT has had a profound impact on various industries, offering innovative solutions and revolutionizing processes. Some key sectors that have been transformed by IoT include:
In healthcare, IoT devices are used for remote patient monitoring, medication adherence, and even surgeries performed by robotic arms. These technologies improve patient outcomes and provide real-time health data to medical professionals.
IoT-enabled sensors in agriculture can monitor soil conditions, weather, and crop health. This data allows farmers to optimize irrigation, minimize waste, and increase crop yields.
IoT has ushered in the era of smart factories. Machines equipped with sensors communicate with each other and with a central system, allowing for real-time monitoring, predictive maintenance, and efficient production.
IoT plays a crucial role in the development of autonomous vehicles. Sensors and connectivity enable self-driving cars to navigate roads safely and efficiently. Additionally, IoT has transformed logistics and fleet management, improving route optimization and reducing fuel consumption.
IoT applications in smart cities include intelligent traffic management, waste management, energy consumption optimization, and public safety enhancements. These solutions make urban living more sustainable and efficient.
IoT has introduced innovations like smart shelves that monitor inventory, personalized marketing through beacons, and cashier-less stores. These developments provide a seamless and personalized shopping experience for customers.
IoT's impact isn't limited to industries; it has made its way into our daily lives. Here are some ways in which IoT has transformed our routines:
IoT devices like smart thermostats, lights, and security systems enable homeowners to control and automate various aspects of their homes remotely. Voice-activated virtual assistants like Alexa and Google Assistant make it even easier to manage these devices.
From fitness trackers to smartwatches, wearables use IoT technology to monitor and relay health data, track physical activity, and provide timely reminders.
Refrigerators, ovens, and washing machines equipped with IoT capabilities can communicate with users' smartphones, offering convenience and efficiency.
IoT security cameras and sensors enable homeowners to monitor their properties remotely. Alerts are sent to their phones in real-time, providing peace of mind.
IoT allows businesses to offer personalized services based on customer behavior and preferences. This can include location-based recommendations, customized marketing, and tailored product offerings.
The Internet of Things (IoT) is at the forefront of technological innovation, and its future holds exciting possibilities. As IoT continues to evolve, several key trends are emerging, which promise to reshape the landscape of connected devices and data-driven applications. In this article, we will explore these future trends of IoT and their potential impact.
One of the most significant trends in IoT is the convergence with edge computing. Edge computing involves processing data closer to the source, at the edge of the network, rather than sending it to centralized cloud servers. This trend significantly reduces latency, allowing for real-time decision-making. IoT devices, equipped with edge computing capabilities, can make faster and more localized decisions, leading to more efficient and responsive systems. This trend is particularly crucial for applications that require low latency, such as autonomous vehicles and industrial automation.
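As a simple illustration of the difference, the sketch below has an edge device evaluate readings locally and forward only the events that need attention, rather than streaming every sample to the cloud. It is a minimal sketch: the readings, threshold and send_to_cloud() helper are hypothetical.

```python
from statistics import mean

# Hypothetical stream of vibration readings from a machine-mounted sensor.
READINGS = [0.41, 0.39, 0.44, 1.92, 0.40, 0.43]
ALERT_THRESHOLD = 1.5  # hypothetical limit chosen for this example


def send_to_cloud(event: dict) -> None:
    """Placeholder for an upload call; a real device would POST this to a cloud API."""
    print(f"uploading to cloud: {event}")


# Edge processing: decide locally, upload only what matters.
window = []
for value in READINGS:
    window.append(value)
    if value > ALERT_THRESHOLD:
        # Only anomalies (plus a little context) cross the network, cutting both
        # the latency of the alert and the bandwidth consumed by the device.
        send_to_cloud({
            "type": "vibration_alert",
            "value": value,
            "recent_average": round(mean(window), 3),
        })
```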
The rollout of 5G networks promises to impact IoT. The higher bandwidth and lower latency of 5G will provide faster and more reliable connectivity to IoT applications. With 5G, time-critical IoT applications can operate more efficiently, making them more capable and responsive. From smart cities to healthcare and transportation, 5G will enable a wide range of IoT use cases that were previously limited by connectivity constraints.
The integration of artificial intelligence (AI) with IoT is already transforming the capabilities of connected devices. AI-powered IoT systems can analyze and interpret data, make predictions, and adapt to changing conditions. This integration enhances the intelligence and automation of IoT devices, enabling them to learn from data and make decisions without human intervention. For instance, AI-powered cameras in smart homes can identify security threats or monitor energy usage to optimize efficiency.
As the IoT ecosystem continues to expand, security and privacy concerns are becoming increasingly important. With more devices connected to the internet, the attack surface for cyber threats widens. Ensuring robust cybersecurity measures is essential to protect sensitive data and devices. Privacy regulations and user controls are also playing a more significant role in shaping the way IoT data is collected and managed. These considerations are critical to maintaining user trust and the long-term success of IoT applications.
Sustainability is a growing trend in the IoT landscape. As environmental concerns become more prominent, the development of low-power IoT devices and sustainable manufacturing practices is a top priority. Green IoT solutions aim to minimize the environmental impact of IoT technology. These efforts include using renewable energy sources, designing energy-efficient devices, and promoting eco-friendly manufacturing processes.
To realize the full potential of IoT, interoperability and standardized communication protocols are essential. Future IoT systems must seamlessly connect and communicate with various devices and platforms. The lack of interoperability can be a significant barrier to IoT adoption. Industry standards and collaborations will play a key role in achieving this level of compatibility, ensuring that IoT devices and ecosystems work together seamlessly.
In conclusion, the future trends of IoT promise to revolutionize the way we interact with technology and the world around us. Edge computing, 5G connectivity, AI integration, security measures, sustainability, and interoperability are all poised to play a pivotal role in shaping the IoT landscape of tomorrow. As IoT continues to mature and expand, its impact on industries and daily life will only grow, offering innovative solutions and making our world more interconnected and efficient.
The rest is here:
What is the Internet of Things (IoT) - IoT Business News
Cybersecurity in packaging: Q&A with GlobalData thematic analyst – Packaging Gateway
Shabnam Pervez is a thematic analyst at GlobalData. She has been working on the thematic team for over three years, focusing on emerging technology trends in a myriad of different sectors.
Shabnam Pervez: Major cybersecurity challenges within the packaging industry arise from digitalisation. As in other sectors, technology is disrupting the packaging industry. Various technologies can be integrated throughout the value chain to automate processes and gain better insights into customer behaviour. However, understanding and effectively utilising these technologies pose a significant challenge for packaging industry players who are under pressure to keep up with the rapidly changing technological landscape.
Additionally, the future of work presents another challenge as employment dynamics shift. Many businesses are experimenting with hybrid and flexible work strategies, enabled by collaboration tools and cloud computing. These technologies allow employees to work remotely, reducing travel expenses and improving work-life balance. This shift also allows companies to downsize office spaces, lowering real estate costs. However, transitioning to remote and hybrid work models introduces new security challenges as corporate IT has less control over user actions, devices, and software. The Covid-19 pandemic has further accelerated the trend of remote servicing of packaging machinery.
Lastly, businesses across all sectors face the issue of Environmental, Social, and Governance (ESG) considerations. Investors, governments, employees, and customers share the common goal of becoming more sustainable. Companies that prioritise sustainability will reap financial benefits, such as increased share prices and improved customer and partner loyalty. On the other hand, those that neglect sustainability will struggle. In the packaging industry, companies face mounting pressure to use environmentally friendly packaging materials, reduce single-use plastics, and address their carbon emissions. Government regulations, like the UK's Circular Economy Package, dictate sustainability standards favouring recycling and the circular economy. However, ESG encompasses more than just environmental aspects; companies also have a responsibility to care for their customers and employees while ensuring robust corporate governance structures.
Shabnam Pervez: Packaging companies should prioritise cybersecurity across their entire value chain. As the packaging industry undergoes digital transformation, companies at every stage of the value chain are recognising the vulnerabilities present in their supply chains. To address these concerns, they are collaborating with cybersecurity firms to find solutions. Investments in cybersecurity for the packaging industry should focus on essential areas such as threat detection and response, cloud security, data security, vulnerability management, post-breach response services, and risk and compliance. With the increasing use of cloud servers for delivery management and the storage of large amounts of virtual data, packaging companies face the risk of supply chain sabotage if they fail to implement effective cybersecurity measures.
Shabnam Pervez: The manufacturing sector is undergoing rapid digitalisation. According to a survey conducted by the Packaging Machinery Manufacturers Institute (PMMI) in March 2021, approximately 79% of companies have integrated smart technologies into their processing lines, 64% on their assembly lines, and 60% at the end-of-line packaging stage. These numbers are expected to significantly rise as the industry continues to explore and adopt new technologies.
Shabnam Pervez: Certainly not. As per IBM, the manufacturing industry is the second most targeted sector by cybercriminals, yet it remains one of the least prepared. The packaging process, which often occurs as the final step in manufacturing, relies heavily on automated machinery, accounting for approximately 60% of end-of-line packaging. The Packaging Machinery Manufacturers Institute (PMMI) reports that in 2020, a staggering 88% of small business owners in the manufacturing sector felt their businesses were susceptible to cyberattacks. Furthermore, 28% of all cybersecurity breaches targeted small businesses, and a distressing 10% of these small businesses were forced to permanently close their doors as a result of such breaches. Consequently, small packaging companies must prioritise the implementation of suitable cybersecurity measures wherever feasible, given their heightened vulnerability as frequent targets of attacks.
Original post:
Cybersecurity in packaging: Q&A with GlobalData thematic analyst - Packaging Gateway
Growing AI computing demands result in major improvements in … – DIGITIMES
The LLM behind generative AI requires processing massive amounts of data for training and faster transmission, which has presented lucrative business opportunities for manufacturers. President of ASUS Cloud and Taiwan Web Service Corporation (TWSC) Peter Wu highlighted the distinctions between traditional AI and generative AI by referring to the two as AI 1.0 and 2.0 respectively.
Wu pointed out that AI 1.0 involves building specific models through supervised learning and requires project creation and labeling. This process needs to be repeated for different models and can only solve specific problems. On the other hand, the generative AI of AI 2.0 is different. Once the model is built, it can learn autonomously. From an application perspective, generative AI is also smarter and more versatile.
The volume of training data that needs to be processed for AI 2.0 is also significantly larger than for previous AI models. Past models had parameters in the tens of thousands, while generative AI now demands tens of billions of parameters. Wu pointed out that many people will feel that the machine is smart when using ChatGPT. This is because once a model surpasses the 40 to 60 billion parameter threshold, it reaches a kind of "enlightenment."
OpenAI's GPT-3 has 175 billion parameters, and GPT-3.5 has around 200 billion parameters. GPT-4's LLM was announced on March 14, 2023, but it didn't disclose its parameter count. Some estimations suggest that GPT-4's parameters could be several times larger than GPT-3, while some suggest that it has reached one trillion parameters.
To make AI smarter and increase the number of parameters, corresponding improvements in hardware equipment are necessary. Past architectures that required just one GPU or even just a CPU now require parallel processing with hundreds of GPUs. This shift necessitates changes in servers, network switches, and even the entire data center architecture.
When looking purely at server design, in response to the cooling requirements of Nvidia's HGX module, it uses the 3D Vapor Chamber (3D VC) architecture. This architecture has a heatsink height of 3U (13.35cm), meaning that the chassis needs to be 4U or taller. As the chassis height increases, the internal mechanical component design needs to be adjusted to account for factors like airflow and pressure. The chassis's weight capacity and material also need to be reconsidered.
The descriptions above have to do with air-cooling designs. Considering power usage effectiveness (PUE), liquid cooling can be a potential solution. However, liquid cooling would introduce even more significant changes, involving considerations for cold plates, water blocks, pipes, and cooling distribution units (CDUs). Additionally, how would one address the notorious leakage issues for water cooling? The other option is immersion cooling, and that presents its own set of challenges as well.
Apart from mechanical components, power supply also needs to be considered. As power increases, the size of the power supply will also get bigger, but space is limited. Furthermore, with the increase in wattage, the power supply's conversion efficiency will reach titanium levels of 97% or higher. On top of that, not only is the server itself going to use more power but the entire data center's power design will also need to be elevated.
Besides servers, the architecture of network switches also needs to change. Traditional data center architectures use PCIe switches to connect CPU and GPU NICs. However, for data transmission used in AI or machine learning, this architecture will face three limitations: inability to properly expand, lower performance, and higher transmission latency.
Network switch manufacturer Accton pointed out that the new generation of data centers will need a different architecture to optimize the AI and machine learning process. This involves utilizing server fabric accelerators along with an integration of CXL memory technology and NICs. This allows for flexible expansion in AI or machine learning and reduces transmission latency. While this design still needs testing, it is expected to be the direction of AI network architecture.
Go here to read the rest:
Growing AI computing demands result in major improvements in ... - DIGITIMES
The Privacy, Data Protection and Cybersecurity Law Review … – Lexology
This article is an extract from The Privacy, Data Protection and Cybersecurity Law Review, 10th Edition. Click here for the full guide.
I Introduction
As organisations are embracing remote work culture and formulating their strategies based on data-driven analytics to drive productivity, efficiency and revenue growth, so are the cyber threat actors using 'data as a weapon' to increase the impact of cyberattacks and to gain leverage to fulfil their financial motives. Historically, cyber breaches were a matter of discussion for the information security teams working at the heart of organisations to run and manage the digital assets that enable users to perform their duties. But because of increasing breaches, their harsh implications and several strict government regulations, the subject has now also made its way to board level in the majority of organisations.
As governments mandate stricter data regulations and reporting timelines, it becomes essential for organisations not only to know their regulatory obligations but also to prepare for them. It is equally important for the forensic professionals assisting in an investigation to be able to collect and analyse data that enables organisations to make informed decisions while responding to regulators and their customers.
In this publication we present an overview of the data exfiltration aspects seen in the top cyberattacks faced by organisations and the common challenges faced during such investigations.
II Overview of regulations related to cyber breaches in China, including Hong Kong
Before we begin with taking a deep dive into the cyberattacks and analysing the data exfiltration aspect, let us examine briefly the prevailing data-related regulations in China.
i The China Cyber Security Law
This law is formulated to (1) ensure cybersecurity; (2) safeguard cyberspace sovereignty and national security, and social and public interests; (3) protect the lawful rights and interests of citizens, legal persons and other organisations; and (4) promote the healthy development of the informatisation of the economy and society. According to the Cyber Security Law (CSL), the organisations impacted by the breach are required to report and notify relevant authorities and affected data subjects of actual or suspected personal information breaches in a timely manner.
ii China's Personal Information Protection Law (PIPL)
Personal information processors are required to 'promptly' notify relevant personal information protection authorities and data subjects in the event a data incident has occurred or is likely to occur. Administrative fines of up to 50 million yuan or 5 per cent of the previous year's turnover may apply.
iii China's Data Security Law (DSL)
The DSL is applicable to data processing activities carried out within the territory of China and to data processing activities conducted outside China that harm China's national security or the public interest, or the legal interests of citizens and organisations in China. It requires organisations to have incident planning in place: organisations need to immediately remediate incidents, promptly notify relevant individuals, and report such data security incidents to the regulator.
iv Personal Data Privacy Ordinance (PDPO), Hong Kong
The Personal Data (Privacy) Ordinance (Cap. 486) (the PDPO) is a set of laws that is technology-neutral and provides a set of data protection principles outlining how data users should collect, handle and use personal data. Data users are required to take steps to ensure that personal data is protected against unauthorised or accidental access, processing, erasure, loss or use, if data breaches or leaks were to occur.
v Privacy Commissioner for Personal Data, Hong Kong
The Privacy Commissioner for Personal Data (PCPD) recommends filing a data breach notification as the recommended practice for the proper handling of such incidents.
III Analysing data exfiltration aspects of common cyber incidents faced by organisations
Multiple regulations focused on data privacy and breach notification are in place to mandate reporting and to motivate organisations to improve their cybersecurity posture. Against this backdrop, it is worth understanding the various aspects of data exfiltration during typical cyberattacks faced by organisations and the key challenges faced during investigations.
i Data compromise in a business email compromise breach
Email-based attacks have been on a constant rise, with a steeper rise seen during the pandemic. As the situation evolved, the threat actors changed their lures to more relatable pandemic-related themes to bank on the uncertainty and changing anti-pandemic measures, increasing their chances of baiting their victims. Business email compromise is a cyberattack that involves threat actors gaining access to victims' mailboxes and carrying out a financial transaction by hijacking an existing email chain or creating a new one and tricking the victim into making a fund transfer to a bank account controlled by the threat actors. In the process of executing the attack, to reach the final motive of making financial gains, the threat actors modify the bank account details on a legitimate pending invoice and trick the victim into transferring funds to the threat actors' bank account.
While it is clear how the threat actors target and execute the attack, the attack also encompasses several other information discovery steps taken by the threat actors. From our analysis and information collected from public threat reports of multiple similar breaches, we have noted that threat actors, prior to carrying out the final impact of the attack (that is, the financial transaction), gathered and reviewed emails and files that may have contained financial information such as pending bills, customer details, etc. In addition to reviewing data to understand the language, terminology or writing style used within the organisation, they also replicated user mailboxes for offline access and accessed global address lists (GAL) containing contact cards of all employees of the organisation. The details captured by the threat actors from the GAL or from the files and emails accessed may contain details of customers, personal information and sensitive information, and may give the threat actors the ability to carry out further cyberattacks based on the information collected.
The key challenges in the investigations related to business email compromise cases are as follows:
Case study: business email compromise scam causing a financial impact of over US$5 million
We were engaged by one of our clients in mainland China to investigate a business email compromise attack that resulted in payments of over US$5 million in fraudulent transactions. During our investigation, we noted that the threat actors were able to gain access to the mailbox of several finance team members over a period of four months prior to the initiation of the fraudulent transaction. A common technique used by the threat actors during this campaign for maintaining access to information was forwarding a copy of incoming emails to the email controlled by the threat actors by using an email forwarding rule as a method of data exfiltration. During our review, we noted multiple simultaneous ongoing conversation chains hijacked by the threat actors and were nearing agreements on payments which were then stopped by our team, mitigating a cumulative loss of about US$8 million. On the advice of the clients' legal counsel, potential data exfiltrated was reviewed to determine the nature of information exfiltrated and accordingly impacted customers were notified about the information (such as proforma invoices, letter heads with declarations, etc.) that may have been exfiltrated and stored by the threat actors.
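One practical response to the forwarding-rule technique seen in this case is to audit inbox rules across high-risk mailboxes. The sketch below shows one way that could look in a Microsoft 365 environment; it is a minimal sketch rather than the method used in the engagement, and it assumes the Microsoft Graph messageRules endpoint, an already-acquired OAuth access token with mailbox-settings read permission, and hypothetical mailbox names and domain.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<oauth-access-token>"  # placeholder: obtained via the tenant's app registration
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def suspicious_forwarding_rules(user_id: str, allowed_domain: str = "example.com") -> list:
    """Return inbox rules that forward or redirect mail outside the allowed domain."""
    url = f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules"
    rules = requests.get(url, headers=HEADERS, timeout=30).json().get("value", [])
    flagged = []
    for rule in rules:
        actions = rule.get("actions") or {}
        targets = (actions.get("forwardTo") or []) + (actions.get("redirectTo") or [])
        external = [
            t["emailAddress"]["address"]
            for t in targets
            if not t["emailAddress"]["address"].lower().endswith("@" + allowed_domain)
        ]
        if external:
            flagged.append({"rule": rule.get("displayName"), "sends_to": external})
    return flagged


# Example: review the finance mailboxes identified during the investigation (hypothetical names).
for mailbox in ["ap.team@example.com", "treasury@example.com"]:
    for hit in suspicious_forwarding_rules(mailbox):
        print(mailbox, "->", hit)
```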
ii Data compromise in a ransomware breach
Threat actors have been increasingly targeting organisations with ransomware; as part of the attack, they lock the files on the systems and exfiltrate data from the organisation, threatening to make it public unless the organisation pays to avoid a leak. According to Verizon's data breach report 2022,2 ransomware has continued on an upward trend, increasing by 13 per cent. Before exfiltrating data, the threat actors try to identify valuable data in the organisation's network to encrypt and exfiltrate. In a typical ransomware investigation done by our team, we have noted the use of public file storage websites and cloud servers as one of the file storage methods used by the threat actors.
The key challenges faced during investigations of ransomware cases are outlined below.
Inadequate or absence of network and endpoint visibility leading to gaps in timeline analysis
System event logs are available in all operating systems and capture system activity and actions based on the level of verbosity configured. During a forensic review, these logs are used to gather and determine operating-system-level activity, such as exploitation or other malicious actions, and they provide details of when the activity occurred. However, the lack of in-depth visibility (such as the amount of data transferred over the network or the files transferred over USB) leads to gaps in answering questions about what was actually moved. This can be mitigated by deploying external software to collect and monitor such telemetry, because commonly used operating systems do not make these logs available natively.
Unreliable timestamps and file metadata because of encryption
Typically, in a ransomware incident, threat actors exfiltrate data and execute ransomware to encrypt the files, which leads to an update in the file metadata such as 'file modified date'. Due to this filesystem activity, the file metadata becomes unusable for the forensic reviewer and questions such as number of files accessed by threat actors before encryption or files modified before encryption to identify potential malicious activity may not be answered accurately.
Loss of system artifacts because of anti-forensic techniques used by threat actors
Threat actors frequently use anti-forensic techniques to avoid leaving footprints of their actions, to delay the creation of countermeasures by security companies and to evade detection of their malware and activity. From our experience investigating sophisticated attacks, and from analysis of various reputable threat reports, commonly employed anti-forensic measures include clearing of system logs, deletion of malicious files after execution, heavy obfuscation of malicious code and malware capable of self-destructing on receiving commands from the threat actors. These anti-forensic techniques lead to the loss of system artifacts and files, impacting root cause analysis and resulting in gaps when establishing the timeline of events.
The use of an outdated GeoIP database (GeoIP data maps IP addresses to their allocated country IP ranges and, via autonomous system numbers, to the organisations controlling the IP blocks) on firewalls leads to connecting IP addresses being enriched with inaccurate geolocation, ASN organisation and ISP details, which can in turn cause miscalculations during statistical analysis based on these factors. For example, an outdated entry in the GeoIP database for a malicious IP address attributed to a highly reputed ASN organisation or internet service provider can lead to the connection being excluded from further scrutiny by the forensic analyst because of the reputation of the ASN organisation controlling the IP address block (a minimal enrichment sketch is shown after this list of challenges).
The lack of internal network telemetry and NetFlow data leads to gaps in visibility of lateral movement across devices in the network. One of the tactics used by threat actors after gaining access to a compromised endpoint is to carry out several discovery steps to identify data of importance to the organisation within the network and, in some cases, to stage the collected data for exfiltration. Limited details of such connection events are stored in operating system logs for connections made using an operating system's inbuilt functions and services, but these lack overall visibility into the amount of data transferred and the method used for the connections, and may contain no information at all if a custom tool is deployed by the threat actors for data movement and for accessing computer systems within the network.
Law enforcement agencies also face jurisdictional issues in securing access to the cloud servers used by threat actors. From our experience in such investigations, threat actors have been noted using cloud-based systems for hosting command-and-control infrastructure, as data exfiltration destinations, and so on. Where law enforcement action is taken to seize or take down the servers (based on the server IP address) used by the threat actors, agencies often face challenges because of jurisdictional issues and the inherent nature of cloud server provisioning, which enables a cloud service subscription holder to deploy servers at short notice or for a short duration, after which the cloud service provider may assign the same IP address to another customer who may be unrelated to the incident.
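As referenced above, here is a minimal IP-enrichment sketch that also checks the age of the GeoIP database so that stale mappings are flagged rather than silently trusted. It assumes MaxMind's geoip2 Python library and locally downloaded GeoLite2 City and ASN databases; the file paths, freshness window and sample IP address are placeholders.

```python
from datetime import datetime, timezone

import geoip2.database  # assumes the geoip2 package and GeoLite2 .mmdb files are available

CITY_DB = "GeoLite2-City.mmdb"  # placeholder paths to locally downloaded databases
ASN_DB = "GeoLite2-ASN.mmdb"
MAX_AGE_DAYS = 30               # hypothetical freshness requirement for the investigation


def enrich(ip: str) -> dict:
    """Attach country and ASN details to an IP address, flagging a stale database."""
    result = {"ip": ip}
    with geoip2.database.Reader(CITY_DB) as city_reader, \
         geoip2.database.Reader(ASN_DB) as asn_reader:
        build_date = datetime.fromtimestamp(city_reader.metadata().build_epoch, tz=timezone.utc)
        age_days = (datetime.now(tz=timezone.utc) - build_date).days
        result["geoip_db_stale"] = age_days > MAX_AGE_DAYS  # do not over-trust old mappings
        city = city_reader.city(ip)
        asn = asn_reader.asn(ip)
        result.update({
            "country": city.country.iso_code,
            "asn": asn.autonomous_system_number,
            "asn_org": asn.autonomous_system_organization,
        })
    return result


print(enrich("203.0.113.7"))  # documentation/test address used here as a placeholder
```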
Case study: A ransomware incident impacting a client in the mainland China office and Hong Kong
In a recent case investigated by our team, one of our clients in mainland China was impacted by a ransomware incident leading to the encryption of systems across several mainland China offices and the Hong Kong region because of the interconnectivity of the networks. We were engaged as first responders to identify the root cause of the incident, the gaps exploited by the threat actors and any data exfiltration activity. During our review, we identified that initial access was gained by the threat actors using an exposed remote desktop application, which was followed by data exfiltration and the encryption of files as the final impact. During this review, our team carried out digital forensics and identified the threat actors' actions but could not determine the exact number of files accessed by the threat actors, as the metadata was updated as a result of file encryption; a fallback method was agreed after discussion with the client's legal counsel, relying on the assumption that if a system was accessed by the threat actor, its data was considered exfiltrated. In other instances, where our clients had more network visibility and endpoint logs, the data exfiltration analysis has been very accurate, as the logs provide more details of the various processes executed by the threat actors and contain details of the volume of data transferred in terms of packets during the execution of the attack.
iii Data compromise because of cloud misconfigurations
As more and more organisations shift towards adopting cloud infrastructure to achieve scalable operations, utilising cutting-edge interactive web applications built around user behaviour metrics, various threat reports have noted that some aspects of cloud security have been challenging for IT professionals used to traditional on-premises infrastructure, which gave them more control. In the recent IBM Cost of a Data Breach Report 2022,3 the cost of breaches resulting from cloud misconfigurations totalled US$4.14 million. Cloud misconfiguration can result in data loss and in access to sensitive or personal information, credentials or API keys, which can in turn be used to further access computer systems in the IT environment.
Key challenges faced during investigation are outlined below.
Lack of application logs
Application logs for a web application capture details of system events and actions performed by the users depending upon the configuration. The details captured in the logs can be a useful source to determine the impact of the malicious actions performed in addition to the logs from the webserver or load balancer, which capture limited details of interaction with the application based on the web requests and not the details of the events themselves in the web application.
Lack of cloud server logs for extensive periods
As organisations move towards greater digitalisation and use cloud systems to reduce overheads and automate workflows, usage of cloud systems increases. Cloud systems are natively designed around optimising performance and tend to provide limited storage space to manage cost. Because of this limited storage, gaps in the technical understanding of administrators, and the fact that other types of data such as databases or code are also stored on the same systems, the logs of such systems are generally kept for only short durations to make the best use of storage, leading to logs being available for only a limited period.
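One common mitigation for short retention windows is to ship application logs off the server to durable, low-cost object storage on a schedule, so the evidence available to an investigation is not bound by the server's local disk. A minimal sketch of that idea follows, assuming AWS S3 and the boto3 library; the bucket name and log directory are hypothetical.

```python
import gzip
import shutil
from datetime import datetime, timezone
from pathlib import Path

import boto3  # assumes AWS credentials are already configured for this server

BUCKET = "example-log-archive"    # hypothetical destination bucket
LOG_DIR = Path("/var/log/myapp")  # hypothetical application log directory


def archive_logs() -> None:
    """Compress the current application logs and copy them to durable object storage."""
    s3 = boto3.client("s3")
    stamp = datetime.now(tz=timezone.utc).strftime("%Y-%m-%d")
    for log_file in LOG_DIR.glob("*.log"):
        archive = log_file.with_suffix(".log.gz")
        with open(log_file, "rb") as src, gzip.open(archive, "wb") as dst:
            shutil.copyfileobj(src, dst)
        # One prefix per day makes it easy to pull back logs for any period later.
        s3.upload_file(str(archive), BUCKET, f"{stamp}/{log_file.name}.gz")


if __name__ == "__main__":
    archive_logs()  # typically run daily from cron or a scheduler
```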
Difficulties in forensic image creation of the storage as compared with traditional hard drives
Cloud system (server) file storage differs in architecture from the traditional hard drives found in computer systems. The forensic preservation of cloud server storage poses a variety of challenges for forensic analysts seeking to collect images efficiently without compromising the integrity of the evidence files. Some cloud infrastructure providers may offer methods to download the existing operating system image as a virtual machine as part of their backup functionality; this can be used by the forensic analyst as an image because it is system-generated, meaning it cannot be tampered with during creation, and it contains the image hashes. Other cloud infrastructure providers may not offer such backup methods, posing a challenge for forensic collection and limiting the possibility of retrieving deleted evidence, which may in turn impact the investigation.
Case study: Investigation of a compromised web application
We investigated an incident in which an exposed, vulnerable web application was exploited by attackers to gain initial access. The breach was identified through security alerts on the storage raised by the IT security team. Our root-cause analysis found that the application had been vulnerable for at least eight months before it was exploited, but because of log limitations, any earlier security breaches arising from the vulnerability could not be identified, leaving gaps in the investigation and in the mitigation measures that could be taken.
iv Data compromise as a result of insider data theft
Insider data theft is on a constant rise, and with organisations working remotely, instances of such cases are increasing. A recent news story4 also points to ransomware-related threat actors advertising rewards for insiders willing to help the groups introduce malware onto internal systems.
The key challenges are as follows:
v Good practices for mitigating challenges during forensic investigations
To investigate a cyber incident and gain visibility of the actions performed by the threat actors, digital evidence from computer systems and log files serves as a vital source for understanding those actions and determining any data exfiltration activity. We list below some of the best practices for maintaining and preserving this critical digital evidence:
Read this article:
The Privacy, Data Protection and Cybersecurity Law Review ... - Lexology
Keeping up with the demands of the cyber insurance market – Help Net Security
Cyber insurance has been around longer than most of us think. When American International Group (AIG) launched the first cyber insurance policy in 1997, it stepped into completely unknown territory to gain market share. Now, 26 years later, cyber insurance has made the transition from an idea pitched to 20 people, to a strategic necessity, largely driven by executive leadership.
Although cyber insurance could still be considered a nascent area of the market, getting a policy is now a priority for many company boards to reduce any financial losses resulting from a security incident and reassure stakeholders and investors.
There's also clear evidence that companies increasingly rely on their policies. New research from Delinea found that almost half (47%) of businesses used their policy more than once in the last year, up seven percentage points from 40% the previous year. With the average data breach now costing $4.45 million, insurance can provide crucial support in covering payments for legal services, remediation and investigations.
While insurers are bringing new products to the market, they are increasingly tightening the requirements for prospective and existing policy holders for the cyber risks they underwrite, asking organizations to demonstrate a high level of security preparedness to gain coverage.
In this scenario, thorough planning ahead of the application process ensures that organizations are in the best position to get coverage and reap the benefits of their policy. So, what are the priorities and the key security factors at play to ensure organizations can improve their chances of qualifying?
While in the UK adoption rates of cyber insurance vary significantly depending on the size of an organization, the US has seen a notable spike in demand across many markets for the past two years, with premiums increasing by 50% in 2022, in large part due to increasing ransomware attacks. And as more organizations seek cyber insurance policies to act as a financial safety net, it has been estimated that the global cyber insurance market could double in value to reach $40.3 billion by 2027.
On one side, these figures indicate that more companies are proactively taking steps to shield their businesses; on the other, they also show that prices are rising sharply. After initial fierce competition among insurers vying to offer the most attractive terms to potential clients, providers have been learning from their data and their losses and have started to reduce their risk exposure. They are scrutinizing applications deeply and increasing the number of requirements needed to secure a policy at a reasonable premium.
Our own research has shown that the number of organizations requiring six months or more to qualify for cover has been growing steeply, as has the list of exclusions that can make cyber insurance coverage void. This is the case not only for new applicants, but also for companies wanting to renew their policies, who should be more aware of the fine print and have clear insight into what they are covered for, and when they can and cannot make a claim.
As underwriters get to grips with the complexity of cybersecurity, comprehensive cyber risk governance from applicants has become a prerequisite for coverage.
Insurers in the US, for example, are increasingly referencing the NIST cybersecurity framework when determining policy requirements. As such, there are several key areas that organizations should focus on to improve their chances of securing a premium.
Businesses must thoroughly understand their specific cyber risks before seeking coverage. This means conducting detailed cybersecurity risk assessments to pinpoint vulnerabilities and determine their organization's cyber risk tolerance.
Insurers expect organizations to demonstrate robust measures such as malware defense layers and a clear data security and maintenance strategy to protect their critical assets. Identity security is particularly important, with just under half (49%) of companies in Delinea's research reporting that Identity and Access Management (IAM) and Privileged Account Management (PAM) controls were required by their policies.
IAM and PAM give organizations greater visibility and control over how accounts are used and how identities are behaving and accessing systems, a critical capability when so many attacks focus on exploiting identity. Controls such as Multi-Factor Authentication (MFA) should likewise be in place as a standard.
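By way of illustration only, time-based one-time passwords are one widely used MFA factor. The minimal sketch below uses the third-party pyotp library; it is not tied to any particular insurer's requirements, and the names and the library choice are assumptions:

    import pyotp  # third-party library: pip install pyotp

    # In practice the secret is generated once per user at enrolment and stored server-side.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # URI that an authenticator app can scan as a QR code (name and issuer are examples).
    print(totp.provisioning_uri(name="j.smith@example.com", issuer_name="ExampleCorp"))

    # At login, the six-digit code typed by the user is checked against the expected value.
    user_code = totp.now()  # stand-in for the code the user would actually enter
    print("MFA check passed:", totp.verify(user_code))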
Cyber insurers also prioritize an organization's capability to detect risks and breaches, especially those involving endpoints like laptops and cloud servers. This means that advanced security tools that can deliver timely detection and response to incoming security threats, as well as comprehensive monitoring and alerting systems for potential misuse on workstations and servers, are vital for protecting the company and getting coverage.
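As a toy illustration of the kind of monitoring and alerting rule meant here, the sketch below counts failed SSH logins per source address in a syslog-style auth log; in practice this role is filled by EDR or SIEM tooling, and the threshold and log path are arbitrary:

    import re
    from collections import Counter

    THRESHOLD = 5  # alert if one source exceeds this many failures (arbitrary)
    PATTERN = re.compile(r"Failed password for .* from (\S+)")

    def failed_login_sources(log_path):
        """Count failed SSH logins per source IP in a syslog-style auth log."""
        counts = Counter()
        with open(log_path, errors="ignore") as fh:
            for line in fh:
                match = PATTERN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for ip, hits in failed_login_sources("/var/log/auth.log").items():
            if hits >= THRESHOLD:
                print(f"ALERT: {hits} failed logins from {ip}")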
Additionally, insurers pay close attention to incident response plans, anticipating a robust strategy that aligns IT, security, and developers for a swift, effective reaction to cyber threats. Devising thorough plans, with role checklists and response measures, and organizing regular simulation exercises will enhance organizations' incident readiness and show insurers that they are genuinely prepared.
Finally, post-attack recovery plans also play a significant role in coverage viability. Insurers rigorously evaluate how an organization plans to restore operations after a breach, and how it will use the incident to reassess its strategy and as a learning opportunity for all employees to improve the company's security posture.
Cyber insurance is becoming an indispensable asset with cyber risk levels showing no sign of abating. However, obtaining this coverage requires more than just filling out an application. Businesses must demonstrate a proactive approach to cybersecurity, aligning with industry best practices and frameworks. As the cyber insurance market evolves, one thing remains clear: thorough preparedness is not just a requirement; it is a necessity.
Original post:
Keeping up with the demands of the cyber insurance market - Help Net Security
Six To Try: Salad Servers That Make Even Iceberg Lettuce Look … – Broadsheet
Cutipol Goa salad set
Designed by José Joaquim Ribeiro and made by a family-owned business in the village of São Martinho de Sande in Portugal, Cutipol's Goa salad set features a brushed stainless-steel spoon and fork with contrasting resin handles. Slim and minimalist, they'll add a touch of effortless chic to any refined dining experience. $98
@mudaustralia
Melbourne-based homewares label Fazeek has a track record of creating curvaceous colour-loving glassware. Following suit, its salad servers are made of hand-blown borosilicate glass, with elongated handles that lead to softly rounded spoons. Available in three playful colourways: pink and green, clear and amber, and lilac and teal. $89
Handmade from resin in its Strawberry Hills studio, iconic Sydney label Dinosaur Designs' stone servers come in 26 striking hues, from solid colours to textured, marble-like swirls. Go for hot-pink Flamingo, the dappled greys of Abalone, or Sky, a mix of blue and white inspired by the vapour of early evening skies. $125
@dinosaur_designs
Loosely resembling the claws of a friendly lobster, cult Sydney-born label Maison Balzac's Cloud serving spoons are made of bamboo polymer with a soft, matte finish. Lightweight yet highly durable, they're also biodegradable and designed to add a bit of whimsy to your table. Available in sky blue and beige. $49
@maisonbalzac
Known for Aussie-style understated luxury, Country Road's approach to salad servers is unsurprisingly chic. The Nolan features a minimalist design with tapered handles and sleek spoons, a timeless style for seasons of dinner parties. Made of high-quality stainless steel in three neutral finishes: soft gold, brushed silver and shiny graphite. $59.95
@countryroad
Wooden salad servers are always a classic. Turkish-inspired, Sydney-based label Saarde's are handcrafted using natural olive wood from pruned branches and dead trunks. With a simple, pared-back design, the wood's textured character does most of the heavy lifting, style-wise. Each piece gets darker and richer in colour as it ages. $45
@saardehome
Looking for more carefully curated buys for your home? Read here.
This article first appeared in Domain Review, in partnership with Broadsheet.
Continued here:
Six To Try: Salad Servers That Make Even Iceberg Lettuce Look ... - Broadsheet
How to Block Ads on Hulu in 2023 [Stop Hulu Ads Easily] – Cloudwards
Key Takeaways: How to Stop Hulu Ads
If you spend a lot of time watching content on Hulu and you're not paying for the more expensive, ad-free plans, you're probably annoyed with just how many ads show up. Users of the more affordable, ad-supported plans may be tempted to upgrade just for this alone, but knowing how to block ads on Hulu doesn't have to cost you anything.
Even though there are ways to get Hulu in Canada and other countries, the streaming service is only officially available in the United States and Japan. You can get around that limitation with a VPN, though; ExpressVPN is our favorite. There are a few Hulu plans to choose from, but save for the on-demand ad-free plan, all of them are ad-supported to some extent.
If you want to block Hulu ads and enjoy your affordable Hulu subscription, there are a few ways to go about it. We'll show you how to block Hulu ads for the on-demand content, but keep in mind that there's no way to avoid commercials on live TV channels.
Hulu has several subscription tiers: some only offer on-demand content, while others include live TV as well. If you only need on-demand content, the more affordable plan is ad-supported. The more expensive plan removes all ads, though it excludes several shows that still play ads before and after the actual content.
If you're looking at the Hulu + Live TV plans, unfortunately all of them have annoying ads. The most affordable plan has ads on the on-demand content, as well as the Disney and ESPN packages. You can choose a more expensive plan that removes ads from the on-demand content, but you'll still have ads with ESPN Plus.
As with any other live TV provider, commercials on live TV cannot be avoided. This is why, even if you opt for Hulu's ad-free on-demand plan, you will still get commercials when watching live TV channels.
To add to this, Hulu also bundles Disney+ and ESPN+ content with its live TV plans. While you can get the on-demand content and Disney+ with no ads, you can only get ESPN+ content with ads.
While you can block ads on Hulu with a regular ad blocker, you can also take things a step further in the security department and use a VPN ad blocker. Not only do you get the benefits of an ad blocker, but you also enjoy a private, encrypted connection. Here is how to use one, with NordVPN as an example.
Go to NordVPN's website and click "Get NordVPN." Follow the steps to sign up for an account, then download and install NordVPN on your device of choice.
Using the credentials from step one, launch NordVPN on your device and log in.
In the left-side menu, open Threat Protection. Click the toggle to enable it.
Since Hulu is only available in the U.S. and Japan, choose one of those locations to connect to. From the list of servers to the left, select a server and wait for the VPN to connect.
Once you have an active connection to the VPN server, open your browser or Hulu app, log in with your U.S. or Japanese account and enjoy your favorite content ad-free.
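For readers on Linux, the same steps can also be scripted around NordVPN's command-line client. The sketch below is a rough illustration and assumes the CLI is installed and you have already logged in; the threat-protection setting name varies between client versions (older releases call it cybersec), so treat the exact flag as an assumption:

    import subprocess

    def nordvpn(*args):
        """Run a NordVPN CLI command and print whatever it returns."""
        result = subprocess.run(["nordvpn", *args], capture_output=True, text=True)
        print(result.stdout.strip() or result.stderr.strip())

    # Assumes `nordvpn login` has already been completed interactively once.
    nordvpn("set", "threatprotectionlite", "on")  # setting name assumed; older clients used "cybersec"
    nordvpn("connect", "United_States")           # or "Japan" for the Japanese library
    nordvpn("status")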
Not all VPNs come with an ad blocker, and not all VPN ad blockers are effective with Hulu. Here are the three we've tested and confirmed do a great job, starting with NordVPN.
NordVPN's map UI is a breeze to use on desktop, but the same can't be said for the mobile apps.
NordVPN is the best way to enjoy the added security of a VPN while streaming Hulu content without ads. The VPN has plenty of servers in the U.S., allowing you to connect to one that is close to you for the best possible performance. It also has servers in Japan, in case you want to watch Japanese content.
The built-in ad blocker is highly configurable and lets you choose exactly what you want the VPN to block. You can use it on both desktop and mobile devices, so you can enjoy the Hulu video library on the go, too. Keep in mind that the mobile UI isn't the easiest to use and may take some getting used to due to its map interface.
The security and performance are excellent, as noted in our NordVPN review. Pair all of that with affordable long-term pricing and a 30-day money-back guarantee, and you've got an excellent VPN for a Hulu no-ads experience.
Surfshark puts its static IP, MultiHop and dedicated IP address servers in separate lists, letting you choose the exact type of server you want to tunnel through.
If you're watching Hulu on multiple devices at the same time, Surfshark's unlimited simultaneous connections can come in quite handy. It also has servers in both the U.S. and Japan, so you can pick and choose which one of Hulu's content libraries you want to watch. You can find out more about it in our Surfshark review.
The built-in ad blocker, called Clean Web 2.0, does a great job with Hulu ads, blocking most of them before they can even load. Unfortunately, the improved 2.0 version is only available if you use the ad blocker as a browser extension. If you install it on a desktop or mobile device, you can only use the older version, which is slightly less effective.
Surfshark's performance is pretty good, with high speeds but sometimes inconsistent latency, which may lead to buffering issues every once in a while. We discussed this in our fastest VPNs roundup. If that's not a big concern for you, Surfshark is a great VPN to consider for a Hulu no-ads experience, especially with its affordable pricing and 30-day money-back guarantee.
Proton VPN's UI does remind us of someone else, but we have to say, it's overall remarkably easy to use.
Proton VPN is a provider we recommend often as an excellent free VPN, but if you want to use the ad blocker, you'll need a paid plan. The VPN has an excellent and very easy-to-use UI on both desktop and mobile, making it a breeze to connect to any of its servers in the U.S. or Japan.
Proton VPN's built-in ad blocker is called NetShield, and it's a highly effective way of watching Hulu shows with no ads whatsoever. It doesn't only work on ads; it can also block trackers, and it comes with a dashboard that shows you how many ads and trackers it has blocked and how much data it has saved.
Proton VPN's Achilles' heel is pricing: even if you opt for the most affordable plan and a two-year subscription, you'll still pay a lot more than you would for Surfshark or NordVPN. You do get a 30-day money-back guarantee, though, in case you find you don't like it. You can find out more about it in our ProtonVPN review.
If you want to block Hulu ads on the ad-supported plan without using a VPN ad blocker, there are several other ways that we've tested. Your experience may vary, as we've found that some of them aren't as consistent as using a VPN when it comes to blocking Hulu ads. They are, however, free, and let you skip Hulu ads, so they're certainly worth considering.
The simplest way to block ads on Hulu is to watch Hulu content in your browser, as opposed to the Hulu app, and use an ad-blocking browser extension. There are extensions for most major browsers and mobile operating systems.
If you only want to block Hulu ads, but not other ads when browsing, you can use Hulu Ad Skipper. This is a dedicated Chrome extension that is made for blocking ads on the Hulu streaming platform. When an ad is detected, the extension will mute it and fast forward through it, letting you enjoy Hulu ad-free. In our experience, it does a great job at moving through ads quickly.
If you want to block Hulu ads while you're watching Hulu on your mobile device, you can use an ad-blocking app that works with all apps on your device. You can try AdGuard, which works on both Android and iOS devices, or you can try watching Hulu using the mobile version of the Brave Browser.
It's worth mentioning that Hulu doesn't like you using ad-blocking apps and extensions, and it's constantly updating its ad-serving methods to bypass them. You may still see an ad or two if Hulu has updated its ads but the extension or app you're using hasn't yet been updated to block the new ad.
If you want, you can also install a dedicated ad-blocking browser. These browsers have built-in ad-blocking functionality, so you don't have to worry about extensions. A great example is Brave Browser, which we reviewed and found to be an excellent alternative to big names like Google Chrome, Mozilla Firefox or Opera.
You can also get creative in your quest to skip Hulu ads and use the two-browser-tabs method. Keep in mind that this requires a fast internet connection, since you'll be watching two streams of your show simultaneously. Here's how to watch Hulu without an ad break using two browser tabs.
First, open Hulu in one browser tab and go to the show you want to watch. Start playing the episode you'd like to watch and pause it right at the beginning. Open a second browser tab and navigate to the same episode of the same show. Play your show in both tabs simultaneously (or as close to it as you can), and mute one of the tabs.
The moment you notice the first ad appear in the tab you're watching, mute that tab and continue watching your show in the second tab. Once you notice more ads in the second tab, mute it and go back to your first tab. This way, you always have the show playing uninterrupted in at least one tab, letting you get rid of ads in a slightly unconventional but very effective way.
Keep in mind that, by doing this, you will end up rewatching parts of the show. The ad length puts you behind on the other tab, which may be a nuisance.
We've saved this one for last because, even though we've had a good experience with the method itself, it doesn't completely get rid of ads on Hulu. On-demand shows on Hulu have both long and short ads, and you get them seemingly at random.
When you're watching a show, if a long ad starts, you can refresh the browser window. In most cases, this allows you to skip the long ad, replacing it with a shorter one that's a bit less annoying. As we mentioned, it's not exactly an ad-free viewing experience, but it gets a little closer.
With Hulu increasing its pricing, we wouldn't blame you for wanting to enjoy ad-free streaming without paying for a more expensive subscription. There are quite a few ways to skip ads on Hulu, from ad-blocking browser extensions like Hulu Ad Skipper to watching on two tabs to avoid pesky ads interrupting your streaming experience.
The best way to get rid of and skip ads on Hulu is to use a VPN ad blocker. In addition to the ad-free experience, you are also getting a secure, encrypted internet connection while you enjoy uninterrupted streaming. NordVPN is the best tool for the job, but Surfshark and Proton VPN are also solid alternatives.
Have you tried to disable ads on the ad-supported Hulu plan? Which of the methods above have you found works best? Let us know in the comments, and as always, thank you for reading.
You can get rid of ads on the Hulu app by either paying for an ad-free plan or by using a VPN with an ad blocker while you watch content.
If you're paying for one of Hulu's ad-free live TV plans, unfortunately there is no way to avoid commercials on the live TV channels. In addition, the bundled ESPN+ content is always ad-supported, regardless of the plan.
If you're currently using an ad-supported Hulu plan, you can upgrade to an ad-free plan in your Hulu account settings.
Yes, Hulu offers several plans, many of which are available without ads. However, you will still have commercials on the live TV channels.
The rest is here:
How to Block Ads on Hulu in 2023 [Stop Hulu Ads Easily] - Cloudwards