Category Archives: Cloud Servers
Hotec EMEA profile: Why the Cheval Collection is moving to the cloud – Hotel Management
HTNG Europe & Hotec EMEA are partnering to convene leading hoteliers, procurement specialists and technology partners from across Europe in a joint event set to run Oct. 30-31 at the Hotel Cascais Miragem Health & Spa in Cascais, Portugal. The two-day event will include hands-on demos of the latest in hospitality technology as well as insights from more than 30 industry leaders.
Ahead of the conference, Mustafa Gokcen, head of IT for the Cheval Collection of serviced residences, shared some insights on the benefits of incorporating technology into hotel operations.
The Cheval Collection includes three brands: Cheval Residence, Cheval Maison and My Locanda. Although the company launched 40 years ago, its recent growth has been rapid: it now has 14 properties in London and Edinburgh, with a number of properties signed for 2024 and 2025. With that rapid growth came a need for overhauling "our technology stack: our digital guest journey, digital platforms, guest-facing technologies, etc.," Gokcen said. The overhaul has included shifting to cloud-based technologies, leveraging artificial intelligence (both guest-facing and back-of-house) and implementing a new app and loyalty program. "So I am busy with the technology roadmap that we will start implementing in Q1 2024."
When considering new technologies, Gokcen said he has to weigh a range of questions: "What are the additional features we can get? How will we get a response to our investment? What [are] the value adds?" Security, redundancy and resiliency are also important factors, he added, as is the overall cost of the new technology under consideration.
"All these systems that we are designing at the moment can quickly increase and decrease resources at a moment's notice," Gokcen said, noting that transitioning to cloud-based technologies has helped bring properties online (metaphorically and literally) much faster. "We actually eliminate that additional lead time, because when a new property opens, the property needs an additional five new servers." By transitioning to the cloud, Gokcen said, he can add capabilities without placing orders, waiting for lead times, dealing with customs and handling invoices. "My cloud environment can grow or shrink anytime I want." This ability, he added, helps the company convert hotels much faster than before, requiring weeks rather than up to a year.
Gokcen attends as many industry events as he can in a bid to catch up with colleagues from other companies. He appreciates the chance to discuss what they are doing, what they are implementing and how they are adapting to the new technology that becomes available. He also enjoys networking with vendors, learning what they have to offer and how hotel companies can benefit from new products and services. "I have a need [to meet with representatives from] almost every category, starting from [property-management systems] to [point-of-sale] systems to networking distributors to cloud distributors and payment gateways." With the Cheval Collection continuing to grow, he added, he hopes to have these conversations sooner rather than later.
Compared to other industries, Gokcen believes that hospitality is reluctant to leverage emerging technologies, which he acknowledged can be due to financial concerns. "We need to be a bit open [to] change," he said, encouraging other industry leaders to think of ways they can implement new technologies in a hospitality environment. New developments can bring benefits, he argued, from financial savings to new features.
HTNG Europe & Hotec EMEA will run Oct. 30-31 at the Hotel Cascais Miragem Health & Spa in Cascais, Portugal. Buyers and suppliers can register here.
Google Cloud C3D Shows Great Performance With AMD EPYC Genoa – Phoronix
Back in August Google Cloud announced the C3D instances powered by AMD EPYC 9004 "Genoa" processors, and only last week was C3D promoted to general availability. Curious about C3D's performance after being impressed by AMD EPYC Genoa bare-metal server performance at Phoronix, as well as by what I've seen with Genoa in the cloud at Microsoft Azure and Amazon EC2 / AWS, here are some benchmarks of the new C3D up against other GCE instances.
In Google's announcement last week, they noted the general-purpose C3D VMs can offer up to 45% performance increases over prior-generation VMs. Given what I've seen out of Genoa and the great Zen 4 benefits like AVX-512 and the transition to DDR5 system memory, these claims aren't all that surprising. Among the specifics Google shared in their announcement were around 54% better NGINX performance, up to 62% better performance for MySQL and PostgreSQL database servers, and up to 60% for in-memory databases like Redis.
Google Cloud offers C3D VMs with up to 360 vCPUs (the C3D vCPUs are made up of a mix of physical cores and SMT sibling threads) and up to 2.8TB of DDR5 memory. For the purposes of this initial Google Cloud C3D benchmarking on Phoronix, I focused on the 60 vCPU C3D VM instance. For this article the following instance types were compared:
c3d-standard-60: The new C3D AMD EPYC Genoa instance with 60 vCPUs, powered by AMD EPYC 9B14.
c2d-standard-56: The prior-generation AMD EPYC Milan instance with the EPYC 7B13 processor. This instance was tested at 56 vCPUs, as there is no 60 vCPU option.
n2d-standard-64: The AMD EPYC Rome-based instance with EPYC 7B12 processors. Here the closest-sized instance was 64 vCPUs.
c2-standard-60: The Intel Xeon Cascade Lake competition at 60 vCPUs for reference.
Unfortunately, Google's new C3 machine types, which are powered by the latest-generation Sapphire Rapids processors, are sized at 22 / 44 / 88 vCPUs (among other smaller and larger sizes), so there is nothing in the 56~64 vCPU range tested with the other machine types. That starkly different sizing is why no C3 instance was tested for this article. In any case, this article is mostly focused on AMD EPYC generational performance in Google Cloud.
All of the tested instances were on Ubuntu 22.04 LTS. In addition to raw performance the performance-per-dollar based on the current hourly pricing was also tabulated.
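Performance-per-dollar is derived straightforwardly from a benchmark score and the hourly on-demand price. A minimal sketch, using placeholder scores and prices rather than the article's measured numbers:

```python
# Sketch: performance-per-dollar from a benchmark score and an hourly price.
# The scores and prices below are illustrative placeholders, not measured GCE data.

def perf_per_dollar(score: float, hourly_price: float) -> float:
    """Higher is better: benchmark score achieved per dollar of hourly cost."""
    return score / hourly_price

instances = {
    "c3d-standard-60": {"score": 145.0, "price": 2.90},  # hypothetical values
    "c2d-standard-56": {"score": 100.0, "price": 2.45},  # hypothetical values
}

for name, d in instances.items():
    print(f"{name}: {perf_per_dollar(d['score'], d['price']):.1f} score per $/hour")
```

For tests where lower raw results are better (e.g. latency), the ratio is inverted so that higher perf-per-dollar always remains better.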
Google Bets $2 Billion on AI Startup Anthropic, Inks Cloud Deal – HT Tech
Google has committed to invest $2 billion in the artificial intelligence company Anthropic, solidifying its relationship with the startup darling and stoking the Silicon Valley frenzy for AI.
The deal is structured as a convertible note, a type of debt that will convert to equity at the startup's next funding round, an Anthropic spokesperson confirmed. The deal closely follows another large convertible note investment into Anthropic from Amazon.com Inc. for up to $4 billion earlier this year.
With the latest financing, Alphabet Inc.'s Google invested $500 million with a commitment to add $1.5 billion more in the future, according to people familiar with the matter, who asked not to be identified because the information is private. The investment is in addition to the $550 million Google put into Anthropic earlier this year.
Prior to the financing, Google also signed a major cloud agreement with Anthropic, according to one person familiar with the details. The cloud deal, in which Anthropic will use Google's suite of computing services, was even larger than the latest investment, and will stretch over multiple years, the person said.
The Wall Street Journal earlier reported details of the Google investment.
Anthropic's flurry of dealmaking this year has been led in part by a new hire, Vu Bui, its head of strategic finance and investor relations, who started in September, according to his LinkedIn profile. Bui has been instrumental in orchestrating the company's talks with investors, one of the people said.
The next step for Anthropic may be to raise traditional funding, rather than convertible debt. At that point, both Google's and Amazon's notes will convert to equity and the company will receive a new valuation. Anthropic had discussed a valuation between $20 billion and $30 billion, according to the people, a price tag that was deemed too high by some investors. Earlier this year, the startup had a valuation of about $5 billion, according to PitchBook data.
The company isn't likely to rush to raise more money, however, because its recent investments from Google and Amazon have meant it has ample cash, the people said.
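For intuition, here is a toy sketch of how a note's principal might translate into ownership when it converts at a priced round. Real convertible notes typically include discounts, valuation caps and accrued interest, all of which are ignored here; the $25 billion figure is a hypothetical value within the range discussed above, not a reported valuation.

```python
# Toy model: ownership from a convertible note converting at the next priced round.
# Ignores discounts, valuation caps and accrued interest for simplicity.

def ownership_fraction(note_principal: float, post_money_valuation: float) -> float:
    """Fraction of the company the note holder would own after conversion."""
    return note_principal / post_money_valuation

# If Google's full $2B commitment converted at a hypothetical $25B post-money valuation:
frac = ownership_fraction(2e9, 25e9)
print(f"{frac:.1%}")  # 8.0%
```

The same arithmetic shows why the valuation matters to early note investors: converting at $30 billion instead of $20 billion would nearly halve the resulting stake for the same principal.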
For both Google and Amazon, investments in startups like Anthropic are a way to cement relationships with the companies shaping the next wave of AI, and nudge them to use their proprietary cloud tools. If AI grows to become ubiquitous, as some experts predict, it will require vast computing resources. That means AI companies will be lucrative cloud customers in the future.
Google's ties to Anthropic put it in an unusual partnership with Amazon, its rival in cloud services. While Google has a large cloud deal with the startup, Amazon also said, as part of its investment, that it would be Anthropic's primary cloud provider.
The relationship between Google, Amazon and Anthropic has occasionally been awkward. Google knew this summer that Anthropic was seeking to raise more money, according to one person familiar with the matter. But details of the startup's embrace of Amazon Web Services still came as a surprise to some people inside the search giant, according to that person and two others familiar with the matter.
Anthropic runs primarily on a combination of its own servers and computing power rented from Google Cloud Platform, one of the people said. Google, which had already invested millions in Anthropic, was reluctant to commit the kind of sum Amazon ended up investing.
Amazon said in a filing that it had invested $1.25 billion in Anthropic in the form of a convertible note, with the ability to invest $2.75 billion more in a second note, an option that expires in the first quarter of next year. Amazon also has a deal with Anthropic in which the startup will use AWS cloud services and chips. The debt structure of Google's and Amazon's deals hasn't been previously reported.
The deals pit the two tech giants against Microsoft Corp., which is putting more than $10 billion into Anthropic rival OpenAI. Both OpenAI and Anthropic are building chatbots with generative AI technology, capable of generating content in response to prompts. Anthropic's chatbot, Claude, has put a particular emphasis on responsible AI and reducing bias in the technology.
Hyperscale data-center capacity on pace to triple over next six years – Network World
The rush to embrace artificial intelligence, particularly generative AI, is going to drive hyperscale data-center providers like Google and Amazon to nearly triple their capacity over the next six years.
That's the conclusion from Synergy Research Group, which follows the data center market. In a new report, Synergy notes that while there are many exaggerated claims around AI, there is no doubt that generative AI is having a tremendous impact on IT markets.
Synergy's updated six-year forecast shows that the average capacity of new hyperscale data centers will soon be more than double that of current operational hyperscale data centers, and the total capacity of all operational hyperscale data centers will grow almost threefold between 2023 and 2028.
The research is based on an analysis of the data center footprint and operations of 19 of the worlds major cloud and internet service firms. By mid-2023, those companies had 926 major data centers in operation around the world. Synergy's projection for future data centers includes another 427 facilities in the coming years.
Synergy says the impact of generative AI advances has not so much fueled an increase in the number of data centers but rather led to a substantial increase in the amount of power required to run those data centers. As the number of GPUs in hyperscale data centers skyrockets, driven primarily by AI, the power density of associated racks and data center facilities also needs to increase substantially. This is causing hyperscale operators to rethink some of their data center architecture and deployment plans.
So if it's a headache for AWS to manage this stuff, what will it be like for an average enterprise running five-year-old servers?
While corporations are rushing to embrace generative AI to improve their businesses, the cost of acquiring the hardware and operating it has given many pause. A DGX server from Nvidia, custom-built for generative AI and packed with hardware, can easily run in the six-figure range. For that kind of money, you can also buy about 10 regular servers. Which will enterprises prioritize?
Plus there's the expense of operating them. Nvidia GPUs are not known for a low power draw; quite the opposite, they are the biggest power hogs in a data center. So for a budget-conscious enterprise, particularly a midsized one, deploying generative AI hardware might be too taxing.
In addition, the way AI operates is different from traditional line-of-business applications. There's the process-intensive task of training, which requires GPUs, and then there's inference, which runs off the models trained by the GPUs. Once a model is trained, there's a good chance you won't need to revise it for months. Then your very expensive hardware sits idle, unneeded and depreciating.
Could enterprises do this themselves without using a hyperscale cloud provider? "Theoretically, yes, but costs might be prohibitive and access to the right sort of expertise could be severely limited," said John Dinsdale, chief analyst and research director at Synergy Research Group.
So the emerging trend in enterprise IT for generative AI is to farm out the training portion of the AI work but do the inference, which is much less process intensive, in-house. Why invest hundreds of thousands of dollars in hardware that you use only sparingly when you can rent it from Google or AWS?
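That split — rent GPUs for the expensive training phase, then serve the trained artifact on cheap in-house hardware — can be sketched in miniature. The toy linear model below stands in for an expensive LLM; all names and numbers are illustrative, not any vendor's actual workflow:

```python
import json

# --- "Cloud" side: train a toy model (stand-in for GPU-heavy training) ---
def train(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return {"w": w, "b": b}

model = train([1, 2, 3, 4], [3, 5, 7, 9])   # learns roughly y = 2x + 1
artifact = json.dumps(model)                 # "download" the trained weights

# --- In-house side: cheap inference from the trained artifact ---
def predict(artifact_json, x):
    m = json.loads(artifact_json)
    return m["w"] * x + m["b"]

print(round(predict(artifact, 10)))  # ~21
```

The economics follow the same shape: the loop in `train` is the part that justifies renting a hyperscaler's GPUs, while `predict` is cheap enough to run on commodity in-house servers indefinitely.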
This is known as AI as a service, an emerging offering from hyperscalers like AWS and Microsoft. We can expect more to come on that.
Fast Charge: Google should be worried about the Snapdragon 8 … – TrustedReviews
OPINION: Qualcomm has revealed the Snapdragon 8 Gen 3, the next-gen chipset set to be used in plenty of flagship phones in 2024. However, unlike previous iterations of flagship smartphone chips that offered simple year-on-year gains in the CPU and GPU departments, this year's chipset is a different beast entirely, and Google should be worried.
That's because the Snapdragon 8 Gen 3 isn't purely focused on performance; it has a huge focus on AI, and generative AI in particular (think ChatGPT, Generative Fill in the Photoshop Beta and those scarily accurate AI-generated '90s headshots doing the rounds on TikTok).
More specifically, the AI Engine within the 8 Gen 3 boasts support for generative AI models with up to 10 billion parameters, all on-device without the need for an internet connection. That means you'll be able to use AI to perform tasks like generating emails or an itinerary for an upcoming trip without the AI needing to connect to the cloud.
It goes far beyond getting an AI to write you a social media post, though; the real focus is on camera technology. Qualcomm has this week showcased a number of firsts for Snapdragon devices, including Image Eraser tech that removes unwanted subjects from photos, along with a real-time ultra-low-light photo and video mode and intelligent in-sensor zoom that'll automatically pan and follow fast-moving subjects.
There are also cool editing features like Photo Expansion, very similar to what we've seen on the Photoshop Beta this year, which essentially uses AI to fill in the gaps around the edge of a photo, along with Stable Diffusion, a fancy way of saying AI image generation.
In fact, there's a particular Fast Stable Diffusion demo that I saw earlier this week that showed the chipset generating entirely new images in under a second, all while in Airplane mode. That's the fastest in the world, both on-device and when using cloud servers.
That means that, depending on which features manufacturers choose to implement from Qualcomm's wide-ranging AI toolset, phones equipped with the next-gen Snapdragon chipset will be able to do some entirely new things not seen on any other smartphone. Aside from Google's Pixel range, that is.
Since the release of the Google Pixel 6 with Google's in-house Tensor chipset, the company has gone all-in on AI, particularly when it comes to photo and video capture and editing, so much so that general performance isn't entirely up to scratch with the rest of the flagship smartphone competition.
However, features like Night Sight, Super Res Zoom, Magic Eraser and newer additions like Audio Magic Eraser and Best Take usually make up for the slight dent in CPU and GPU performance, because they are unique to the Pixel lineup and allow it to deliver some of the best photos and videos captured on a smartphone. There's a reason why the Pixel range is generally heralded as the best camera phone on Android.
But with Qualcomm's Snapdragon 8 Gen 3 bringing many of these features to other Android smartphones in 2024, along with a boost in performance (30% to CPU, 25% to GPU and 98% to NPU), not only are the Pixel phones no longer unique in the Android market, but they're now also relatively underpowered.
While I haven't had the chance to benchmark the 8 Gen 3 just yet, this year's 8 Gen 2-equipped flagship smartphones comfortably beat the Tensor G3 of the Google Pixel 8 Pro in most benchmark tests, so I'm only expecting the gap to widen further.
So, while Google still has a few months of domination left with its Pixel 8 range, once 2024 rolls around and Snapdragon 8 Gen 3-equipped devices begin to appear in the Western market, Google could have a real problem on its hands.
Meet Updated ODBC with Server to Server Authentication Mode … – PR.com
Prague, Czech Republic, October 27, 2023 --(PR.com)-- Devart, a recognized vendor of world-class data connectivity solutions for various data connection technologies and frameworks, has announced new versions of its ODBC Drivers with Server to Server authentication mode, support and compatibility improvements, and more.
The list of improvements includes:
- An update for Direct Mode, built on the SQLite engine, version 3.42.0;
- Support for the CREATE TABLE AS SELECT statement for xBase;
- Server to Server authentication for Salesforce Marketing Cloud (formerly ExactTarget);
- Support for custom tables and custom fields in Jira;
- A ListSurveys table and new fields in the ListMembers table for SurveyMonkey;
- Compatibility with the Survey Rocket plugin for Sugar CRM;
- An updated SQL engine for Sendinblue, along with multiple bug fixes and quality-of-life improvements.
To learn more about the recent release and download new products, visit:https://blog.devart.com/new-in-odbc-server-to-server-authentication-mode-support-and-compatibility-improvements-and-more.html
Devart ODBC Drivers are high-performance connectivity solutions with enterprise-level features for accessing the most popular database management systems and cloud services from ODBC-compliant reporting, analytics, BI, and ETL tools on 32-bit and 64-bit Windows, macOS, and Linux. Our ODBC drivers fully support standard ODBC API functions and data types to enable easy access to live data from anywhere.
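For context on how an ODBC-compliant tool reaches such a driver: on Linux and macOS, connections typically go through a DSN defined in odbc.ini (on Windows, through the ODBC Data Source Administrator). The entry below is only a hypothetical sketch; the driver path and key names (Authentication, ClientId, ClientSecret) are illustrative assumptions, so consult Devart's documentation for the actual parameter names:

```ini
; Hypothetical odbc.ini DSN entry for a Server to Server connection.
; Key names and the driver path are illustrative, not Devart's actual ones.
[SalesforceMC]
Driver         = /usr/local/lib/libdevartodbc-sfmc.so
Authentication = ServerToServer
ClientId       = your-client-id
ClientSecret   = your-client-secret
```

With Server to Server authentication, the application authenticates using such stored credentials rather than an interactive user login, which suits unattended BI and ETL jobs.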
About Devart
Devart is one of the leading developers of database tools and administration software, ALM solutions, data providers for various database servers, data integration, and backup solutions. The company also implements Web and Mobile development projects.
For additional information about Devart, visit https://www.devart.com/.
Network complexity, talent shortages drive shift in enterprise IT buying habits – Network World
Increased network complexity, constant security challenges, and talent shortages are driving enterprises to depend more on channel business partners, including managed service providers, system integrators, resellers and other tech providers.
Greater use of partners by enterprises is expected to continue over the next few years, experts say. IDC, in its research on the future of industry ecosystems, found that by the end of 2023 almost 60% of organizations surveyed will have expanded the number of partners they work with outside of their core industry.
Cisco, too, expects to see extensive growth in partner sales. "We believe that by 2027, 47% of what Cisco sells will be sold through a managed service," said Alexandra Zagury, vice president of partner managed services and XaaS sales with Cisco's global partner organization.
A variety of market-destabilizing factors including the pandemic, supply chain shortages, continuing cloud adoption, and growth in infrastructure-as-a-service options have created a turning point for the channel community and given enterprises more opportunities to form higher value partnerships to help fill gaps, drive innovation, and disrupt entire industries, IDC stated.
Organizations are expanding their industry ecosystems to include an array of different participants that complement their skill sets or provide capabilities they may not possess, IDC stated.
In the networking arena, there's been a trend toward more closely integrating network and security operations, a shift that can lead to a variety of challenges for IT organizations.
The old way of looking at networking and security teams was to approach them as siloes. An organization had a network team, a security team and an application team that worked separately, Zagury said. As technology gets abstracted to platforms, and enterprises adopt a platform approach, the disciplines are coming together, she said.
Now it's about providing secure network experiences that have observability layered on top so that you can detect attacks before they happen, Zagury said. So, connecting all of these things to deliver customer outcomes is really important for business partners, because you can no longer separate the technology from the operation of that technology.
The merging of networking and security resources is one of the key trends driving an uptick in the use of managed service providers (MSP).
Recent Cisco-sponsored research by Canalys found that MSPs expect most future revenue growth to come from endpoint and network security. MSPs are developing managed security practices faster than the resellers that have cybersecurity licensing capabilities, Canalys stated.
Detection and response capabilities will be critical to the growth of these cybersecurity services. Currently, many customers cybersecurity investments fail because too much focus is placed just on protection, Canalys stated.
It is vitally important for channel partners to learn from the actions of threat actors, particularly the ways in which they breach defenses and go undetected, Canalys stated. The challenge is that channel partners know how to sell and deploy protection, but detection and response are areas that need significant resourcing and improvement.
Spending on cybersecurity products and services that provide protection are projected to grow by 7% each year to $186 billion in 2026, driven by investment in areas such as cloud and identity security. Detection and response, meanwhile, are projected to grow by 34% each year to reach $112 billion in 2026, according to Canalys.
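Those projections follow the standard compound-annual-growth formula, future = base × (1 + rate)^years. A quick sketch, where the base figures are illustrative since Canalys's numbers here give only the growth rates and the 2026 targets:

```python
# Compound annual growth: future = base * (1 + rate) ** years.
# Base values below are illustrative; the article cites only the annual
# rates (7% and 34%) and the 2026 targets ($186B and $112B).

def project(base: float, annual_rate: float, years: int) -> float:
    """Projected market size after compounding growth for the given years."""
    return base * (1 + annual_rate) ** years

# e.g. a hypothetical $100B detection-and-response market growing 34%/year for 3 years:
print(round(project(100.0, 0.34, 3), 1))  # 240.6
```

The contrast in the two rates is the point: at 34% a year a market more than doubles in three years, while at 7% it grows by only about a fifth over the same period.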
Another area of investment thats on the rise is network observability.
Full-stack observability (FSO) tools can correlate data from application, networking, infrastructure, security, and cloud systems to streamline troubleshooting and optimize performance. Cisco sees an emerging opportunity for partners to take its observability technology and package it as a managed service. You will see us talking a lot about managed FSO and that opportunity for our partners, Zagury said. Cisco is expected to roll out new offerings for business partners at its upcoming Partner Summit.
AI stands to strengthen the observability trend.
"AI will fuel the greatest opportunity of the next 10 years, which is to take the data and telemetry from all these different technologies and provide observability for our customers," Zagury said. "Cisco can take the data from our networking systems or our security systems or collaboration systems and bring it all together to create these observability use cases, and AI is just going to help bring this all together."
In the bigger picture, business partners that specialize in AI are on the path to expansion, experts say.
Some of the larger channel providers already offer AI services. World Wide Technology, for example, announced an AI lab where customers can compare, test, validate and train AI large language models (LLMs) such as GPT-3.5, Google Bard and others through its Advanced Technology Center (ATC).
"Because our new AI lab environment is composable, different vendor components can be swapped in and out, allowing for fast customization and scaling, and enabling you to test different configurations depending on your current environment, future needs and business objectives," WWT wrote in a blog detailing the new lab.
Traditional consulting providers such as Accenture, Deloitte, IBM Consulting, KPMG, and McKinsey & Company offer AI services as well.
Cisco is focused on how it embeds AI technologies to assist, augment and automate, Zagury said. "Then, depending on how we go to market and how we choose to monetize this, I think you'll start seeing some more partner specializations," she said.
Integrating networking gear is also a growing area for partners, and soon channel sales of networking products will exceed channel sales of servers, Canalys predicts.
Servers are currently the largest IT infrastructure category for the channel, driven by demand for cloud infrastructure services; more than half of all servers in 2022 were shipped to the hyperscalers, Canalys stated. However, this will change by 2027, as channel sales of networking products surpass servers for the first time.
Today, the combined sales of switching, wireless LAN and routers account for 36% of infrastructure sales through the channel, according to Canalys, and this share will grow to more than 40% over the next five years.
Networking sales are booming, driven by the Wi-Fi 6 rollout to support hybrid working in offices, as well as digital transformation across sectors such as education, healthcare, retail and manufacturing, Canalys stated.
Additionally, campuses will undergo a major refresh cycle over the next three years, as Wi-Fi 7 becomes commercially available. The new standard promises to boost speeds by up to four times compared to Wi-Fi 6. It will require networks to be rearchitected with at least 50G access and 100G and even 400G core switches.
On the workforce front, IT staffing remains a perennial challenge; the demand for skilled professionals with sufficient knowledge and experience in emerging technologies outweighs the supply, Zagury said.
Enterprises are experiencing exponential innovation, and with that comes exponential complexity, which makes it very hard to keep people up to date and have the right talent at the right place, she said.
It's rare that any single vendor can offer, say, networking and security services end to end. And that's when partners can get called in to stitch things together in a single services stack that can deliver the outcome an enterprise business wants, Zagury said.
2024 could be 'another year of efficiency' for Meta: Analyst – Yahoo Finance
Google-parent Alphabet (GOOG, GOOGL) stock tumbled earlier this week after reporting slower-than-expected revenue growth for its cloud division. RBC Capital Markets Internet Analyst Brad Erickson sits down with Yahoo Finance anchor Rachelle Akuffo to talk about cloud technology's current influence on Big Tech earnings, particularly Meta (META).
"They have covered a lot more ground with money they have spent than people realized, they're not going to have to spend as much as people thought in 2024," Erickson says on Meta's building out of its facilities and AI infrastructure.
RACHELLE AKUFFO: --to see if it was unfairly punished, given how well the actual earnings report was about the focus on Cloud.
BRAD ERICKSON: Yeah, I got it. I would say so, the advertising business, we do a lot of channel checks in the ad space. We came in tactically constructive on the space where everybody hit numbers, or beat numbers, really, for Q3. Meta's guidance was fine for Q4, even including some impact they called out from the Middle East.
Google had an unfortunate quarter with the cloud business, and it brought everything else down. And with the Meta result on Wednesday, I think the takeaway there was, unfortunately, people were looking for any reason to sell the stocks. I think that's probably a reflection on the broader macro and where we think the consumer is going, almost looking for an excuse to sell the stock, and it just made it a tough week until today, with Amazon, fortunately.
RACHELLE AKUFFO: And so speaking of Meta then, obviously 2023, the year of efficiency seemingly paying off at the moment. Obviously, the metaverse play, if we look over the past five years, still seeing some losses there. What are your expectations then for 2024, based on some of the cuts we've already seen Meta making this year?
BRAD ERICKSON: I think it's going to be a combination of things. And you point to three factors that really drive the operating expense guidance they gave, which was modestly better than expectations. Number one is just, in general, around headcount costs and facilities costs. I think there the message is 2024 is going to continue to be another year of efficiency. That's number one.
Number two is just on their capital expenditures, these are around building data centers, and buying servers, and, certainly, graphics processors in the age of generative AI. I think there, what we learned on the call, which was positive, was that they've covered a lot more ground with money they've already spent, I think, than people realize, and so they're not going to have to spend quite as much as people thought in 2024.
And then the last one is Metaverse. And I think there, we don't love the losses, investors don't love the losses. Obviously, they're clearly continuing to spend, but I think the takeaway, again, nets out positively in that Mark Zuckerberg and his team are clearly acquiescing to investors' desires to keep it more reasonable, meaning balancing the growth with that spend, and you're netting out to still an attractive consolidated P&L.
Nairobi MCAs raise red flag over city revenue collection – The Star Kenya
Nairobi County Assembly has expressed concerns over the transparency and accountability of the city's revenue collection system.
The assembly's Energy and ICT Committees on Friday claimed that one year down the line, the Governor Johnson Sakaja-led administration doesn't have access to the system collecting revenue on behalf of the county.
The revelations come at a time when the EACC listed Nairobi as one of the counties with serious accountability deficits in their revenue management systems.
"It is instructive that we found a complete lack of transparency in the entire manner in which the Nairobi County's revenue is being collected, reported and accounted for," said Robert Alai, Kileleshwa MCA.
Alai, who is the vice chair of the PAC committee, noted with concern that the Finance Department at the executive has no clue or control over how revenue collection is managed in Nairobi.
"CEC for Finance is helpless and even the Chief Officer and Directors aren't aware what is happening. This must stop!" he stated.
The committees in a statement also noted with concern the opacity of Nairobi County's revenue collection.
They listed seven irregularities in the management of the Nairobi Revenue System.
The MCAs claimed that the current and the previous CEC for Finance are not aware of the current collector and manager of the revenue system in the county, popularly known as the Nairobi Revenue System (NRS).
It was also revealed that the Finance and ICT sectors under the executive do not know where the physical servers of the NRS are, including details of the cloud servers.
"The Finance executive members do not have administrative rights to the current revenue collection on the system," reads the statement.
The MCAs also claimed that it is not clear how many bank accounts are linked with the revenue collection system.
More shocking still, the MCAs claim that there is currently no valid contract between Nairobi City County and the Nairobi Revenue System (NRS) as the principal collector, as it emerged before the end of the Nairobi Metropolitan Services term.
"When the Nairobi Metropolitan Services (NMS) term expired, all functions ought to have reverted to the County Government, yet, very suspiciously, there are concerns over the sub-contracting of revenue collection by KRA to the Nairobi Revenue System in an unclear and un-procedural way," reads the statement.
The MCAs also noted there has never been a complete audit of all revenue systems deployed in Nairobi City County since the advent of devolution to provide the accountability the public needs on this key infrastructure.
Alai noted that all 135 revenue streams are automated through NRS, so when the system is down, everything is down.
The MCAs said they were optimistic that Nairobi can raise as much as Sh43 billion annually from its revenue sources.
"The drive to automate government services and make them available online is commendable. Automation shouldn't end with the revenue streams. Non-revenue functions automation is an aspect forgotten in the campaign by the President," Alai said.
In March 2023 it was revealed that the Kenya Revenue Authority is still in charge of the revenue collection at Nairobi City County.
This is despite the current administration coming into power last year in August.
Nairobi Finance executive Charles Kerich said that despite the transition process between the then NMS and City Hall coming to an end in November last year, the revenue system has never been handed back to the county and they are still in the transfer process.
Kerich also said that only a few individuals in the finance sector and ICT have viewership rights of the Nairobi Revenue System and they follow to see the revenue being collected in the day.
"The system is yet to be transferred to the county. We were only given the viewership rights and we are able to check the revenue being collected," he said then.
The finance boss further revealed that from their end as city hall, they currently do not know the location of the system servers.
"We do not know the physical location though and, as it has been stated, the process of transfer is yet to begin," Kerich revealed.
He was speaking during a committee meeting between the ICT committee and the Nairobi ICT and Finance and economic sectors.
KRA was mandated to collect the county's revenue when NMS was instituted.
However, since the NMS tenure came to an end, the collection of revenue was to be transferred back to the county.
See the rest here:
Nairobi MCAs raise red flag over city revenue collection - The Star Kenya
What Is Hybrid Cloud Security? How it Works & Best Practices – eSecurity Planet
Hybrid cloud security is a framework for protecting data and applications in a computing environment that includes both private and public clouds. It combines on-premises and cloud-based resources to satisfy an organization's diversified computing demands while ensuring strong security. This approach to cloud computing enables enterprises to benefit from the scalability and flexibility provided by public clouds while maintaining sensitive data within their own infrastructure.
As more businesses embrace hybrid cloud models to cater to their different computing demands, safeguarding the boundary between these environments has become critically important, making hybrid cloud security a top priority for ensuring protection, compliance, and resilience in an ever-changing digital ecosystem.
See our guides to public and private cloud security
Hybrid cloud security combines on-premises controls and practices with cloud-specific solutions, reinforcing data and application protection across environments. It starts with analyzing and categorizing data and progresses to customized security measures, generally following best practices for both network security and cloud security.
These components work together to establish a complete hybrid cloud security strategy, but the specific components and their configuration will vary depending on the organization's security needs and the cloud services it employs.
A hybrid cloud architecture primarily involves integrating different types of cloud and on-premises technology to fulfill an organization's unique demands. Here are some examples of hybrid cloud security architectures.
An enterprise in this case combines its on-premises data center or infrastructure with a public cloud. Some workloads, apps, or data may be hosted on the organization's own servers, while others may be offloaded to a public cloud provider such as AWS, Azure, or Google Cloud.
Here, businesses can combine a public cloud with a private cloud, which may be housed in a dedicated data center. They use the public cloud for some processes and services, but keep a private cloud for more sensitive data or mission-critical applications.
Businesses may mix various public cloud providers, private clouds, and on-premises technology in more complex setups. This enables them to select the most appropriate environment for each workload, application, or data type.
Data synchronization is critical in hybrid cloud architectures to provide consistency across infrastructures. Connecting private clouds, legacy systems, and public clouds through the internet or private networks guarantees that data and applications flow seamlessly. A single management tool facilitates supervision because managing numerous cloud environments independently can be complicated due to differences in APIs, SLAs, and features from different providers. This provides a centralized interface for effective control and monitoring of hybrid cloud resources.
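The single-management-tool idea above can be sketched in code. This is a hypothetical illustration, assuming fictional provider classes and inventories; real tools would wrap each provider's actual API behind a common interface like this:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Common interface hiding per-provider API differences."""
    @abstractmethod
    def list_servers(self) -> list[str]: ...

class PublicCloud(CloudProvider):
    def list_servers(self) -> list[str]:
        return ["web-1", "web-2"]   # placeholder inventory for illustration

class PrivateCloud(CloudProvider):
    def list_servers(self) -> list[str]:
        return ["db-1"]             # placeholder inventory for illustration

class HybridManager:
    """Centralized interface for monitoring resources across environments."""
    def __init__(self, providers: list[CloudProvider]):
        self.providers = providers

    def inventory(self) -> list[str]:
        # One view over every environment, regardless of provider
        return [s for p in self.providers for s in p.list_servers()]

mgr = HybridManager([PublicCloud(), PrivateCloud()])
print(mgr.inventory())  # all servers across public and private clouds
```

The point of the design is that monitoring and control code depends only on the shared interface, not on any one provider's API.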
A hybrid cloud infrastructure gives enterprises a scalable, adaptable, and cost-effective solution that prioritizes data protection, privacy, and disaster recovery. This approach ensures business continuity and adaptation to changing demands by allowing for smooth resource allocation and cost control.
Hybrid clouds offer flexibility for enterprises with a wide range of demands and endpoints. They enable you to effortlessly move between on-premises and cloud servers based on your needs. You may manage your infrastructure at your own speed and respond quickly to changing demands.
It can be expensive to set up and manage on-premises data centers. By transferring resource-intensive activities to the cloud, a hybrid cloud approach can allow for cost-effective solutions. Cloud companies charge depending on consumption, which can lower infrastructure and maintenance costs, particularly for companies trying to meet fluctuating demand. Real-time monitoring and clear payment alternatives help with expenditure control.
Hybrid architecture is extremely scalable, allowing for company expansion by adding or deleting cloud servers as required. Employees may connect to the office system using a variety of devices without the need for extra hardware. Depending on demand, operations can be scaled up or down to optimize expenses.
Large amounts of data may be stored and analyzed in the cloud. To guard against cyber attacks, cloud systems include powerful security features such as encryption, firewalls, authentication, and data backups. Data security is improved by privacy features like number masking and dynamic caller IDs. Hybrid solutions enable you to preserve sensitive data on private clouds while keeping general data on public servers.
Cloud bursting allows workloads to be expanded to a public cloud during demand surges and then scaled down to the original server. This rented resource solution saves money and time while adjusting to changing workloads.
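Cloud bursting boils down to a placement decision. The sketch below is a hypothetical policy with an assumed on-premises capacity and an assumed 80% burst threshold, not any provider's actual autoscaling logic:

```python
# Assumed values for illustration only
ON_PREM_CAPACITY = 100   # units of work the private side can absorb
BURST_THRESHOLD = 0.8    # burst once on-prem reaches 80% utilization

def place_workload(current_load: int, incoming: int) -> str:
    """Decide where an incoming workload should run."""
    projected = current_load + incoming
    if projected <= ON_PREM_CAPACITY * BURST_THRESHOLD:
        return "on-premises"
    # Burst: overflow to rented public-cloud capacity, scale back later
    return "public-cloud"

print(place_workload(50, 10))   # fits within the on-prem threshold
print(place_workload(70, 30))   # exceeds the threshold, bursts to public cloud
```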
If security, privacy and regional compliance demands are met, storing or backing up critical data on cloud servers improves disaster recovery capability. Multiple backups provide data management even in the face of unforeseen occurrences like natural catastrophes. Because cloud-based operations can be expanded and controlled from anywhere, they provide business continuity in crisis scenarios.
When compared to typical security methods, securing a hybrid cloud environment brings unique challenges, particularly for enterprises with stringent regulatory requirements and established procedures. Some areas of concern include:
It is important to understand the shared responsibility of your company and cloud service providers. Cloud providers protect the infrastructure, but clients must protect their data and applications.
How to address this challenge: To protect data and applications, ensure that providers can satisfy regulatory requirements and incorporate business continuity and disaster recovery strategies in service level agreements (SLAs). And keep tight controls on access and other frequent cloud security mistakes.
When issues develop within the infrastructure of a cloud service provider, teamwork is required to resolve them. Issues such as data commingling in multicloud systems, data privacy constraints on log analysis, and disparities in defining what constitutes an incident can all present difficulties.
How to address this challenge: To reduce downtime and data exposure, enterprises should define explicit incident response plans, including communication methods, and verify they comply with the cloud providers policies.
Cloud applications are vulnerable to a variety of security risks, and a range of products address certain areas of this issue, such as software development life cycle security, authentication, compliance, monitoring, and risk management. Managing them separately can be difficult logistically, so look for solutions that incorporate various security roles.
How to address this challenge: Organizations should take a DevSecOps approach to security, including it in the application development lifecycle. Using automated security testing tools and doing frequent code reviews helps to protect the integrity of apps.
Because sensitive data is dispersed across several environments in hybrid cloud security, consistent security procedures and monitoring are required to prevent exposure and breaches.
How to address this challenge: Using a data-centric security approach, such as data encryption, data classification, access restrictions, and data loss prevention solutions, may help protect sensitive information no matter where it is stored.
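The data-centric approach described here (classification, access restriction, number masking) can be illustrated with a small sketch. The field names and masking policy are assumptions for illustration only:

```python
# Assumed classification policy: which fields count as sensitive
SENSITIVE_KEYS = {"card_number", "phone"}

def classify(record: dict) -> str:
    """Label a record based on whether it contains sensitive fields."""
    return "sensitive" if SENSITIVE_KEYS & record.keys() else "general"

def mask(value: str) -> str:
    """Number masking: keep the last four characters, hide the rest."""
    return "*" * (len(value) - 4) + value[-4:]

def export(record: dict) -> dict:
    """Mask sensitive fields before a record leaves the private environment."""
    return {k: mask(v) if k in SENSITIVE_KEYS else v for k, v in record.items()}

rec = {"name": "Ada", "card_number": "4111111111111111"}
print(classify(rec))   # sensitive
print(export(rec))     # card number masked except the last four digits
```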
Because of the requirement to follow varying standards across numerous cloud environments, compliance and auditing pose issues in hybrid cloud security, demanding complicated monitoring, reporting, and adherence processes.
How to address this challenge: To ease the compliance process, organizations should establish a centralized compliance and auditing system that uses automated technologies to monitor and report on the compliance status of their hybrid cloud environment.
While specific configurations may differ, adopting these best practices assists businesses in mitigating risks and successfully responding to security challenges.
Encrypting data in transit and then examining it guarantees that sensitive information is kept private during transmission while also allowing for the discovery of any possible security risks or breaches. This way, security is ensured on both ends.
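For the transport leg, Python's standard `ssl` module shows what encrypting data in transit looks like in practice. This is a minimal client-side sketch; the examination step would happen at a trusted termination point after decryption, which this snippet does not cover:

```python
import ssl

# Create a client context that verifies certificates and hostnames by default
ctx = ssl.create_default_context()

# Refuse older, weaker protocol versions
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate validation is enforced, so tampered connections are rejected
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```

Wrapping a socket with this context (via `ctx.wrap_socket`) gives an encrypted channel; inspection tooling then works on the decrypted stream at the endpoint or gateway it trusts.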
Continuous configuration monitoring and auditing aid in detecting deviations from defined security standards and policies, ensuring that the hybrid cloud system stays compliant and safe. Monitor and audit settings across all of your clouds and data centers on a regular basis. Misconfigurations, which are frequently the consequence of human mistakes, are a major source of vulnerabilities. Automation is a useful technique for ensuring secure setups.
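Configuration drift detection, as described above, can be sketched as a comparison between live settings and an approved baseline. The setting names and baseline values here are hypothetical:

```python
# Assumed approved baseline of secure settings
BASELINE = {
    "encryption_at_rest": True,
    "public_access": False,
    "logging_enabled": True,
}

def find_drift(live_config: dict) -> dict:
    """Return settings that deviate from the baseline (likely misconfigurations)."""
    return {k: live_config.get(k) for k, v in BASELINE.items()
            if live_config.get(k) != v}

live = {"encryption_at_rest": True, "public_access": True, "logging_enabled": True}
print(find_drift(live))   # flags the publicly accessible resource for remediation
```

Automating this comparison across all clouds and data centers is what catches the human-error misconfigurations the text warns about.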
Vulnerability scans uncover possible flaws in the system, allowing for quick correction to strengthen security against hostile actors. Conduct vulnerability checks on a regular basis to uncover weak places in your infrastructure. Make use of automated solutions that prioritize vulnerabilities based on risk profiles to ensure efficient and successful remediation.
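Risk-based prioritization of findings might look like the following sketch, where an assumed weighting doubles the score of internet-facing assets; real scanners use richer risk models:

```python
# Hypothetical scan findings for illustration
findings = [
    {"id": "CVE-A", "severity": 7.5, "internet_facing": True},
    {"id": "CVE-B", "severity": 9.8, "internet_facing": False},
    {"id": "CVE-C", "severity": 5.0, "internet_facing": True},
]

def risk_score(f: dict) -> float:
    # Assumed weighting: exposure to the internet doubles effective risk
    return f["severity"] * (2.0 if f["internet_facing"] else 1.0)

# Remediate the riskiest findings first
queue = sorted(findings, key=risk_score, reverse=True)
print([f["id"] for f in queue])
```

Note how the exposed medium-severity flaw can outrank a higher-severity internal one once exposure is weighted in.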
Applying security updates on a regular basis keeps software and systems up to date, addressing known vulnerabilities and improving the hybrid cloud infrastructures security posture. By shortening the period between patch release and implementation, the window of opportunity for possible cyber attacks is reduced.
To reduce the danger of unauthorized access or lateral movement by attackers, zero trust security necessitates strong authentication and access rules that regard all users and devices as untrusted entities. Implement security principles based on zero trust, which prioritize least-privilege access and strong authentication.
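A zero-trust access decision can be sketched as a series of independent checks, each of which must pass; the role store and scope names below are hypothetical:

```python
# Assumed role store mapping users to least-privilege scopes
USER_ROLES = {"alice": {"read:billing"}}

def authorize(user: str, mfa_passed: bool, device_compliant: bool,
              requested_scope: str) -> bool:
    """Grant access only when every independent check passes."""
    # Strong authentication and device posture: nothing is trusted by default
    if not (mfa_passed and device_compliant):
        return False
    # Least privilege: the user must hold exactly the requested scope
    return requested_scope in USER_ROLES.get(user, set())

print(authorize("alice", True, True, "read:billing"))    # granted
print(authorize("alice", True, True, "write:billing"))   # denied: no privilege
print(authorize("alice", False, True, "read:billing"))   # denied: no MFA
```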
Create an effective response strategy in the event of a security compromise. In the case of a security breach or disaster, a recovery plan specifies how to restore services and data while minimizing downtime and data loss and guaranteeing business continuity. Keeping backup storage separate from the original data source reduces the chance of a single point of failure and speeds up remediation operations.
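Trustworthy recovery starts with verifying backups before restoring them. The sketch below uses a SHA-256 checksum for integrity; it is an illustration of the idea, not a full backup tool:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_backup(data: bytes) -> dict:
    # Store a checksum alongside the payload so restores can be validated
    return {"payload": data, "sha256": checksum(data)}

def verify_backup(backup: dict) -> bool:
    """A corrupted or tampered backup fails verification and must not be restored."""
    return checksum(backup["payload"]) == backup["sha256"]

b = make_backup(b"guest ledger 2024")
print(verify_backup(b))               # safe to restore
b["payload"] = b"guest ledger 2O24"   # simulate silent corruption
print(verify_backup(b))               # fails: fall back to another copy
```

Keeping such verified copies separate from the original data source is what removes the single point of failure the text mentions.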
Endpoint security solutions, such as EDR and multi-factor authentication, prevent illegal access and data breaches by securing endpoints such as devices and user access points. While cloud computing has revolutionized company security, endpoints could still remain a weak link. It is critical to protect data going through and between these devices.
The hybrid cloud security landscape is continuously expanding, and several major companies now offer comprehensive solutions to protect your data and apps in hybrid environments. Here are three of the top hybrid cloud security solutions to consider.
Acronis Cyber Protect Cloud specializes in providing comprehensive services to safeguard data across various environments, particularly in hybrid cloud setups, making it a good option for organizations seeking to secure and manage their data in complex, multi-cloud, and on-premises environments.
Key Features: Acronis includes AI-based antivirus, anti-malware, and anti-ransomware technologies for proactive threat prevention, as well as fail-safe patching, forensic backup, and continuous data protection.
Services: Data backup and recovery, cybersecurity tools against malware, ransomware, and other threats, and services for data storage and management.
Unique Offering: AI-Based Protection, blockchain technology, and integrated data protection.
Skyhigh's Cloud Native Application Protection Platform offers an all-in-one solution for securing cloud-native applications, encompassing a risk-based perspective on application and data context.
Key Features: Skyhigh's CNAPP examines workloads, data, and infrastructure in real time, detecting misconfigurations, software vulnerabilities, and sensitive data. For comprehensive security, it defends against configuration deviations, automates assessments, and supports short-lived workloads with application allow-listing, workload reinforcement, integrity monitoring, and On-Premises Data Loss Prevention (DLP) Scanning.
Services: Offers a unified set of controls based on an integrated platform, customer assistance, and expert guidance.
Unique Offering: Skyhigh (formerly McAfee MVISION) is a pioneering platform that integrates application and data context, combining Cloud Security Posture Management (CSPM) for public cloud infrastructure and Cloud Workload Protection Platform (CWPP) for application protection across virtual machines, compute instances, and containers.
The Trend Micro Cloud One platform has broad support across public cloud providers (AWS, Google Cloud, Azure), VMware-based private clouds, and on-premises storage.
Key Features: Trend Micro offers AI and ML-powered vulnerability analysis, a bug bounty program for zero-day attack readiness, contributions from 15 global research centers, managed detection and response services, protection for cloud-native applications, and versatile integrations via native APIs. Advanced automation enhances vulnerability detection and compliance monitoring.
Services: Managed detection and response, threat analysis, and professional assistance are all available through the platform.
Unique Offering: Provides full coverage, including open source assets, filling a critical cybersecurity gap. Trend Micro's relationship with Snyk offers specific coverage for open source assets, making it a good option for businesses that already rely on open source.
Businesses should explore hybrid clouds if they have dynamic workloads, seasonal swings, need gradual cloud adoption, or want flexibility in the face of an uncertain future. Hybrid clouds allow businesses to adapt at their own speed, giving financial relief and a safety net for those hesitant to embrace full-scale changes. Hybrid cloud security, which combines traditional on-premises security practices with cloud-specific measures, ensures a comprehensive defense strategy, allowing organizations to benefit from cloud computing while effectively safeguarding their data and applications from evolving cyber threats and regulatory compliance issues.
View original post here:
What Is Hybrid Cloud Security? How it Works & Best Practices - eSecurity Planet