Unleashing the Power of Serverless Architecture: Exploring Global Function as a Service

Serverless architecture, a revolutionary concept in the world of cloud computing, is rapidly gaining traction among businesses and developers alike. This innovative technology, also known as Function as a Service (FaaS), is transforming the way applications are built, deployed, and scaled, offering a myriad of benefits that traditional server-based models simply cannot match.

At its core, serverless architecture is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. This means that developers can focus on writing code without worrying about the underlying infrastructure. The serverless model is event-driven, meaning that applications are built around individual functions that are triggered by specific events or requests.
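
To make the event-driven model concrete, here is a minimal sketch of a serverless function written in Go for the AWS Lambda runtime; the event shape and the business logic are invented for illustration, and other providers expose the same idea through different SDKs.

```go
// Minimal sketch of an event-driven serverless function in Go, using
// the aws-lambda-go runtime. OrderEvent is a made-up example payload;
// a real trigger (HTTP request, queue message, file upload) defines
// its own shape.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

type OrderEvent struct {
	OrderID string  `json:"order_id"`
	Amount  float64 `json:"amount"`
}

// handle runs once per triggering event; there is no server process
// for the developer to provision or manage between invocations.
func handle(ctx context.Context, evt OrderEvent) (string, error) {
	return fmt.Sprintf("processed order %s for %.2f", evt.OrderID, evt.Amount), nil
}

func main() {
	lambda.Start(handle)
}
```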

One of the key advantages of serverless architecture is its cost-effectiveness. In traditional server-based models, businesses have to pay for server capacity, regardless of whether or not they are using it. With serverless architecture, however, businesses only pay for the compute time they consume. This pay-as-you-go model can result in significant cost savings, especially for applications with unpredictable or fluctuating workloads.
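
As a rough illustration of the pay-as-you-go math, the sketch below estimates a monthly bill from invocation count, duration, and memory. The rates are placeholder figures in the ballpark of published serverless pricing; check a provider's current price list before relying on them.

```go
// Back-of-the-envelope pay-per-use estimate. All numbers below are
// illustrative assumptions, not quoted prices.
package main

import "fmt"

func main() {
	const (
		invocationsPerMonth = 2_000_000
		avgDurationSec      = 0.120        // 120 ms per invocation
		memoryGB            = 0.5          // 512 MB allocated
		pricePerGBSecond    = 0.0000166667 // assumed rate
		pricePerMillionReqs = 0.20         // assumed rate
	)

	computeCost := invocationsPerMonth * avgDurationSec * memoryGB * pricePerGBSecond
	requestCost := invocationsPerMonth / 1_000_000.0 * pricePerMillionReqs
	fmt.Printf("compute: $%.2f, requests: $%.2f, total: $%.2f/month\n",
		computeCost, requestCost, computeCost+requestCost)
}
```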

Another major benefit of serverless architecture is its scalability. In a server-based model, scaling requires manual intervention to add or remove servers based on demand. In contrast, serverless architecture automatically scales applications in response to demand, ensuring that applications can handle any amount of workload without any manual intervention. This automatic scaling not only improves application performance but also reduces the risk of downtime during peak usage periods.

Despite these advantages, serverless architecture is not without its challenges. One of the main concerns is the cold start problem: a function invoked after a period of inactivity can take noticeably longer to execute because the provider must first initialize its runtime environment. However, cloud providers are continuously working on solutions to mitigate this issue.

Security is another concern with serverless architecture. While cloud providers take care of many security aspects, such as physical security and operating system patches, businesses are still responsible for securing their applications. This includes protecting sensitive data, managing access controls, and ensuring compliance with regulatory standards.

Despite these challenges, the benefits of serverless architecture are undeniable. As businesses continue to seek ways to improve efficiency, reduce costs, and scale rapidly, the adoption of serverless architecture is expected to grow. According to a recent report by MarketsandMarkets, the global serverless architecture market is projected to grow from $7.6 billion in 2020 to $21.1 billion by 2025, at a Compound Annual Growth Rate (CAGR) of 22.7% during the forecast period.

In conclusion, serverless architecture, or Function as a Service, is a powerful tool that is reshaping the landscape of cloud computing. By eliminating the need to manage servers, it allows businesses to focus on what they do best: building and delivering innovative applications. While there are challenges to overcome, the potential benefits of cost savings, scalability, and improved efficiency make serverless architecture an attractive option for businesses of all sizes. As the technology continues to evolve and mature, it is poised to become a key component of the global cloud computing market.

Go here to see the original:
Unleashing the Power of Serverless Architecture: Exploring Global ... - Fagen wasanni

Best VoIP VPN in 2023 [Unblock VoIP Services From Anywhere] – Cloudwards

Key Takeaways: VoIP VPN

VoIP (voice over internet protocol) technology allows you to call other people using the internet. A VoIP VPN (virtual private network) helps users access VoIP services in countries where they're banned. Some governments don't like the idea of encrypted calls, and that's why they block them. Countries like China, Kuwait, Oman and Saudi Arabia, among others, ban VoIP services.

A VPN can assign an IP address that belongs to a country where VoIP services aren't banned, allowing you to access VoIP services. VPNs also encrypt your connection and VoIP traffic, ensuring you can use VoIP services without the fear of your VoIP activity being traced back to you. However, not all VPNs are equally good at encrypting your connection and maintaining anonymity.

If you're not familiar with VPNs, finding a decent service can be tricky, and that's what we're here for. In this article, we help you select the best VoIP VPN. If you don't want to spend time reading the full guide, the short version is that we recommend using ExpressVPN.

Setting up VoIP with a VPN is fairly easy. Here's a quick step-by-step guide:

Start by downloading a VPN app from a VPN service provider's website (we recommend ExpressVPN). Install and launch the app on your computer.

Connect to a VPN server in a country that allows VoIP services to operate. Ideally, you should connect to a server close to your location for better connection speeds.

Once connected, place a call using the VoIP app of your choice.

The best VoIP VPN should offer excellent security and fast connection speeds. The following VPNs offer that and much more.

Security is the most critical aspect of using a VPN for a VoIP service. Strong encryption ensures that nobody, not even the government or your ISP (internet service provider), can read your data. The best VPN services use encryption protocols like OpenVPN or WireGuard and ciphers like AES-256 or ChaCha20.

Fast connection speeds are just as important, as they ensure that call quality doesn't suffer. Using a VPN typically slows down speeds because of the encryption. However, the best VPNs keep this reduction to a minimum.

VPNs can also help access geoblocked content. Not all VPNs can effectively bypass geoblocks, though, so make sure to select one that has a proven track record of getting through and a large server network. This is especially important in censorship-heavy countries like China and Saudi Arabia.

We tested all major VPNs with the most popular VoIP services, including Skype and WhatsApp, to find the five that work the best. We'll dive deeper into these VPN services below, but if you don't want to bother with the details, we recommend ExpressVPN.

ExpressVPN offers an excellent combination of speed and security.

More details about ExpressVPN:

Pros:

ExpressVPN is the best VPN on the market, offering everything you'd expect from a premium VPN. Most importantly, ExpressVPN offers top-notch security and fast connection speeds, a combination you need to unblock VoIP apps safely and maintain great call quality.

The app includes protocols like OpenVPN and Lightway, ExpressVPN's proprietary protocol. ExpressVPN has additional security features like a kill switch, DNS leak protection and split tunneling (available for Windows, Mac, Android and routers).

Our tests reveal that ExpressVPN is one of the fastest VPNs on the market, meaning you won't have problems with shaky call quality or unexpected drops. Learn more about ExpressVPN's capabilities and features in our comprehensive ExpressVPN review.

Obfuscation is built into all ExpressVPN servers. Obfuscated servers help conceal VPN usage and make it harder for the VoIP service provider to detect that you're using a VPN. This allows you to access VoIP services more safely. ExpressVPN also has a massive server network spread across over 90 countries.

ExpressVPN boasts a superior feature set and premium performance, which is reflected in its pricing. The annual plan offers the most value and costs $6.66 per month. The six-month plan costs $9.99 per month, while the monthly plan costs $12.95 per month. You can try ExpressVPN risk-free using its 30-day money-back guarantee.

ExpressVPN Plans

NordVPN is the fastest VPN on the market.

More details about NordVPN:

Pros:

Cons:

NordVPN is the fastest VPN on the market, according to our speed tests. Whether you're making voice or video calls, NordVPN provides the smooth connection needed for high-quality calls. The speeds don't come at the expense of security, either. NordVPN is one of the most secure VPNs there is.

The service offers two protocols: OpenVPN (UDP and TCP) and NordLynx, NordVPN's proprietary protocol based on WireGuard. A kill switch, DNS leak protection and split tunneling are also available. NordVPN has one of the largest server networks among the VPNs in this list, with over 5,000 servers spread across 60 countries.

If you're in a region where VoIP is blocked, NordVPN can help unblock VoIP services. It gets into all the popular VoIP platforms. Being able to unblock these platforms using a VPN as fast as NordVPN helps you make great quality video and voice calls on your preferred platform. Learn more about the fastest VPN service in our comprehensive NordVPN review.

NordVPN offers four specialty server types for additional security: obfuscated, double VPN, P2P and Onion Over VPN servers. Obfuscated servers are helpful when accessing VoIP services because they help conceal VPN usage while allowing you to bypass VoIP restrictions. Double VPN is a great way to improve your network's security because it routes your traffic via more than one server location.

Note that these servers offer additional security. NordVPN is secure even when you're not using one of these server types.

NordVPN offers four types of specialty servers for extra security.

NordVPN's two-year plan is its best value, costing $3.49 per month. Alternatively, you can opt for a shorter-term plan. The annual plan costs $4.99 per month, while the monthly plan costs $12.99 per month. You can try this VoIP VPN risk-free using its 30-day money-back guarantee.

Standard

Surfshark is one of the few VPNs that offers unlimited simultaneous connections.

More details about Surfshark:

Pros:

Surfshark hasn't been around for as long as other services on the list, but it's quickly become one of the best VPNs on the market. It has an extensive feature set, offers top-notch security and delivers great connection speeds, perfect for high-quality VoIP calls.

The app offers three security protocols: OpenVPN, IKEv2 and WireGuard. It also offers a clear, transparent no-logging policy. This means you never have to worry about Surfshark sharing your data with anyone.

Surfshark is one of the few providers to offer unlimited simultaneous connections, meaning you can share your account with friends or family who need to access a VoIP service or unblock geoblocked websites. Learn more about Surfshark's offerings in our Surfshark review.

Surfshark offers a strong lineup of additional features that not many premium VPNs have. For example, the Surfshark app is equipped with CleanWeb 2.0, a powerful malware and ad blocker that can block ads in all apps, instead of just the browser.

The app's IP rotation feature automatically changes your IP address every few minutes without changing your virtual location, which is helpful if a service keeps blocking your IP address. Using this feature might disrupt the connection when you're on a call, but most VoIP services will reconnect you without dropping the call altogether.

Surfshark offers an extensive feature set with unique features.

Surfshark is a low-cost service, with its best deal (the two-year plan) costing $2.49 per month. Six-month and monthly plans are also available, costing $3.99 per month and $12.95 per month, respectively. Like other services, Surfshark offers a 30-day money-back guarantee, so you can always claim a full refund if you're not entirely satisfied.

Surfshark Plans

Proton is the best free VPN to unblock VoIP services.

More details about Proton VPN:

Pros:

Cons:

If you want a free VPN to go with your free VoIP phone service, like Skype, Google Hangouts or Facebook Messenger, Proton VPN is an excellent choice. We typically avoid recommending free VPNs, as they're slow, unreliable and often harmful (especially the ones in our worst VPNs list), but Proton VPN is one of the few exceptions. For others, see our list of the best free VPNs.

Proton VPN offers OpenVPN, IKEv2 and WireGuard protocols alongside AES-256 and ChaCha20 ciphers. Its privacy policy commits to zero logging, so you never need to worry about your data's integrity. To round off a solid offering, the app offers app-based split tunneling (on Windows and Android), a kill switch and DNS leak protection.

Even on the free plan, Proton VPN offers great speeds, which is good news for voice and video clarity during VoIP calls. The free plan includes servers in three countries: the Netherlands, the U.S. and Japan. Learn more about this VPN service in our Proton VPN review.

Most free VPNs, at least the trustworthy ones, are a preview of the VPN provider's premium service. VPN providers typically restrict access to a handful of features to encourage users to switch to the paid plan. Most VPNs choose to limit data, but fortunately, Proton VPN takes a different route.

Instead of limiting data, Proton VPN limits server locations on the free plan. This works out well for users who need a fast and secure VPN connection to make VoIP calls. It's also great for users who want to stream geoblocked content (albeit in three countries) for free.

Proton VPN offers unlimited data on the free plan, which allows you to make unlimited high-quality voice and video calls.

Proton VPN offers paid plans too. The best deal (the two-year plan) costs $4.99 per month. The one-year plan costs $5.99 per month, while the monthly plan costs $9.99 per month. All plans come with a 30-day money-back guarantee, so if you're not completely satisfied, you can claim a full refund.

CyberGhost offers an excellent interface for the desktop app.

More details about CyberGhost:

Pros:

CyberGhost is the most affordable service on the list. However, it still has the makings of a premium VPN. One of its best features is its large server network. CyberGhost has over 9,000 servers spread across 91 countries.

The app offers excellent security. You can choose between OpenVPN, WireGuard and IKEv2 (available on Windows, macOS and iOS). There's a kill switch, and the Windows and macOS apps offer URL-based split tunneling, while the Android app offers app-based split tunneling.

CyberGhost isn't one of the fastest VPNs we've seen, but it's good enough for making clear VoIP calls. Streaming, on the other hand, isn't CyberGhost's strongest suit. It gets into HBO Max and Hulu, but we had to switch a few servers to unblock BBC iPlayer, and it failed to get into Disney Plus. Learn more about the service in our thorough CyberGhost review.

The smart rules feature on CyberGhost helps automate various VPN-related tasks, which can save you a few clicks every day. Smart rules, available on Windows and Mac, rely on triggers to take a predefined action. For example, you can configure CyberGhost to connect to a specific server location whenever you launch a browser or VoIP application.

CyberGhost offers an automation feature called smart rules.

CyberGhost is a low-cost service. The two-year plan (its best value) costs just $2.37 per month. Shorter-term plans include the six-month plan, costing $6.99 per month, and the monthly plan, which costs $12.99 per month. You get a longer, 45-day money-back guarantee (14 days on the monthly plan), so you have a longer window to try CyberGhost with zero risk.

CyberGhost Plans

VPNs offer various advantages over a proxy. First, a VPN tunnel is encrypted. This means that your internet traffic can't be read by anyone, even if they manage to get their hands on it. A proxy doesn't offer any encryption, so the government or your ISP can easily trace online activities back to you. That's why proxies aren't the best option to unblock VoIP.

In any case, most proxies are unable to unblock VoIP services altogether. They're just not as effective as VPNs at unblocking geoblocked websites and services. Even some VPNs fail at unblocking websites, though, so it's important to choose a service that has been tested to get into blocked websites (we recommend ExpressVPN).

Many countries block VoIP services because they offer end-to-end encrypted voice calls, and local carriers take a hit when you make free calls using a VoIP like Skype. You can use a VPN to unblock Skype and other VoIP services in the following countries where VoIP services are completely or partially banned by the government:

Yes, you can use a free VPN like Proton VPN to access VoIP on computers or mobile devices. We typically don't recommend free VPNs because they're slow and unreliable. However, exceptions do exist, and Proton VPN is one of them. Using a free VPN along with a free VoIP provider like Skype can help you connect with friends or family at no cost.

Many countries, including China, Iran, Kuwait and the UAE, block VoIP services. VPNs can help unblock them, provided you pick the right service. VPNs also secure your connection, protecting you from the government or ISP tracking the VoIP activity back to you.

Have you tried using a VPN to make VoIP calls? Which one, and did you experience any lack of clarity during the call? Let us know in the comments below, and as always, thank you for reading.

Yes, you can use a VoIP service with a VPN. If you're in a country that blocks VoIP services, using a reliable VPN can help unblock these services so you can enjoy free VoIP calling. Even if you're in a country where VoIP services are accessible, using a VPN helps secure your connection.

VPNs generally slow down your connection speed because of encryption. If you're already on an internet connection with slow speeds and you use a VPN that can't deliver great speeds, you might face performance issues during calls. By using a premium VPN like ExpressVPN, this reduction will be minimal.

VPNs can help bypass VoIP restrictions. They do so by hiding your real IP address, and therefore, your location. VPNs also encrypt your connection so that nobody will know you've bypassed government restrictions and accessed VoIP services.

Continue reading here:
Best VoIP VPN in 2023 [Unblock VoIP Services From Anywhere] - Cloudwards

Cloud customers want security and control that goes beyond the hyperscale model – diginomica

Fifteen years into the cloud computing era, most cloud providers sound a lot like auto pioneer Henry Ford: you can have any color you want, as long as it's black. In their view, businesses can use any cloud they want as long as it runs in a massive hyperscale data center. If customers want to deploy cloud services from a smaller facility, perhaps even from their own location, they're out of luck.

That is changing, in large part because many current and potential cloud customers, ranging from companies to entire countries, must meet security and regulatory mandates that require far more flexible deployment options. At the same time, these organizations and governments don't want to compromise on benefitting from the continuous innovation happening in the hyperscale cloud world. They want the option to run all of the same services without delays, no matter what configurations they need for their cloud deployments.

Cloud computing is no longer one-size-fits-all because it can't be, any more than Henry Ford could keep people from wanting red, blue, and yellow cars.

A growing number of countries have established policies to protect the privacy and security of their citizens' data, not to mention sensitive information related to military, intelligence, and other government operations. That makes using a multi-tenant cloud region that can extend beyond their borders problematic, at best.

Public cloud offers enormous benefits to businesses and government entities, such as providing extremely efficient data processing and storage. But the very nature of the cloud is based on the use of shared servers, networking, and storage, putting that model at odds with privacy and security mandates. That friction makes life complicated for governments and companies in regulated industries that want cost-effective and high-performance computing resources. Yes, you can cordon off pieces of a public cloud region for use by a single business, but that alone will not satisfy many of these complicated and often shifting regulations.

The result is that organizations need to add another factor into their considerations: data sovereignty. This is the concept that data related to individuals and organizations is subject to the privacy regulations of their home nation. Some regulators decided the easiest way to ensure that was to outlaw data leaving the region. Although well intentioned, this creates issues for any organization planning to benefit from the use of a standard hyperscale cloud. That's why existing and potential cloud customers are clamoring for options, including cloud deployments which will give them more control over how and where their data is stored and processed and who can access it.

On top of all of that, cloud infrastructure requires a secure-by-default approach to make sure the customer is not spending extra time and resources on basic protections. Broadly speaking, the only way to meet this increasingly diverse set of requirements is to offer a choice of several options for deployment, location, and staffing, along with a complete set of cloud services that runs across all those options with 100% fidelity.

For example, a large French bank may want to utilize full cloud services in its own data center and ensure that it is managed and maintained by French citizens employed by the bank, not the cloud provider. Or a South American government agency needs a super-secure cloud that is air-gapped (i.e., isolated) from the rest of the world but that still runs the same cloud services that are available in the public cloud.

Oracle Cloud Infrastructure (OCI) meets the needs of many customers around the world with its distributed cloud, which offers public cloud to customers via dozens of cloud regions located in several countries around the world, and multi-cloud, hybrid cloud, and dedicated cloud options, including cloud services deployable on-premises. But increasing regulations, such as the General Data Protection Regulation (GDPR), mean that many customers need more specific controls. To help customers fully address the European Union's (EU) regulatory laws, Oracle's upcoming EU Sovereign Cloud will restrict the operation and customer support responsibilities to EU residents to ensure total compliance with the regulations. Customers with data and applications that are sensitive, regulated, or of strategic regional importance, as well as workloads that fall under EU requirements for data privacy and sovereignty (such as GDPR), can now move to the cloud. This also opens the door for government agencies to adopt cloud technology in their operations in ways that were not previously possible.

Other customers need to take it a step further. In Japan, the Nomura Research Institute (NRI) needed to modernize its IT operations for its financial services. It needed to have access to the full range of cloud benefits: availability, scalability, efficiency, and performance, among others. However, the sensitive nature and compliance requirements of the financial transactions on this system meant that it couldn't sacrifice control over its data and financial governance. The financial services industry is one of the most demanding in terms of regulation and security, and so the companies that operate in that space need a level of customization that standard hyperscale cloud models just don't provide. The answer for NRI was OCI Dedicated Region, which allowed the company to run a fully up-to-date cloud region from within its own data center, giving it the best of both worlds.

The need for more cloud deployment options will only grow as technical and regulatory requirements change. Regulations in each country, and across regions like the European Union, will shift and proliferate as needed. In 2016, the 261-page GDPR law replaced the Data Protection Directive of 1995. And everyone needs to be prepared to adapt again because no one thinks the requirements of GDPR are set in stone, even eight years later. Things will change.

There will always be a market for services operating in massive hyperscale public clouds. But organizations with key sensitive workloads, their crown jewels, will increasingly demand reliable, secure cloud services that run from a more diverse set of footprints.

Once again, the days of one-cloud-fits-all-needs are over.

See the original post:
Cloud customers want security and control that goes beyond the hyperscale model - diginomica

US Looks To Restrict China’s Access To Cloud Computing To … – Slashdot

The Biden administration is preparing to restrict Chinese companies' access to U.S. cloud-computing services, WSJ reported Tuesday, citing people familiar with the situation, in a move that could further strain relations between the world's economic superpowers. From the report: The new rule, if adopted, would likely require U.S. cloud-service providers such as Amazon.com and Microsoft to seek U.S. government permission before they provide cloud-computing services that use advanced artificial-intelligence chips to Chinese customers, the people said. The Biden administration's move would follow other recent measures as Washington and Beijing wage a high-stakes conflict over access to the supply chain for the world's most advanced technology.

Beijing Monday announced export restrictions on metals used in advanced chip manufacturing, days ahead of a visit to China by Treasury Secretary Janet Yellen. The proposed restriction is seen as a means to close a significant loophole. National-security analysts have warned that Chinese AI companies might have bypassed the current export controls rules by using cloud services. These services allow customers to gain powerful computing capabilities without purchasing advanced equipment -- including chips -- on the control list, such as the A100 chips by American technology company Nvidia.

Read the original here:
US Looks To Restrict China's Access To Cloud Computing To ... - Slashdot

The Role of SaaS in Shaping the Future of Global … – Fagen wasanni

Exploring the Impact of SaaS on the Future of Global Telecommunications

The role of Software as a Service (SaaS) in shaping the future of global telecommunications cannot be overstated. As the world becomes increasingly digital, the telecommunications industry is undergoing a significant transformation, driven by the rapid adoption of SaaS solutions. This shift is not only changing the way telecommunications companies operate but also how they deliver services to their customers.

SaaS, a cloud-based service model, allows users to access software applications over the internet. Instead of purchasing and installing software on individual computers or servers, users can access these applications via the cloud. This model offers numerous benefits, including cost savings, scalability, and flexibility, making it an attractive option for telecommunications companies.

One of the most significant impacts of SaaS on the telecommunications industry is the shift from capital-intensive infrastructure to a more flexible and scalable cloud-based model. Traditionally, telecommunications companies have had to invest heavily in physical infrastructure, such as servers and data centers. However, with SaaS, these companies can now leverage cloud-based solutions to deliver services more efficiently and cost-effectively. This shift not only reduces capital expenditure but also allows for greater scalability, as companies can easily adjust their services based on demand.

Moreover, SaaS is enabling telecommunications companies to innovate and deliver new services faster. In the past, launching a new service often required significant time and resources. With SaaS, however, companies can quickly deploy new applications and services, allowing them to stay competitive in a rapidly evolving market. This agility is particularly crucial in the telecommunications industry, where customer expectations and technology trends are constantly changing.

In addition to operational efficiency and agility, SaaS is also enhancing the customer experience in telecommunications. With SaaS, telecommunications companies can provide customers with more personalized and seamless experiences. For instance, cloud-based customer relationship management (CRM) systems allow companies to track customer interactions and preferences, enabling them to deliver more targeted and personalized services. Furthermore, SaaS solutions can also improve service reliability and quality, as they can be updated and maintained more easily than traditional software.

However, the adoption of SaaS in telecommunications is not without challenges. Security concerns are a significant issue, as sensitive data is often stored and transmitted via the cloud. To address this, telecommunications companies need to ensure they have robust security measures in place and work with reputable SaaS providers. Regulatory compliance is another challenge, as telecommunications companies operate in a highly regulated environment. Therefore, they need to ensure that their SaaS solutions comply with all relevant regulations.

Despite these challenges, the benefits of SaaS for the telecommunications industry are clear. As the world becomes more connected, the demand for efficient, scalable, and innovative telecommunications services will continue to grow. By leveraging SaaS, telecommunications companies can meet this demand and stay competitive in the digital age. In conclusion, SaaS is not just shaping the future of global telecommunications; it is driving it.

Read the original post:
The Role of SaaS in Shaping the Future of Global ... - Fagen wasanni

HPE plans to spend $1 billion on high-volume servers in India – ITPro

Hewlett Packard Enterprise is set to manufacture servers in India amid efforts to diversify the company's global supply chain and support demand for hardware in India.

HPE expects to make $1 billion worth of servers during the first five years of production at the plant in Manesar, Haryana, and will be partnering with VVDN Technologies.

Antonio Neri, president and CEO at HPE, said: Today's announcement marks a significant milestone for HPE and reiterates our commitment to the government of India's Make in India initiative for a self-reliant India.

India is a strategic market for HPE's business, talent, innovation and now, manufacturing.

The company has its largest workforce outside the United States in India, and has more than 4,000 employees based out of its Mahadevpura campus in Bengaluru.

In total, HPE has approximately 14,000 team members in the country, according to Neri.

In 2019, HPE announced an investment of $500 million over five years to grow its operations and team base in India. It has since opened multiple new campuses and offices.

HPE did not immediately respond to a query regarding the specifics of the server product line it planned to manufacture in India. The plans call for manufacturing some of its high-volume servers.

Som Satsangi, SVP and managing director of India at HPE, noted that India is expected to grow into a $1 trillion digital economy.

He said: With local manufacturing, HPE will be able to better serve the needs of our customers across industries and segments, from startups to SMBs to enterprises to government.

It was announced last week that Ashok Leyland, one of India's leading commercial vehicle manufacturers, had selected the HPE GreenLake edge-to-cloud platform to modernize its SAP infrastructure.

The move followed 2022's announcement that the Steel Authority of India Limited (SAIL) had also selected GreenLake for its Central Marketing Organization (CMO).

HPE's India manufacturing plans come just over a year after it announced the fourth of the company's global high-performance computing sites, located in Kutná Hora, Czech Republic, next to HPE's existing European site.

In 2020 it announced industry-standard servers with US Country of Origin - servers built by vetted HPE employees in highly secure US facilities as part of its HPE Trusted Supply Chain initiative.

Read the original:
HPE plans to spend $1 billion on high-volume servers in India - ITPro

Honeycomb: How We Used Serverless to Speed Up Our Servers – InfoQ.com

Transcript

Kerr: One Tuesday morning a developer sits down at their desk, opens up their laptop. They look at some dashboards, maybe, what is this blip in the error rate? Could that possibly be related to the change I pushed yesterday? They open up their log aggregator and they type in a query, take a sip of coffee. Might as well go get another cup. That's it. That's the whole story. They never get back to that question. Maybe their kid walked in and distracted them. Maybe they got an email. Who knows? Their attention is gone. I'm Jessica Kerr. I'm here to talk to you about how at Honeycomb, we use serverless functions to speed up our database servers.

I'm going to talk about how serverless is useful to us at Honeycomb, not for answering web requests, but for on-demand compute. Then I'll talk about some of the ways that it was tricky. Some of the obstacles that we overcame to get this working smoothly. Finally, how you might use serverless. Some things to watch out for, what workloads you might use this for.

First, I need to tell you why we use serverless at all. We use Lambda functions on AWS Lambda, to supplement our custom datastore, whose name is Retriever. Your first question there should definitely be, why do you have a custom datastore? Because the answer to let's write our own database is no. In our case, our founders tried that. It turned out that we are really specialized. Retriever is a special purpose datastore for real-time event aggregation for interactive querying over traces, over telemetry data for observability. Why would we want to do that? Because Honeycomb's vision of observability is highly interactive. People should be able to find out what's going on in their software system when they need to know not just learn that something's wrong, but be able to ask, what's wrong? How is this different? What does normal look like? A repeated question structure, always get to new questions. The difference for us between monitoring and observability, is in monitoring, you decide upfront what you want to watch for. You maybe watch for everything that has been a problem in the past. Then when you want to graph over that, you can graph that over any period of time, and it's really fast because you've stored it in a time series database. You've done all the aggregating already. In Honeycomb, you don't yet know what you're going to need to ask about production data. Dump it all into events. We'll put it into Retriever. Then we'll make every graph fast. Each different field that you might want to group by or aggregate over, each different aggregation you might want to do, from a simple count to a p50, or a p90, or a p99, or a heatmap over the whole distribution. Our goal is to make all of these graphs fast, so that you can get the information you need and immediately start querying it for more information.

It goes like this. Say I want to know how long our Lambda functions take to execute. What's the average execution time? Not always the most useful metric, but we can use it today. I know I need to look in Retriever's dataset for something in Lambda, but I don't remember the name of the spans I'm looking for. I'll just ask it for group by name, give me all the different names of the spans. I recognize this one, invoke. I'm looking for invoke. Next question, run this query, but show me only the invoke spans. Ok, got that back. Next query, show me the average of their durations. I can scroll down and I can see what that is. Then I get curious. This is important. I'm like, why is this so spiky? What is going on over here where it's like, super jumpy, and the count is way higher and the average duration is bouncy? I'm like, look at this, look at that spike in the p50 of the median duration down there? Let's see, I'll heatmap over that. Doesn't look like they're particularly slower in the distribution. Let's say, what is different about these spans compared to everything else in the graph?

Honeycomb does a statistical analysis of what's going on. Then we can scroll down and we can see what's different. It looks like for the spans in this box I drew they're mostly from queries, and they have a single trace ID so they're from this particular query. Ok, so now, next aggregation. Give me only the spans inside this trace. Now I'm seeing all the invocations in this one query, but now I want to know what query were they running that made it take so long? Instead of looking for the invocations, let's look through this trace, but let's find something with a query spec in it. Now I'm going to get back probably just one span, a couple spans, of Retriever client fetch. I recognize that name. That's the one that's going to tell me what this particular customer was trying to do. If I flip over to raw data, then I can see all of the fields that we sent in Retriever client fetch. Look, there's the query spec right there. I'm not sure exactly what that is but it looks hard. Some queries that customers run are definitely harder than others.
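
To give a rough idea of the shape of such a query, here is an illustrative sketch in Go. The struct fields, operator names, and JSON layout are assumptions for the example, not Honeycomb's actual query API.

```go
// Hypothetical description of "average duration of spans named
// 'invoke' over the last 2 hours" as a serializable query spec.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Filter struct {
	Column string `json:"column"`
	Op     string `json:"op"`
	Value  string `json:"value"`
}

type Calculation struct {
	Op     string `json:"op"`               // e.g. COUNT, AVG, P99, HEATMAP
	Column string `json:"column,omitempty"` // column the aggregation runs over
}

type QuerySpec struct {
	Breakdowns   []string      `json:"breakdowns"` // group-by columns
	Calculations []Calculation `json:"calculations"`
	Filters      []Filter      `json:"filters"`
	TimeRangeSec int           `json:"time_range"` // trailing window, in seconds
}

func main() {
	q := QuerySpec{
		Breakdowns:   []string{"name"},
		Calculations: []Calculation{{Op: "AVG", Column: "duration_ms"}},
		Filters:      []Filter{{Column: "name", Op: "=", Value: "invoke"}},
		TimeRangeSec: int((2 * time.Hour).Seconds()),
	}
	out, _ := json.MarshalIndent(q, "", "  ")
	fmt.Println(string(out))
}
```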

The point is to get this interactive feel, this back and forth, this dialogue going with your production data, so that you can continue to ask new questions over and over. For that, it has to be really fast. If I hit run query, and then I take a sip of coffee, now I should have my answer. If I have to go get another cup, complete failure. We've lost that developer or that SRE. That's not good enough. The emphasis on this is on the interactivity here. Ten seconds is a little slow. One second is great. A minute, right out.

How do we do this? Architecture of Retriever. Customers send us events. We put them in the database. Then developers, and SREs, and product, and whoever, runs the queries from our web app. Of course, the events come into Kafka. This is not weird. Naturally, we partition them. Retriever is a distributed datastore. There's two Retrievers to read off of each topic, so that we have redundancy there. It reads all the events, and then it writes them to local disk. Because local disk is fast, in-memory is too expensive. Anywhere else is slower. It writes all these things to local disk. That's quick. The more of Retrievers we have, the more local disks we have. Then, when a query comes in, it comes into one Retriever. That Retriever says, ok, this dataset has data in these many other partitions, sends off inner queries to all of those Retrievers so that they can access their local disks. Then there's a big MapReduce operation going on, it comes back to the Retriever you asked, and it responds to the UI. That's the distributed part.
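
A minimal sketch of that scatter-gather, MapReduce shape follows; the partition RPC, the query type, and the partial-result type are all invented for illustration rather than Retriever's real protocol.

```go
// Fan a query out to every partition, then reduce the partial results.
package main

import (
	"context"
	"fmt"
	"sync"
)

type Query struct{ Dataset string }
type PartialResult struct{ Count int }

// queryPartition stands in for the inner query one Retriever sends to
// the peer that owns a partition of the dataset.
func queryPartition(ctx context.Context, partition int, q Query) (PartialResult, error) {
	return PartialResult{Count: partition * 10}, nil // fake work
}

func scatterGather(ctx context.Context, partitions []int, q Query) (PartialResult, error) {
	var (
		mu    sync.Mutex
		total PartialResult
		wg    sync.WaitGroup
	)
	for _, p := range partitions {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			pr, err := queryPartition(ctx, p, q)
			if err != nil {
				return // a real implementation would surface the error
			}
			mu.Lock()
			total.Count += pr.Count // reduce step
			mu.Unlock()
		}(p)
	}
	wg.Wait()
	return total, nil
}

func main() {
	res, _ := scatterGather(context.Background(), []int{1, 2, 3, 4}, Query{Dataset: "retriever"})
	fmt.Println("total count:", res.Count)
}
```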

The next trick to making this really fast is that Retriever is a column store. It's been a column store since before these were super cool, but it's still super cool. Every field that comes in with an event goes in a separate file. That's fine. This is how we scale with quantity of fields on the event. Because at Honeycomb, we want you to send all kinds of fields and they can have all different values. We don't care because we're only going to access the ones we need. When a query comes in, if we're looking for service name equals Lambda, and name of the span is invoke, and we're aggregating over the duration, all Retriever is going to look at is the service name, the name, and the duration columns, and the timestamp. There's always a timestamp associated with every query. That's the next trick is, in order to segment this data, we use timestamp. At Honeycomb, I like to say we don't index on anything, but that's not quite true, we index on timestamp. The data is broken into segments based on like, I think, at most 12 hours, or a million events, or a certain number of megabytes in a file. Then we'll roll over to the next segment. Then we record like, what timestamps are the earliest and latest in each segment. That way, when a query comes in, we're like, ok, the query has this time range, we're going to get all the segments that overlap that time range. We're going to look through the timestamp file to find out which events qualify. That's how Retriever achieves dynamic aggregation of any fields across any time range at that interactive query speed.
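
A small sketch of the segment-selection idea described here, assuming each segment records its earliest and latest event timestamp (the field names are invented for the example):

```go
// Only segments whose time range overlaps the query's range need to be read.
package main

import (
	"fmt"
	"time"
)

type Segment struct {
	ID       int
	Earliest time.Time
	Latest   time.Time
}

// overlaps reports whether a segment could contain events in the query window.
func overlaps(s Segment, qStart, qEnd time.Time) bool {
	return !s.Latest.Before(qStart) && !s.Earliest.After(qEnd)
}

func selectSegments(all []Segment, qStart, qEnd time.Time) []Segment {
	var hit []Segment
	for _, s := range all {
		if overlaps(s, qStart, qEnd) {
			hit = append(hit, s)
		}
	}
	return hit
}

func main() {
	now := time.Now()
	segs := []Segment{
		{ID: 1, Earliest: now.Add(-36 * time.Hour), Latest: now.Add(-24 * time.Hour)},
		{ID: 2, Earliest: now.Add(-24 * time.Hour), Latest: now.Add(-12 * time.Hour)},
		{ID: 3, Earliest: now.Add(-12 * time.Hour), Latest: now},
	}
	// Query the last 2 hours: only segment 3 overlaps.
	fmt.Println(selectSegments(segs, now.Add(-2*time.Hour), now))
}
```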

Then we have the problem of success, and we've got bigger customers with more data coming in, and datasets are getting bigger. The thing is, our strategy used to be, whenever we run out of space for a particular dataset, new segment starts, older segments get deleted. That was fine when the oldest segment was like a week old. The point is, your current production data is what's most important. We got datasets that were big enough that at our maximum allocation for a dataset, we were throwing away data from like 10 minutes ago. That's not ok. You need more than 10 minutes window into your production system. We did what everybody does when there's too much data, we started putting it in S3. This time, instead of deleting the oldest segment, we were shipping it up to S3. Each Retriever still takes responsibility for all of the segments in its partition, it's just that now we're not limited in storage. We can store it up to 60 days. That's a much better time window, than, until we run out of space, much more predictable. Then those queries are going to be slower. They're not as fast as local disk. It's the most recent stuff that you query the most often, and that's what you want to be really fast. It's also the stuff that's the most urgent.
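
A tiny sketch of that tiering rule, with a placeholder hot window standing in for "whatever still fits on local disk" and an assumed 60-day retention:

```go
// Recent segments are served from local disk, older ones from S3, and
// anything past retention is gone. In reality the local window is
// bounded by disk space rather than a fixed age; 24h is a placeholder.
package main

import (
	"fmt"
	"time"
)

const (
	hotWindow = 24 * time.Hour
	retention = 60 * 24 * time.Hour
)

func storageFor(segmentEnd, now time.Time) string {
	age := now.Sub(segmentEnd)
	switch {
	case age <= hotWindow:
		return "local-disk"
	case age <= retention:
		return "s3"
	default:
		return "expired"
	}
}

func main() {
	now := time.Now()
	fmt.Println(storageFor(now.Add(-2*time.Hour), now))     // local-disk
	fmt.Println(storageFor(now.Add(-10*24*time.Hour), now)) // s3
}
```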

We're like, ok, so each Retriever, when it needs some data that's older, it'll go download those files from S3, and include those in the query. It won't be quite as fast, but it'll be a lot more flexible, because you have more data. That's good. Now people can run queries over 60 days' worth of data. No, 60 days is a lot. How much longer is that going to take? When you're reading from local disk, it's really fast, but as soon as you hit S3, the query time grows, at least linearly with the number of segments that it has to download and query. If you query for the last few minutes, yes, you can take a sip of coffee. If you query for the last few days, you might have to take a couple sips, and 60 days, we had to change our maximum query timeout to an hour. That's way beyond a cup of coffee. That's like roast the beans and brew the pot. I hear you can roast beans, it doesn't take that long, but this took too long.

That was not ok. What are we going to do? Retriever is like, I need more compute. The network wasn't the bottleneck here. It was actually the compute because we're doing all those reads and the aggregations, and group bys, and filters, and all that stuff in memory. At query time, compute was on limitation. We could just like spin up more Retrievers. We could get more EC2 instances. You can buy compute. Except we really don't need it all the time. The Retriever dog doesn't always want to play. This is like when we need the compute. This is the concurrency of how many Lambdas are we running at any one time, and it's super spiky. Often, pretty much none. Sometimes, we need thousands. This is very different from the compute profile of EC2 because we don't need it 30 seconds from now, after use. Even if an instance spun up that fast, which they don't all, that's too long. We need sudden access to compute while you're lifting your cup. That is exactly what serverless provides. Also, Lambdas are like right next door to S3. Retriever, you get some minions. Now, when a Retriever needs to access its segments in S3, it spins up a Lambda for each eight or so segments. That Lambda reads the data from S3, decrypts it, looks at the files just that it needs to. Does the aggregations. Sends the intermediate result to Retriever, and the MapReduce operation flows upward. This is much better.

See, our query time, it still goes up with the number of segments queried. That's not weird. It's very sublinear. If you're running a 60-day query, and it's a hard one, you might get more than one sip in but you're not going to have to go get another cup. Win. It turns out that you can buy compute in tiny increments: it used to be 100 milliseconds, now it's 1 millisecond. This is like us scaling the compute, so that the time of the query doesn't scale with how much it's doing. We're throwing money at the problem, but very precisely, like only when we need to.

We use Lambda to scale up compute in our database. We found that it's fast enough. Our median start time is like 50 milliseconds. My cup doesn't get very far in that amount of time. It's ok. We don't see much of a difference between hot and cold startups. They tend to return within two and a half seconds, which is acceptable. They are 3 or 4 times more expensive, but we run them 100 times less, at least, than we would an EC2 instance, for the same amount of compute, so this works out. There are caveats to all of these, or at least caveats that we overcame. Watch out.

We started doing this a little over a year ago, and at AWS, this was a new use case, at the time, for serverless. Because they designed it for web apps, they designed it as like a backend on-demand. The scaling isn't exactly what we expected. The scaling for Lambda is, it'll go up to what is called the burst limit, which in US-East-1 is 500. In US-West-2 I think it's 3000. It varies by region. That burst limit is like 500 Lambdas. Then they stop scaling. Then AWS was like, but if you have continuous load, then over the next minute, they will scale up, I think it might be linearly, I've drawn it as steps, to the concurrency limit, which is like 1000. The rest of them will get a 429 response which is throttled for retry. We hit this. Spending a minute scaling up by 500 more Lambdas is not helpful, because our usage pattern looks like this. We don't have a minute of sustained load. That doesn't help us at all, so we really needed our burst limit raised. We talked to AWS and they raised our burst limit. You can talk to your rep and you can get your burst limit raised into the tens of thousands now. That helps, and your concurrency limit can be raised as well. The trick is to not surprise your cloud provider. We were able to measure how many Lambdas we needed to run at a given time, or are running. In fact, we added this concurrency operator to count how many of a thing it wants, just for this purpose. Now that's available to everyone.
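
A minimal sketch of retrying throttled invocations with a short backoff; the error value, attempt counts, and delays are placeholders for whatever the real SDK surfaces as a 429.

```go
// Back off briefly and retry throttled invocations instead of failing
// the whole query; delays must stay well under the interactive budget.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errThrottled = errors.New("throttled") // stands in for a 429 from the service

func invoke() error {
	if rand.Intn(3) == 0 { // fake: sometimes throttled
		return errThrottled
	}
	return nil
}

func invokeWithRetry(maxAttempts int) error {
	for attempt := 1; ; attempt++ {
		err := invoke()
		if err == nil || !errors.Is(err, errThrottled) || attempt == maxAttempts {
			return err
		}
		backoff := time.Duration(attempt) * 50 * time.Millisecond
		time.Sleep(backoff + time.Duration(rand.Intn(25))*time.Millisecond) // add jitter
	}
}

func main() {
	fmt.Println(invokeWithRetry(5))
}
```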

Startup, we need this to be fast. People talk about cold starts, warm starts. Is that a problem for us? It hasn't been. When you invoke a Lambda function, AWS may or may not have some already ready of these processes already started up and ready. If not, it'll start up a new one and then invoke it. Then that one will hang out a little while waiting to see if it gets some more invocations. You only get charged for while it's running the code. You can see the difference between these. We can make a trace, and we do. We make a trace not only of our invocations, but of that wider Lambda process, because we omit a span when it wakes up and we omit a span right before the function goes to sleep. We can see run, sleep, run, sleep, run sleep. You can actually follow what's going on in that process, even though during those sleeps, it's not actively doing anything. I think that's fun.

Generally, our startups are within 50 milliseconds, like you saw. This is in Go, so that helps. Here it goes. Here's the Lambda function process, you can see that this one hung out for a while. We can count the number currently running. We can use concurrency to count the number currently sleeping, and you can see that those are wider. That's just neat. What matters is that, when we invoke them, they start up quickly, they do their processing. They return within two-and-a-half seconds, most of the time, 90% of the time, but definitely not 100%. You can see the 30,000-millisecond line, the 30-second line, in the middle of this graph; there's a cluster there, that's S3 timeout. Lambda may be right next door to S3, but S3 does not always answer its knock. The trick to this is just don't wait that long. Start up another one with the same parameters, and hope you get a little luckier on the timing this time and S3 does respond. Watch out, because the default timeout in the Lambda SDK is like 30 seconds or longer, which is way too long. You do not want to use the default timeout; make sure you give up before the data becomes irrelevant.
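
A small sketch of giving up early and re-issuing the same request, using a context deadline instead of the SDK default; the 3-second budget and the fake latencies are assumptions for the example.

```go
// Cap each fetch with a short deadline; on timeout, retry the same
// request rather than waiting out a long default.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// fetchSegment stands in for "invoke a Lambda that reads a segment from S3".
func fetchSegment(ctx context.Context, latency time.Duration) error {
	select {
	case <-time.After(latency): // fake S3 latency
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func fetchWithDeadline(parent context.Context) error {
	latencies := []time.Duration{10 * time.Second, 500 * time.Millisecond} // first try hangs
	for _, l := range latencies {
		ctx, cancel := context.WithTimeout(parent, 3*time.Second)
		err := fetchSegment(ctx, l)
		cancel()
		if err == nil {
			return nil
		}
		if !errors.Is(err, context.DeadlineExceeded) {
			return err
		}
		// Timed out: re-issue the same request and hope S3 answers this time.
	}
	return errors.New("segment fetch kept timing out")
}

func main() {
	fmt.Println(fetchWithDeadline(context.Background()))
}
```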

We did also find a peculiar restriction that like the functions can't return more than 6 megabytes of data. Put the return value in S3 and respond with a link. Amazon has a limit for everything. That's healthy. They have boundaries. They will surprise you. You will find them. Also, when you try to send the functions data, we would like to send them binary data, but they only want JSON. There's weird stuff. JSON is not that efficient. It's not exactly JSON, it's whatever AWS's Lambda JSON cop has decided is JSON. Don't deal with it. Put the input in S3 and send a link. This is fine.
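
A hedged sketch of the "put the result in S3 and return a link" workaround; the bucket name, key naming, and response shape are invented for the example.

```go
// Responses over the payload cap get written to S3; the function
// returns a pointer instead of the data itself.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

const maxInlineResponse = 5 * 1024 * 1024 // stay safely under the ~6 MB cap

type response struct {
	Inline    json.RawMessage `json:"inline,omitempty"`
	ResultKey string          `json:"result_key,omitempty"` // s3://bucket/key to fetch instead
}

func respond(svc *s3.S3, bucket, key string, result []byte) (response, error) {
	if len(result) <= maxInlineResponse {
		return response{Inline: result}, nil
	}
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(result),
	})
	if err != nil {
		return response{}, err
	}
	return response{ResultKey: fmt.Sprintf("s3://%s/%s", bucket, key)}, nil
}

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)
	resp, err := respond(svc, "query-results-bucket", "query-1234/part-0", []byte(`{"count": 42}`))
	fmt.Println(resp, err)
}
```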

Finally, everyone knows that serverless is expensive. Per CPU second, it costs like three to four times what an EC2 instance would cost. Given that we're running at less than a 100th of the time as much, that seems like a win. What can we do to keep that down? First of all, what really worries me about Lambda costs is that you don't know what they're going to be, because how many of these is your software going to invoke and suddenly spin up? What are the costs associated with that? Are you going to get surprised by a bill that's like a quarter of your AWS bill? Sometimes. This is where observability is also really important. Because we have spans that measure that invocation time, we can multiply that duration by how much we pay per second of Lambda invocation. We can count that up by customer, because all of our spans include customer ID as a dimension. Then, we can get notified, and we do, whenever a particular customer uses more than $1,000 of Lambda in a day or an hour. Then sometimes we get the account reps to reach out and talk to that customer and be like, what are you doing? Here's a more effective way to accomplish what you're looking for. We throttle our API and stuff like that. Really, the best you can do is find out quickly if you're going to get a big bill.
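
A rough sketch of that cost attribution: sum per-customer Lambda time and flag big spenders. The span shape, the GB-second math, the rate, and the threshold are placeholders; the real version runs as a query over the telemetry itself, across millions of spans per day.

```go
// Attribute Lambda spend to customers from invocation spans and alert
// when a daily threshold is crossed.
package main

import "fmt"

type invocationSpan struct {
	CustomerID string
	DurationMS float64
	MemoryGB   float64
}

const (
	assumedPricePerGBSecond = 0.0000166667 // illustrative rate
	dailyAlertThresholdUSD  = 1000.0
)

func costByCustomer(spans []invocationSpan) map[string]float64 {
	out := map[string]float64{}
	for _, s := range spans {
		gbSeconds := (s.DurationMS / 1000.0) * s.MemoryGB
		out[s.CustomerID] += gbSeconds * assumedPricePerGBSecond
	}
	return out
}

func main() {
	spans := []invocationSpan{
		{CustomerID: "team-a", DurationMS: 2500, MemoryGB: 2},
		{CustomerID: "team-b", DurationMS: 900, MemoryGB: 2},
	}
	for customer, usd := range costByCustomer(spans) {
		if usd > dailyAlertThresholdUSD {
			fmt.Printf("ALERT: %s spent $%.2f on Lambda today\n", customer, usd)
		} else {
			fmt.Printf("%s: $%.4f so far today\n", customer, usd)
		}
	}
}
```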

Also, we do a ton of optimization. We do so much optimization of our Lambda execution, really all of our major database processes, to get that speed. One way that we optimize is that we've moved from x86 to ARM processors, to the Graviton2 processors, both for our Retrievers and our ingest, most of our other servers, but also for our Lambdas. Liz Fong-Jones, who's our field CTO now, has written several articles about how the ARM processors are faster, in the sense that it's going to take less CPU to run them. Those CPU seconds are also cheaper. We get lower costs in two different ways. We can measure that. We started building our Lambda functions there in Go for both x86 and ARM. The first time we tried a 50-50 split, and we ran into some, ok, maybe this, maybe not. Initially, the ARM64 processors were about the same average, but a lot more varied in their performance, and overall slower. Take it back. They were not the same average. They were more varied in their performance and overall slower. We're like, ok, let's change that feature flag, and we'll roll this back so we're running 1% on ARM processors and 99% on x86. We did that. Yes, so now you can see our ARM percentage, you can barely see the orange line at the end after the feature flag was deployed.

Then we started investigating, why was it so slow? One was capacity. Even though we had our Lambda executions limits raised, there were only so many ARM processors available to run them. The total capacity in AWS for these is still lower than for x86 Lambdas. We had to work with AWS directly, and created a capacity plan for when we would be able to spin up more of them to ARM. The next thing we noticed was that these were running slower, because at the time, the current Golang was 1.17, and 1.17 had a particular optimization of putting parameters in registers instead of having to put them in memory for function calls that made calling functions faster on x86. Because we're doing all these super complicated queries, and which filter are we doing, and which group by are we doing, and there's a lot of branching in what our aggregators are doing, there were a lot of function calls. A little bit of overhead on a function call went a long way. Go 1.18 also has this optimization on ARM, so we started using 1.18 a little bit early, just for our Lambdas, and that made a difference. Now Go is 1.19, it's fine. At the time, that was a significant discovery. We figured that out with profiling.

Also, through profiling, we noticed that the compression was taking a lot longer on ARM than on x86. It turned out that the LZ4 compression library had a native implementation on x86, but had not been released yet natively in assembly for ARM64. Liz spent a couple afternoons porting the ARM32 assembly version of the LZ4 compression library to ARM64, got that out, and brought the performance more in line. These three considerations fixed the performance problems that we saw at the time. Under the capacity ones, that's a gradual fix over time. Since then, since a year ago, we've been able to bump it up to 30% ARM. Then AWS called and said, "Try. Go for it." We bumped it up to like 99, but then there were some regressions and so we dropped it down to 50, and that was ok. Then we got those fixed, and then bumped it up to 90, or gradually worked it up to 99%. Now we're there. We keep 1% on x86 just so we don't break it without noticing.

The performance is good. There's a little more variation in the purple x86 lines here, but that's just because they're 1%. The orange lines are ARM. Yes, the performance is the same. We figured out also through profiling and observability that on ARM, with the same CPU size as x86, it was sufficiently fast enough that we'd actually hit network limitations. We scaled back the CPU by 20%. On fewer CPUs, we're getting the same performance. Also, those CPUs are 20% cheaper. This continued optimization is how we manage to spend money very strategically on our database CPU, so that people can get that interactive query timing, even over 60 days.

We scaled up our compute with Lambda, should you? Think about it. If you do, be sure to study your limits. Be sure to change the SDK retry parameters, don't wait 30 seconds for it to come back. Deployment is its own thing. We stub that out for automated tests. The only real test is production, so also test in production, with good observability. Observability is also really important for knowing how much you're spending, because you can really only find that out, again, in production from minute to minute. Always talk to your cloud provider. Don't surprise them. Work this out with them. Talk about your capacity limits. A lot of them are adjustable, but not without warning. The question is, what should you do on serverless, and what should you not? Real-time bulk workloads. That's what we're doing. We're doing a lot of work while someone is waiting in our database. It needs to be a lot of work, or don't bother, just run it on whatever computer you're already on. It needs to be urgent, like a human is waiting for it, or else there's no point spending the two to four times extra on serverless, unless you just really want to look cool or something. Run just a Kubernetes job, run it on EC2, something like that, if it's not urgent.

Once you've got someone waiting on a whole lot of work, then what you're going to need to do is move the input to object storage. You've got to get all of the input that these functions need off of local disk, and somewhere in the cloud where they can access it. If they have to call back to Retriever to get the data, that wouldn't help. Then you've got to shard it. You've got to divide that up into work that can be done in parallel. It takes a lot of parallelism. The MapReduce algorithms that our Lambdas are using have this. Then you'll want to bring that data together. You could do this in Lambda, but this also can be a bottleneck. We choose to do that outside of Lambda on our persistent Retriever instances, which are also running on ARM for added savings.
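
A minimal sketch of merging intermediate results outside Lambda, keeping averages mergeable by carrying (sum, count) pairs; the shapes are invented for the example.

```go
// Reduce step: combine partial aggregates from Lambda batches and from
// local-disk segments into one final answer.
package main

import "fmt"

type partial struct {
	Sum   float64
	Count int64
}

func merge(parts []partial) partial {
	var total partial
	for _, p := range parts {
		total.Sum += p.Sum
		total.Count += p.Count
	}
	return total
}

func main() {
	parts := []partial{
		{Sum: 1200, Count: 10}, // from Lambda batch 1
		{Sum: 860, Count: 7},   // from Lambda batch 2
		{Sum: 300, Count: 3},   // from local-disk segments
	}
	t := merge(parts)
	fmt.Printf("avg duration: %.1f ms over %d spans\n", t.Sum/float64(t.Count), t.Count)
}
```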

Then you're going to have to do a lot of work. You're spending money on the serverless compute, so use it carefully. You're going to need to tune the parameters, like how many segments per invocation. What's the right amount of work for each Lambda execution? How many CPUs do you need on a Lambda at a time? Memory is tied to that. Watch out for cases where you're blocked on network; more CPU isn't going to help you there. You'll need to optimize properly, and that means performance-optimizing your code where it's needed. You'll need profiling. You definitely need observability. There's an OpenTelemetry layer, and it will wrap around your function and create the spans at the start and end. It's important to use a layer for this. Your function can't send anything after it returns. As soon as it returns, it's in sleep mode until it starts up again. The Lambda layer allows something to happen to report on the return of your function. Be sure to measure it really carefully, because that's how you're going to find out how much you're spending.
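As a rough illustration of the wrap-and-flush idea (not how Honeycomb's own Lambdas are instrumented), the OpenTelemetry Go instrumentation for aws-lambda-go wraps the handler so a span is opened per invocation and the telemetry is flushed before the execution environment is frozen. The handler body and provider setup here are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
	"go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-lambda-go/otellambda"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// handler stands in for the real work: read the segment batch, aggregate, return results.
func handler(ctx context.Context, event map[string]any) (string, error) {
	return "ok", nil
}

func main() {
	// In a real setup you would register an exporter on the provider;
	// it is omitted here to keep the sketch self-contained.
	tp := sdktrace.NewTracerProvider()

	// InstrumentHandler opens a span around each invocation; WithFlusher makes
	// the provider flush before the function returns, because nothing can be
	// sent once the execution environment is frozen.
	lambda.Start(otellambda.InstrumentHandler(handler,
		otellambda.WithTracerProvider(tp),
		otellambda.WithFlusher(tp),
	))
}
```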

In the end, technology doesn't matter. It's not about using the latest hotness. The architecture doesn't matter. It's not about how cool a distributed column store is. What matters is that this gives something valuable to the people who use Honeycomb. We spend a ton of thought, a ton of development effort, a ton of optimization, a ton of observability, we put all of our brainpower and a lot of money into our serverless functions, all to preserve that one most precious resource, developer attention.

If you want to learn more, you can find me at honeycomb.io/office-hours, or on Twitter as jessitron, or you can read our book. Liz, George, and Charity, all from Honeycomb, have written about how we do this, how we do observability and how we make it fast, in the "Observability Engineering" book. You can learn a lot more about Retriever in there.

Anand: I was wondering how much data we're talking about, when we say 60 days for large clients?

Kerr: I think it's in terabytes, but tens of terabytes, not petabytes.

Anand: What's the normal workflow for your customer using the Retriever function? What's their normal method? Let's say you have a customer, and they build a dashboard with charts. Do they basically say, "I want this chart to be faster or more real time"?

Kerr: We just work to make everything fast. You don't pick custom indexes. You don't pick which graphs to make fast. We aim to make all of them fast, because we don't want you to be stuck with your dashboards. Yes, you can build a dashboard. That is functionality that Honeycomb has. It's not what we're optimizing for; we're really optimizing for the interactive experience. You might start at your dashboard, but then we expect you to click on the graph, maybe change the time range, maybe compare it to last week, more likely group by a field, or several fields. You get little tables of the results as well as many lines on the graph. Then maybe you make a heatmap, or click on it and say, what's different about these? We're going to run a series of queries to tell you that.

Anand: It's completely done on demand, in real time as the user is doing his or her analysis. It's not about optimizing the speed of a chart in a dashboard. It's all about the interactive part.

Kerr: Yes. We could go look in the database, what you have in your dashboards, but your dashboard queries are not any different from a live query.

Anand: Do you also speed those up with Retriever, the canned set of charts that people have?

Kerr: Yes. If you make a dashboard that's for a long period of time and access it a lot, we're probably going to notice and maybe talk to you. If you're updating that every 30 seconds, we're going to cache it for you, because those are expensive queries.

When to use Lambda functions and when not to comes down to whether the data is in S3. If it's in S3, we're going to use a Lambda. If it's on local disk, then we're not. That's entirely determined by time. The time isn't the same for every dataset, though. If you have a smaller dataset, maybe all of the data is on local disk. As it gets bigger, a larger percentage of that data is in S3.

Anand: It's based on the dataset size. Then, do you move the data to S3, like behind the scenes?

Kerr: Retriever does.

Anand: You get to decide how much data you hold? Do I decide if I want six months of data?

Kerr: You can negotiate that a little bit with your contract. I think we have a few exceptions where customers keep more than 60 days of data in particular datasets. At Honeycomb, we keep things pretty simple. Pretty much everybody has 60 days of data. How much of that is on local disk is roughly a fixed amount per dataset. Some datasets have more partitions than others, and so they have correspondingly more data on local disk, but it's all invisible to the customer. You don't know when we're using Lambda.

Anand: Can you elaborate on what makes Lambdas hard to test?

Kerr: You can test the code inside the Lambda. You can unit test that; it's just fine. Actually testing whether it works once it's uploaded to AWS, like integration testing Lambdas, is really hard. You can't do that locally. You can't do that in a test environment. You can do that in a test version that you uploaded to AWS, but that's really slow. Honeycomb is all about test in production. Not that we only test in production, but we also test in production, and we notice when things break, and then we roll back quickly. The other thing we do is we deploy to our internal environment first. Our internal environment is not test, is not staging; it's a completely separate environment of Honeycomb that we're the only customer of. There's production Honeycomb that monitors everybody else's stuff, all of our customers' observability data. Then there's our version of Honeycomb that just receives data from production at Honeycomb. We call it dog food, because we use it to eat our own dog food. The dog food Honeycomb, we deploy to that first. Technically, there's another one that monitors dog food, but close enough. If we broke the interface between Retriever on EC2 and the Lambdas, or anything else about the Lambdas that we couldn't unit test, we'll notice it very quickly. We even have deployment gates: normally, deployment to production would just happen 20 minutes later, but if our SLOs don't match, if we get too many errors in dog food, we'll automatically stop the deploy to production. We test in prod, but first in a smaller version of prod. It's not all of prod. It's a limited rollout.

Anand: How do you compare Lambdas to Knative?

Kerr: I've never tried anything in Knative. We do use Kubernetes in Honeycomb.

Anand: Are you using Kubernetes over EKS, the Elastic Kubernetes Service, for the control plane?

Kerr: EKS, yes.

Anand: Does it sometimes make sense to use a platform-agnostic language like Java that may help avoid issues with suboptimal libraries that are not yet ported to a native CPU architecture?

Kerr: Sometimes, absolutely. It depends on your business case. We're doing something really specialized in this custom database. In general, don't write your own database. In general, don't optimize your code this much for typical business software. This is the secret sauce that makes Honeycomb special. This is what makes it possible for you to not have to decide which queries you want fast; they're just all fast. It's a dynamically generated schema; we don't even know what fields you're going to send us, just that it's going to be fast. It's super specialized. At the scale that we do these particular operations, it's expensive. That's where a significant portion of our costs are in AWS, and a significant chunk of that is Lambda. We are constantly optimizing to run really lean in AWS, and we watch that closely. Liz Fong-Jones is always noticing something that could be faster and could save us tens of thousands of dollars a month, which is significant at our size.

Anand: Is your entire platform written in Go?

Kerr: Pretty much. The frontend is TypeScript.

Anand: What are your timeouts? A user types in a query in the UI, how long will they wait? Will they wait as long as it takes to get a result, but you try to be as fast as possible?

Kerr: It'll time out after 5 minutes. If it takes that long, there's a bug, or more likely something went down. The user waits; there's a little spinny thing. Then all the query results populate at once, when they have been aggregated and sent back. Usually it's like 5 seconds on a long one, and 2 seconds on a typical query.

Anand: This is the Holy Grail, like call cancellation: someone closes the window, and you have to reschedule the workload. Everyone wants to do it and never gets around to it.

Kerr: It'll finish what it's doing. There's no real way to stop it, because it's doing everything at once already. We will cache the results if somebody runs that exact query with that exact timespan again. Those results are actually stored in S3. This makes permalinks work. Those results are stored forever, so the queries you've already run and put into your incident review notes will always work. You just won't be able to drill further in once the data has timed out.
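A toy sketch of that kind of result caching, under assumptions that are not from the talk (the querySpec fields, key prefix, and bucket name are all made up): derive a deterministic S3 key from the query plus its absolute time range, and look there before running the query again.

```go
package main

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// querySpec is an illustrative stand-in for a saved query: the absolute start/end
// times matter, so "last 2 hours" run at different moments produces different keys.
type querySpec struct {
	Dataset      string   `json:"dataset"`
	Calculations []string `json:"calculations"`
	GroupBy      []string `json:"group_by"`
	StartUnix    int64    `json:"start_unix"`
	EndUnix      int64    `json:"end_unix"`
}

// cacheKey hashes the canonical JSON form of the query into an S3 object key.
func cacheKey(q querySpec) (string, error) {
	b, err := json.Marshal(q)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return "query-results/" + hex.EncodeToString(sum[:]) + ".json", nil
}

// lookupCached returns the stored result if this exact query has run before.
func lookupCached(ctx context.Context, client *s3.Client, bucket string, q querySpec) ([]byte, bool) {
	key, err := cacheKey(q)
	if err != nil {
		return nil, false
	}
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, false // cache miss: run the query, then write the result under key
	}
	defer out.Body.Close()
	body, err := io.ReadAll(out.Body)
	if err != nil {
		return nil, false
	}
	return body, true
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	q := querySpec{Dataset: "api-gateway", GroupBy: []string{"status"}, StartUnix: 1700000000, EndUnix: 1700007200}
	_, hit := lookupCached(ctx, s3.NewFromConfig(cfg), "example-results-bucket", q)
	fmt.Println("cache hit:", hit)
}
```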

Anand: What's the time granularity of your buckets, like your timestamp?

Kerr: You can set that on the graph within a range. It'll usually start at a second, but you can make it 30 minutes, or you can make it 5 milliseconds, depending on your time range. You're not going to do 5 milliseconds for a 60-day query, but something appropriate to the range.

Anand: They could group by second. You support group bys where they have 1-second buckets.

Kerr: That's just bucketing for the heatmaps. Group by is more like a SQL group by where you can group by account ID, you can group by region, any of the attributes. It'll divide those out separately, and show you a heatmap that you can hover over.


View original post here:
Honeycomb: How We Used Serverless to Speed Up Our Servers - InfoQ.com


Behind the war in Ukraine is a shady war of cyber attackers reveals … – iTWire

Killnet, Anonymous Sudan, Fancy Bear ... these aren't names you might hear on the daily news, but it's a who's who of hacking groups that operate on the fringe of society, launching devastating attacks against computer infrastructure in response to perceived injustices against their political or moral allegiances. And while these attacks may be targeted, they don't happen in a vacuum, taking down other infrastructure in their path.

These are the messages from Radware director of threat intelligence Pascal Geenens, who monitors and tracks hacker groups, working to understand their motives and methods.

"Killnet is back in the news," Geenens said, explaining researchers speculated this pro-Russian hacktivist group may have ceased operations but instead Radwares research finds the opposite is true; the organisation is breathing new life and its founder, Killmilk, is working to refresh his roster with shrewd new members who fit the stringent criteria of his new world order.

According to Geenens, Killmilk is frustrated by the support that Ukraine's IT Army - a Government-sanctioned hacking group based in that country - receives from its government, and has spoken out publicly taunting Russians and even the Russian government for their apathy. He claims he will stand independently and transform his hacktivist group into a private military, cyber elite group, inspired by the Wagner Private Military Company, a group of mercenaries funded by Yevgeny Prigozhin who recently made the news for their almost-coup of Russia.

It's a serious statement and one that could change the face and nature of hacking worldwide, as well as seriously disrupt activities and forces in Ukraine and the Western world. It's thanks to Geenens and his team monitoring and tracking the situation that the rest of the world can be prepared.

Let's rewind. "Killnet is one of the most iconic of the pro-Russian hackers," Geenens explains. His research team at Radware has been following different hacker groups since the Ukraine war began. These groups target Western countries and governments, along with any other government or country that shows support for Ukraine. This includes Australia.

The reason Killnet has such an iconic status is due to its leader, Killmilk; this individual is media savvy and takes part in media interviews within Russia and also in the West. "He tries to bring the narrative of a hero in a cyber war," Geenens said.

Previously, Killmilk's attacks were of short-term impact, such as distributed denial of service (DDoS) attacks. These can impact a business and its reputation and take resources and servers down. However, when the DDoS stops, the services typically come back up in a normal state.

In August 2022, Killmilk announced he wanted to go more destructive and pivot to more permanent attacks. He announced a new person would take the lead, Black Kite. "We believe Black Kite had a background in ransomware operations and groups," Geenens said.

Yet, nothing ended up coming from this; Killmilk announced Black Kite would take over and he would step down, but by late 2022 and early 2023 Killnet appeared to be active in media claiming credit for DDoS attacks actually being performed by others. They were around, but not active in hacking.

Suddenly, in March, Killmilk spoke again. He announced a new group starting; his cyber version of the Wagner Private Military Company, dubbed BlackSkills.

Killmilk's intentions for BlackSkills also include an underground school where he would teach what he'd learned during his first few months of attacks.

"He wanted to make a group of cyber mercenaries, and he would model it on Wagner, who he is clearly a fan of, Geenans explained. He supports Wagner group messages and said he would start a new company consisting of multiple units - HR, training, and a big operation.

"We consider he might be trying to do this, but we're not yet seeing how he will pull it off in the short-term future."

Killmilk's media savvy made him one of the most influential people in the pro-Russian hacking scene within Killnet's first year of attacking Western targets. With that influence, he was able to build a cluster around him, which has brought him control and followers.

"When the leader of Anonymous Russia was apprehended in Belarus, Killmilk said he would appoint a new leader for that group and reboot it - and thats what happened, Geenans said. With all his media support many people are looking at him and following him, and it gives him the capabilities of building something bigger, such as BlackSkills.

Prior to the war, Killmilk was active in the hacking and underground scene and sold a botnet-as-a-service dubbed Killnet. It was only after the war commenced that Killnet switched to the hacktivist group that it is known as today. And a major source of inspiration came from Ukraine itself. The day after the invasion, President Volodymyr Zelenskyy announced the voluntary IT Army of Ukraine, a Telegram group where tasks are posted for Ukrainian hackers, and any aligned Western hacker, who wanted to take on the tasks and attack Russian infrastructure on a voluntary basis. Killmilk saw this and immediately built a Killnet Telegram channel promoting Russian narratives and attacks in response to anti-Russian sentiment.

Surprisingly, Geenens noted, these attacks are almost never on Ukraine itself. Instead, the attacks are waged against France, Italy, Sweden, or other nations if they make pro-Ukrainian remarks. "The Killnet cluster attacks in campaigns whenever something is said that would upset Russia," he explained.

There's a reason for this. Nation-state groups such as Fancy Bear are attacking Ukrainian targets for the Russian government, and if all these hacktivist groups went after Ukraine they would trip over, or interrupt, or interfere with each other accidentally, Geenens said. So only a couple of groups are specifically performing DDoS attacks against Ukrainian targets. We don't see groups like Killnet or NoName attacking Ukraine.

NoName is another group that similarly modelled itself after the IT Army of Ukraine and its automated botnet. NoName, which specifically separates itself from Killnet, runs a volunteer-based botnet that also performs attacks on Western targets. Volunteers are asked to download software that connects to a NoName server, downloads a list of targets, and starts attacking them.

Of course, with all these groups performing DDoS attacks, there's a big problem for the rest of the world. "These packets don't magically appear on the Russian border," Geenens said. "They go through all the infrastructure, from the US, through France, whichever path it takes. It has an impact on all our infrastructure."

"We understand people may have good reasons, but even if they are ethically correct, it doesn't mean DDoS is something we should all reach for," Geenens said.

Meanwhile, another hacking group is Anonymous Sudan. Despite the name, there's a view this group is a Russian false-flag operation that is unrelated to Sudan but designed to look like Sudan is attacking.

Anonymous Sudan came onto the hacking scene in late January, attacking Sweden and Denmark, following a protestor burning the Quran outside the Turkish embassy in Denmark in objection to Turkey's blocking of Sweden's NATO accession.

The act outraged Muslims, and Anonymous Sudan appeared on the scene out of nowhere DDoSing targets in Sweden and Denmark.

Russia, of course, also didn't want NATO to expand, and while Anonymous Sudan claimed its motivations were religious and Killnet's were political, the latter reached out to welcome Anonymous Sudan to the Killnet cluster.

Anonymous Sudan has since used the Killnet cluster name in its own branding and has further launched cyber attacks on airports in France, citing the Charlie Hebdo cartoons about Mohammed as their motivation.

When fashion label Not A Man's Dream sent a model down a catwalk in transparent clothing bearing the wording "Allah walks with me", pro-Muslim hackers in Pakistan and Bangladesh announced an operation against Australian companies including ports, governments, and other online businesses.

Anonymous Sudan picked up on this and joined the fight. The organisation has a well-established infrastructure, Radware has identified, and rents large amounts of space in data centres. It also rents anonymous proxies to route its traffic, to appear as if it is coming from someone's home when in fact it is coming from huge cloud-based servers.

This type of attack bucks the trend; Geenens explained that typically you'd see these attackers going after applications, because they don't have a lot of infrastructure themselves.

NoName perfected the skill with its volunteer botnet, named Project DDoSia, where volunteers can install software that starts attacking a list of targets from a central command-and-control server.

While these botnets don't have a lot of members, it's the first time, Geenens noted, that researchers saw an attacker perform reconnaissance. They would look at a website and determine whether specific pages would impact infrastructure more than others. For example, a search query might go to a backend database, and bringing that down could be more impactful than hitting the static home page.

Their research sees them going to a website and looking for pages like contact forms or feedback forms. They analyse the form, copy all the parameters and arguments, and randomise what they post into the form. They generate realistic random values to fool bot detection - like phone numbers, email addresses, and long text - building attacks that mimic real traffic and are harder to block.

Even if the attack doesn't bring down the resource, it still has a large impact. You can imagine a government receiving a million feedback forms and having to trawl through them to find one or two legitimate messages from constituents.

By contrast, Anonymous Sudan goes a different way. It ramps up fast, using its extensive infrastructure to perform millions of requests per second.

Radware's researchers have observed Anonymous Sudan performing campaigns on multiple companies and countries. This includes a recent attack on Microsoft. Microsoft published a blog saying Office applications and the Azure cloud were attacked and while Microsoft called the attacker Storm, Radware is almost certain it was Anonymous Sudan due to the consistent attack vectors.

In contrast to many attacking groups, Anonymous Sudan somehow has the equipment and funding to run its own servers to perform such hard-hitting attacks.

Radware itself has been targeted by some of these groups, although at other times they have been delighted by Radware's charts of the top attacking groups, taking these to hacking forums as evidence of how dangerous they are.

Killnet is found in a lot of social circles, and people in the general public follow them. The rapper Kasha made a song about the group in 2022, KillnetFlow (Anonymous diss), and, Radware believes, made donations to Killnet. Killnet also receives donations from an artist who made a special line of Killnet jewellery, such as rings and earrings, with 50% of sales going to the group. It's a clever move by the artist, with their small business leveraging Killnet's social channel.

Additionally, Killnet advertises DDoS services and anonymous VPN servers, and manages other deals where its celebrity status sees advertisers coming forward with offers. Killnet has also been associated with forums that are used for underground transactions like selling drugs.

There's more; the Killnet exchange is a crypto exchange for hacktivists to exchange crypto for roubles, even delivering to your door.

However, "where Anonymous Sudan gets money from is still a mystery," Geenans said. Thats why some people think it is a Russian black-flag operation.

Geenens believes Killmilk works as a DBA for an IT company. "We see many hacktivists, criminals, and others who have a day job in IT and then go home and start their second job of hacking and crime," he said.

However, one thing is clear: if a criminal wants to perform a DDoS attack, the public almost never finds out, because there's no advertisement. "The hacktivists are the complete opposite," Geenens said. "They want people to know; they shout it to the world. They're sending all those messages, and that's how we can track them."

You can follow Pascal Geenens on LinkedIn and on Radware's blogs. You can also follow Radware's security research centre.


Excerpt from:
Behind the war in Ukraine is a shady war of cyber attackers reveals ... - iTWire


Flare Network and Ankr Partner to Enhance Developer Experience with High-Performance RPC Infrastructure – CoinCodex


Flare Network, the pioneering EVM-based smart contract platform focused on secure data acquisition and interoperability, has announced today a strategic partnership with Ankr, a decentralized Web3 infrastructure provider, to bolster its suite of remote procedure call (RPC) tools for developers.

Per the announcement, this strategic partnership aims to provide developers building on Flare with access to distributed and high-performance RPC services, streamlining dApp development processes and facilitating seamless node operations.

Paving the way for new use cases and monetization models with its unique interoperable cross-chain solution, Flare Network stands out as an EVM-based Layer-1 smart contract platform that extends the utility of blockchain technology. By optimizing decentralized off-chain data acquisition, Flare empowers developers with oracles built into the network's structure, providing a simple and coherent stack for decentralized interoperability. As an EVM-compatible chain, Flare enables the deployment of decentralized applications coded in Solidity, facilitating the trustless acquisition of price and time series data, blockchain events and state data, as well as web2 API data on a large scale.

The collaboration enables developers on the Flare Network to leverage Ankr's cutting-edge infrastructure, which includes advanced APIs with custom indexing. These APIs not only simplify onboarding for first-time users but also offer seamless querying for onchain data, including non-fungible token (NFT) metadata. By integrating Ankr's services, Flare Network aims to reduce friction and enhance the overall developer experience.

On the other hand, Ankr, a leading cloud infrastructure provider in the Web3 space, currently offers RPC services to over 35 different blockchains. Their services encompass both HTTPS and WebSocket RPC endpoints, leveraging bare metal cloud servers, alongside a comprehensive suite of developer tools. With more than 600 nodes located across 12 countries and five continents, Ankr's RPC services provide rapid and efficient dApp request handling. Utilizing load-balancing technology, Ankr ensures that all requests are automatically directed to the nearest data center, guaranteeing 99.99% uptime. Furthermore, with an impressive average response time of just 110ms, Ankr's nodes exhibit industry-leading speeds and low latency.

Hugo Philion, the CEO and co-founder of Flare, emphasized the immense benefits that developers would gain from the partnership with Ankr. He highlighted that this alliance allows developers on Flare to unlock the power of premium RPC tooling provided by Ankr, the leading node provider in the blockchain industry. In addition to the API Portal and Google Cloud infrastructure, this strategic partnership underscores Flare's unwavering dedication to empowering developers with a versatile, resilient, and trustworthy stack. The goal is to provide developers with the necessary tools to effortlessly build and monitor their dApps and services, all while ensuring enterprise-grade levels of availability and security.

View original post here:
Flare Network and Ankr Partner to Enhance Developer Experience with High-Performance RPC Infrastructure - CoinCodex


TRAI Exploring Emergency Ban on WhatsApp and Facebook – IndianWeb2.com

TRAI releases Consultation Paper on Regulatory Mechanism for Over-The-Top (OTT) Communication Services, and Selective Banning of OTT Services.

The Department of Telecommunications (DoT) has requested TRAI to suggest a suitable regulatory mechanism for OTTs, including issues relating to the 'selective banning of OTT services', as part of its recommendations to DoT.

Further, a parliamentary panel has recommended that the DoT examine the recommendations of TRAI and come out with a policy that will enable the selective banning of OTT services such as Facebook, WhatsApp and Telegram during unrest and crises, as these apps are prone to being used by terrorists or anti-national elements in specified regions.

In December 2021, Parliament's Standing Committee on Communication and Information Technology submitted its 26th report, titled 'Suspension of telecom services/Internet and its impact'.

The Committee feels that it will be of great relief if the DoT can explore the option of banning selective services, such as Facebook, WhatsApp, Telegram etc., instead of banning the Internet as a whole. This will allow financial services, health, education, and various other services to continue to operate as usual, thereby minimizing inconvenience and suffering to the general public, and also help in controlling the spread of misinformation during unrest.

Through a letter dated 7th September 2022, DoT also mentioned that "in view of the humongous growth of services in the recent past and these services having reached a matured stage, there is a need to holistically look into the various aspects of these services including regulatory, economic, security, privacy, and safety aspects. This is also in keeping with para 2.2 of the National Digital Communications Policy - 2018, which mentions the policy goal of 'Ensuring a holistic and harmonised approach for harnessing Emerging Technologies'. It has been mentioned therein that a policy framework for 'Over the Top' services will be developed."

The consultation paper mentions: "In some use cases such as messaging, consumer preferences have switched from traditional telecommunication services to OTTs. With OTTs used increasingly for messaging and to some extent for voice communication, globally, the general trend is a transition from voice and SMS towards data as a primary source of revenue for telecom service providers. In India, the composition of the revenue basket of wireless access service providers has undergone a sea-change in the period from 2013 to 2022."

Websites that use dynamic IP addresses and are hosted on cloud servers can pose a challenge to conventional methods of blocking, said the consultation paper. In such situations, alternative methods may be necessary to carry out internet filtering effectively. Advanced techniques can be employed to identify and block access to such websites.

Further, there may be scenarios where the targeted websites use the Hypertext Transfer Protocol Secure (HTTPS) protocol. HTTPS provides encryption and security for websites, making it difficult for service providers to block content on these sites. However, there are still ways to block or filter content at the network level, such as using a firewall or content-filtering software. As far as area-specific barring is concerned, it also needs to be carried out at the network level, for which effective methods need to be worked out.

The Department has also informed the parliamentary committee that Facebook, WhatsApp, Telegram etc. are categorized as OTT services. These OTT services ride over existing telecom service providers' networks.

In the consultation paper, the regulator has sought views on the requirement of a framework for OTTs encompassing lawful interception, privacy and security, customer verification, pesky calls and messages, technical challenges in banning specific services, etc.

In the past several consultation papers, TRAI has set aside the demand for regulating internet-based calling and messaging apps.

TRAI has set August 4 as the last date for comments on the paper and August 18 for counter comments.

Follow this link:
TRAI Exploring Emergency Ban on WhatsApp and Facebook - IndianWeb2.com
