
MoonQube Launches New Website and Data Hosting Platform – AccessWire

ATLANTA, GA / ACCESSWIRE / July 17, 2023 / MoonQube, an all-new cloud hosting service, is excited to announce its launch. By taking customer-focused service to the next level, MoonQube's revolutionary approach offers businesses unique opportunities to ensure seamless operations, data access and responsive availability for their clients.

In today's competitive market, the ability for a company to have its operations, data and website on fast and reliable servers is crucial to success. More and more businesses are migrating their operating systems, applications, and data storage to the cloud. This shift allows employees and customers to have 24/7 access to everything they do while minimizing overhead costs and environmental footprint.

MoonQube fills that sought-after niche in the marketplace by providing affordable, advanced and state-of-the-art services around the world. MoonQube's core values of stability, affordable pricing, security, customer service and transparency ensure that customers remain at the center of what they do.

What does MoonQube offer?

Domains and Web Hosting: For many companies their website is the primary interface with new and existing customers; it's a vital part of doing business, and so too is reliable website hosting. MoonQube helps with domain features and website migrations with no interruption in availability or responsiveness.

Object and Block Storage: Large image, audio and video files can quickly clog up onsite data storage options; with MoonQube those files can be stored in a secure, redundant and instantly-available space that customers organize and configure themselves.

Virtual Machines (Qubes) and Kubernetes: Whether your focus is security, ease of use, scalability or automation, we offer containerized and isolated hosting options that meet a variety of needs.

What sets MoonQube apart is its approach to personalization, allowing clients to customize what works best for their company's unique needs. MoonQube provides technology support that is scalable, simple, affordable and accessible for businesses of all sizes. This means that data and applications are not only available lightning-fast anywhere in the world but are backed up constantly, providing the needed redundancy to prevent loss of invaluable tools and work progress.

MoonQube also is focused on ensuring that its offerings are easy to interface with and affordable for every budget. Behind everything they do, however, is a dedication to 24/7 live customer support, accessible whenever it is needed.

For more information, to receive updates or to set up an account, visit http://www.moonqube.com.

Media Inquiries:

Morgan George
Forum Communications
678-629-7797
[emailprotected]

SOURCE: MoonQube

See the original post:
MoonQube Launches New Website and Data Hosting Platform - AccessWire

Read More..

The Energy Crunch: AI Data Centers and the Battle for Power – Digital Information World

The demands on energy in a cloud-powered world are astonishing, and the emergence of generative AI is just exacerbating the situation. Tom Keane, a Microsoft data center veteran, has warned about AI models' energy consumption and the issues existing data centers confront.

AI model training in data centers may consume up to three times the energy of regular cloud workloads, putting a strain on infrastructure. The present generation of data centers is unprepared to meet the increased demand for AI-related activities. Last year, Data Center Alley in Northern Virginia nearly suffered a power outage, foreshadowing the looming energy issues.

Access to power is becoming increasingly important as big businesses like Amazon, Microsoft, and Google race to satisfy the need for generative AI. The current data center infrastructure cannot accommodate this next wave of technologies. Data center power usage is expected to exceed 35 gigawatts (GW) per year by 2030, more than tripling the amount consumed last year.

AI model training, in particular, is highly energy-intensive, requiring large amounts of power from graphics processing units (GPUs). AI servers with many GPUs may require up to 2 kilowatts of power, as opposed to 300 to 500 watts for a standard cloud server. This growth in energy demand presents data center operators with new hurdles.

To overcome these issues, businesses like DigitalBridge are investing billions of dollars in constructing and renovating data centers mainly intended for generative AI workloads. Smaller data centers are intentionally located in suburban locations, away from big markets, to connect to existing electrical networks without overburdening them. These sites provide higher power density, quicker connectivity, and reduced expenses.

The next generation of data centers will not be concentrated in established hubs like Virginia or Santa Clara. It will instead emerge in low-cost places where electricity supply is not a constraint. To satisfy the expectations of AI growth, data center operators must adapt and adopt novel solutions.

As AI data centers struggle to meet the increasing demand for generative AI, the competition for power heats up. Companies are trying every possible tactic to secure success, laying the groundwork for a transformational energy future.

The struggle for power becomes increasingly important as the fight for AI dominance proceeds. The present infrastructure is not equipped to meet the energy requirements of AI data centers. The sector can only secure a sustainable and efficient future by adopting innovative techniques and exploring new locations.

Fasten your seat belts for the insane yet amazing technological ride ahead, as the world of AI and data centers embarks on a quest to secure the energy supplies that will define the future of technology.

The rest is here:
The Energy Crunch: AI Data Centers and the Battle for Power - Digital Information World

Read More..

Machine Learning and Data Engineering Applications in Agriculture – TechNative

"In God we trust, all others bring data." - William Edwards Deming

This article presents the main problems that farmers face on an everyday basis, such as climate change, time-consuming manual tasks, and changes in consumers' everyday diets, along with possible solutions that Machine Learning and Data Engineering can offer by applying such tools as IoT, data collection, computer vision, blockchain, and Augmented Reality.

Impact of cultural transformations on Productivity in Agriculture

Over the past centuries, human beings have gone through certain transformations that led to agricultural revolutions. Each revolution played a key role in increasing production and optimizing the production process. The long way from the first revolution to the fourth traces a path from hunting and gathering to digitalization and artificial intelligence. Let's dive into the details; there are four agricultural revolutions:

First agricultural revolution (1700 onwards). This revolution is characterized by stationary farming with core principles mainly based on manual labor, horsepower, and simple tools, which means that productivity remained relatively low during that period.

Second agricultural revolution (1914-1980s). At this time, a shift happened from natural nitrogen supplementation to synthetic fertilizers. The introduction of crop rotation and drainage dramatically increased crop and livestock yields, improved soil fertility, and reduced fallow. An increase in production and a decrease in labor demand led to migration and urban expansion.

Third agricultural revolution (1920s-present). This revolution is all about combustion engines and rural electrification, the introduction of biotechnology and genetic engineering, alongside computerized programs. The results of this revolution can be observed in the markets today.

Fourth agricultural revolution (1970s onwards). This revolution made a transition from industrial production to a digital model that optimizes production processes, reduces time and cost, and enhances customer value. At this point, the agricultural industry started to operate with key technologies like the Internet of Things, Big Data, Artificial Intelligence, cloud computing, remote sensing, and the ingestion and processing of big data into Data Lakes as a foundation for decision-making.

The first and second agricultural revolutions show a transition from manual labor to industrial production, while the third and fourth revolutions show the importance of computerization and the collection of data. Nonetheless, there are still enormous problems in the agricultural sector of the economy that seek solutions in Machine Learning.

Problems of traditional agriculture and the Role of Machine Learning

Demand for agricultural production has increased sharply in recent years and is likely to be one of the major causes of inflation all over the world in the near future. In the meantime, intensive increases in agricultural yield are limited by a number of external factors:

A limited global land surface that is suitable for cultivation based on climate conditions, good soil, and urban development. According to up-to-date statistics, approximately 40% of the land is covered by jungles, deserts, urban places, or other natural land states such as forests. So, very little land is left for agricultural expansion.

Constantly changing consumer dietary habits and patterns push farmers to shift from one type of production to another. For example, demand for meat products is rising rapidly in societies due to inequality among the population.

Climate change and natural disasters, which have increased over the last century, are likely to bring more extreme weather patterns and rising average temperatures, resulting in fluctuating yields and production shortfalls.

The application of Machine Learning in the agricultural sector can mitigate the above-mentioned problems. In order to better understand the interplay between these two fields, let's look at an example. Let's imagine a farmer who relies mainly on calculations of input efforts and output yields. This farmer forecasts his or her profit based on scientific calculations. At the same time, he or she operates with data from sensors on the machinery, such as crop and GPS data, while other data is retrieved from a drone and correlated against GIS information. This farmer can also observe pricing data, livestock position, and demand for the product based on third-party cloud servers (diagram 1). All this together creates a picture of the potential crop value and demand. Weather forecasts also come from open sources and clearly predict conditions for the upcoming week. At the same time, sensors detect moisture in the ground and check the health status of plants and animals. All this data is gathered and stored for future analysis.

Tracking of every product entity is simple and can be clearly observed on the dashboards for the farmer and the final customer. Combining all this information together could save a lot of effort, help organizations work more efficiently, and solve the main problems that farmers face nowadays.

Integration of Machine Learning and Data Engineering into Agriculture

The COVID-19 pandemic had a huge impact on agriculture and, at the same time, on the development of Machine Learning and Data Engineering. The main goal of integrating Agriculture and Machine Learning is to increase final crop yield, save effort and resources, and help control every step of plants' and animals' growth. There are a number of techniques in Machine Learning that help collect agricultural data and ease the process for farmers. The following applications could be of great use for the development of communication between data collection and actual production.

1) A unified protocol

It is useful to have a single, unified protocol for cross-manufacturer compatibility of electric and electronic components. All mechanical and automotive devices have to be combined, like LEGO pieces, into one big machine. All parts of the final construction should communicate with the others via protocols. This unified protocol is based on the International Standard ISO 11783 and started to be applied all over the world in 2008.

2) Internet of Things (IoT)

Different devices in a system have to be connected to the Internet and be able to interact with one another in real time (diagram 2). The number of sensors and their applications is growing every year and will probably reach around 250 billion in the next five years. Given this incredible number of sensors, collecting and storing information in a single place becomes a non-trivial task for software products.

3) Drones and remote sensing

Developments in information technology and agricultural science have made it possible to merge drones and remote sensing, leading to the rise of precision farming. Such a scheme brings maximum profit and production with minimum input and optimal use of resources. One interesting application is based on the global positioning system (GPS) and GIS technologies that help calculate optimal paths for tractors. With machine learning algorithms, a challenging university-level trigonometry task is transformed into a simple solution and real money that isn't spent on extra fuel.

4) Data collection and social network communication

The key to this is the creation of an efficient chain with local food production systems and livestock systems. This approach will create a greater understanding of the entire food supply chain efficiency and the integration of these two systems will generate long-term positive environmental impact and will deliver greater food security.

5) Computer vision

Image analysis and detection are among the most intensively growing fields in informatics research. All automated machines start from sensing, most often using cameras to obtain data that provides information about the crop and location of the harvesting system. Typically, this is an RGB camera, depth camera, or lidar system. Images are passed to machine learning pipelines that are based on effective classification approaches, including support vector machines (SVMs), neural networks, k-means, principal component analysis (PCA), feature extraction, etc. Among applications that were developed by different teams, the following could be mentioned:

6) Data transparency and blockchain

A blockchain is an encrypted record-keeping method that tracks every single transformation applied to a target entity, such as storing, linking, and recovering. The modern agricultural industry has accelerated and now uses blockchain in the agriculture value chain because it is seen as a mechanism for optimizing different issues, such as transparency, cost-effectiveness, traceability, quality supply systems, etc. For example, the French food retailer Carrefour has been using blockchain solutions for the traceability of its products since 2018. The aim is to provide a QR code that consumers can scan to retrieve data on the product on their mobile phones. The information available through the code includes the place and date of production, the product's composition, the method of cultivation, etc.

7) Augmented Reality

This field is only at the development stage but has already demonstrated high potential in applications that require 3D image resolution: visualization of animals, their diseases, and crop damage in order to assess and carry out treatment. Augmented Reality promises a lot in the near future, especially if it is combined with Artificial Intelligence (AI).

The importance of Machine Learning and Data Engineering in Agriculture should not be underestimated. Implementation of new techniques could definitely benefit farmers by increasing revenue and customers by saving them time.

Conclusion

The evolution of human transformations in agriculture outlines the main changes that the agricultural sector has historically gone through and, at the same time, points out the problems that farmers face nowadays. In the era of computers and digital communication, farmers seek ways to increase profit, while consumers demand quality service in a short amount of time. In order to satisfy these needs, the Sigma Software group researched and developed several approaches, such as data communication, computer vision, blockchain, augmented intelligence, etc. These approaches, along with other tools of Machine Learning and Data Engineering, create a core outline for the agricultural sector. As a result, in order to increase revenues in the agricultural sector and create a more efficient supply chain, farmers need to rely more on machinery.

About the Author

Ihor Oleinich is a Software Developer at Sigma Software Group. Sigma Software provides top-quality software development solutions and IT consulting to more than 170 customers all over the globe. Volvo, SAS, Oath Inc., Fortum, IGT (previously GTECH), Checkmarx, Formpipe Software, JLOOP, Vergence Entertainment, Collective, Genera Networks, Viaplay, and others trust us to develop their products. Our clients choose us for our timely and efficient communication, flexibility, strong desire, and ability to reach clients' business goals.

Featured image: slonme

Link:
Machine Learning and Data Engineering Applications in Agriculture - TechNative

Read More..

The Future of VMs on Kubernetes: Building on KubeVirt – The New Stack

Remember when virtualization was the hot new thing? Twenty years ago, I was racking and deploying physical servers at a small hosting company when I had my first experience of virtualization.

Watching vMotion live-migrate workloads between physical hosts was an aha moment for me, and I knew virtualization would change the entire ecosystem.

Perhaps then it's not a surprise that I became an architect at VMware for many years.

I had a similar aha moment a decade later with containers and Docker, seeing the breakthrough it represented for my dev colleagues. And in the years after, it was clear that Kubernetes presented a natural extension of this paradigm shift.

I'm sure many of you reading this will have been through similar awakenings.

Despite 20 years of innovation, reality has a way of bringing us back down to earth. Out in the enterprise, the fact is we have not completely transitioned to cloud native applications or cloud native infrastructure.

While containerized apps are gaining popularity, there are still millions of VM-based applications out there across the enterprise. A new technology wave doesn't always wipe out its predecessor.

It may be decades before every enterprise workload is refactored into containerized microservices. Some never will be: for example, if their code is too complex or too old.

So we have a very real question: How do we make virtualization and containers coexist within the enterprise?

We have a few options:

And indeed there is a solution to make this third option possible: KubeVirt.

KubeVirt is a Cloud Native Computing Foundation (CNCF) incubating project that, coincidentally, just hit version 1.0 last week.

Leveraging the fact that the kernel-based virtual machine (KVM) hypervisor is itself a Linux process that can be containerized, KubeVirt enables KVM-based virtual machine workloads to be managed as pods in Kubernetes.

This means that you can bring your VMs into a modern Kubernetes-based cloud native environment rather than doing an immediate refactoring of your applications.

KubeVirt brings K8s-style APIs and manifests to drive both the provisioning and management of virtual machines using simple resources, and provides standard VM operations (VM life cycle, power operations, clone, snapshot, etc.).

Users requiring virtualization services are speaking to the virtualization API (see the diagram below), which in turn is speaking to the Kubernetes cluster to schedule the requested virtual machine instances (VMIs).

Scheduling, networking and storage are all delegated to Kubernetes, while KubeVirt provides the virtualization functionality.

KubeVirt delivers three things to provide virtual machine management capabilities:

Because virtual machines run as pods in Kubernetes, they benefit from:

KubeVirt sounds amazing, doesn't it? You can treat your VMs like just another container.

Well, that's the end goal: getting there is another matter.

KubeVirt is open source, so you can download and install it today.

But the manual installation process can be time-consuming, and you may face challenges with integrating and ensuring compatibility with all the necessary components.

To start, you need a running Kubernetes cluster, on which you:

You need to do this for each cluster. While a basic installation allows you to create simple virtual machines, advanced features such as live migration, cloning or snapshots require you to deploy and configure additional components (snapshot controller, Containerized Data Importer, etc).

We mentioned above the inefficiency of nested infrastructures. Although it's technically possible to run KubeVirt nested on top of other VMs or public cloud instances, it requires software emulation, which has a performance impact on your workloads.

Instead, it makes a lot of sense to run KubeVirt on bare metal Kubernetes, and that, traditionally, has not been easy. Standing up a bare metal server, deploying the operating system and managing it, deploying Kubernetes on top: the process can be convoluted, especially at scale.

When it comes to Day 2 operations, KubeVirt leaves the user with a lot of manual heavy lifting. Let's look at a couple of examples:

First, KubeVirt doesn't come with a UI by default: it's all command line interface (CLI) or API. This may be perfectly fine for cluster admins that are used to operating Kubernetes and containers, but it may be a challenging gap for virtualization admins that are used to operating from a graphical user interface (GUI).

Even an operation as simple as starting or stopping a virtual machine requires patching the VM manifest or using the virtctl command line.

Another example is live migration: To live migrate a VM to a different node, you have to create a VirtualMachineInstanceMigration resource that tells KubeVirt what to do.

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: live-migrate-webapp01
  namespace: default
spec:
  vmiName: webapp01

If you're running at scale, performing many such operations each day across multiple clusters, the effort can be considerable. Building out scripting or automation can solve that, but itself increases the learning curve and adds to the setup cost.

We saw an opportunity to take all the goodness that KubeVirt offers, address all these issues, and create a truly enterprise-grade solution for running VMs on Kubernetes.

And today we've announced just that: Meet Virtual Machine Orchestrator (VMO), new in version 4.0 of our Palette Kubernetes management platform.

VMO is a free capability that leverages KubeVirt and makes it easy to manage virtual machines (VMs) and Kubernetes containers together, from a single unified platform.

Here are the highlights.

If you're not familiar with Palette, one of the things that makes it unique is the concept of Cluster Profiles: preconfigured and repeatable blueprints that document every layer of the cluster stack, from the underlying OS to the apps on top, which you can deploy to a cluster with a few clicks.

We've built an add-on pack for VMO that contains all the KubeVirt components we talked about earlier, and much, much more, including:

Palette can not only build a cluster for you, but also deploy the VM management capability preconfigured into that cluster thanks to the Cluster Profile. The result is much less manual configuration effort.

What's more, Palette's multicluster decentralized architecture makes it easy to deliver the VMO capability to multiple clusters instead of having to enable it manually per cluster.

We talked about the importance of running KubeVirt on bare metal, and how hard it is to provision and manage bare metal servers for Kubernetes.

Well, Palette was built to simplify the way you deploy Kubernetes clusters in all kinds of environments, and bare metal is no exception.

There are many ways of orchestrating bare-metal servers, but one of the most popular ones is Canonical MAAS, which allows you to manage the provisioning and the life cycle of physical machines like a private cloud.

We're big fans of MAAS, and we've included Canonical MAAS and our MAAS Provider for Cluster API in our VMO pack to automate the deployment of the OS and Kubernetes on bare metal hardware. It makes deploying a new Kubernetes bare metal cluster as easy as cloud.

Of course, you can use your own bare metal provider if you don't want to use MAAS.

Once everything is up and running, Palette's always-on declarative management keeps the entire state of your cluster as designed, with automated reconciliation loops to eliminate configuration drift. This covers your VM workloads too.

While DIY KubeVirt leaves you on your own when it comes to some of the more powerful features you've come to expect in the world of virtualization, Palette provides a long list of capabilities out of the box.

These include VM live migration, dynamic resource rebalancing and maintenance mode for repairing or replacing host machines, and the ability to declare a new VLAN from the UI. You also get out-of-the-box monitoring of clusters, nodes and virtual machines using Prometheus and Grafana.

And while with DIY KubeVirt the platform operator (that's you) must select, install and configure one of the open source solutions to get a UI, Palette already looks like this:

As you can tell, we're pretty excited about the launch of Palette 4.0 and the Virtual Machine Orchestrator feature.

We've built on the open source foundations of KubeVirt, and delivered a simpler and more powerful experience for enterprises.

The result? Organizations that have committed to Kubernetes on their application modernization journey, and have already invested in Kubernetes skills and tools, will benefit from a single platform to manage both containers and VMs.

And that's not just as a temporary stepping stone for the applications that will be refactored, but also for hybrid deployments (applications that share VMs and containers) and for workloads that will always be hosted in VMs. Even after nearly 25 years of virtualization, VMs are certainly not dead yet.

To find out more about Palette's VMO feature, check out our website or our docs site. We'd love to get your feedback.

Originally posted here:
The Future of VMs on Kubernetes: Building on KubeVirt - The New Stack

Read More..

Turbocharging ASP.NET Core Applications: A Deep Dive into … – Medium

Performance is paramount when developing web applications. A slow, unresponsive application results in a poor user experience, lost users, and possibly lost business. For ASP.NET Core developers, there are many techniques and best practices to optimize application performance. Let's explore some of these approaches in this article.

When we talk about performance, the first thing to ask is: "Where is the issue?" Without understanding where the bottlenecks are, we could end up optimizing parts of our application that won't have a significant impact on overall performance.

There are many tools and techniques to identify performance bottlenecks in an ASP.NET Core application:

One of the simplest approaches is to add logging and metrics to your application. You can measure how long operations take and log any issues that occur.

ASP.NET Core supports a logging API that works with a variety of built-in and third-party logging providers. You can configure the built-in logging providers to output logs to the console, debug, and event tracing.

Here's an example of how you can use the ILogger service to log the execution time of a method:

public IActionResult Index()
{
    var watch = Stopwatch.StartNew();

    // Code to measure goes here...

    watch.Stop();
    var elapsedMs = watch.ElapsedMilliseconds;
    _logger.LogInformation("Index method took {ElapsedMilliseconds}ms", elapsedMs);

    return View();
}

A more advanced way to identify performance bottlenecks is to use a profiler. A profiler is a tool that monitors the execution of an application, recording things like memory allocation, CPU usage, and other metrics.

There are many profilers available, including:

Application Performance Management (APM) tools go a step further, providing in-depth, real-time insights into an application's performance, availability, and user experience. APM tools can identify performance bottlenecks in real-world scenarios, not just in development and testing.

Asynchronous programming is a way to improve the overall throughput of your application on a single machine. It works by freeing up a thread while waiting for some IO-bound operation (such as a call to an external service or a database) to complete, rather than blocking the thread until the operation is done. When the operation is complete, the framework automatically assigns a thread to continue the execution.

The result is that your application can handle more requests with the same number of threads, as those threads can be used to serve other requests while waiting for IO-bound operations to complete.

ASP.NET Core is built from the ground up to support asynchronous programming. The framework and its underlying I/O libraries are asynchronous to provide maximum performance.

Here's how you might write an asynchronous action in an ASP.NET Core controller:
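A minimal sketch of such an action might look like this (the ProductsController name and the IDataService type are assumptions for illustration; GetDataAsync comes from the surrounding text):

public class ProductsController : Controller
{
    private readonly IDataService _dataService;

    public ProductsController(IDataService dataService)
    {
        _dataService = dataService;
    }

    public async Task<IActionResult> Index()
    {
        // The thread is released back to the pool while the IO-bound call is awaited.
        var data = await _dataService.GetDataAsync();
        return View(data);
    }
}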

In this example, GetDataAsync might be making a call to a database or an external service. By awaiting this method, the thread executing this action can be freed up to handle another request.

Here's an example of how you might use async in a service that calls Entity Framework Core:

public class MyService
{
    private readonly MyDbContext _context;

    public MyService(MyDbContext context)
    {
        _context = context;
    }

    public async Task<MyData> GetDataAsync()
    {
        return await _context.MyData
            .OrderBy(d => d.Created)
            .FirstOrDefaultAsync();
    }
}

Caching is an effective way to boost the performance of your ASP.NET Core applications. The basic idea is simple: instead of executing a time-consuming operation (like a complex database query) every time you need the result, execute it once, cache the result, and then just retrieve the cached result whenever you need it.

ASP.NET Core provides several built-in ways to cache data:

In-memory caching is the simplest form of caching. It stores data in the memory of the web server. This makes accessing the cached data extremely fast.

In-memory caching in ASP.NET Core stores cache data in the memory of the web server. The data is stored as key-value pairs and can be any object. Access to the in-memory cache is extremely fast, making it an efficient way to store data that's accessed frequently.

One thing to note about in-memory caching is that the cache data is not shared across multiple instances of the application. If you run your application on multiple servers, or if you use a process-per-request model, then the in-memory cache will be separate for each instance or process.

In-memory caching can be an effective way to improve the performance of your application in the following scenarios:

Here's an example of how you might use in-memory caching in an ASP.NET Core controller:

public class MyController : Controller
{
    private readonly IMemoryCache _cache;

    public MyController(IMemoryCache cache)
    {
        _cache = cache;
    }

    public IActionResult Index()
    {
        string cacheEntry;

        if (!_cache.TryGetValue("_MyKey", out cacheEntry)) // Look for cache key.
        {
            // Key not in cache, so get data.
            cacheEntry = GetMyData();

            // Set cache options: keep in cache for this time, reset time if accessed.
            var cacheEntryOptions = new MemoryCacheEntryOptions()
                .SetSlidingExpiration(TimeSpan.FromMinutes(2));

            // Save data in cache.
            _cache.Set("_MyKey", cacheEntry, cacheEntryOptions);
        }

        return View(cacheEntry);
    }

    private string GetMyData()
    {
        // Simulating a time-consuming operation
        Thread.Sleep(2000);
        return "Hello, world!";
    }
}

In this example, the GetMyData method simulates a time-consuming operation. This could be a complex database query, a call to an external service, or any operation that takes time to execute. By caching the result, we avoid the need to execute this operation every time the Index action is called.

Distributed caching involves using a cache thats shared by multiple instances of an application. ASP.NET Core supports several distributed cache stores, including SQL Server, Redis, and NCache.

When using a distributed cache, an instance of your application can read and write data to the cache. Other instances can then read this data from the cache, even if they're running on different servers.
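As a rough sketch, registering a Redis-backed distributed cache in Startup.ConfigureServices might look like this (the connection string and instance name are placeholders, and the Microsoft.Extensions.Caching.StackExchangeRedis package is assumed):

public void ConfigureServices(IServiceCollection services)
{
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = "localhost:6379"; // placeholder Redis endpoint
        options.InstanceName = "MyApp:";          // key prefix for this application
    });
}

Consumers then take a dependency on IDistributedCache instead of IMemoryCache, and every instance of the application reads and writes the same shared cache.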

You should consider using distributed caching in the following scenarios:

When we talk about improving the performance of web applications, one area often overlooked is the size of the HTTP responses. Large responses take longer to transmit over the network, and this latency can have a significant impact on performance, especially for clients with slow network connections.

Response compression is a simple and effective way to reduce the size of HTTP responses, thereby improving the performance of your application. It works by compressing the response data on the server before sending it to the client. The client then decompresses the data before processing it. This process is transparent to the end user.

The most common compression algorithms used for response compression are Gzip and Brotli. They can significantly reduce the size of responses, often by 70% or more.

ASP.NET Core includes middleware for response compression. To enable it, you need to add the middleware to your Startup.ConfigureServices and Startup.Configure methods, like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseResponseCompression();

    // Other middleware...
}

By default, the response compression middleware compresses responses for compressible MIME types (like text, JSON, and SVG). You can add additional MIME types if necessary.
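For instance, extra MIME types can be appended to the defaults when registering the middleware; a sketch might look like this (image/svg+xml is just an illustrative addition):

services.AddResponseCompression(options =>
{
    // Compress the default MIME types plus any extras you need.
    options.MimeTypes = ResponseCompressionDefaults.MimeTypes
        .Concat(new[] { "image/svg+xml" });
});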

You should also configure your server (like IIS, Kestrel, or HTTP.sys) to use dynamic compression. This ensures that your responses are compressed even if you're not using the response compression middleware (for example, for static files).

While response compression can improve performance, there are a few things to keep in mind:

Entity Framework Core (EF Core) is a powerful Object-Relational Mapper (ORM) that simplifies data access in your .NET applications. However, if used without consideration for its performance behavior, you can end up with an inefficient application. Here are some techniques to improve the performance of your applications that use EF Core:

Lazy loading is a concept where related data is only loaded from the database when it's actually needed. On the other hand, eager loading means that the related data is loaded from the database as part of the initial query.

While lazy loading can seem convenient, it can result in performance issues due to the N+1 problem, where the application executes an additional query for each entity retrieved. This can result in many round-trips to the database, which increases latency.

Eager loading, where you load all the data you need for a particular operation in one query using the Include method, can often result in more efficient database access. Here's an example:
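Assuming an Orders DbSet with a Customer navigation property, a sketch might be:

// Each Order and its related Customer are loaded in a single query.
var orders = await context.Orders
    .Include(o => o.Customer)
    .ToListAsync();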

In this example, each Order and its related Customer are loaded in a single query.

When you query data, EF Core automatically tracks changes to that data. This allows you to update the data and persist those changes back to the database. However, this change tracking requires additional memory and CPU time.

If you're retrieving data that you don't need to update, you can use the AsNoTracking method to tell EF Core not to track changes. This can result in significant performance improvements for read-only operations.
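For a read-only query, that might look roughly like this (the Orders DbSet is an assumption):

// No change tracking: EF Core skips the bookkeeping it would need for updates.
var orders = await context.Orders
    .AsNoTracking()
    .ToListAsync();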

EF Core 5.0 and above support batch operations, meaning it can execute multiple Create, Update, and Delete operations in a single round-trip to the database. This can significantly improve performance when modifying multiple entities.
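A sketch of batching inserts might look like this (the newOrders collection is an assumption):

// All new orders are sent to the database when SaveChangesAsync runs.
context.Orders.AddRange(newOrders);
await context.SaveChangesAsync();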

In this example, all the new orders are sent to the database in a single command, rather than one command per order.

Try to filter data at the database level rather than in memory to reduce the amount of data transferred and memory used. Use LINQ to create a query that the database can execute, rather than filtering the data after it's been retrieved.
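A sketch of such a query, assuming each Order has an OrderDate property, might be:

// The Where clause is translated to SQL and executed by the database.
var recentOrders = await context.Orders
    .Where(o => o.OrderDate >= DateTime.UtcNow.AddDays(-7))
    .ToListAsync();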

In this example, only the orders from the last seven days are retrieved from the database.

The Select N+1 issue is a common performance problem where an application executes N additional SQL queries to fetch the same data that could have been retrieved in just one query. EF Core's Include and ThenInclude methods can be used to resolve these issues.
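A sketch of such a query, assuming Customer exposes an Address navigation property, might be:

// Orders, their Customers and the Customers' Addresses are retrieved in one query.
var orders = await context.Orders
    .Include(o => o.Customer)
        .ThenInclude(c => c.Address)
    .ToListAsync();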

This query retrieves all orders, their related Customers, and the Addresses of the Customers in a single query.

When your application needs to interact with a database, it opens a connection to the database, performs the operation, and then closes the connection. Opening and closing database connections are resource-intensive operations and can take a significant amount of time.

Connection pooling is a technique that can help mitigate this overhead. It works by keeping a pool of active database connections. When your application needs to interact with the database, it borrows a connection from the pool, performs the operation, and then returns the connection to the pool. This way, the overhead of opening and closing connections is incurred less frequently.

Connection pooling is automatically handled by the .NET Core data providers. For example, if you are using SQL Server, the SqlConnection object automatically pools connections for you.

When you create a new SqlConnection and call Open, it checks whether there's an available connection in the pool. If there is, it uses that connection. If not, it opens a new connection. When you call Close on the SqlConnection, the connection is returned to the pool, ready to be used again.

You can control the behavior of the connection pool using the connection string. For example, you can set the Max Pool Size and Min Pool Size options to control the size of the pool.
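For example, a SQL Server connection string might cap the pool roughly like this (the server, database, and pool sizes are placeholders):

// Pool limits are configured directly in the connection string.
var connectionString =
    "Server=myserver;Database=MyDb;Integrated Security=true;" +
    "Min Pool Size=5;Max Pool Size=100;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();  // borrows a pooled connection, or opens a new one
    // ... execute commands ...
}                       // Dispose returns the connection to the pool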

Optimizing the performance of your ASP.NET Core applications can be a challenging task, especially when you're dealing with complex, data-rich applications. However, with the right strategies and tools at your disposal, it's a task that's well within your reach.

In this article, we've explored several key strategies for performance optimization, including understanding performance bottlenecks, leveraging asynchronous programming, utilizing different types of caching, compressing responses, optimizing Entity Framework Core usage, and taking advantage of advanced features such as connection pooling and HTTP/2.

The key takeaway here is that performance optimization is not a one-time event, but a continuous process that involves monitoring, analysis, and iterative improvement. Always be on the lookout for potential bottlenecks, and remember that sometimes the smallest changes can have the biggest impact.

Moreover, while we focused on ASP.NET Core, many of these principles and techniques apply to web development in general. So, even if you're working in a different framework or language, don't hesitate to apply these strategies. The ultimate goal of performance optimization is to provide a smooth, seamless experience for your users.

Happy coding, and here's to fast, efficient applications!

Original post:
Turbocharging ASP.NET Core Applications: A Deep Dive into ... - Medium

Read More..

Is Wayland Becoming the Favored Way to Get a GUI on Linux? – Slashdot

The Register shares its collection of "signs that Wayland is becoming the favored way to get a GUI on Linux":

- The team developing Linux for Apple Silicon Macs said they didn't have the manpower to work on X.org support.

- A year ago, the developers of the Gtk toolkit used by many Linux apps and desktops said that the next version may drop support for X11...

- One of the developers of the Budgie desktop, Campbell Jones, recently published a blog post with a wildly controversial title that made The Reg FOSS desk smile: "Wayland is pretty good, actually." He lays out various benefits that Wayland brings to developers, and concludes: "Primarily, what I've learned is that Wayland is actually really well-designed. The writing is on the wall for X, and Wayland really is the future." Partly as a result of this, it looks likely that the next version of the Budgie desktop, Budgie 11, will only support Wayland, completely dropping support for X11. The team point out that this is not such a radical proposition: there was a proposal to make KDE 6 sessions default to Wayland as long ago as last October...

- The GNOME spin of Fedora has defaulted to Wayland since version 25 in 2017, and the GNOME flavor of Ubuntu since 21.04.

- [T]here's now an experimental effort to get Wayland working on OpenBSD. The effort happened at the recent OpenBSD hackathon in Tallinn, Estonia, and the developer's comments are encouraging. It's already available as part of FreeBSD.

Read more from the original source:
Is Wayland Becoming the Favored Way to Get a GUI on Linux? - Slashdot

Read More..

Goodbye Azure AD, Entra the drag on your time and money – The Register

Opinion All tech grunts know an update from a vendor can be good news, bad news, or both. Fortunately, there's a quick way to tell even before the first sentence of the community blog post that is today's royal proclamation of choice. If the person addressing the community is an engineer, it's good news. If marketing, not so much.

Hence the sinking feeling when the renaming of Azure AD to Microsoft Entra ID was revealed to the community (that's you) by a General Marketing Manager. This isn't a dig at marketing or marketing people in general, nor exalting engineering as a higher, purer, more noble calling. Even though it is. Nor that a cartoon penguin as the entire Linux marketing department has done a lot less damage. Even though it has.

The trouble is that people who have to make things work have little interest in marketing, at least as something of immediate interest. If things are changing, people need to hear the hows and whys from those who live that work, and if you're doing daily hands-on enterprise IT infrastructure, you want to hear from senior devs, PMs or higher. If you don't get that, the vendor doesn't know or doesn't care what you need, or has to bluff that the incoming change isn't just putting a pile of work on you for its own benefit.

In this case, the story is that the rebranding of Azure AD to Entra makes no difference to anyone, it's just a name, man. Which is true if you don't have to update management documents, procedures, licenses, training materials, asset portfolios, and so on. Marketing people don't see this, they see a clean name change.

There are good reasons for new names, when there are new things to be named. There are good reasons for old names, when they refer to important, established ideas. Unfortunately for us, old names also have what marketing people call mindshare. That's why celebrities get big bucks for smiling at consumer goods - the marketing people know some of that positive mind-share the celeb has will spill over. It's a shortcut to our psyches. Enterprise IT doesn't really do influencers, which saves Linus from hawking manscaping razors, but we do have big name technologies.

That's why, when Microsoft needed to put identity management into the cloud, it decided to retain the Active Directory name in Azure AD, despite it being a bad fit and the two products doing quite different things. The cloud is not your on-prem collection of LED flashers. Active Directory hails from 2000; it knows about PCs and group policies and physical servers. Azure knows about the virtualised landscape of cloud resources.

The arrival of Azure ID management would have been a great time for a new name, too, that reflected the fundamental change in philosophy. Azure AD could not replace Active Directory, and Active Directory could not manage cloud services.

Yet the pull of the old was too strong to resist. Heck, this was in the days when Microsoft's cloud was still called Windows Azure, a legacy of it being born in 2010 under Steve Ballmer's watch. It was only itself renamed to Microsoft Azure less than 10 years ago. Upwards compatibility had to be implied in name even if lacking in fact.

It was lacking. The two identity managers couldn't even share identity management at the simplest level. Users had to use both, which of course they did? Then they had to have two separate IDs, which would not be a problem. It just means that if you have a password change policy, users will have to do this twice (and they could of course choose the same password for both). Tell me you've never managed users without saying you've never managed users.

But now Azure AD has been around long enough to be bedded in, and marketing no longer sees Windows as a selling point. Time to rename it and make it just part of the Entra brand of things that can be licensed in so many creative and lucrative ways. Let the rest of the industry pay for the bulk of the rebranding consequences. Marathon to Snickers, Windscale atomic disaster site to Sellafield. Does it matter?

Much more than many might think. In most of the world, the names we give to things are just that: names. They don't form part of the thing itself. You can recap a vintage IBM PC, and it doesn't matter that capacitors were once called condensers, or that IBM called printed circuit boards planars. Both work just fine if you call them Freds and Cynthias. But in the world of code and data, or objects and procedures, or APIs and spaces, the names are an intrinsic feature. If you're building something new, you name a lot of its internals after the whole thing. Your APIs will reference the name, your internal code structures will identify themselves thus, and other software, tools and automation will use that name. It's part of the hierarchy by which everything works.

So when, for marketing purposes, the package changes its name, there are just two choices. Rename all those references, which has the potential for a rolling fustercluck that scales faster than an Oracle invoice after an audit, or leave well alone and watch the code fossilize, like a forgotten kingdom known only for the placenames it leaves behind. It's not that this is impossible to manage, but you'd think that in a world where keeping old systems alive is increasingly difficult, IT vendors would have some idea of the responsibility they have for the future, that they understood something about the unique nature of the industry they're part of.

Naw. Only engineers worry about those sorts of horizons. There's a saying that marketing is the price a company pays for being boring; in IT, we all get to chip in too.

See more here:
Goodbye Azure AD, Entra the drag on your time and money - The Register

Read More..

British Red Cross volunteer celebrates 65 years of support – Herald Series

Ann Coulter, aged 91, from Abingdon, was given an award recognising her many years of service during an event organised by the British Red Cross in Abingdon.

The event took place on Tuesday, July 11 in The Ann Coulter Training Room, a room named in her honour.

Mrs Coulter has most recently been volunteering with the Mobility Aids Service in Oxfordshire, but her career with the charity has been wide-ranging, from helping out at the Grenfell Tower fire in 2017 to meeting the Royal Family.

Her favourite part of volunteering has been providing emotional and practical support to people during emergencies, from floods to fires.

Mrs Coulter said: "The emergency part is what I really liked. I'd get up at 2am, 4am and get my car from the garage.

"I was never afraid - I think the adrenaline was going."

Mrs Coulter first joined the British Red Cross on 30th May 1958 and said that helping people has always appealed to her.

She chose to work as a receptionist in a hospital for 40 years alongside her volunteering.

"I loved my job at the hospital because I was working with patients, and I loved the British Red Cross too because it was about helping people as well," she said.

One of her favourite moments was meeting the Royal Family while volunteering with the British Red Cross.

She was selected to take part in a picnic with Princess Anne at The Mall after celebrating her 50th anniversary with the British Red Cross, and she has kept her picnic basket as a souvenir.

"My part of The Mall was by Princess Anne and her husband. Princess Anne came over and was talking to us really nicely, she knew so much about everything," she said.

Ignazio Ferrara, service delivery coordinator for the British Red Cross's Mobility Aids Service in Oxfordshire, who worked with Mrs Coulter, said: "On first meeting Ann, I was struck by her absolute commitment to The Red Cross and her meticulous attention to detail.

"She took her role very seriously and treated everyone with great care.

"Ann has the ability to engage in a conversation with anyone and, more importantly, to listen attentively.

"She is a softly spoken lady but quite strong willed.

"During my initial training she often reminded me when I didn't follow the correct procedure and until her last day picked up on even the tiniest of details. She can be very persistent at times but is larger than life and has been a great ambassador for the service.

"She cares passionately and I consider myself lucky to know Ann. I am continually humbled by her achievements, both past and present."

Go here to read the rest:
British Red Cross volunteer celebrates 65 years of support - Herald Series

Read More..

‘Sharknado’ Movies In Order: How Many ‘Sharknado’ Movies Are There? – Decider

By Raven Brunner

Updated: July 18, 2023 // 10:00am

Can you believe it's already been over 10 years since the first Sharknado movie premiered? SYFY is celebrating the monumental occasion by hosting an all-day movie marathon of the fin-tastic cult classic, and we can't wait to take a bite out of the whole series.

But even if you're not available to dive into Sharknado today, you can swim into the season with the ultimate binge-watch of all the Sharknado movies, starring Ian Ziering, Tara Reid and Cassie Scerbo with star-studded guest appearances by Olivia Newton-John, Al Roker, Billy Ray Cyrus, Jerry Springer and more. The first movie premiered on the SYFY channel on July 11, 2013, and instantly accumulated a cult following.

With its over-the-top storyline and bad CGI, the movie redefined the "so bad, it's good" genre and spun off into its own franchise, culminating in a total of six flicks and a spin-off trilogy that's equally absurd.

The movies follow husband-and-wife duo Fin Shepard (Ziering) and April Wexler (Reid) as they battle sharknados, a weather event that results in tornados being filled with sharks. Over time, the movie series ventures into new genres, bringing magical ancient devices, dinosaurs and aliens into the shark-infested mist.

Queue up the Sharknado theme song and sink your teeth into all the Sharknado movies! Here's everything you need to know about watching the silly action franchise, including how to tune into the SYFY movie marathon.

There are six Sharknado movies and three spin-off movies. As of now, The Last Sharknado: It's About Time, which premiered in 2018, is the end of the original franchise, but there's always room for more. Mind you, it seems unlikely that the Sharknado franchise will continue to grow, as the series began to lose steam (and support) after the first two movies, resulting in a few duds. But you can't say that they didn't commit to the bit! The first spin-off movie, Lavalantula, released in 2015, shortly after the release of Sharknado 3, and featured Ziering in a cameo as his Sharknado character, Fin. The movie was followed up by a sequel, 2 Lava 2 Lantula, and a crossover flick between the two movie series called 2025 Armageddon.

SYFY's Sharknado movie marathon, in celebration of the franchise's 10-year anniversary, occurs on Tuesday, July 18, 2023. The binge-watch begins at 12 PM ET/PT with the fourth movie and plays backward until the first movie, which screens at 6 PM ET/PT. From there on, the movies will play in chronological order until the final movie.

Here is the SYFY screening schedule:

The Sharknado movies follow a consistent timeline, but they aren't challenging to follow. As long as you understand the general premise, you'll be fine! Many of the movies feature callbacks to their predecessors, but nothing that'll make or break your viewing.

Scroll down to find out where to watch every Sharknado movie on streaming.

1

Take it back to where it all began! The first Sharknado movie is undeniably the best in the franchise. It's pure camp and lacks the cockiness of the filmmakers, which begins to show after the sequel. The story is rather contained despite its nonsensical nature and follows Fin as he reunites with his estranged wife April after rushing home to keep his family safe during the unprecedented sharknado attack.

Who is in the Sharknado cast? Ian Ziering, John Heard, Tara Reid, Cassie Scerbo and Jaason Simmons star in the movie.

2

After the massive success of Sharknado, everybody was clamoring to join the sequel, thus giving way to an array of fun, unexpected celebrity cameos. The movie sees Ziering and Reid reprising their roles as they travel to New York City to promote a survival guide that April has written about sharknados. While on the plane, another sharknado event happens, and the shark-slaying couple attempt to warn and protect the city from the oncoming storm.

Who is in the Sharknado 2: The Second One cast? Ian Ziering and Tara Reid star in the movie, alongside Vivica A. Fox, Mark McGrath, Billy Ray Cyrus, Sandy "Pepa" Denton, Perez Hilton, Richard Kind, Al Roker and more.

3

Sharknado 3 brings the epic disaster to Florida! The action movie opens with Fin being awarded a Presidential Medal of Freedom for his life-saving work during the previous sharknados; however, the event is overshadowed by a sudden sharknado attack. After Fin and the Secret Service team fight against the sharknado, it mysteriously disappears and Fin worries that it is en route to find April, who is now pregnant and visiting Universal Studios. He makes his way down the East Coast where he reunites with a familiar face. This threequel is pure chaos and contains the weirdest birthing scene ever!

Who is in the Sharknado 3: Oh Hell No! cast? Ian Ziering, Tara Reid, Cassie Scerbo, Mark McGrath, Frankie Muniz, Mark Cuban and David Hasselhoff star in the movie, alongside Ne-Yo, Ann Coulter, Jerry Springer, Matt Lauer, Al Roker, Natalie Morales, Savannah Guthrie, Kathie Lee Gifford, Hoda Kotb, Alexis Ohanian and more.

4

Can the fourth movie top the madness of Sharknado 3? They're sure going to try. The movie visits Fin roughly five years after the last sharknado attack and sees him fighting against several sharknado variants, such as a cownado, a lightningnado and a firenado. You might get nadoed out after this one!

Who is in the Sharknado: The 4th Awakens cast? Ian Ziering, Tara Reid and David Hasselhoff star in the movie, alongside Duane Chapman, Al Roker, Stacey Dash, Stassi Schroeder, Hayley Hasselhoff, Carrot Top and more.

5

At this point, there are no rules when it comes to this franchise. Sharknado 5 follows Fin and April as they travel around the world using tornado portals to save their son Gil, who had been taken by a sharknado. Come for the nados, stay for the Sharkzilla!

Who is in the Sharknado 5: Global Swarming cast? Ian Ziering, Tara Reid, Cassie Scerbo and Billy Barratt star in the movie, alongside Tony Hawk, Olivia Newton-John, Sasha Cohen, Margaret Cho, Al Roker, Clay Aiken, Abby Lee Miller and more.

6

A bittersweet goodbye! The Last Sharknado is, indeed, the last sharknado. The movie is an epic finale to the franchise and circles back to the beginning, finding Fin going head-to-head with the nado that started it all. The shark-slayer travels back in time to prevent the first sharknado from ever happening, alongside his now-adult son and cyborg wife.

Who is in The Last Sharknado: It's About Time cast? Ian Ziering, Tara Reid, Cassie Scerbo and Vivica A. Fox star in the movie, alongside Neil deGrasse Tyson, Alaska, Leslie Jordan, Jonathan Bennett, Tori Spelling, Al Roker, La Toya Jackson and more.

Follow this link:
'Sharknado' Movies In Order: How Many 'Sharknado' Movies Are There? - Decider

Read More..

Vitalik Buterin wants Bitcoin to experiment with layer-2 solutions, just like Ethereum – Cointelegraph

Ethereum co-founder Vitalik Buterin believes the Bitcoin network needs scalable solutions like zero-knowledge rollups (ZK-rollups) to become more than another payment network. Buterin's comments came during a Twitter Space hosted by Bitcoin developer Udi Wertheimer, with discussions revolving around Ethereum's scaling experiments.

A ZK-rollup is an off-chain protocol that operates on top of the Ethereum blockchain and is managed by on-chain Ethereum smart contracts. It offers a more scalable and faster way to verify transactions without sharing critical user information.

The Ethereum co-founder shed light on how Ethereum has incorporated various scaling solutions over the years to increase throughput. Buterin cited Optimism and Arbitrum as two successful examples of rollups that could be considered case studies for Bitcoin, adding:

Scalability has been a long-drawn point of discussion for Bitcoin and Ethereum over the years. While the Ethereum network has shifted from a proof-of-work to a proof-of-stake network, it is also experimenting with various layer-2 solutions like ZK-rollups and Plasma.

Related: Zero-knowledge proofs coming to Bitcoin, overhauling network state validation

On the other hand, Bitcoin's layer-2 solution, the Lightning Network, has been crucial to its scalability, and lately, Bitcoin Ordinals have helped the Bitcoin network become more than just another payment layer. Buterin lauded the rise of Ordinals and said he thinks they have brought back the builder culture into the Bitcoin ecosystem.

Bitcoin Ordinals are the latest layer-2 solution enabling decentralized storage of digital art on the Bitcoin blockchain. Their popularity soared fast, and by the end of June, Bitcoin Ordinals inscriptions hit over $210 million in trading volume.

Here is the original post:

Vitalik Buterin wants Bitcoin to experiment with layer-2 solutions, just like Ethereum - Cointelegraph

Read More..