Category Archives: Cloud Servers

How 5G is Creating New Opportunities for Tech Professionals? – Analytics Insight

Unveiling how 5G technology is opening new opportunities for tech professionals in the IT sector

Different professionals will see different opportunities. The 5G communication systems will be designed, developed, implemented, and operated by communications and software engineers. The essential circuits for 5G devices and infrastructure will be designed by electronics experts, who will also develop 5G-enabled IoT networks with millions of IoT endpoints globally.

5G networks deliver faster and more dependable communications. They will open doors to intriguing new prospects in the Internet of Things (IoT), autonomous driving, fixed wireless internet, and faster video streaming. For business customers, 5G will make it possible to use new services like connected car fleets, remote health diagnostic services, smart factory automation and safety applications, and remote mining, drilling, and other hazardous operations. Additionally, it will help bring broadband to isolated and rural homes.

Universities should embed specialized courses to equip tech professionals with the skills needed to harness growing opportunities. These courses should cover topics such as cloud network architecture, integration of 4G/5G networks, and training and certifications for developing use cases like Hybrid Remote Teaching, Smart Manufacturing, and Drone Logistics. Moreover, these courses should be developed broadly and incorporate interdisciplinary approaches spanning various technologies and applications.

New levels of connectivity and creativity have arrived with the introduction of 5G technology. Beyond providing faster internet connections on our devices, 5G is significantly impacting several other industries and presenting exciting new prospects for tech professionals. In this article, we'll examine how 5G technology is reshaping the future of the IT sector and explore its disruptive potential.

One of the most significant implications of 5G is its role in accelerating the Internet of Things (IoT) revolution. With ultra-low latency and massive device connectivity, 5G enables seamless communication between IoT devices. It creates many opportunities for tech professionals to develop, deploy, and manage IoT solutions that can revolutionize industries like healthcare, agriculture, transportation, and more.

5G's low-latency capabilities are driving the adoption of edge computing. Tech professionals are now tasked with building the infrastructure and applications that leverage edge computing to process data closer to the source. This enhances real-time decision-making and reduces the load on centralized cloud servers, creating a more efficient and responsive digital ecosystem.

5G's high-speed, low-latency networks are a perfect match for AR and VR applications. Tech experts can now explore new frontiers in immersive technologies, from designing lifelike virtual experiences to developing AR applications for industries like education, gaming, healthcare, and remote collaboration.

The COVID-19 pandemic accelerated the adoption of telemedicine, but 5G takes it to the next level. Healthcare tech professionals are leveraging 5G's capabilities to enhance remote patient monitoring, conduct high-definition telehealth consultations, and even perform surgeries remotely using robotic systems. This opens up exciting career opportunities in health tech and telemedicine.

The automotive industry is undergoing a massive transformation with the introduction of autonomous vehicles. 5G's low latency and high reliability are critical for enabling vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication systems. Tech professionals are at the forefront of developing the software, algorithms, and cybersecurity measures required for safe and efficient autonomous transportation.

The tech industry increasingly focuses on environmental sustainability, and 5G can contribute to this cause. Professionals in green tech are exploring how 5G can be harnessed to optimize resource management, reduce energy consumption, and create eco-friendly solutions.

More:
How 5G is Creating New Opportunities for Tech Professionals? - Analytics Insight

BT plays it cool with new sustainability solutions – TelecomTV

IPSWICH, UK, BT's Sustainability Festival 2023: As part of its efforts to become a green telco, BT is exploring the use of liquid cooling solutions across its network and IT infrastructure, and, according to the operator, its initial trials with an abundance of partners suggest significant cost reductions could be achieved.

During its first Sustainability Festival, held at its Adastral Park research centre this week, the UK telco demonstrated some of the emerging technologies that, it hopes, will help it become net zero by the end of March 2031. The majority of them are focused on ways to keep datacentre equipment cool, a challenge the whole tech sector often cites as a major pain point.

While the fundamental technology is owned by the vendors of the solutions it had on display at Adastral Park, BT is engaged with them on specific design requirements. Particularly noteworthy were two demonstrations of equipment being fully immersed in liquid. Branded by BT's team as "revolutionary", the concept is based on taking any piece of network equipment, removing its fans, and immersing it in dielectric fluid that does not conduct electricity.

One of BT's liquid cooling solution partners is Immersion4, which is working on a trial to house servers in an immersion tank. The company's founder, Serge Conesa, explained that Immersion4's technology can deliver up to 70% in energy savings as it focuses on cooling only the equipment, rather than the whole building (an approach he likened to chilling a bottle of champagne rather than cooling an entire venue).

In another showcase, BT demonstrated full immersion technology from partner PeaSoup Cloud that can save up to 20% in energy consumption. BT explained that with this solution, not only is there no need to cool the whole building, but the equipment doesn't need to deal with excessive heat and is not put under strain, which results in longer-lasting components.

See the original post here:
BT plays it cool with new sustainability solutions - TelecomTV

4 Best Automation Tools for Developers in 2023 – TechRepublic

Automation tools help developers streamline their processes to save time, boost productivity and concentrate on their most critical tasks. This guide will break down the following top automation tools in terms of their features, pros, cons and pricing:

Jump to:

Gradle is a fast and flexible open-source build automation tool that accelerates development while improving software quality.

Gradle's list of features is highlighted by:

Gradle helps developers build multiple projects at the same time, making it ideal for large projects with multiple subprojects. Its rich API lets developers customize builds to fit their needs, and the automation tool also manages dependencies so your projects always have the most updated versions they need.

You can use Gradle to run tests to ensure your project is working as intended, plus deploy projects to different targets, such as cloud platforms or servers. Gradle also offers seamless integrations with popular IDEs like Android Studio and IntelliJ IDEA.

Gradle's pros include:

Gradle is quite speedy compared to other build automation tools. Its Free plan is ideal for budget-minded individual developers, and the software is quite flexible in terms of supported programming languages and platforms. Gradle is also highly customizable, allowing developers to use it for diverse projects.

Gradle's cons include:

Learning Gradle can be a daunting task given its extensive documentation, and building tasks with the tool takes some technical know-how. As a result, it is not especially beginner-friendly.

Gradle Enterprise has a free trial and is offered in two pricing tiers:

The Free plan offers unlimited build scans, a visual build timeline, performance data, a dependency graph, custom values, tagging, environment data, test behavior details, an enhanced console log and a build performance summary.

The Core plan offers distributed cache node management, access control, cross-build analysis, build failure aggregation, comprehensive failure metrics, Enterprise REST API and more.

The developer tool also offers separate extensions to meet additional productivity needs, such as test distribution, test failure analytics and predictive test selection.

Apache Maven is an open-source build automation tool. Launched over 20 years ago with Java developers in mind, it is now a popular DevOps tool used by Agile development teams, project managers and others for its ability to build, publish and deploy several projects simultaneously, as well as for its dependency and release management features.

Some of the features that make Maven a popular DevOps automation tool include:

Maven boosts productivity by letting developers get started with new projects and modules quickly and allowing them to work on multiple projects simultaneously. The automation software is easily extensible via plugins written in Java or scripting languages and an extensive repository of libraries and metadata.

When new features are available, you will not have to worry about time-consuming configuration with Maven. It also saves time via model-based builds and quickly generated documentation built off project metadata. Lastly, Maven offers dependency management (transitive dependencies, Ant tasks, automatic updating, etc.) and release management.

Maven's advantages include:

Since Maven is open-source, developers can use it at no cost, which is excellent news if you are on a limited budget. The developer tool is easy to use and configure, consistent thanks to a standardized approach that future processes can easily replicate, and offers fuss-free documentation generation.

Maven's disadvantages include:

While Maven is user-friendly, it may not be beginner-friendly for those unfamiliar with configurations, terminology, etc. Some have complained that the programming tool is lacking in documentation and support. Others, meanwhile, have noted slow performance when dealing with complex projects.

Since Maven is an open-source DevOps tool, developers can enjoy it for free. There may be additional costs for premium extensions or plugins, however.

Travis CI is an easy-to-maintain cloud-based CI/CD tool with time-saving one-command automations that supports over 30 coding languages.

Travis CI's top features that have allowed its popularity to grow in the DevOps community include:

Travis CI's multi-language build matrix supports more than 30 coding languages. Developers can run and test simultaneously in different environments, plus automate tasks for validation, integration and deployment with a single command. The DevOps tool integrates with popular third-party developer tools like Slack, Perforce and Docker, and has a feature that catches code failures and bugs on autopilot.

Travis CI's strengths include:

Developers seeking a fuss-free automation tool get just that with Travis CI. Thanks to its cloud-based options, developers can enjoy Travis CI's time-saving features with minimal setup and maintenance.

Travis CI also employs a lot less code (around one-third less) than competing programmer tools and is quite flexible with its support for over 30 coding languages.

Areas where Travis CI could improve include:

Budget-minded software development teams may be turned off by the lack of a Travis CI free plan. The developer tool's cost can creep up quickly as your need for added concurrent jobs grows, and its customer support has drawn complaints for being slow to respond to issues.

Travis CI's pricing is split into cloud and enterprise options. Cloud pricing is as follows:

Each cloud plan comes with unlimited repositories, collaborators and build minutes, plus a free trial. Choose the self-hosted Enterprise plan, and you will pay $34 per user per month and have the option to host Travis CI on-premise or in your private cloud. The Enterprise plan offers premium support and Subversion and Perforce CI/CD.

Katalon Studio is a test automation tool that offers a low-code experience for beginners and advanced testing for experts.

Some of Katalon Studio's best features for testing automation are:

Katalon Studio lets developers automate tests with varying sets of data. This is ideal for applications that handle confidential or sensitive data. The automation software also supports keyword-driven testing, allowing developers to create reusable test scripts for multiple applications.

API testing comes in handy for testing back-end applications and services, and if there are any broken tests, Katalon will fix them automatically. Katalon Studio records and plays back user actions on applications to help create automated tests, and it offers detailed reporting to help troubleshoot.

Katalon Studio's pros include:

The programmer tool's Free version is a plus for those with limited budgets seeking basic test automation capabilities. Katalon Studio's low-code approach makes it beginner-friendly, and the interface is easy to navigate and user-friendly. The fact that the automation software works with multiple environments (Windows, macOS, Google Chrome, Firefox, Android, iOS, etc.) is another plus.

Katalon Studio's cons include:

Some have complained that Katalon Studio's performance is less than stellar, and the automation tool can sometimes lag or freeze. Support is noted for being slow to respond, and since Katalon is relatively new and has a smaller community, you are less likely to find fast help from colleagues. The desktop app can also be a memory hog when booting or running tests.

Katalon Studio has three pricing plans:

The Free plan offers test automation for mobile, web, API and desktop applications. Enterprise adds debugging, custom reports and advanced API testing. And Ultimate adds 24/7 support and a dedicated onboarding manager.

As the need for automation grows, so does the number of automation tools that hit the market.

How can you pick the right one? Besides looking at the price (some offer free plans) to find an automation solution that fits your budget, read reviews regarding user-friendliness. Depending on your team size and goals, you may want something scalable that can grow as you expand.

Customer support and community size are other factors to consider, as are features. Useful features to look for in automation software include configuration management, reporting, workflow management, CI/CD, monitoring, orchestration, version control support and solid security. You should also look for plenty of third-party integrations with popular developer tools for added functionality, if needed, and support for the programming languages you use.

The automation tools listed above can help developers enjoy increased speed and productivity without sacrificing the quality of their releases. Before choosing an automation tool for your software development team, make sure it fits your needs in terms of features, user-friendliness and pricing.

Also See: Top DevOps Monitoring Tools

More:
4 Best Automation Tools for Developers in 2023 - TechRepublic

Key Players Hewlett Packard, IBM, and More Compete in the … – GlobeNewswire

Dublin, Sept. 15, 2023 (GLOBE NEWSWIRE) -- The "Clustering Software Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2023-2028" report has been added to ResearchAndMarkets.com's offering.

The global clustering software market reached a size of USD 2.8 billion in 2022, and it is expected to grow to USD 3.5 billion by 2028, exhibiting a Compound Annual Growth Rate (CAGR) of 3.2% during the period from 2023 to 2028.

Clustering software refers to various software applications that connect, coordinate, and manage multiple distributed servers. It enables these servers to collectively perform computing and administrative tasks such as load balancing, node failure detection, and failover assignment.

This technology divides complex software into smaller, manageable subsystems and facilitates efficient data management over large networks, providing fault-tolerant responses. It plays a crucial role in various industries, including telecom, aerospace, academics, life sciences, and defense.
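To make those coordination tasks concrete, here is a minimal, hypothetical sketch (not drawn from any vendor in this report) of two of the mechanisms named above: node failure detection via heartbeat timeouts, and failover assignment to the least-loaded healthy node.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node is presumed dead

class ClusterMonitor:
    """Toy coordinator: tracks heartbeats and reassigns work from failed nodes."""

    def __init__(self, nodes):
        now = time.monotonic()
        self.last_seen = {node: now for node in nodes}
        self.assignments = {node: [] for node in nodes}  # node -> workloads

    def record_heartbeat(self, node):
        self.last_seen[node] = time.monotonic()

    def detect_failures(self):
        now = time.monotonic()
        return [n for n, seen in self.last_seen.items()
                if now - seen > HEARTBEAT_TIMEOUT]

    def failover(self, failed_node):
        """Move the failed node's workloads to the least-loaded healthy node."""
        failed = set(self.detect_failures())
        healthy = [n for n in self.assignments if n not in failed]
        if not healthy:
            raise RuntimeError("no healthy nodes left")
        target = min(healthy, key=lambda n: len(self.assignments[n]))
        self.assignments[target].extend(self.assignments.pop(failed_node))
        self.last_seen.pop(failed_node)
        return target

# Example: node "b" goes silent, and its workload moves elsewhere.
mon = ClusterMonitor(["a", "b", "c"])
mon.assignments["b"] = ["db-shard-3"]
mon.last_seen["b"] -= 10  # simulate a missed heartbeat window
for node in mon.detect_failures():
    print(node, "->", mon.failover(node))
```

Real clustering products add quorum, fencing and replication on top of this basic loop, but the detect-and-reassign cycle is the core of what the report describes.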

Market Dynamics

Several factors are driving the growth of the global clustering software market:

Key Market Segmentation

The report provides an analysis of the global clustering software market's key trends and forecasts at the global, regional, and country levels from 2023 to 2028. It categorizes the market based on:

Competitive Landscape

Key players in the global clustering software market include Hewlett Packard Enterprise Company, IBM Corporation, Fujitsu, Microsoft Corporation, NEC Corp., Oracle, Red Hat, Broadcom, Inc., VMware, and others.

Key Questions Answered in This Report

The report answers important questions related to the global clustering software market:

Key Attributes:

For more information about this report visit https://www.researchandmarkets.com/r/lzfi5i

About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

View original post here:
Key Players Hewlett Packard, IBM, and More Compete in the ... - GlobeNewswire

Unlocking the potential of IoT systems: The role of Deep Learning … – Innovation News Network

The Internet of Things (IoT), a network of interconnected devices equipped with sensors and software, has revolutionised how we interact with the world around us, empowering us to collect and analyse data like never before.

As technology advances and becomes more accessible, more objects are equipped with connectivity and sensor capabilities, making them part of the IoT ecosystem. The number of active IoT systems is expected to reach 29.7 billion by 2027, a significant surge from the 3.6 billion devices recorded in 2015. This exponential growth creates tremendous demand for solutions that mitigate the safety and computational challenges of IoT applications. In particular, industrial IoT, automotive, and smart homes are three main areas with specific requirements, but they share a common need for efficient IoT systems to enable optimal functionality and performance.

Fig. 1: Overview of VEDLIoT technology layers and components

Increasing the efficiency of IoT systems and unlocking their potential can be achieved through Artificial Intelligence (AI), creating AIoT architectures. By utilising sophisticated algorithms and Machine Learning techniques, AI empowers IoT systems to make intelligent decisions, process vast amounts of data, and extract valuable insights. For instance, this integration drives operational optimisation in industrial IoT, facilitates advanced autonomous vehicles, and offers intelligent energy management and personalised experiences in smart homes.

Among the different AI algorithms, Deep Learning, which leverages artificial neural networks, is particularly well suited to IoT systems for several reasons. One of the primary reasons is its ability to learn and extract features automatically from raw sensor data. This is particularly valuable in IoT applications where the data can be unstructured, noisy, or have complex relationships. Additionally, Deep Learning enables IoT applications to handle real-time and streaming data efficiently. This allows for continuous analysis and decision-making, which is crucial in time-sensitive applications such as real-time monitoring, predictive maintenance, or autonomous control systems.
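As a toy illustration of that kind of continuous, streaming analysis (using a simple rolling statistic as a stand-in for a neural network), the sketch below flags sensor readings that deviate sharply from a sliding-window baseline:

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flag readings that deviate sharply from a sliding-window baseline."""

    def __init__(self, window=100, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        is_anomaly = False
        if len(self.readings) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.readings.append(value)
        return is_anomaly

# Example: feed a stream of temperature readings as they arrive.
detector = RollingAnomalyDetector(window=200)
for reading in [21.0, 21.2, 20.9] * 40 + [35.0]:
    if detector.update(reading):
        print(f"anomalous reading: {reading}")
```

A deep model would replace the z-score with learned features, but the streaming structure, update on every reading, decide immediately, is the same.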

Despite the numerous advantages of Deep Learning for IoT systems, its implementation has inherent challenges, such as efficiency and safety, that must be addressed to fully leverage its potential. The Very Efficient Deep Learning in IoT (VEDLIoT) project aims to solve these challenges.

A high-level overview of the different VEDLIoT components is given in Fig. 1. IoT is integrated with Deep Learning by the VEDLIoT project to accelerate applications and optimise the energy efficiency of IoT. VEDLIoT achieves these objectives through the utilisation of several key components:

VEDLIoT concentrates on several use cases, such as demand-oriented interaction methods in smart homes (see Fig. 2), industrial IoT applications like Motor Condition Classification and Arc Detection, and the Pedestrian Automatic Emergency Braking (PAEB) system in the automotive sector (see Fig. 3). VEDLIoT systematically optimises such use cases through a bottom-up approach, employing requirement engineering and verification techniques, as shown in Fig. 1. The project combines expert-level knowledge from diverse domains to create a robust middleware that facilitates development through testing, benchmarking, and deployment frameworks, ultimately ensuring the optimisation and effectiveness of Deep Learning algorithms within IoT systems. In the following sections, we briefly present each component of the VEDLIoT project.

Fig. 2: Smart mirror demonstrator developed as part of the smart home application in VEDLIoT

Various accelerators are available for a wide range of applications, from small embedded systems with power budgets in the milliwatt range to high-power cloud platforms. These accelerators are categorised into three main groups based on their peak performance values, as shown in Fig. 4.

The first group is the ultra-low power category (< 3 W), which consists of energy-efficient microcontroller-style cores combined with compact accelerators for specific Deep Learning functions. These accelerators are designed for IoT applications and offer simple interfaces for easy integration. Some accelerators in this category provide camera or audio interfaces, enabling efficient vision or sound processing tasks. They may offer a generic USB interface, allowing them to function as accelerator devices attached to a host processor. These ultra-low power accelerators are ideal for IoT applications where energy efficiency and compactness are key considerations, providing optimised performance for Deep Learning tasks without excessive power.

The VEDLIoT use case of predictive maintenance is a good example and makes use of an ultra-low power accelerator. One of the most important design criteria is low power consumption, as it is a small battery-powered box that can be installed externally on any electric motor and should monitor the motor for at least three years without a battery change.

Fig. 4: Performance overview of AI accelerators

The next category is the low-power group (3 W to 35 W), which targets a broad range of automation and automotive applications. These accelerators feature high-speed interfaces for external memories and peripherals, and communicate efficiently with other processing devices or host systems via interfaces such as PCIe. They support modular and microserver-based approaches and provide compatibility with various platforms. Additionally, many accelerators in this category incorporate powerful application processors capable of running full Linux operating systems, allowing for flexible software development and integration. Some devices in this category include dedicated application-specific integrated circuits (ASICs), while others feature NVIDIA's embedded graphics processing units (GPUs). These accelerators balance power efficiency and processing capabilities, making them well-suited for various compute-intensive tasks in the automation and automotive domains.

The high-performance category (> 35 W) of accelerators is designed for demanding inference and training scenarios in edge and cloud servers. These accelerators offer exceptional processing power, making them suitable for computationally-intensive tasks. They are commonly deployed as PCIe extension cards and provide high-speed interfaces for efficient data transfer. The devices in this category have high thermal design powers (TDPs), indicating their ability to handle significant workloads. These accelerators include dedicated ASICs, known for their specialised performance in Deep Learning tasks. They deliver accelerated processing capabilities, enabling faster inference and training times. Some consumer-class GPUs may also be included in benchmarking comparisons to provide a broader perspective.

Selecting the proper accelerator from the abovementioned wide range of available options is not straightforward. However, VEDLIoT takes on this crucial responsibility by conducting thorough assessments and evaluations of various architectures, including GPUs, field-programmable gate arrays (FPGAs), and ASICs. The project carefully examines these accelerators' performance and energy consumption to ensure their suitability for specific use cases. By leveraging its expertise and comprehensive evaluation process, VEDLIoT guides the selection of Deep Learning accelerators within the project and in the broader landscape of IoT and Deep Learning applications.

Trained Deep Learning models contain redundancy and can sometimes be compressed by a factor of up to 49 with negligible accuracy loss. Although many works address such compression, most results show theoretical speed-ups that do not always translate into more efficient hardware execution, since they do not consider the target hardware. On the other hand, the process of deploying Deep Learning models on edge devices involves several steps, such as training, optimisation, compilation, and runtime. Although various frameworks are available for these steps, their interoperability can vary, resulting in different outcomes and performance levels. VEDLIoT addresses these challenges through hardware-aware model optimisation using ONNX, an open format for representing Machine Learning models, ensuring compatibility with the current open ecosystem. Additionally, Renode, an open-source simulation framework, serves as a functional simulator for complex heterogeneous systems, allowing for the simulation of complete System-on-Chips (SoCs) and the execution of the same software used on hardware.
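The article does not reproduce VEDLIoT's toolchain, but a generic sketch of the ONNX-based flow it builds on, exporting a stand-in PyTorch model and applying ONNX Runtime's dynamic quantisation, might look like the following; the model and file names are illustrative:

```python
# pip install torch onnx onnxruntime  (a sketch, not VEDLIoT's actual pipeline)
import torch
from onnxruntime.quantization import QuantType, quantize_dynamic

# Stand-in for a trained model; any torch.nn.Module would do.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
)
model.eval()

# 1. Export to ONNX, the open interchange format mentioned above.
dummy_input = torch.randn(1, 64)
torch.onnx.export(model, dummy_input, "model.onnx")

# 2. Quantise weights to 8-bit integers: a smaller model that is often
#    faster on resource-constrained edge CPUs, usually with little accuracy loss.
quantize_dynamic("model.onnx", "model.int8.onnx", weight_type=QuantType.QInt8)
```

Hardware-aware optimisation goes further by choosing compression settings per target device, which is where tooling like EmbeDL, described next, comes in.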

Furthermore, VEDLIoT uses the EmbeDL toolkit to optimise Deep Learning models. The EmbeDL toolkit offers comprehensive tools and techniques to optimise Deep Learning models for efficient deployment on resource-constrained devices. By considering hardware-specific constraints and characteristics, the toolkit enables developers to compress, quantise, prune, and optimise models while minimising resource utilisation and maintaining high inference accuracy. EmbeDL focuses on hardware-aware optimisation and ensures that Deep Learning models can be effectively deployed on edge devices and IoT devices, unlocking the potential for intelligent applications in various domains. With EmbeDL, developers can achieve superior performance, faster inference, and improved energy efficiency, making it an essential resource for those seeking to maximise the potential of Deep Learning in real-world applications.

Since VEDLIoT aims to combine Deep Learning with IoT systems, ensuring security and safety becomes crucial. In order to emphasise these aspects in its core, the project leverages trusted execution environments (TEEs), such as Intel SGX and ARM TrustZone, along with open-source runtimes like WebAssembly. TEEs provide secure environments that isolate critical software components and protect against unauthorised access and tampering. By using WebAssembly, VEDLIoT offers a common environment for execution throughout the entire continuum, from IoT, through the edge and into the cloud.

In the context of TEEs, VEDLIoT introduces Twine and WaTZ as trusted runtimes for Intel's SGX and ARM's TrustZone, respectively. These runtimes simplify software creation within secure environments by leveraging WebAssembly and its modular interface. This integration bridges the gap between trusted execution environments and AIoT, helping to seamlessly integrate Deep Learning frameworks. Within TEEs using WebAssembly, VEDLIoT achieves hardware-independent robust protection against malicious interference, preserving the confidentiality of both data and Deep Learning models. This integration highlights VEDLIoT's commitment to securing critical software components, enabling secure development, and facilitating privacy-enhanced AIoT applications in cloud-edge environments.

Fig. 5: Requirements framework showing the various architectural views

Additionally, VEDLIoT employs a specialised architectural framework, as shown in Fig. 5, that helps to define, synchronise and co-ordinate requirements and specifications of AI components and traditional IoT system elements. This framework consists of various architectural views that address the system's specific design concerns and quality aspects, including security and ethical considerations. By using these architecture views as templates and filling them out, correspondences and dependencies can be identified between the quality-defining architecture views and other design decisions, such as AI model construction, data selection, and communication architecture. This holistic approach ensures that security and ethical aspects are seamlessly integrated into the overall system design, reinforcing VEDLIoT's commitment to robustness and addressing emerging challenges in AI-enabled IoT systems.

Traditional hardware platforms support only homogeneous IoT systems. However, RECS, an AI-enabled microserver hardware platform, allows for the seamless integration of diverse technologies. Thus, it enables fine-tuning of the platform towards specific applications, providing a comprehensive cloud-to-edge platform. All RECS variants share the same design paradigm to be a densely-coupled, highly-integrated communication infrastructure. For the varying RECS variants, different microserver sizes are used, from credit card size to tablet size. This allows customers to choose the best variant for each use case and scenario. Fig. 6 gives an overview of the RECS variants.

The three different RECS platforms are suitable for cloud/data centre (RECS|Box), edge (t.RECS) and IoT usage (u.RECS). All RECS servers use industry-standard microservers, which are exchangeable and allow for use of the latest technology just by changing a microserver. Hardware providers of these microservers offer a wide spectrum of different computing architectures like Intel, AMD and ARM CPUs, FPGAs and combinations of a CPU with an embedded GPU or AI accelerator.

Fig. 6: Overview of heterogeneous hardware platforms

VEDLIoT addresses the challenge of bringing Deep Learning to IoT devices with limited computing performance and low-power budgets. The VEDLIoT AIoT hardware platform provides optimised hardware components and additional accelerators for IoT applications covering the entire spectrum, from embedded via edge to the cloud. On the other hand, a powerful middleware is employed to ease the programming, testing, and deployment of neural networks in heterogeneous hardware. New methodologies for requirement engineering, coupled with safety and security concepts, are incorporated throughout the complete framework. The concepts are tested and driven by challenging use cases in key industry sectors like automotive, automation, and smart homes.

The VEDLIoT project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957197.

Please note, this article will also appear in the fifteenth edition of our quarterly publication.

Read more from the original source:
Unlocking the potential of IoT systems: The role of Deep Learning ... - Innovation News Network

Clockwork discovering wasted bandwidth between the nanoseconds – diginomica

If time is money, then what is the value of a nanosecond (billionth of a second)? Well, if you are building a large network of distributed applications, it could mean a ten percent improvement in performance or a ten percent reduction in cost for the same workload. It could also mean orders of magnitude fewer errors in transaction processing systems and databases.

At least that is according to Balaji Prabhakar, VMware Founders Professor of Computer Science at Stanford University, whose research team helped pioneer more efficient approaches for synchronizing clocks in distributed systems. He later co-founded and is CEO of TickTock, which became Clockwork, to commercialize the new technology. He also previously co-founded Urban Engines, which developed algorithms for congestion tracking and was acquired by Google in 2016. He has been working on designing algorithms to improve network performance for decades.

The company initially focused on improving the fairness of market placements in financial exchanges. It has since started building out a suite of tools to synchronize cloud applications and enterprise networking infrastructure more broadly. Accurate clocks can help networks and applications improve consistency, event ordering, and the scheduling of tasks and resources with more precise timing.

This is a big advantage over the quartz clocks underpinning most computer and network timing, which can drift significantly enough to confound time-stamping processes in networks and transaction processing. Traditional network-based synchronization can help reduce this drift but suffers from path noise created by fluctuations in switching times, asymmetries in path lengths, and clock time stamp noise.

Prabhakar says some customers are interested in cost conservation and want to right-size deployments and switch off virtual machines they no longer need. He notes:

So, if they save 10% or more, and we charge them just 2%, the remaining is just pure savings.

Others want a more performant infrastructure. Clockwork did one case study that found they could get seventy VMs to do the work of a hundred by running apps and infrastructure more efficiently.

It is important to point out that there are two levels of improvement in their new approach. The new protocols can achieve ten-nanosecond accuracy with direct access to networking hardware. In cloud scenarios mediated by virtual machines, the protocol can achieve a few microseconds of accuracy. However, that's still good enough to satisfy the new European MiFID II requirements for high-frequency trading and many other use cases. It is also helpful that the clock sync agent requires less than one percent of a single-core CPU and less than 0.04% of the slowest cloud link while saving 10% of bandwidth.

Perhaps the most important thing to consider is the impact it could have on the trend toward clockless design in distributed systems. Clockless designs help scale up new application and database architectures but make basic operations like consistency, event ordering, and snapshotting difficult.

The more accurate clock sync technology is already showing promise in improving tracing tools, mitigating network congestion, and improving the performance of distributed databases like CockroachDB. Over the last couple of years, Clockwork has been building out supporting infrastructure around the new protocol called HUYGENS to improve cloud congestion control, create digital twins of virtual machine placement, and improve distributed database performance by ten to a hundred times. It is named after Christiaan Huygens, who invented the pendulum clock in the 1600s, which became the most accurate timekeeper until the commercialization of quartz clocks in the late 1960s.

The impact of synchronized time is increasingly important as the world transitions from dedicated networks and compute to various forms of statistical multiplexing. Networks have been transitioning away from dedicated circuits using technologies like circuit switching and asynchronous transfer mode (ATM), which delivered high-level performance for each user but wasted unused bandwidth. As a result, the industry has been migrating to TCP/IP and wide-area Ethernet, which do a better job of sharing unused bandwidth but can get clogged up, causing delays when the load gets too high.

A similar thing has been happening with compute. Legacy enterprise systems built on dedicated hardware guarantee high performance. However, these struggle to reallocate compute across multiple applications with varying usage requirements or scale out across multiple servers. The move towards virtual machines, cloud architectures, and now containers helps enterprises gain the same economies for compute that TCP/IP brought to networking.

However, problems with statistical balancing approaches arise when too many users or apps hit the edges of performance. Packets get lost, and transactions don't get processed, resulting in increased delays and additional overhead as services try to make up for lost time with retries. More precise time synchronization helps networks, apps, and micro-services reach their peak load and then gracefully back down when required without wasting resources on packet retries or additional transaction processing.

Referring to the transition from dedicated compute and networks to modern approaches, Prabhakar says:

The trade-off cost us. In communication, we went from deterministic transit times to best-effort service. And computing went from centralized control of dedicated resources to highly variable runtimes and making us coordinate through consensus protocols.

To contextualize the field, the synchronization of mechanical clocks played an important role in improving efficiency and reducing railroad accidents in the 1840s. More recently, innovations in clocks built using quartz, rubidium, and cesium helped pave the way for more reliable and precise clocks. These led to more reliable networks, operations, and automation and played an essential role in the global positioning system (GPS) for accurate location tracking.

However, the inexpensive clocks built into standard computer and networking equipment tend to drift over time. In 1980, computer scientists developed network time protocol (NTP) for achieving millisecond (thousandths of a second) accuracy. Although the protocol supports 200 picoseconds (trillionth of a second) resolution, it loses accuracy owing to varying delays in packet networks, called packet noise.
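For reference, NTP's basic offset and delay estimates come from four timestamps exchanged between client and server. A minimal sketch (with hypothetical timestamps) shows the calculation, and the symmetric-path assumption that packet noise undermines:

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP estimate from four timestamps, all in seconds:
    t0 = client send, t1 = server receive, t2 = server send, t3 = client receive.
    Assumes the outbound and return paths take equal time; queuing delays
    that break that symmetry show up directly as offset error."""
    offset = ((t1 - t0) + (t2 - t3)) / 2  # correction to apply to the client clock
    delay = (t3 - t0) - (t2 - t1)         # round-trip time minus server hold time
    return offset, delay

# Hypothetical exchange: offset is about -4.5 ms, round-trip delay about 29 ms.
print(ntp_offset_and_delay(100.000, 100.010, 100.011, 100.030))
```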

One innovation on top of NTP, called chrony, combines advanced filtering and tracking algorithms to maintain tighter synchronization. Most cloud providers now recommend and support chrony with optimized configuration files for VMs.

Various other techniques, such as precision time protocol (PTP), data center time protocol (DTP), and pulse per second (PPS), achieve tens of nanosecond accuracy but require expensive hardware upgrades. They also sometimes require precisely measured cables in a data center between a mother clock on a central server and daughter clocks on distributed servers.

Clockwork's HUYGENS innovated on NTP with a pure software approach that can be enhanced by existing networking hardware. It uses coded time transmission signals that help to identify and reject bad data caused by queuing delays, random jitter, and network card time stamp noise. It also processes the data using support vector machines that help estimate the one-way propagation times and achieve clock synchronization within 100 nanoseconds. Prior techniques required a round trip, which suffered from differences in each packet's routes.

Another substantial difference is that HUYGENS trades timing data across a mesh to improve resolution instead of the client-server approach used with NTP. The agent on each machine periodically exchanges small packets with five to ten other machines to determine the clock drift for each server or virtual machine in a mesh. The agent, in turn, generates a multiplier for slowing or speeding up the clock as prescribed by the corrections.
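A simplified sketch of that rate-correction idea (not Clockwork's actual algorithm) fits a line to a machine's measured offset samples and derives a frequency multiplier from the slope:

```python
from statistics import linear_regression  # Python 3.10+

def rate_multiplier(sample_times, offsets):
    """Estimate clock drift from (time, offset) samples and return a
    frequency multiplier that cancels it: a positive slope means the
    local clock is gaining time and should be slowed down slightly."""
    slope, _intercept = linear_regression(sample_times, offsets)
    return 1.0 - slope

# Hypothetical samples: the offset grows 8 microseconds per second (8 ppm fast),
# which is in the drift range quoted for uncompensated quartz below.
times = [0, 1, 2, 3, 4]
offsets = [0.0, 8e-6, 16e-6, 24e-6, 32e-6]
print(rate_multiplier(times, offsets))  # ~0.999992
```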

Ideally, all the computers would use the most advanced clocks available, but these are expensive and only practical for special applications. As a result, most modern clocks count the electrical vibrations of quartz crystals, which resonate at about 32,768 times per second (32.768 kHz). Quartz clocks are 100 times more accurate than mechanical approaches and inexpensive, but they can drift 6-10 microseconds per second unless cooled with more expensive hardware.

Atomic clocks monitor the cadence of atoms oscillating between energy states. These clocks are so precise that in 1967, a second was defined by the 9.192 billion oscillations per second of a cesium atom. Rubidium is a cheaper secondary clock that ticks at about 6.8 billion hertz. Current atomic clocks drift a second every hundred million years. However, in practice, they must be replaced every seven years. The current most accurate timekeepers, in labs only for now, use strontium that ticks at over a million billion hertz. These only drift a second in 15 billion years and are used for precise gravity, motion, and magnetic field measurements.

It's important to note that the lack of precision in quartz arises from the lack of temperature controls. Prabhakar says:

If these [quartz] clocks were temperature controlled, you can get down to the parts per billion. So, it'll be some small number of nanoseconds per second. Now, those kinds of clocks and network interface cards could easily be in the few hundreds of dollars to possibly up to $1,000 on their own. And the next level is rubidium clocks, which are three to five grand, and then cesium. As you add these costs to the raw cost of a server, you're piling up the costs across a large data center. So, it'd be nice if we could do it without having to resort to that. And that's more or less what we do.

Understanding virtual infrastructure is a dark art since most cloud providers don't inform you about their physical placement. In theory, at least, each VM and networking connection is similar. In practice, it is not so simple. Clockwork has been developing a suite of tools to help analyze and optimize cloud infrastructure using the new protocol. One research project last year explored the nuances of VM colocation.

A simple analysis might suggest that two VMs running on the same server would have a better connection to each other since packets might be able to flow over the faster internal bus. But Clockwork's research across Google, Amazon, and Microsoft clouds revealed this is not necessarily the case. The fundamental issue is that the virtual networking service built into the hypervisors that run these VMs creates a bottleneck. Sometimes the hypervisor even routes what one would expect to be local networking calls to a co-located VM through acceleration services on the much slower external network rather than over the much faster computer bus.

The problem is compounded when enterprises attempt to colocate multiple VMs running similar apps. For example, a business might have multiple instances of a front-end or business logic app all connected to a back-end database. But performance slows significantly during peak traffic when they are all trying to access the backend server. In one instance, they found that four co-located VMs saw only a quarter of the expected bandwidth as a result of this competition. The fundamental problem, they surmised, was that the cloud providers were over-allocating bandwidth in the belief that each VM would require peak networking at different times.

Although the technology could improve many aspects of distributed networking, Clockwork is focusing on the cloud for now because that is the biggest consolidated market. Prabhakar says:

Cloud is a nice place to sell because it's a place, and it's very big. I'm sure we could improve enterprise LANs and hotel Wi-Fi. But we started with the more consolidated, high-end crowd first and will then go from there.

I never really thought much about time synchronization until I heard about Clockwork a month ago. A few years ago, I was elated that Microsoft started using NTP to automatically tune my computer clock, which always seemed to drift a few minutes per month.

It seems like any protocol or tool that can automatically identify and reduce wasted bandwidth and computer resources could have a long shelf life and provide incredible value. The only concern is that HUYGENS is currently a proprietary protocol, which may limit its broader adoption as opposed to NTP, which became an Internet standard.

It is possible that Google, which bought Prabhakar's prior company and helped develop the technology, may ultimately buy them out and restrict the technology to the Google cloud. This might be a loss for the industry as a whole, but serve as a competitive differentiator for Google's growing cloud ambitions. It could also go the other way by releasing it as an open standard like many other Google innovations.

Original post:
Clockwork discovering wasted bandwidth between the nanoseconds - diginomica

Improving Digital Infrastructure through IoT Connectivity: Digitalising the Physical World with Local and Cloud Infrastructure Integration – Express…

By Sandeep Chellingi, Cloud & Infrastructure leader, Orion Innovation

The world has undergone a profound revolution thanks to the emergence of Cloud technologies. Concepts like Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) have taken root, reshaping how the software industry conceives of resilience, deployments, security, and infrastructure.

In the present landscape, the boundaries between the physical and digital realms have been further obscured by technologies like the Internet of Things (IoT) and Edge computing. Machines are no longer mere CPUs following user manuals; they're now regarded as vital sources of data, offering insights into usage patterns and strategies for enhanced efficiency. Businesses that effectively harness these technologies stand at the precipice of transforming their digital infrastructure. Leveraging IoT empowers businesses to amass an array of data from their physical assets. This data can then be channeled into PaaS services like Microsoft Azure Stream Analytics, extracting meaningful patterns and correlations.

These insights, in turn, can trigger subsequent workflows and generate dashboards that steer business decisions toward efficiency, optimisation, and innovation. Coupling IoT with Edge computing confers a distinct advantage, enabling businesses to analyze data from IoT devices with minimal latency and bolstered on-premise security for sensitive data. This approach also promotes compatibility with legacy devices. Edge computing seamlessly complements a companys data center by ensuring critical workloads are processed closer to data sources and consumers, leading to cost reduction and faster processing times. Additionally, it empowers businesses to make real-time decisions and focuses on low-latency processing use cases.

However, it's vital to acknowledge that while Edge computing accelerates data processing, it doesn't replicate cloud computing's scalability, AI-driven operations, analytics capabilities, or infrastructure cost reduction. It is imperative to integrate on-premise and cloud infrastructures, amplifying processing potential and distributing it across multiple data centers and hyperscale environments to effectively manage and analyze voluminous data. Opening new avenues for data processing inevitably broadens the attack surface, underscoring the need to prioritise robust security measures for both on-premise and cloud infrastructure.

Businesses can secure their cloud environment by leveraging native cloud-based security services like Microsoft Defender for Cloud, offering security posture detection, threat management, and standardised practices to safeguard cloud servers and databases. As for physical on-premise assets in Edge environments, adhering to the same security policies as those applied within on-premise networks is advised: practices such as least-privilege access, regular patch updates, and restricted network access.

Effective data management is another critical facet, especially considering the deluge of data pouring in from physical assets. Without efficient practices and policies in place, capitalising on the full potential of cloud, IoT, and Edge integration becomes challenging. Data must not be processed for its own sake; it should add value to the business. A well-structured data architecture is pivotal; it must acknowledge that stored data must be actionable to derive meaningful insights. The data lakehouse architecture, for instance, facilitates secure data engineering, machine learning, data warehousing, and business intelligence directly on vast data volumes housed in data lakes. This architecture supports unified catalogs, access controls, discovery mechanisms, auditing, and quality management.
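As a hedged illustration of the lakehouse idea, the PySpark sketch below runs SQL and a BI-style aggregation directly over Parquet files sitting in a data lake, without first copying them into a separate warehouse; the bucket path and column names are hypothetical:

```python
# pip install pyspark  -- a minimal sketch, assuming IoT events land in the
# lake as Parquet files under the (hypothetical) path below.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

events = spark.read.parquet("s3://example-lake/iot/events/")  # hypothetical bucket
events.createOrReplaceTempView("events")

# Analytics run directly on lake storage; a full lakehouse adds a unified
# catalog, access controls and quality management on top of this.
spark.sql("""
    SELECT device_id, avg(temperature) AS avg_temp
    FROM events
    GROUP BY device_id
    ORDER BY avg_temp DESC
    LIMIT 10
""").show()
```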

To truly enhance digital infrastructure in response to the ever-evolving landscape, businesses must embrace IoT and Edge computing advancements. Over the last decade, these advancements and the integration of cloud technology have formed a comprehensive solution for intelligent data mining, heightened operational efficiency, and refined customer experiences through insight into user interactions and behaviors.

The key lies in deploying an infrastructure that spans multi-cloud and on-premises environments, selecting the appropriate technologies, and distinguishing between datasets suitable for Edge and those more suited for the cloud. All of this hinges on enforcing data management and security policies within computational boundaries to extract secure and pertinent insights for business growth.

Read more:
Improving Digital Infrastructure through IoT Connectivity: Digitalising the Physical World with Local and Cloud Infrastructure Integration - Express...

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost … – The Information

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That's far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

The billion-dollar revenue figure implies that the Microsoft-backed company, which was valued on paper at $27 billion when investors bought stock from existing shareholders earlier this year, is generating more than $80 million in revenue per month. OpenAI generated just $28 million in revenue last year before it started charging for its groundbreaking chatbot, ChatGPT. The rapid growth in revenue suggests app developers and companies, including secretive ones like Jane Street, a Wall Street firm, are increasingly finding ways to use OpenAI's conversational text technology to make money or save on costs. Microsoft, Google and countless other businesses trying to make money from the same technology are closely watching OpenAI's growth.

See more here:
OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost ... - The Information

The pursuit of digital sovereignty for a trusted, integrated internet – GovInsider

GovInsider speaks with data protection experts on how the implementation of digital sovereignty practices can unite rather than fragment the internet, and what that could look like in practice.

The physical world we live in today is demarcated by national borders drawn over the course of time. The digital world today seems to be following in its footsteps.

Increasingly, countries around the world have been adopting numerous regulations that restrict data flow. The Information Technology & Innovation Foundation found that data localisation measures, which stipulate how data should be processed, stored, and perhaps confined within a geographic location, are rapidly increasing. From 2017 to 2021, regulations surrounding data localisation more than doubled.

The shift is often fuelled by geopolitical concerns, Welland Chu, Alliance Director for APAC at French multinational corporation Thales and the author of an ISACA article focusing on digital sovereignty, told GovInsider in a video interview.

A 2022 GovInsider article referenced Chinese-owned social media platform TikTok's plan to move the private data of American citizens to cloud servers in the United States to prevent the risk of foreign governments accessing the data. This shift is also observed in Asia, with Vietnam most recently imposing data localisation regulations in late 2022.

The same GovInsider article highlighted potential implications on economic growth due to increased costs on companies and the impediment of free trade.

A report funded by the NUS Centre for Trusted Internet and Community purports that Singapore, as a highly connected nation, may suffer from such practices given its reliance on the free flow of data.

Additionally, data localisation could hinder the performance of technologies like artificial intelligence and cloud computing, which work best with access to a broad set of free-flowing data.

The cloud, for example, is an enabler, says Francesco Bonfiglio, CEO of Gaia-X, in an online interview. Gaia-X is a non-profit which aims to promote digital sovereignty and enable the creation of data spaces through the development of trusted platforms and standards.

[The cloud is] a motorway that allows you to openly share data, make available applications, increase the power and performance of those applications, and make the services and products more competitive and meaningful for citizens, he explains.

But organisations are hesitant to openly share this data as there is a lack of platforms that are uniformly trusted on the market today, Bonfiglio says. He cited how adoption rates of cloud technology remain low in Europe.

Hindering innovation will only slow down the development of more efficient and citizen-centric public services. Bonfiglio gave the example of healthcare. Currently, healthcare research is often limited to specific networks of hospitals or research institutions with pre-arranged agreements on how data sharing will be done. But in order to build new drugs and vaccines, it is ideal to collect as much data as possible.

If we could freely create an open and distributed data space to collect all the data to analyse them with the power of genomics, and identify new drugs we could be faster in developing drugs, he says.

But most of the core data of organisations is not on a cloud as they fear making data available on a technology where they have no control, Bonfiglio says. This is why he is promoting the broader concept of digital sovereignty as opposed to simply data localisation.

Digital sovereignty stipulates that information is subject to the rules of its originating jurisdiction (regardless of its actual location), according to a Gartner report published in December 2022. This means that even if data is exported and stored in another country, the use of and access to the data is governed by regulations of its home country.

An example of this is demonstrated by Article 45 of the EU's General Data Protection Regulation (GDPR), which details that personal data can only be transferred if the importing country or organisation has data protection regulations and measures equivalent to those of the EU.

This approach is increasingly in demand today, according to Chu. He cited a survey by the International Data Corporation which predicts that 40 per cent of major enterprises will mandate data sovereignty from their cloud service providers by 2025.

One way organisations can have digital sovereignty is through technical controls which give users the ability to retain control of their data, wherever it is on the cloud, Chu suggests.

Most data on the cloud today is encrypted, which ensures that the data is rendered unintelligible to those who do not possess the cryptographic keys. But if the cloud operators are the ones in control of these keys, they have the ability to decrypt the data as they wish, Chu says. Instead, it should be the data exporters, that is, the organisations themselves rather than the cloud service providers, who hold the cryptographic keys. (Note that the terms organisations, customers, and users are used interchangeably in this article. They refer to the entities from whom the sensitive data originate.)

He suggests that this can be done through three methods: Bring Your Own Key, Hold Your Own Key and Bring Your Own Encryption.

First, Bring Your Own Key means that customers are able to generate their own cryptographic key and import it to the cloud to decrypt their data if they require access.

Meanwhile, Hold Your Own Key, such as that offered by AWS or Google, provides an additional layer of security, as it means that the cryptographic keys are always in the hands of the customer. If the cloud service provider or a third party requires access to the data, they will first need to issue a request, which the customer can review and grant based on the context of the request.

Finally, Bring Your Own Encryption means that cloud users encrypt their data before it even enters the cloud and upload only the encrypted data.
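A minimal sketch of the Bring Your Own Encryption pattern, assuming the Python cryptography package and a hypothetical upload call, shows that only ciphertext ever reaches the cloud while the key stays with the data owner:

```python
# pip install cryptography  -- a sketch of BYOE, not any provider's SDK
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key is generated and kept on-premises; it never leaves the organisation.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"customer record: sensitive"
nonce = os.urandom(12)  # 96-bit nonce, must be unique per message
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# upload_to_cloud(nonce + ciphertext)  # hypothetical upload call

# Later, only the data owner (never the cloud operator) can decrypt:
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```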

These approaches ensure that cloud operators or third parties like foreign governments are unable to access and decrypt data on the cloud without the prior consent of the users, who ultimately retain control of their own data, Chu explains.

The importance of this has previously been recognised by the Court of Justice of the European Union (CJEU). In 2020, the CJEU ruled that appropriate safeguards, enforceable rights and effective legal remedies must be in place before data are transferred from Europe to the US, Chu wrote in the article.

Continued here:
The pursuit of digital sovereignty for a trusted, integrated internet - GovInsider

SaaS series – Pipedrive: Devs, APIs & the connectivity-efficiency … – ComputerWeekly.com

This is a guest post for the Computer Weekly Developer Network written by Siim Kibus, engineering manager at Pipedrive, a company known for its sales and Customer Relationship Management (CRM) platform technology.

Kibus writes in full as follows...

APIs as a product feature or part of a product offering are critically important for developers creating strong services for demanding end users.

Good APIs and developer relations are entwined. Savvy customers look at both when choosing a new SaaS offering for their tech stack, and it's often a factor in churn if problems or a lack of connections are not addressed.

In order to establish a consistent API look and feel, APIs should ideally be design-first, with design principles, documentation and processes to ensure a quality result. Developers end up with better results when they specifically design with safety, efficacy and (resource) efficiency in mind.

With that in mind, it can be tempting to build APIs for your own UI first and then release them to the public, but a safer bet is to strictly separate the two from the start.

Without the right integrations, API or otherwise, a percentage of potential customers might not choose your service.

Cloud connections enable users to overcome product limitations or supercharge their tech stack and get more done. For that reason, it's vital to stay close to your customers in more ways than one, for example, for performance reasons, by deploying to multiple AWS regions or taking advantage of content delivery networks like Cloudflare. Yes, having your data across multiple locations will add complexity, but unless your customer base is located in one region, it will perform better. It also makes it easier to deal with data privacy legislation, like GDPR.

Additionally, services like Cloudflare provide DDoS protection, which, when you need it, you needed it yesterday. The same goes for rate limiting, which can come via the aforementioned external tooling, a self-managed API gateway, or API tokens. The latter are easier to get going with for smaller businesses or teams but come with their own security and usability issues.
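For illustration, a self-managed gateway could enforce per-token rate limiting with a classic token bucket; this is a generic sketch under stated assumptions, not Pipedrive's or Cloudflare's implementation:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should answer HTTP 429 Too Many Requests

buckets = {}  # api_token -> TokenBucket, one bucket per caller

def check(api_token: str) -> bool:
    bucket = buckets.setdefault(api_token, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```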

To help the dev team manage all that connectivity complexity, employ an infosec team or continuously train your devs to be aware of the major threats and make sure your work is audited, such as through what we call mission landing checklists, AKA release management checklists.

Beyond the API, theres a whole landscape of cloud connections to play with and deploying the right solution right sizes the software development and engineering workflow.

Webhooks can be beneficial both for the receiver and the provider, with less resource spent on serving polling requests, which scale up quickly. In a SaaS CRM example, a sales team's deal win can be pushed through to the relevant users once the data has been input. The server does not need to be constantly pinged for an update.
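A minimal receiver for such a webhook might look like the Flask sketch below; the header name, signature scheme and payload fields are hypothetical stand-ins for whatever your provider documents:

```python
# pip install flask  -- a sketch of a webhook receiver, not a specific CRM's API
import hashlib
import hmac

from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = b"rotate-me"  # agreed with the webhook provider out of band

@app.post("/webhooks/deals")
def deal_webhook():
    # Verify the payload signature so forged events are rejected.
    expected = hmac.new(SHARED_SECRET, request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request.headers.get("X-Signature", "")):
        abort(401)

    event = request.get_json()
    if event.get("status") == "won":         # hypothetical payload field
        notify_sales_team(event["deal_id"])  # hypothetical downstream call
    return "", 204

def notify_sales_team(deal_id):
    print(f"deal {deal_id} won!")
```

The push model inverts the polling cost: the provider sends one request per event instead of answering thousands of "anything new?" polls.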

Consider APIs to be a product feature, not a silver bullet, as their use is context-dependent.

Take Twilio, where APIs are the main product; but for SaaS providers, again taking CRM as an example, you're likely to first build the UI and progress to providing APIs only after you have grown big enough that connectors can further scale your reach.

Siim Kibus, engineering manager at Pipedrive.

Always monitor actual performance for resource efficiency. For example, if you give the user too much data, it slows down performance, so keep watch on how requests scale. Keep utilisation high by ensuring users see value. Ideally, your API should offer the same features as the UI front end. Additionally, a great ecosystem lets people know about the connector, so maintain excellent dev relations with the wider community. It's how you encourage others to learn how to use it.

Just starting?

First focus on security and scalability, even where this is contradictory! Trust is the most valuable currency. If users don't trust that their data is secure, then they won't do business with you. Once that's in great shape, deploy cloud services which can supply your services to users even if demand rises quickly. No one wants to be spinning up new servers in a hurry.

See the article here:
SaaS series - Pipedrive: Devs, APIs & the connectivity-efficiency ... - ComputerWeekly.com