
5 things about AI you may have missed today: Zomato launches AI chatbot, Israel's AI-powered plane and more – HT Tech

AI Roundup: Zomato launched its own AI chatbot today to assist users in placing orders, while a Google DeepMind co-founder argued that the US should grant access to Nvidia's AI chips only to buyers who agree to use the technology ethically. In a separate development, the Israeli Defense Ministry unveiled a surveillance plane equipped with AI-powered sensors.

All this, and more, in today's AI roundup.

Keeping up with the latest trends, Zomato announced the launch of its AI chatbot on Friday. The chatbot, called Zomato AI, will assist users in placing orders. In a blog post, Zomato said that one of the chatbot's standout features is its multiple-agent framework, which equips it with a variety of prompts for different tasks. Zomato AI will initially be rolled out exclusively to Zomato Gold members.

In a move expected to strengthen its defense capabilities, the Israeli Defense Ministry unveiled a new surveillance aircraft equipped with AI-powered sensors. Israel Aerospace Industries (IAI) installed C4I, a high-tech and secure communication system, along with sensors on a Gulfstream G550 jet. According to a report by Fox News Digital, Brig. Gen. Yaniv Rotem, head of military research and development in the DDR&D of the Ministry of Defense, said, "The use of Artificial Intelligence (AI) technology will enable an efficient and automated data processing system, which will produce actionable intelligence in real-time, enhancing the effectiveness of IDF operational activities."

With controversies surrounding AI and its regulation, Mustafa Suleyman, the co-founder of Google DeepMind, said that the US should grant access to Nvidia's AI chips only to buyers who agree to use the technology ethically. Speaking to the Financial Times on Friday, Suleyman said, "The US should mandate that any consumer of Nvidia chips signs up to at least the voluntary commitments and more likely, more than that."

Chinese technology companies such as Alibaba and Huawei are seeking approval from the Cyberspace Administration of China (CAC) for deepfake models, according to a list published by the cyberspace regulator on Friday. As per a Reuters report, the tech companies are looking to obtain approval for adhering to the regulations on deepfakes set by the CAC in December.

While AI has been deemed to play a major role in crime fighting, especially when combined with facial recognition technology, UK police say that some of their officers, known as super-recognizers, are even better than AI because they never forget a face. As per an AFP report, Tina Wallace, a surveillance expert with Thames Valley Police, highlighted that only one percent of the population has this ability to remember faces. These officers are now being deployed outside nightclubs to identify sexual assault perpetrators.

The rest is here:

5 things about AI you may have missed today: Zomato launches AI chatbot, Israel's AI-powered plane and more - HT Tech

Read More..

Unlocking the potential of IoT systems: The role of Deep Learning … – Innovation News Network

The Internet of Things (IoT), a network of interconnected devices equipped with sensors and software, has revolutionised how we interact with the world around us, empowering us to collect and analyse data like never before.

As technology advances and becomes more accessible, more objects are equipped with connectivity and sensor capabilities, making them part of the IoT ecosystem. The number of active IoT devices is expected to reach 29.7 billion by 2027, marking a significant surge from the 3.6 billion devices recorded in 2015. This exponential growth creates tremendous demand for solutions that mitigate the safety and computational challenges of IoT applications. In particular, industrial IoT, automotive, and smart homes are three main areas with specific requirements, but they share a common need for efficient IoT systems to enable optimal functionality and performance.

Fig. 1: Overview of VEDLIoT technology layers and components

Increasing the efficiency of IoT systems and unlocking their potential can be achieved through Artificial Intelligence (AI), creating AIoT architectures. By utilising sophisticated algorithms and Machine Learning techniques, AI empowers IoT systems to make intelligent decisions, process vast amounts of data, and extract valuable insights. For instance, this integration drives operational optimisation in industrial IoT, facilitates advanced autonomous vehicles, and offers intelligent energy management and personalised experiences in smart homes.

Among the different AI algorithms, Deep Learning, which leverages artificial neural networks, is well suited to IoT systems for several reasons. One of the primary reasons is its ability to learn and extract features automatically from raw sensor data. This is particularly valuable in IoT applications where the data can be unstructured, noisy, or have complex relationships. Additionally, Deep Learning enables IoT applications to handle real-time and streaming data efficiently. This ability allows for continuous analysis and decision-making, which is crucial in time-sensitive applications such as real-time monitoring, predictive maintenance, or autonomous control systems.

Despite the numerous advantages of Deep Learning for IoT systems, its implementation has inherent challenges, such as efficiency and safety, that must be addressed to fully leverage its potential. The Very Efficient Deep Learning in IoT (VEDLIoT) project aims to solve these challenges.

A high-level overview of the different VEDLIoT components is given in Fig. 1. The VEDLIoT project integrates Deep Learning with IoT to accelerate applications and optimise the energy efficiency of IoT systems. VEDLIoT achieves these objectives through several key components.

VEDLIoT concentrates on some use cases, such as demand-oriented interaction methods in smart homes (see Fig. 2), industrial IoT applications like Motor Condition Classification and Arc Detection, and the Pedestrian Automatic Emergency Braking (PAEB) system in the automotive sector (see Fig. 3). VEDLIoT systematically optimises such use cases through a bottom-up approach by employing requirement engineering and verification techniques, as shown in Fig. 1. The project combines expert-level knowledge from diverse domains to create a robust middleware that facilitates development through testing, benchmarking, and deployment frameworks, ultimately ensuring the optimisation and effectiveness of Deep Learning algorithms within IoT systems. In the following sections, we briefly present each component of the VEDLIoT project.

Fig. 2: Smart mirror demonstrator developed as part of the smart home application in VEDLIoT

Various accelerators are available for a wide range of applications, from small embedded systems with power budgets in the milliwatt range to high-power cloud platforms. These accelerators are categorised into three main groups based on their peak performance values, as shown in Fig. 4.

The first group is the ultra-low power category (< 3 W), which consists of energy-efficient microcontroller-style cores combined with compact accelerators for specific Deep Learning functions. These accelerators are designed for IoT applications and offer simple interfaces for easy integration. Some accelerators in this category provide camera or audio interfaces, enabling efficient vision or sound processing tasks. They may offer a generic USB interface, allowing them to function as accelerator devices attached to a host processor. These ultra-low power accelerators are ideal for IoT applications where energy efficiency and compactness are key considerations, providing optimised performance for Deep Learning tasks without excessive power consumption.

The VEDLIoT predictive maintenance use case is a good example and makes use of an ultra-low power accelerator. One of the most important design criteria is low power consumption, as the device is a small, battery-powered box that can be installed externally on any electric motor and should monitor the motor for at least three years without a battery change.

Fig. 4: Performance overview of AI accelerators

The next category is the low-power group (3 W to 35 W), which targets a broad range of automation and automotive applications. These accelerators feature high-speed interfaces for external memories and peripherals, and efficient communication with other processing devices or host systems, for example over PCIe. They support modular and microserver-based approaches and provide compatibility with various platforms. Additionally, many accelerators in this category incorporate powerful application processors capable of running full Linux operating systems, allowing for flexible software development and integration. Some devices in this category include dedicated application-specific integrated circuits (ASICs), while others feature NVIDIA's embedded graphics processing units (GPUs). These accelerators balance power efficiency and processing capabilities, making them well-suited for various compute-intensive tasks in the automation and automotive domains.

The high-performance category (> 35 W) of accelerators is designed for demanding inference and training scenarios in edge and cloud servers. These accelerators offer exceptional processing power, making them suitable for computationally-intensive tasks. They are commonly deployed as PCIe extension cards and provide high-speed interfaces for efficient data transfer. The devices in this category have high thermal design powers (TDPs), indicating their ability to handle significant workloads. These accelerators include dedicated ASICs, known for their specialised performance in Deep Learning tasks. They deliver accelerated processing capabilities, enabling faster inference and training times. Some consumer-class GPUs may also be included in benchmarking comparisons to provide a broader perspective.

Selecting the proper accelerator from the wide range of options mentioned above is not straightforward. However, VEDLIoT takes on this crucial responsibility by conducting thorough assessments and evaluations of various architectures, including GPUs, field-programmable gate arrays (FPGAs), and ASICs. The project carefully examines these accelerators' performance and energy consumption to ensure their suitability for specific use cases. By leveraging its expertise and comprehensive evaluation process, VEDLIoT guides the selection of Deep Learning accelerators within the project and in the broader landscape of IoT and Deep Learning applications.

Trained Deep Learning models contain redundancy, so they can sometimes be compressed by a factor of up to 49 with negligible accuracy loss. Although many works address such compression, most results show theoretical speed-ups that only sometimes translate into more efficient hardware execution, since they do not consider the target hardware. On the other hand, the process of deploying Deep Learning models on edge devices involves several steps, such as training, optimisation, compilation, and runtime. Although various frameworks are available for these steps, their interoperability can vary, resulting in different outcomes and performance levels. VEDLIoT addresses these challenges through hardware-aware model optimisation using ONNX, an open format for representing Machine Learning models, ensuring compatibility with the current open ecosystem. Additionally, Renode, an open-source simulation framework, serves as a functional simulator for complex heterogeneous systems, allowing for the simulation of complete System-on-Chips (SoCs) and the execution of the same software used on hardware.
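As a rough illustration of what such an optimisation flow can look like (this is generic PyTorch/ONNX tooling, not VEDLIoT's actual toolchain; the model, input shape, and file names are placeholders), a trained model is exported to ONNX and then shrunk with post-training dynamic quantisation:

```python
# Illustrative only: export a trained PyTorch model to ONNX and quantise it.
# The model, input shape, and file names are placeholders, not VEDLIoT artefacts.
import torch
import torch.nn as nn
from onnxruntime.quantization import quantize_dynamic, QuantType

model = nn.Sequential(                 # stand-in for a trained sensor-data model
    nn.Conv1d(3, 16, kernel_size=5),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 4),
)
model.eval()

dummy_input = torch.randn(1, 3, 128)   # batch of one 3-channel sensor window
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)

# Post-training dynamic quantisation: weights stored as int8, activations
# quantised at runtime, shrinking the model for edge deployment.
quantize_dynamic("model.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)
```

The resulting int8 model would then be benchmarked on the target accelerator, since theoretical compression only pays off if the hardware executes it efficiently.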

Furthermore, VEDLIoT uses the EmbeDL toolkit to optimise Deep Learning models. The EmbeDL toolkit offers comprehensive tools and techniques to optimise Deep Learning models for efficient deployment on resource-constrained devices. By considering hardware-specific constraints and characteristics, the toolkit enables developers to compress, quantise, prune, and optimise models while minimising resource utilisation and maintaining high inference accuracy. EmbeDL focuses on hardware-aware optimisation and ensures that Deep Learning models can be effectively deployed on edge devices and IoT devices, unlocking the potential for intelligent applications in various domains. With EmbeDL, developers can achieve superior performance, faster inference, and improved energy efficiency, making it an essential resource for those seeking to maximise the potential of Deep Learning in real-world applications.

Since VEDLIoT aims to combine Deep Learning with IoT systems, ensuring security and safety becomes crucial. To address these aspects at its core, the project leverages trusted execution environments (TEEs), such as Intel SGX and ARM TrustZone, along with open-source runtimes like WebAssembly. TEEs provide secure environments that isolate critical software components and protect against unauthorised access and tampering. By using WebAssembly, VEDLIoT offers a common execution environment throughout the entire continuum, from IoT, through the edge, and into the cloud.

In the context of TEEs, VEDLIoT introduces Twine and WaTZ as trusted runtimes for Intel's SGX and ARM's TrustZone, respectively. These runtimes simplify software creation within secure environments by leveraging WebAssembly and its modular interface. This integration bridges the gap between trusted execution environments and AIoT, helping to seamlessly integrate Deep Learning frameworks. Within TEEs using WebAssembly, VEDLIoT achieves hardware-independent, robust protection against malicious interference, preserving the confidentiality of both data and Deep Learning models. This integration highlights VEDLIoT's commitment to securing critical software components, enabling secure development, and facilitating privacy-enhanced AIoT applications in cloud-edge environments.
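The portability argument for WebAssembly can be shown with a minimal sketch: the same module runs unchanged on an IoT gateway, an edge node, or a cloud server. This assumes the public wasmtime Python bindings and says nothing about the SGX or TrustZone integration in Twine and WaTZ:

```python
# Generic illustration: the same WebAssembly bytecode runs unchanged on any
# host with a runtime. This is plain wasmtime usage, not the Twine/WaTZ TEEs.
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile once per host
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))                 # -> 5 on IoT, edge, or cloud hosts alike
```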

Fig. 5: Requirements framework showing the various architectural views

Additionally, VEDLIoT employs a specialised architectural framework, as shown in Fig. 5, that helps to define, synchronise and co-ordinate requirements and specifications of AI components and traditional IoT system elements. This framework consists of various architectural views that address the system's specific design concerns and quality aspects, including security and ethical considerations. By using these architecture views as templates and filling them out, correspondences and dependencies can be identified between the quality-defining architecture views and other design decisions, such as AI model construction, data selection, and communication architecture. This holistic approach ensures that security and ethical aspects are seamlessly integrated into the overall system design, reinforcing VEDLIoT's commitment to robustness and addressing emerging challenges in AI-enabled IoT systems.

Traditional hardware platforms support only homogeneous IoT systems. However, RECS, an AI-enabled microserver hardware platform, allows for the seamless integration of diverse technologies. Thus, it enables fine-tuning of the platform towards specific applications, providing a comprehensive cloud-to-edge platform. All RECS variants share the same design paradigm to be a densely-coupled, highly-integrated communication infrastructure. For the varying RECS variants, different microserver sizes are used, from credit card size to tablet size. This allows customers to choose the best variant for each use case and scenario. Fig. 6 gives an overview of the RECS variants.

The three different RECS platforms are suitable for cloud/data centre (RECS|Box), edge (t.RECS) and IoT usage (u.RECS). All RECS servers use industry-standard microservers, which are exchangeable and allow for use of the latest technology just by changing a microserver. Hardware providers of these microservers offer a wide spectrum of different computing architectures like Intel, AMD and ARM CPUs, FPGAs and combinations of a CPU with an embedded GPU or AI accelerator.

Fig. 6: Overview of heterogeneous hardware platforms

VEDLIoT addresses the challenge of bringing Deep Learning to IoT devices with limited computing performance and low-power budgets. The VEDLIoT AIoT hardware platform provides optimised hardware components and additional accelerators for IoT applications covering the entire spectrum, from embedded via edge to the cloud. On the other hand, a powerful middleware is employed to ease the programming, testing, and deployment of neural networks in heterogeneous hardware. New methodologies for requirement engineering, coupled with safety and security concepts, are incorporated throughout the complete framework. The concepts are tested and driven by challenging use cases in key industry sectors like automotive, automation, and smart homes.

The VEDLIoT project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957197.

Please note, this article will also appear in the fifteenth edition of our quarterly publication.

Read more from the original source:
Unlocking the potential of IoT systems: The role of Deep Learning ... - Innovation News Network

Read More..

Clockwork discovering wasted bandwidth between the nanoseconds – diginomica

If time is money, then what is the value of a nanosecond (billionth of a second)? Well, if you are building a large network of distributed applications, it could mean a ten percent improvement in performance or a ten percent reduction in cost for the same workload. It could also mean orders of magnitude fewer errors in transaction processing systems and databases.

At least that is according to Balaji Prabhakar, VMware Founders Professor of Computer Science at Stanford University, whose research team helped pioneer more efficient approaches for synchronizing clocks in distributed systems. He later co-founded and is CEO of TickTock, which became Clockwork, to commercialize the new technology. He also previously co-founded Urban Engines, which developed algorithms for congestion tracking and was acquired by Google in 2016. He has been working on designing algorithms to improve network performance for decades.

The company initially focused on improving the fairness of market placements in financial exchanges. It has since started building out a suite of tools to synchronize cloud applications and enterprise networking infrastructure more broadly. Accurate clocks can help networks and applications to improve consistency, event ordering, and scheduling of tasks and resources with more precise timing.

This is a big advantage over the quartz clocks underpinning most computer and network timing, which can drift significantly enough to confound time-stamping processes in networks and transaction processing. Traditional network-based synchronization can help reduce this drift but suffers from path noise created by fluctuations in switching times, asymmetries in path lengths, and clock time stamp noise.

Prabhakar says some customers are interested in cost conservation and want to right-size deployments and switch off virtual machines they no longer need. He notes:

"So, if they save 10% or more, and we charge them just 2%, the remaining is just pure savings."

Others want a more performant infrastructure. Clockwork did one case study that found they could get seventy VMs to do the work of a hundred by running apps and infrastructure more efficiently.

It is important to point out that there are two levels of improvement in their new approach. The new protocols can achieve ten-nanosecond accuracy with direct access to networking hardware. In cloud scenarios mediated by virtual machines, the protocol can achieve a few microseconds of accuracy. However, that's still good enough to satisfy the new European MiFID II requirements for high-frequency trading and many other use cases. It is also helpful that the clock sync agent requires less than one percent of a single-core CPU and less than 0.04% of the slowest cloud link while saving 10% of bandwidth.

Perhaps the most important thing to consider is the impact it could have on the trend toward clockless design in distributed systems. Clockless designs help scale up new application and database architectures but make basic operations like consistency, event ordering, and snapshotting difficult.

The more accurate clock sync technology is already showing promise in improving tracing tools, mitigating network congestion, and improving the performance of distributed databases like CockroachDB. Over the last couple of years, Clockwork has been building out supporting infrastructure around the new protocol called HUYGENS to improve cloud congestion control, create digital twins of virtual machine placement, and improve distributed database performance by ten to a hundred times. It is named after Christiaan Huygens, who invented the pendulum clock in the 1600s, which became the most accurate timekeeper until the commercialization of quartz clocks in the late 1960s.

The impact of synchronized time is increasingly important as the world transitions from dedicated networks and compute to various forms of statistical multiplexing. Networks have been transitioning from dedicated circuits using approaches like circuit switching and asynchronous transfer mode (ATM), which delivered high-level performance for each user but wasted unused bandwidth. As a result, the industry has been migrating to TCP/IP and wide-area Ethernet, which do a better job of sharing unused bandwidth but can get clogged up, causing delays when the load gets too high.

A similar thing has been happening with compute. Legacy enterprise systems built on dedicated hardware guarantee high performance. However, these struggle to reallocate compute across multiple applications with varying usage requirements or scale out across multiple servers. The move towards virtual machines, cloud architectures, and now containers helps enterprises gain the same economies for compute that TCP/IP brought to networking.

However, problems with statistical balancing approaches arise when too many users or apps hit the edges of performance. Packets get lost, and transactions don't get processed, resulting in increased delays and additional overhead as services try to make up for lost time with retries. More precise time synchronization helps networks, apps, and micro-services reach their peak load and then gracefully back down when required without wasting resources on packet retries or additional transaction processing.

Referring to the transition from dedicated compute and networks to modern approaches, Prabhakar says:

"The trade-off cost us. In communication, we went from deterministic transit times to best-effort service. And computing went from centralized control of dedicated resources to highly variable runtimes and making us coordinate through consensus protocols."

To contextualize the field, the synchronization of mechanical clocks played an important role in improving efficiency and reducing railroad accidents in the 1840s. More recently, innovations in clocks built using quartz, rubidium, and cesium helped pave the way for more reliable and precise clocks. These led to more reliable networks, operations, and automation and played an essential role in the global positioning system (GPS) for accurate location tracking.

However, the inexpensive clocks built into standard computer and networking equipment tend to drift over time. In 1980, computer scientists developed the network time protocol (NTP) to achieve millisecond (thousandth of a second) accuracy. Although the protocol supports 200-picosecond (trillionths of a second) resolution, it loses accuracy owing to varying delays in packet networks, called packet noise.
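For reference, the offset and delay that an NTP-style request/response exchange estimates can be written down directly from its four timestamps. The sketch below shows the textbook formulas (a generic illustration, not any particular implementation); packet noise enters through the assumption that the two network directions are symmetric:

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP estimate from four timestamps (all in seconds):
    t0 = client send, t1 = server receive, t2 = server send, t3 = client receive.
    """
    delay = (t3 - t0) - (t2 - t1)          # round-trip time minus server processing
    offset = ((t1 - t0) + (t2 - t3)) / 2   # assumes symmetric one-way delays
    return offset, delay

# Example: server clock ~5 ms ahead of the client, 20 ms round trip
print(ntp_offset_and_delay(0.000, 0.015, 0.016, 0.021))
```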

One innovation on top of NTP, called chrony, combines advanced filtering and tracking algorithms to maintain tighter synchronization. Most cloud providers now recommend and support chrony with optimized configuration files for VMs.

Various other techniques, such as precision time protocol (PTP), data center time protocol (DTP), and pulse per second (PPS), achieve tens of nanosecond accuracy but require expensive hardware upgrades. They also sometimes require precisely measured cables in a data center between a mother clock on a central server and daughter clocks on distributed servers.

Clockwork's HUYGENS innovated on NTP with a pure software approach that can be enhanced by existing networking hardware. It uses coded time transmission signals that help to identify and reject bad data caused by queuing delays, random jitter, and network card time stamp noise. It also processes the data using support vector machines that help estimate the one-way propagation times and achieve clock synchronization within 100 nanoseconds. Prior techniques required a round trip, which suffered from differences in each packet's route.

Another substantial difference is that HUYGENS trades timing data across a mesh to improve resolution instead of the client-server approach used with NTP. The agent on each machine periodically exchanges small packets with five to ten other machines to determine the clock drift for each server or virtual machine in a mesh. The agent, in turn, generates a multiplier for slowing or speeding up the clock as prescribed by the corrections.
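A toy version of that correction step is sketched below: repeated offset samples against peers are fitted to a line, and the slope (the drift rate) becomes a rate multiplier for the local clock. This illustrates the idea only; Clockwork's actual pipeline additionally filters bad samples and uses support vector machines:

```python
def drift_multiplier(samples):
    """samples: list of (local_time_s, measured_offset_s) pairs against peers.
    Fit offset = a + b*t by least squares; b is the drift rate in seconds per
    second. Returns a multiplier to apply to the local clock rate."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_o = sum(o for _, o in samples) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    drift = num / den              # e.g. 8e-6 means gaining 8 microseconds per second
    return 1.0 - drift             # slow the clock down by the estimated drift

# Local clock gaining ~8 microseconds per second relative to its peers:
print(drift_multiplier([(0, 0.0), (10, 8e-5), (20, 1.6e-4), (30, 2.4e-4)]))
```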

Ideally, all the computers would use the most advanced clocks available, but these are expensive and only practical for special applications. As a result, most modern clocks count the electrical vibrations of quartz crystals that resonate at about 32.8 thousand cycles per second (32.768 kHz). These are 100 times more accurate than mechanical approaches and inexpensive, but they can drift 6-10 microseconds per second unless temperature-controlled with more expensive hardware.

Atomic clocks monitor the cadence of atoms oscillating between energy states. These clocks are so precise that in 1967, a second was defined by the 9.192 billion oscillations per second of a cesium atom. Rubidium is a cheaper secondary clock that ticks at about 6.8 billion hertz. Current atomic clocks drift a second every hundred million years. However, in practice, they must be replaced every seven years. The current most accurate timekeepers, in labs only for now, use strontium that ticks at over a million billion hertz. These only drift a second in 15 billion years and are used for precise gravity, motion, and magnetic field measurements.

It's important to note that the lack of precision in quartz arises from the lack of temperature controls. Prabhakar says:

"If these [quartz] clocks were temperature controlled, you can get down to the parts per billion. So, it'll be some small number of nanoseconds per second. Now, those kinds of clocks and network interface cards could easily be in the few hundreds of dollars to possibly up to $1,000 on their own. And the next level is rubidium clocks, which are three to five grand, and then cesium. As you add these costs to the raw cost of a server, you're piling up the costs across a large data center. So, it'd be nice if we could do it without having to resort to that. And that's more or less what we do."

Understanding virtual infrastructure is a dark art, since most cloud providers don't inform you about their physical placement. In theory, at least, each VM and networking connection is similar. In practice, it is not so simple. Clockwork has been developing a suite of tools to help analyze and optimize cloud infrastructure using the new protocol. One research project last year explored the nuances of VM colocation.

A simple analysis might suggest that two VMs running on the same server would have a better connection to each other, since packets might be able to flow over the faster internal bus. But Clockwork's research across the Google, Amazon, and Microsoft clouds revealed this is not necessarily the case. The fundamental issue is that the virtual networking service built into the hypervisors that run these VMs creates a bottleneck. Sometimes, the hypervisor even routes what one would expect to be local networking calls to a co-located VM through acceleration services over the much slower external network rather than over the much faster computer bus.

The problem is compounded when enterprises attempt to co-locate multiple VMs running similar apps. For example, a business might have multiple instances of a front-end or business-logic app all connected to a back-end database. But performance slows significantly during peak traffic when they are all trying to access the back-end server. In one instance, they found that four co-located VMs saw only a quarter of the expected bandwidth because of this competition. The fundamental problem, they surmised, was that the cloud providers were oversubscribing bandwidth in the belief that each VM would require peak networking at different times.

Although the technology could improve many aspects of distributed networking, Clockwork is focusing on the cloud for now because that is the biggest consolidated market. Prabhakar says:

"Cloud is a nice place to sell because it's a place, and it's very big. I'm sure we could improve enterprise LANs and hotel Wi-Fi. But we started with the more consolidated, high-end crowd first and will then go from there."

I never really thought much about time synchronization until I heard about Clockwork a month ago. A few years ago, I was elated that Microsoft started using NTP to automatically tune my computer clock, which always seemed to drift a few minutes per month.

It seems like any protocol or tool that can automatically identify and reduce wasted bandwidth and computer resources could have a long shelf life and provide incredible value. The only concern is that HUYGENS is currently a proprietary protocol, which may limit its broader adoption as opposed to NTP, which became an Internet standard.

It is possible that Google, which bought Prabhakar's prior company and helped develop the technology, may ultimately buy them out and restrict the technology to the Google cloud. This might be a loss for the industry as a whole, but serve as a competitive differentiator for Google's growing cloud ambitions. It could also go the other way, with Google releasing it as an open standard like many other Google innovations.

Original post:
Clockwork discovering wasted bandwidth between the nanoseconds - diginomica

Read More..

Improving Digital Infrastructure through IoT Connectivity: Digitalising the Physical World with Local and Cloud Infrastructure Integration – Express…

By Sandeep Chellingi, Cloud & Infrastructure leader, Orion Innovation

The world has undergone a profound revolution thanks to the emergence of Cloud technologies. Concepts like Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) have taken root, reshaping how the software industry conceives of resilience, deployments, security, and infrastructure.

In the present landscape, the boundaries between the physical and digital realms have been further blurred by technologies like the Internet of Things (IoT) and Edge computing. Machines are no longer mere CPUs following user manuals; they're now regarded as vital sources of data, offering insights into usage patterns and strategies for enhanced efficiency. Businesses that effectively harness these technologies stand on the cusp of transforming their digital infrastructure. Leveraging IoT empowers businesses to amass an array of data from their physical assets. This data can then be channeled into PaaS services like Microsoft Azure Stream Analytics, extracting meaningful patterns and correlations.

These insights, in turn, can trigger subsequent workflows and generate dashboards that steer business decisions toward efficiency, optimisation, and innovation. Coupling IoT with Edge computing confers a distinct advantage, enabling businesses to analyze data from IoT devices with minimal latency and bolstered on-premise security for sensitive data. This approach also promotes compatibility with legacy devices. Edge computing seamlessly complements a company's data center by ensuring critical workloads are processed closer to data sources and consumers, leading to cost reduction and faster processing times. Additionally, it empowers businesses to make real-time decisions and focuses on low-latency processing use cases.
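As a simple sketch of that division of labour (illustrative only; the sensor, threshold, and forwarding call are assumptions), an edge node might aggregate raw readings locally and forward only summaries or anomalies to a cloud analytics service such as Azure Stream Analytics:

```python
import json
import statistics
from typing import List

ANOMALY_THRESHOLD_C = 85.0   # assumed limit for this hypothetical temperature sensor

def summarise_window(readings_c: List[float]) -> dict:
    """Aggregate one window of temperature readings at the edge."""
    return {
        "count": len(readings_c),
        "mean_c": round(statistics.mean(readings_c), 2),
        "max_c": max(readings_c),
        "anomaly": max(readings_c) > ANOMALY_THRESHOLD_C,
    }

def forward_to_cloud(payload: dict) -> None:
    """Placeholder for the cloud ingestion call (e.g. an IoT hub or event stream)."""
    print("would send:", json.dumps(payload))

window = [71.2, 70.9, 72.4, 88.1, 71.5]   # raw readings stay on premises
summary = summarise_window(window)
if summary["anomaly"]:
    forward_to_cloud(summary)              # only the compact summary crosses the network
```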

However, it's vital to acknowledge that while Edge computing accelerates data processing, it doesn't replicate cloud computing's scalability, AI-driven operations, analytics capabilities, or infrastructure cost reduction. It is imperative to integrate on-premise and cloud infrastructures, amplifying processing potential and distributing it across multiple data centers and hyperscale environments to effectively manage and analyze voluminous data. Opening new avenues for data processing inevitably broadens the attack surface, underscoring the need to prioritise robust security measures for both on-premise and cloud infrastructure.

Businesses can secure their cloud environment by leveraging native cloud-based security services like Microsoft Defender for Cloud, which offers security posture detection, threat management, and standardised practices to safeguard cloud servers and databases. As for physical on-premise assets in Edge environments, adhering to the same security policies as those applied within on-premise networks is advised: practices such as least privilege access, regular patch updates, and restricted network access.

Effective data management is another critical facet, especially considering the deluge of data pouring in from physical assets. Without efficient practices and policies in place, capitalising on the full potential of cloud, IoT, and Edge integration becomes challenging. Data must not be processed for its own sake; it should add value to the business. A well-structured data architecture is pivotal; it must acknowledge that stored data must be actionable to derive meaningful insights. The data lakehouse architecture, for instance, facilitates secure data engineering, machine learning, data warehousing, and business intelligence directly on vast data volumes housed in data lakes. This architecture supports unified catalogs, access controls, discovery mechanisms, auditing, and quality management.

To truly enhance digital infrastructure in response to the ever-evolving landscape, businesses must embrace IoT and Edge computing advancements. Over the last decade, these advancements and the integration of cloud technology have formed a comprehensive solution for intelligent data mining, heightened operational efficiency, and refined customer experiences through insight into user interactions and behaviors.

The key lies in deploying an infrastructure that spans multi-cloud and on-premises environments, selecting the appropriate technologies, and distinguishing between datasets suitable for Edge and those more suited for the cloud. All of this hinges on enforcing data management and security policies within computational boundaries to extract secure and pertinent insights for business growth.

Read more:
Improving Digital Infrastructure through IoT Connectivity: Digitalising the Physical World with Local and Cloud Infrastructure Integration - Express...

Read More..

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost … – The Information

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That's far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

The billion-dollar revenue figure implies that the Microsoft-backed company, which was valued on paper at $27 billion when investors bought stock from existing shareholders earlier this year, is generating more than $80 million in revenue per month. OpenAI generated just $28 million in revenue last year before it started charging for its groundbreaking chatbot, ChatGPT. The rapid growth in revenue suggests app developers and companies, including secretive ones like Jane Street, a Wall Street firm, are increasingly finding ways to use OpenAI's conversational text technology to make money or save on costs. Microsoft, Google and countless other businesses trying to make money from the same technology are closely watching OpenAI's growth.

See more here:
OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost ... - The Information

Read More..

The pursuit of digital sovereignty for a trusted, integrated internet – GovInsider

GovInsider speaks with data protection experts on how the implementation of digital sovereignty practices can unite rather than fragment the internet, and what that could look like in practice.

The physical world we live in today is demarcated by national borders drawn over the course of time. The digital world today seems to be following in its footsteps.

Increasingly, countries around the world have been adopting numerous regulations that restrict data flows. The Information Technology & Innovation Foundation found that data localisation measures, which stipulate how data should be processed, stored, and perhaps confined within a geographic location, are rapidly increasing. From 2017 to 2021, the number of data localisation regulations more than doubled.

The shift is often fuelled by geopolitical concerns, Welland Chu, Alliance Director for APAC at French multinational corporation Thales and the author of an ISACA article focusing on digital sovereignty, told GovInsider in a video interview.

A 2022 GovInsider article referenced Chinese-owned social media platform TikTok's plan to move the private data of American citizens to cloud servers in the United States to prevent the risk of foreign governments accessing the data. This shift is also observed in Asia, with Vietnam most recently imposing data localisation regulations in late 2022.

The same GovInsider article highlighted potential implications on economic growth due to increased costs on companies and the impediment of free trade.

A report funded by the NUS Centre for Trusted Internet and Community suggests that Singapore, as a highly connected nation, may suffer from such practices given its reliance on the free flow of data.

Additionally, data localisation could hinder the performance of technologies like artificial intelligence and cloud computing, which work best with access to a broad set of free-flowing data.

"The cloud, for example, is an enabler," says Francesco Bonfiglio, CEO of Gaia-X, in an online interview. Gaia-X is a non-profit which aims to promote digital sovereignty and enable the creation of data spaces through the development of trusted platforms and standards.

"[The cloud is] a motorway that allows you to openly share data, make available applications, increase the power and performance of those applications, and make the services and products more competitive and meaningful for citizens," he explains.

But organisations are hesitant to openly share this data as there is a lack of platforms that are uniformly trusted on the market today, Bonfiglio says. He cited how adoption rates of cloud technology remain low in Europe.

Hindering innovation will only slow down the development of more efficient and citizen-centric public services. Bonfiglio gave the example of healthcare. Currently, healthcare research is often limited to specific networks of hospitals or research institutions with pre-arranged agreements on how data sharing will be done. But in order to build new drugs and vaccines, it is ideal to collect as much data as possible.

"If we could freely create an open and distributed data space to collect all the data, to analyse them with the power of genomics, and identify new drugs, we could be faster in developing drugs," he says.

But most of the core data of organisations is not on a cloud as they fear making data available on a technology where they have no control, Bonfiglio says. This is why he is promoting the broader concept of digital sovereignty as opposed to simply data localisation.

Digital sovereignty stipulates that information is subject to the rules of its originating jurisdiction (regardless of its actual location), according to a Gartner report published in December 2022. This means that even if data is exported and stored in another country, the use of and access to the data is governed by regulations of its home country.

An example of this is demonstrated by Article 45 of the EU's General Data Protection Regulation (GDPR), which details that personal data can only be transferred should the importing country or organisation have data protection regulations and measures equivalent to those of the EU.

This approach is increasingly in demand today, according to Chu. He cited a survey by the International Data Corporation which predicts that 40 per cent of major enterprises will mandate data sovereignty from their cloud service providers by 2025.

One way organisations can have digital sovereignty is through technical controls which give users the ability to retain control of their data, wherever it is on the cloud, Chu suggests.

Most data on the cloud today is encrypted, which ensures that the data is rendered unintelligible to those who do not possess the cryptographic keys. But if the cloud operators are the ones in control of these keys, that means they have the ability to decrypt the data as they wish, Chu says. Instead, it should be the data exporters, that is, the organisations themselves rather than the cloud service providers, who hold the cryptographic keys. (Note that the terms organisations, customers, and users are used interchangeably in this article. They refer to the entities from whom the sensitive data originate.)

He suggests that this can be done through three methods: Bring Your Own Key, Hold Your Own Key and Bring Your Own Encryption.

First, Bring Your Own Key means that customers are able to generate their own cryptographic key and import it to the cloud to decrypt their data if they require access.

Meanwhile, Hold Your Own Key, such as that offered by AWS or Google, provides an additional layer of security, as it means that the cryptographic keys are always in the hands of the customer. If the cloud service provider or a third party requires access to the data, they will first need to issue a request, which the customer can review and grant only based on the context of the request.

Finally, Bring Your Own Encryption means that cloud users encrypt the data before it even enters the cloud and then upload the encrypted data.

These approaches ensure that cloud operators or third parties like foreign governments are unable to access and decrypt data on the cloud without the prior consent of the users, who ultimately retain control of their own data, Chu explains.
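As a minimal sketch of the Bring Your Own Encryption idea (using the widely available Python cryptography library; the record contents and the upload call are placeholders, not any provider's API), data is encrypted before it ever reaches the provider and the key never leaves the organisation:

```python
from cryptography.fernet import Fernet

# The key is generated and kept on premises; only ciphertext goes to the cloud.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": 123, "notes": "..."}'   # hypothetical sensitive record
ciphertext = cipher.encrypt(record)

upload_to_cloud_storage = print    # placeholder for the provider's upload API
upload_to_cloud_storage(ciphertext)

# The cloud operator, or any third party without the key, sees only ciphertext.
# Decryption happens back inside the organisation's own boundary:
assert cipher.decrypt(ciphertext) == record
```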

The importance of this has previously been recognised by the Court of Justice of the European Union (CJEU). In 2020, the CJEU had ruled that appropriate safeguards, enforceable rights and effective legal remedies must be in place before the data are transferred from Europe to the US, Chu had written in the blog post.

Continued here:
The pursuit of digital sovereignty for a trusted, integrated internet - GovInsider

Read More..

Privacy-focused Gmail alternative got its start in Philly – The Philadelphia Inquirer

Google, Apple, and other smartphone providers manage most Americans' email at little or no extra charge while skimming data to share with advertisers.

That's not for everyone. A handful of independent, low-fee email services have won fans among individual and small-business users. One of the longest-lived is Fastmail, which traces its roots to POBox.com (Pobox), an email service started in a West Philadelphia dorm room in 1995.

"It's been interesting to have been in the email business for such a long time," said Helen Horstmann-Allen, who cofounded Pobox with software developer Meng Weng Wong when they were University of Pennsylvania undergraduates. She now serves as Fastmail's chief operating officer from its U.S. headquarters at 1400 Market St. in Philadelphia. (Wong moved on to Silicon Valley and then his native Singapore and founded Legalese.com.)

The service, which lured users from Prodigy, AOL, and other internet dial-up connectors, at first cost $15 a year. (Standard Fastmail is now $5 a month.) It got enthusiastic early coverage in Wired magazine and was quickly turning a profit in those pre-Gmail, pre-Microsoft Outlook days.

Early users, Horstmann-Allen recalled, were techies who understood how email worked and sought a reliable provider they could reach on the phone. Later, customers focused on privacy, hoping to shield their communications data from big tech companies.

"It's tough to compete with free," she said. "But we can see now that free service is costing us dearly. People want to be a customer, not the product."

Controlling your online identity

Most recently, Horstmann-Allen said, Fastmail has attracted users who are seeking to gain control of their identity.

Identity from email? "In a world of shifting reality, your email has become your electronic memory," said Bron Gondwana, the company's Australia-based chief executive.

The original Fastmail, an Australian email hosting service, bought Pobox in 2015. It kept the Pobox brand for its email forwarding service and also offers Topicbox, a group email management service.

"The internet changes so fast," Gondwana said. "You don't know if an article you read online has changed from yesterday. But email doesn't change. And that is becoming valuable. A lot of young people are now using email, not so much to communicate with people, but to save their information."

Fastmail, which is self-funded and privately held, now claims 280,000 email accounts and a total of more than 400,000 users. Gondwana said that includes at least 100 people in each of more than 100 countries, with the largest group in the United States. The company also sells a growing list of communications services. It maintains its own servers, not relying on cloud servers from Amazon or other tech giants, at locations including sites in New Jersey and Washington State.

"We work with hardware," Horstmann-Allen said. "We aren't spinning your data in a cloud."

The company employs 60 worldwide, including 25 in Philadelphia.

Horstmann-Allen said the service has attracted more small-business clients in recent years. "The difference between free and paid is when something happens to your Google account, you just about need a son who works at Google to straighten it out. Our support team is one of the best in the business."

Google didn't respond to queries about competition from independent providers. Companies such as Fastmail aren't a threat to internet services that provide connectivity and many other services besides email: "Independent email providers don't represent a competitive solution [vs.] Comcast Business; rather, these could be one of many communication and collaboration applications a customer uses," said Matt Helmke, a Comcast spokesperson.

Last year, PC Magazine rated Fastmail the best service for email geeks, noting it has evolved from a feature-rich tool for email nerds to a tool as simple and familiar as Gmail with calendar, contacts, snooze warnings, and pinned notes.

The magazine by contrast praised rival ProtonMail, a larger, Swiss-based independent email service founded in 2014 (initially crowdfunded, its basic service is free) that is popular among software developers for offering an especially high level of encryption and anonymity.

Skiff.com, a 2-year-old, San Francisco-based email service, in an online comparison last spring called ProtonMail and Fastmail the new and the old, rating ProtonMail's security as more advanced but giving Fastmail an edge for more diverse features, such as rich text search and storage for hundreds of aliases.

A focus on customer service

At least a few of Fastmail's business users are hometown-proud Philadelphians. "When we opened our brick-and-mortar store in 2018 and we had to choose an email, we weren't into the whole Google thing," said Bill Rhoda, cofounder of Philadelphia Typewriter, a 10-person repair shop and parts maker on East Passyunk Avenue in South Philly.

"All the businesses around me are using Google, the spreadsheet, the drive, the email, but on my personal Gmail, there's always a huge amount of spam, just nonstop emails that have nothing to do with me or what I am interested in," Rhoda said.

An early employee told him about Fastmail, and Rhoda got excited: "It wasn't in the mainstream. It's kind of this cult thing, and when I realized they were actually based in Philly, that was really cool."

Rhoda signed his company up. "And now I don't have spam issues. They are really good at registering what is spam based on my inbox and what I don't want to see. I like that it allows me to focus on what I need to focus on."

He also likes being able to call Fastmail and get help from human employees.

"We're a scrappy start-up, and it's great to be able to support a Philly-grown business, instead of one of the big guys," said Samantha Wittchen, director of programs and operations at Circular Philadelphia, a Kensington recycling-system designer, which uses Fastmail, as do two other enterprises she helps lead.

When she started in business, Wittchen used email from her internet service. "I was not loving that. A lot of spam got through," she said. "There were filters, but it was difficult to set them up for people who were not super-tech-savvy."

Checking Google's email offerings, Wittchen said, "you had to figure it all out on your own. They have a knowledge base, they have instructions, you can map the servers; I know how to do that. But one of my business partners was struggling with Yahoo's really clunky Web interface. Moving us all to Fastmail was so much easier."

She especially likes that Fastmail makes it easy to set up free specialized email addresses and multiple accounts.

Horstmann-Allen said a career in email has taught her the value of multiple addresses. "They are a form of control," she said. "I can give you different e-mails for Amazon or Apple [correspondence]. Give one to a politician, you discover very fast how quickly they share them with fund-raisers."

"There are very few people I will give my phone number to," she added. "With email, I can just block it."

Read more:
Privacy-focused Gmail alternative got its start in Philly - The Philadelphia Inquirer

Read More..

SaaS series – Pipedrive: Devs, APIs & the connectivity-efficiency … – ComputerWeekly.com

This is a guest post for the Computer Weekly Developer Network written by Siim Kibus, engineering manager at Pipedrive, a company known for its sales and Customer Relationship Management (CRM) platform technology.

Kibus writes in full as follows

APIs as a product feature or part of a product offering are critically important for developers creating strong services for demanding end users.

Good APIs and developer relations are entwined. Savvy customers look at both when choosing a new SaaS offering for their tech stack, and it's often a factor in churn if problems or a lack of connections are not addressed.

In order to establish a consistent API look and feel, APIs should ideally be design-first, with design principles, documentation and processes to ensure a quality result. Developers end up with better results when they specifically design with safety, efficacy and (resource) efficiency in mind.

With that in mind, it can be tempting to build APIs for your own UI first and then release them to the public. But a safer bet is to strictly separate the two from the start.

Without the right integrations, API or otherwise, a percentage of potential customers might not choose your service.

Cloud connections enable users to overcome product limitations or supercharge their tech stack and get more done. For that reason, it's vital to stay close to your customers in more ways than one. For example, for performance reasons, by deploying to multiple AWS regions, or taking advantage of content delivery networks like Cloudflare. Yes, having your data across multiple locations will add complexity, but unless your customer base is located in one region, it will perform better. It also makes it easier to deal with data privacy legislation, like GDPR.

Additionally, services like Cloudflare provide DDoS protection, which, when you need it, you need yesterday, as well as rate limiting. Rate limiting can come via the aforementioned external tooling, a self-managed API gateway, or an API token. The latter is easier to get going for smaller businesses or teams but comes with its own security and usability issues.
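Under the hood, a self-managed gateway's rate limiting often boils down to a token bucket kept per API token. The sketch below is a minimal, illustrative version (the rates, capacities, and token names are assumptions, not any vendor's limits):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts of up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False            # caller should respond with HTTP 429

buckets = {}                    # one bucket per API token

def check(api_token: str) -> bool:
    bucket = buckets.setdefault(api_token, TokenBucket(rate=10, capacity=20))
    return bucket.allow()

print(check("demo-token"))      # True until the bucket drains
```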

To help the dev team manage all that connectivity complexity, employ an infosec team or continuously train your devs to be aware of the major threats and make sure your work is audited, such as through what we call mission landing checklists, AKA release management checklists.

Beyond the API, there's a whole landscape of cloud connections to play with, and deploying the right solution right-sizes the software development and engineering workflow.

Webhooks can be beneficial both for the receiver and the provider, with fewer resources spent on serving polling requests, which really scale up. In a SaaS CRM example, a sales team's deal win can be pushed through to the relevant users once the data has been input. The server does not need to be constantly pinged for an update.
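On the receiving side, a webhook endpoint is just an HTTP handler that verifies and reacts to pushed events. A hedged Flask sketch follows; the route, signature header, shared secret, and payload fields are assumptions for illustration, not any particular CRM's webhook format:

```python
import hashlib
import hmac
from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = b"replace-me"   # assumed secret agreed with the webhook provider

@app.post("/webhooks/deal-won")
def deal_won():
    # Verify an HMAC signature so only the provider can trigger this workflow.
    sent = request.headers.get("X-Signature", "")
    expected = hmac.new(SHARED_SECRET, request.get_data(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sent, expected):
        abort(401)

    event = request.get_json(force=True)
    notify_team(event.get("deal_id"))   # push the update instead of polling for it
    return "", 204

def notify_team(deal_id):
    print(f"deal {deal_id} won")        # placeholder for the real notification

if __name__ == "__main__":
    app.run(port=8080)
```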

Consider APIs to be a product feature, not a silver bullet, as their use is context-dependent.

Take Twilio, where APIs are the main product; but for SaaS providers, again, taking CRM as an example, you're likely to first build the UI and progress to providing APIs only after you have grown big enough that connectors can further scale your reach.

Siim Kibus, engineering manager at Pipedrive.

Always monitor actual performance for resource efficiency. For example, if you give the user too much data, it slows down performance, so keep watch on how requests scale. Keep utilisation high by ensuring users see value. Ideally, your API should offer the same features as the UI front end. Additionally, a great ecosystem lets people know about the connector, so maintain excellent dev relations with the wider community. It's how you encourage others to learn how to use it.

Just starting?

First, focus on security and scalability, even where this is contradictory! Trust is the most valuable currency. If users don't trust that their data is secure, then they won't do business with you. Once that's in great shape, deploy cloud services which can supply your services to users even if demand rises quickly. No one wants to be spinning up new servers in a hurry.

See the article here:
SaaS series - Pipedrive: Devs, APIs & the connectivity-efficiency ... - ComputerWeekly.com

Read More..

The shortcomings of serverless computing – InfoWorld

A recent report published by Datadog, a monitoring and observability cloud service provider, found that serverless computing is more popular than ever. An analysis of serverless computing use across Datadog customers found that more than 70% of AWS customers, 60% of Google Cloud customers, and 49% of Microsoft Azure customers use one or more serverless solutions.

Nothing new here, really; serverless is old news and is baked into the cloud development cake when it comes to picking the best development platform for net-new and migrated cloud applications. It's fast, does not require a great deal of infrastructure planning (almost none), and the applications seem to perform well. No brainer, right? Not so fast.

Serverless computing promises to reduce infrastructure management and enhance developer productivity. However, as with any technology, there are downsides to consider. Most of the people picking serverless may not see the whole picture. Perhaps that's you.

One of the primary concerns with serverless computing is cold-start latency. Unlike traditional cloud computing models where virtual machines or containers are preprovisioned, serverless functions must be instantiated on demand. Although this provides dynamic scaling, it introduces a delay known as a cold start. It's not good and can impact the application's response time.

Although providers have improved this issue, it can still be a concern for applications with strict real-time performance requirements. I've had a few people tell me that they needed to swap serverless out because of this, which delays development time as you scramble to find another platform.

You may be thinking that this is only an issue with apps that require real-time performance. There are more of those applications than you think. Perhaps it is a requirement of the application you're about to push to a serverless platform.
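A common mitigation is to keep expensive initialisation at module scope so it runs once per cold start and is reused by warm invocations. The sketch below is an illustrative AWS Lambda-style handler; the initialisation work is a stand-in for whatever your function actually loads:

```python
import time

def _expensive_init():
    """Stand-in for loading a model, opening DB connections, reading config, etc."""
    time.sleep(0.5)
    return {"ready": True}

# Module scope: executed once per cold start, then reused while the
# execution environment stays warm.
_COLD_START_BEGAN = time.time()
STATE = _expensive_init()
COLD_START_SECONDS = time.time() - _COLD_START_BEGAN

def handler(event, context):
    # Warm invocations skip _expensive_init() entirely.
    return {
        "cold_start_seconds": round(COLD_START_SECONDS, 3),
        "ready": STATE["ready"],
    }

if __name__ == "__main__":
    print(handler({}, None))    # local smoke test
```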

This should be well understood, but I still run into developers and architects who believe that serverless applications are easily portable between cloud brands. Nope: containers are portable; serverless is different. I've seen "avoids vendor lock-in" in more than a few serverless computing presentations, which is a bit jarring.

Each cloud provider has its unique serverless implementation, making it challenging to switch providers without significant code and infrastructure modifications. This can limit an organization's flexibility and hinder its ability to adapt to changing business needs or take advantage of competitive offerings. Now, with the movement to more multicloud deployments, this could be a valid limitation that needs to be factored in.

Traditional debugging techniques, such as logging into a server and inspecting the code, may not be feasible in a serverless environment. Additionally, monitoring the performance and health of individual serverless functions can be complicated, especially when dealing with many serverless functions spread across different services.

Organizations must invest in specialized tools and techniques to debug and monitor serverless applications effectively. This usually is better understood when the need arises, but at that point, it can cause delays and cost overruns.

The big problem is cost management of deployed serverless systems. Serverless computing can provide cost savings by eliminating the need to manage and provision infrastructure (which many developers and architects screw up by overprovisioning resources). However, it is essential to monitor and control costs effectively, and since serverless systems dynamically allocate resources behind the scenes, it isn't easy to manage cloud resource costs directly. Furthermore, as applications become complex, the number of processes and associated resources may increase, leading to unexpected cost overruns.

Organizations should closely monitor resource utilization and implement cost management strategies to avoid surprises, but most don't, making serverless less cost-effective. Many organizations could run some applications more cost-effectively by taking a non-serverless path.
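A back-of-the-envelope model can make such overruns visible before the bill arrives. The sketch below estimates compute cost from billed GB-seconds; the unit prices are passed in as parameters (and the example values are made up) because rates vary by provider and region:

```python
def serverless_monthly_cost(invocations: int,
                            avg_duration_ms: float,
                            memory_mb: int,
                            price_per_gb_second: float,
                            price_per_million_requests: float) -> float:
    """Rough compute cost: billed GB-seconds plus per-request charges.
    Ignores free tiers, egress, storage, and downstream services."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return (gb_seconds * price_per_gb_second
            + (invocations / 1_000_000) * price_per_million_requests)

# 50M invocations/month, 120 ms average, 512 MB, with made-up unit prices:
print(round(serverless_monthly_cost(50_000_000, 120, 512,
                                    price_per_gb_second=0.0000167,
                                    price_per_million_requests=0.20), 2))
```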

Serverless computing does offer increased developer productivity and reduced infrastructure management overhead. It's the easy button for deploying applications. However, it is crucial to consider the potential disadvantages and make informed decisions. Careful planning, proper architectural design, and effective monitoring can help organizations navigate these challenges and fully leverage the benefits of serverless computing, or decide that it's not right for certain applications.

See the original post here:
The shortcomings of serverless computing - InfoWorld

Read More..

Rackspace Faces Massive Cleanup Costs After Ransomware Attack – Dark Reading

After being hit with a ransomware attack at the end of 2022, Rackspace is now faced with fronting the cost of the cleanup, as well as legal fees, which at present have amounted to $10.8 million.

The attack, which occurred in December 2022, disrupted email service for thousands of the managed cloud hosting company's customers, mostly small-to-midsize businesses. The ransomware attack came in the form of a zero-day exploit against a server-side request forgery vulnerability in Microsoft Exchange Server at the hands of the Play ransomware group. The vulnerability, known as CVE-2022-41080, was patched by Microsoft a month before the attack.

In a US filing, the company noted how the expenditures largely go to "costs to investigate and remediate, legal and other professional services, and supplemental staff resources that were deployed to provide support to customers."

In addition to those costs, Rackspace has been named in multiple lawsuits over the ransomware attack, many of which seek monetary compensation, among other remedies.

Rackspace expects a significant amount of the costs to be reimbursed by cyber-insurance companies. It has not noted whether or not it paid the initial ransom request.


Read the original post:
Rackspace Faces Massive Cleanup Costs After Ransomware Attack - Dark Reading

Read More..