Category Archives: Cloud Servers

Google’s Gen-AI models are coming to more Android phones with … – Android Authority

TL;DR

Google and Qualcomm have partnered to bring powerful generative AI experiences to Android phones. The two companies announced their tie-up to bring more on-device AI to Android devices during the Snapdragon 8 Gen 3 launch. Google's state-of-the-art foundation AI models will run on Qualcomm's Hexagon NPU. These are likely the same foundation models that also run on the Pixel 8 Pro and power AI features like on-device Recorder summaries, Smart Reply in Gboard, and the upcoming Video Boost feature.

With Google's AI models running on-device on the Snapdragon 8 Gen 3, more Android phones will be able to perform on-device AI tasks, making way for new features, performance improvements, and power savings.

"Among the first generative AI models you'll see running on Snapdragon will come from none other than Google. Over the past few months, researchers at Google have been working to take their massive next-generation large language models and distill them to fit on a mobile device. Soon, you'll be able to do more on-device with Google applications than ever before," said Alex Katouzian, SVP and GM of Mobile, Compute, and XR at Qualcomm.

Google's VP of engineering for Android, Dave Burke, also joined the conversation on stage during the Snapdragon 8 Gen 3 launch to confirm that the company's partnership with Qualcomm will enable complex AI models to run on-device on upcoming Android flagship phones, removing the need for internet connectivity and round trips to cloud servers.

Simply put, we can now expect smarter and faster AI processing on premium Android phones with the latest Snapdragon chip. The Qualcomm Snapdragon 8 Gen 3 processor supports large language models with over 10 billion parameters running at almost 15 tokens per second. You can read more about the Snapdragon 8 Gen 3 here. Qualcomm also gave us a glimpse at what to expect from its next flagship processor. You can read about the Snapdragon 8 Gen 4 here.
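To put the quoted 15 tokens per second in perspective, here is a back-of-the-envelope latency estimate. This is purely illustrative, not Qualcomm's methodology; the words-per-token ratio is a common heuristic, not a published figure.

```python
# Rough latency estimate for on-device LLM generation at the quoted decode rate.
# Assumption (illustrative only): ~0.75 words per token, a common heuristic.
TOKENS_PER_SECOND = 15   # Snapdragon 8 Gen 3 figure quoted by Qualcomm
WORDS_PER_TOKEN = 0.75

def seconds_to_generate(n_tokens: int, rate: float = TOKENS_PER_SECOND) -> float:
    """Time to stream n_tokens at a fixed decode rate."""
    return n_tokens / rate

# A ~100-word reply is roughly 133 tokens at 0.75 words per token.
reply_tokens = round(100 / WORDS_PER_TOKEN)
print(f"{reply_tokens} tokens in ~{seconds_to_generate(reply_tokens):.1f} s")
```

At that rate a short chat reply streams in under ten seconds, which is why on-device decode speeds in this range are considered usable for interactive features.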

Link:
Google's Gen-AI models are coming to more Android phones with ... - Android Authority

TSMC Makes The Best Of A Tough Chip Situation – The Next Platform

If you had to sum up the second half of 2022 and the first half of 2023 from the perspective of the semiconductor industry, it would be that we made too many CPUs for PCs, smartphones, and servers and we didn't make enough GPUs for the datacenter. Or rather, Taiwan Semiconductor Manufacturing Co, the world's largest and most important chip foundry, didn't.

The world would probably buy somewhere between 1 million and 2 million datacenter GPUs this year, but apparently Nvidia could only make on the order of 500,000 of its most advanced Hopper H100 devices this year, and that was limited by the availability of the Chip on Wafer on Substrate (CoWoS) 2.5D packaging technique that has been in use along with HBM stacked memory for GPUs and other kinds of compute for the past decade.

And so, TSMC has had to make the best of the situation even as revenues and earnings remain in a slump and the cost of each successive manufacturing process node gets more and more expensive.

In the third quarter ended in September, TSMC's revenues were down 14.6 percent to $17.28 billion and net income fell by 28.1 percent to $6.66 billion. The various IT channels are still burning off their inventories of CPUs and GPUs for client devices, and the hyperscalers and cloud builders are also digesting the tens of billions of dollars in servers and storage that they acquired in 2022 and only buying what they need now as they await the next generation of devices and another heavy investment cycle that could start next year.

Everyone is shifting some of their server, storage, and switching budgets in the datacenter to higher cost and more strategically important AI training and inference systems as the generative AI boom is well underway and there is no sign of that boom stopping anytime soon after a period of hyperinflation this year. And so, GPUs and anything that can do matrix math like a GPU are almost worth their weight in gold as companies try to figure out how to weave generative AI capabilities into their applications during a period of intense demand and limited supply.

In these cases, TSMC can charge more for its advanced chip etching and packaging capacity, but not enough to offset the declines in other parts of its business. Unlike, say, Nvidia, which can pretty much charge whatever it wants for any GPU that it can get out of the factories. Its financials will continue to defy gravity for a while. But eventually, as capacity constraints ease, supply will catch up with demand and prices will normalize. But not this year, not even as Nvidia doubles its CoWoS capacity and seeks to increase it further through 2024.

TSMC has to cope with a lot of tensions in its line of work, and one of them is that it has to do a lot of research, development, and capital investment to make sure it can keep advancing the state of the art in semiconductors. And when business slows, as it has in recent quarters for reasons sometimes out of its control and sometimes because it is difficult to plan for booms like the one that took off for GenAI in late 2022, the company has to make a lot of calls about when to curtail capital spending and still not leave itself flatfooted. That's because TSMC's customers can benefit much more from supply shortages and high demand than it can. Again, Nvidia is the illustrative case in point.

In the September quarter, TSMC really pulled back on the capital investment reins, spending only $7.1 billion, a decrease of 18.9 percent compared to the year ago period and also representing a 13.1 percent sequential decrease from the $8.17 billion the company spent on factories, etching equipment, and so forth in Q2 2023. Wendell Huang, chief financial officer at TSMC, said on a call with Wall Street analysts that TSMC was expecting to spend only $32 billion for capital expenses in all of 2023, with 70 percent being for advanced chip making gear at the smallest nodes (5 nanometers and lower these days), 20 percent for specialty technologies that tend to be at larger nodes (12 nanometers up to 28 nanometers), and about 10 percent on packaging, testing, and mask making gear. That means capital expenses in Q4 2023 will be around $6.8 billion, a drop of 32.7 percent.

This is as the ramp for 3 nanometer processes is well under way and 2 nanometer technologies are building momentum towards ramp.

The third quarter was the first where TSMC sold products based on 3 nanometer processes, and this node already accounted for 6 percent of revenues, or just over $1 billion out of the chute. Chips etched with 5 nanometer processes drove $6.39 billion in revenues, or 37 percent of the total, while 7 nanometer processes still drive 16 percent of revenues, or $2.76 billion. All other processes, ranging from 12 nanometers all the way up to 250 nanometers, drove the remaining $7.08 billion in sales. All of those older nodes have plenty of use, a lesson that Intel forgot because it was a foundry with only one customer, and one that always needed to be at the bleeding edge to compete in CPUs.

"N3 is already in volume production with good yield and we are seeing a strong ramp in the second half of this year, supported by both HPC and smartphone applications," explained CC Wei, chief executive officer at TSMC, on the call. "We reaffirm N3 will contribute a mid-single-digit percentage of our total wafer revenue in 2023, and we expect a much higher percentage in 2024, supported by robust demand from multiple customers."

Remember that when TSMC says HPC it means any kind of high performance silicon, which can be a PC or server CPU or GPU or a networking ASIC. TSMC does not mean ASICs dedicated to HPC simulation and modeling or AI training or inference, although these certainly are within the scope of TSMC's HPC definition. The 3 nanometer node will be a long-lasting one, with an N3E crank having just passed qualification and further enhancements in the N3P and N3X processes in the works. The N5 node has been in production since Q3 2020, just to give you a sense of how long these nodes can be in the field, and it has only just become the dominant revenue generator. The N7 nodes are on the wane, of course, but will also be in the portfolio for a long, long time.

Like Intel 18A, TSMC N2 will employ a nanosheet transistor structure and will drive both transistor density and power efficiency. For smartphone and HPC applications, which drive the business, Wei said that interest in N2 is at or higher than it has been for N3 at the same point in their development and production cycles. The backside power rail adjunct technology for N2 will be available in the second half of 2025 and put into production in 2026.

As for who will have the lead in process in 2025, Wei is having none of the smack talk of Intel 18A versus TSMC N2.

"We do not underestimate any of our competitors or take them lightly," Wei said. "Having said that, our internal assessment shows that our N3P (now I repeat again, our N3P technology) demonstrates comparable PPA to 18A, my competitor's technology, but with an earlier time to market, better technology maturity, and much better cost. In fact, let me repeat again: our 2 nanometer technology without backside power is more advanced than both N3P and 18A and will be the semiconductor industry's most advanced technology when it is introduced in 2025."

Your move, Pat.

Last thing. TSMC did not divulge how much of its revenues were being driven by AI training and inference workloads, as it did during its Q2 2023 conference call. But if the ratio between AI revenues and TSMC HPC revenues is consistent, then it should have been just shy of $1 billion. That seems low to us, but it might just be an indication of how much profits companies like Nvidia and AMD can extract from GPU sales these days.

If you can make a compute engine for an Nvidia or an AMD for a few hundred bucks and add HBM memory for a few thousand bucks, and then an Nvidia or an AMD can sell the complete device for $30,000 and then maybe get another 2X factor in sales by turning those compute engines into a system with lots of CPU compute, CPU memory, flash storage, and networking, this becomes a very big business. So that $1 billion in AI training and inference chip sales for TSMC can balloon up to tens of billions of dollars in hardware spending at the end user level, even if those end users are hyperscalers and cloud builders among the Super 8 or Top 10 or whatever, who get steep volume discounts from companies like Nvidia and AMD.
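The value-chain arithmetic above can be made concrete with the article's own round numbers. Every input below is a ballpark figure taken from the paragraph, not actual TSMC or Nvidia pricing.

```python
# Illustrative value-chain math using the round numbers from the paragraph above.
# All inputs are the article's ballpark figures, not real pricing data.
foundry_compute_die = 300        # "a few hundred bucks" for the compute engine
hbm_memory = 3_000               # "a few thousand bucks" for HBM
device_price = 30_000            # quoted price of the complete GPU device
system_multiplier = 2            # "another 2X factor" turning devices into systems

bill_of_materials = foundry_compute_die + hbm_memory
device_markup = device_price / bill_of_materials
system_revenue_per_device = device_price * system_multiplier

print(f"BOM ~${bill_of_materials:,}, device markup ~{device_markup:.1f}x")
print(f"End-user system revenue per device ~${system_revenue_per_device:,}")
```

Even with these crude inputs, a roughly 9x markup from bill of materials to device price shows why most of the profit in the AI boom accrues downstream of the foundry.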

Maybe TSMC and its downstream chip partners and further downstream partners could adopt a new short-term strategy: The more you buy, the even more you should pay. At this point, it is just as logical to say that those who need 20,000 GPUs should pay more per unit than someone who needs only 200 or even 2,000 as it is logical to say they should be paying less, as the IT market seems to have believed for decades.

Right?

Continue reading here:
TSMC Makes The Best Of A Tough Chip Situation - The Next Platform

iSCSI vs. NFS: 5 Key Comparisons | Spiceworks – Spiceworks News and Insights

iSCSI is a storage area networking (SAN) protocol. Also known as iSCSI SAN storage, it defines the data transfer process between host and storage systems. Additionally, iSCSI enables small computer system interface (SCSI) data transportation from the iSCSI initiator to the storage target and vice versa, a process that takes place at the block level using TCP/IP networks.

Compared to the more traditional fiber channel (FC) SAN, iSCSI storage is cost-effective and does not require dedicated hardware such as an FC switch and FC host bus adapter (HBA). In fact, iSCSI SAN storage can be deployed on existing network hardware such as routers and fiber switches. iSCSI is also faster and more efficient than FC SAN as it is based on the block transfer standard.

iSCSI has two key components: the iSCSI initiator and the iSCSI target. The iSCSI initiator is a hardware or software component deployed at the server level to transmit requests and receive responses from the iSCSI target. Conversely, the iSCSI target is deployed at the storage level and provides the required storage space.

See More: What Is a Subnet Mask? Definition, Working, and Benefits

NFS is an open-source networking protocol for distributed file sharing. This standard protocol is leveraged for data distribution and relies on TCP/IP for communication. Enterprises can use NFS on virtually any operating system or device.

In a nutshell, NFS enables users to remotely access files on servers without disrupting the user experience: the files can be accessed seamlessly as if they were stored locally. Apart from this, NFS provides scalability and security.

First introduced in 1985, NFS has been updated several times. The first version was built to link UNIX hosts and remote computers. NFSv2, which served the same purpose as version 1 but added TCP/IP support, was released in 1989. NFSv3 was released in 1994, featuring enhanced networking support and increased efficiency. Finally, the current version of the Network File System is NFS version 4 (NFSv4). This version is documented in RFC 7530 and focuses on security, performance, and data integration.

NFS is popular for several use cases. For instance, it is deployed in UNIX environments to share files between users and computers with read or write access. Think of a field professional with no fixed endpoint device: this person can access the required files from different endpoints using NFS even though the files are not stored in the local system. This is possible because the files are stored on a central network server.

See More: What Is an Intranet? Meaning, Features, and Best Practices

Internet small computer system interface (iSCSI) is a SAN protocol that sets rules for data transfers between host and storage systems. On the other hand, network file system (NFS) is a distributed file system protocol that enables users to access files stored remotely, similar to how local storage is accessed.

iSCSI vs. NFS: Architectural Overview

Sources: TechTarget and Baeldung

Let's dive in and learn more about the key comparisons between iSCSI and NFS.

iSCSI works by transmitting block-level data between an iSCSI initiator (placed on a server) and an iSCSI target (placed on a storage device). Once the packet reaches the iSCSI target, it is disassembled, and the SCSI commands are separated by the protocol. This allows the storage to be visible to any operating system.
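Block-level access means addressing storage as numbered fixed-size blocks at byte offsets, with no notion of files. A minimal sketch of that access pattern in Python, using a temporary file standing in for a block device (a real iSCSI LUN would appear as a device node such as /dev/sdX once the initiator logs in to the target):

```python
import os
import tempfile

BLOCK_SIZE = 512  # classic SCSI logical block size

def read_block(fd: int, lba: int) -> bytes:
    """Read one logical block by number, the way a block protocol addresses data."""
    os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)
    return os.read(fd, BLOCK_SIZE)

def write_block(fd: int, lba: int, data: bytes) -> None:
    """Write one whole logical block at its byte offset."""
    assert len(data) == BLOCK_SIZE, "block protocols move whole blocks"
    os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)
    os.write(fd, data)

# Demo against a temp file standing in for a block device.
with tempfile.TemporaryFile() as f:
    fd = f.fileno()
    write_block(fd, 4, b"\xab" * BLOCK_SIZE)             # write block 4
    assert read_block(fd, 4) == b"\xab" * BLOCK_SIZE      # read it back
    print("block 4 round-tripped")
```

The key point the sketch illustrates is that the protocol only sees block numbers and raw bytes; organizing those blocks into files and directories is left entirely to the client's file system, as the comparison below explains.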

Unlike its alternatives (such as fiber channels), iSCSI can work on existing IP infrastructure without dedicated cabling. As a result, it can serve as a low-cost SAN option.

iSCSI can establish communications with arbitrary SCSI device types. This protocol is widely used by system administrators to set up servers for disk volume access on storage arrays. However, performance issues may arise if iSCSI is not deployed on a dedicated network or subnet.

The client-side may issue two types of requests: read requests and write requests. Read requests are issued when the client wants to read the data on the server. Write requests are issued to the server when the client computer needs to write over the existing data. The read and write requests are implemented using the standard read/write operations. The server computer completes the request by leveraging the corresponding protocol. The data is then returned to the client computer.

Data requests from NFS clients are transmitted through the NFS server. The server retrieves the requested data from the storage and transmits it back to the clients.
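The read/write request flow described above can be sketched as a toy in-process model. This is purely illustrative: real NFS speaks ONC RPC messages over the network (TCP port 2049 in NFSv4) and uses file handles rather than raw paths.

```python
# Toy model of the NFS-style request/response loop described above.
# Purely illustrative: real NFS uses ONC RPC, file handles, and TCP.
class ToyFileServer:
    def __init__(self):
        self._storage = {}  # path -> bytes, standing in for server-side storage

    def handle(self, request: dict) -> bytes:
        op, path = request["op"], request["path"]
        if op == "write":                 # client overwrites existing data
            self._storage[path] = request["data"]
            return b"OK"
        if op == "read":                  # server fetches and returns the data
            return self._storage.get(path, b"")
        raise ValueError(f"unknown op {op!r}")

class ToyFileClient:
    def __init__(self, server: ToyFileServer):
        self.server = server

    def write(self, path: str, data: bytes) -> bytes:
        return self.server.handle({"op": "write", "path": path, "data": data})

    def read(self, path: str) -> bytes:
        return self.server.handle({"op": "read", "path": path})

client = ToyFileClient(ToyFileServer())
client.write("/export/report.txt", b"quarterly numbers")
print(client.read("/export/report.txt"))  # the data returns to the client
```

Note that the client names a path and the server resolves it against its own storage; that server-side ownership of the file system is exactly what distinguishes NFS from the block-level model above.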

Shared file locking is a key software feature of NFS. Shared file access can be implemented by properly specifying both file locking and caching parameters. If the user fails to specify these parameters and file data is only retained in a host cache, all NFS storage clients use the same locking and caching parameters for mounted files.

In cases where multiple computers or threads attempt to access one file simultaneously, the shared file access feature may malfunction. The file locking mechanism was developed to improve the efficiency of shared file access functionality. Shared file access can be executed within a single host or among several hosts, with NFS being used for accessing the same file.

The iSCSI initiator is the host-based hardware or software component. Deployed on the server, this component enables data transmission to and from the storage array. The source array is also capable of serving as a data migration initiator among the storage arrays. The storage network can be created using standard Ethernet components for the software initiator. iSCSI initiators manage several parallel communication links to several targets at once.

The iSCSI target is the component deployed on the storage side. It essentially plays the role of a server that hosts storage resources and allows storage access. iSCSI targets are basically the storage resources within an iSCSI server. They generally represent hard disk storage and are usually accessed via Ethernet.

Targets are data providers and include tape libraries and disk arrays. They expose one or more SCSI logical unit numbers (LUNs) to specific iSCSI initiators. However, iSCSI targets are the logical entities within the context of enterprise storage. iSCSI targets manage several parallel communication links to several initiators.

Next comes the iSCSI HBA which, like a fiber channel HBA, offloads computing responsibilities from the system processor. An iSCSI HBA helps enhance server network and storage performance but can cost more than a standard Ethernet NIC.

Finally, an iSCSI offload engine (iSOE) can be a good alternative to an iSCSI HBA, as it provides similar functionality at a lower cost.

NFS operations leverage three main components, which, logically speaking, reside at the three OSI model layers corresponding to the TCP/IP application layer.

The above three key components or subprotocols represent most of the NFS protocol. Apart from them, the protocol includes numerous other functions. Of these, the key ones are highlighted below.

A key advantage of iSCSI is its use of TCP/IP, which allows for long-distance IP routing without external gateway hardware. It also provides a large storage network environment and increased flexibility.

Standard Ethernet

iSCSI's use of standard Ethernet means that the protocol does not require expensive components to be built and deployed.

Storage array

A large storage array for iSCSI targets can be either open-source software or commercial. Unique iSCSI targets are provided for numerous clients.

Security

Internet Protocol Security (IPsec) is leveraged to secure IP network traffic by encrypting and authenticating each data packet received.

RPC is available for servers as well as clients. It replaces the transport device interface for enhanced scalability and support.

Multiple port extensions support RPC ports that are easy to use at the client level and compatible with firewalls.

Firewall compatibility is a key advantage of NFS version 4, which uses TCP port 2049 for service execution. This simplifies protocol usage across firewalls.

Finally, NFS is a kerberized file system and features Kerberos security flavors such as krb5 (authentication), krb5i (integrity), and krb5p (privacy).

iSCSI is primarily designed for Microsoft Windows.

This protocol facilitates block-level sharing, allowing connected devices to access and utilize storage resources at the block level, similar to a local hard drive.

In an iSCSI setup, the responsibility of managing the file system lies with the guest operating system. This means that the guest OS handles tasks related to the file system, such as organizing and managing files and directories.

With iSCSI, each volume on the block level can be accessed by a single client, ensuring dedicated access and control over the storage resources.

In iSCSI, the file system is implemented at the client level. This enables both data and metadata to be read and managed within the client file system.

Implementing iSCSI can be slightly challenging as it requires configuring hosts, storage options, virtual local area networks (VLAN), and other related settings to ensure proper functionality and integration with the system.

NFS can be used for Microsoft Windows, Linux, and UNIX operating systems, making it a versatile choice for cross-platform environments.

It facilitates file-based sharing, enabling clients to access and share individual files or directories rather than accessing storage at the block level.

In an NFS setup, the responsibility of managing the file system (such as organizing and managing files and directories on behalf of the clients) rests with the NFS server.

NFS allows files to be shared among multiple servers, providing a means for collaborative access and data sharing across server environments.

In NFS, the file system is implemented at the server level. This means the server maintains the file system, and clients access files within that shared file system.

NFS is a protocol known for its efficiency and streamlined design. It is considered a user-friendly choice as it is a shared protocol, making it easier for clients or users to implement and utilize it.

iSCSI is cost-effective in implementation, providing an economical network at the block level. The need for additional network devices is reduced as the protocol need not always use HBAs, distinct cabling, or specific storage devices.

iSCSI is also flexible as it runs on an internet protocol that does not limit the distance between the initiator and the target. This protocol fully leverages the interoperability advantages of Ethernet and TCP/IP. Plus, existing servers can be used several times for configuring iSCSI implementation.

iSCSI is known for swift data transfer even for larger volumes, as the protocol is normally configured for 10 Gigabit Ethernet (10GbE) infrastructure.

iSCSI is easy to deploy and manage, and the users who maintain it do not require in-depth technical knowledge. The protocol is, therefore, conducive to development and disaster recovery, too.

Finally, iSCSI features enhanced network security through identity authentication, physical and logical network isolation, confidentiality, and integrity.

NFS is secure as it uses strong authentication for protection against unauthorized access.

Users can share large files without breaking them down into smaller parts, and enterprises can collaborate across teams via NFS, thus enhancing productivity.

High scalability via data integration is a key benefit of NFS. The protocol can integrate local data with data from remote locations. Enterprises can, therefore, optimize their data centers and minimize costs by consolidating storage.

NFS provides speedy access to data by minimizing latency across wide area networks (WANs).

Like iSCSI, NFS is also suitable for disaster recovery and is used by organizations during disaster recovery planning. In case of a disaster, personnel can leverage NFS to create a virtualized remote copy of sensitive data.

Finally, NFS is secure and suitable for thwarting unauthorized access to data. It is also conducive to auditing and monitoring network activity remotely.

See More: What Is Network Topology? Definition, Types With Diagrams, and Selection Best Practices for 2022

In the realm of network storage, iSCSI and NFS are two well-known protocols.

iSCSI shines in block-based workloads, providing optimal performance for storage area networks (SANs), virtualization, and database applications, particularly in Windows and VMware environments. On the other hand, NFS excels in file-based workloads, offering high throughput and low latency, making it ideal for file-sharing and backup applications, particularly in Linux and UNIX environments.

While iSCSI boasts its own security features, NFS relies on the security mechanisms of the underlying network and file system. NFS scales easily by adding more servers and file systems, whereas iSCSI scales by adding more targets and logical unit numbers (LUNs). However, both protocols may encounter challenges when managing many connections, configurations, or devices.

NFS and iSCSI continue to evolve to meet the storage requirements of the modern world. They are integrating with cloud-based storage services, embracing software-defined storage solutions, and providing persistent storage for containerized applications, enhancing portability, performance, and scalability.

Despite their strengths, NFS and iSCSI face challenges in the network storage landscape. Compatibility issues, complex architectures, and competing protocols like SMB, CIFS, FCoE, NVMe-oF, and S3 can introduce interoperability problems, configuration errors, performance degradation, operational overhead, and security vulnerabilities.

Understanding the nuances of these two protocols and carefully assessing storage requirements will help users make an informed decision to ensure efficient and reliable network storage implementation.

Did this article help you understand the workings of iSCSI and NFS? Share your feedback on Facebook, X, or LinkedIn!

Image Source: Shutterstock

See the original post here:
iSCSI vs. NFS: 5 Key Comparisons | Spiceworks - Spiceworks News and Insights

On-Pin International, On-Pin Analytics: Introduce Verifeye – The First Call

How well do golf course owners and operators really know and understand their customers, and their customers' playing and buying trends? For On-Pin International and On-Pin Analytics, a leader in technology development for the global golf industry, the answer lies with the launch of Verifeye, a patented system of passively tracking individuals as they move about the course.

Utilizing portable Radio-Frequency Identification (RFID) tags, Verifeye provides golf course owners and operators with the unprecedented ability to collect and analyze live pace of play, individual player habits, actual course utilization, and detailed historical data. Verifeye has the ability to integrate efficiently with a facility's point-of-sale and tee sheet to deliver a sophisticated level of data collection that can be essential in helping determine the success of the operation of any type of golf facility.

"Through real-time data that encompasses all aspects of a given operation, Verifeye can enhance both the bottom line at facilities and the experience for golfers who can be rewarded with affinity programs and other ways to encourage them to spend more time using the entire facility," said Ian Glasson, who founded On-Pin Analytics in 1998, as he recognized an opportunity for technology to improve operations at golf facilities.

"We constantly are innovating," he continued, "by inventing technologies to empower golf course operations. The effective use of advanced technology such as Verifeye is critical to the overall business and playing experiences in golf today."

Member and guest RFID tags are provided -- to be kept on golf bags, in wallets, etc.-- to golfers, with RFID reader stations installed around heavily trafficked golf cart path locations on each course. Verifeye data is collected and displayed in real-time on the On-Pin dashboard and uploaded to On-Pin Cloud servers.

Golf course rangers/marshals then effectively use the data to keep play moving and for detailed, visual reports.
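One way the reader-station data described above could be turned into a pace-of-play metric is sketched below. To be clear, this is a hypothetical illustration, not On-Pin's actual Verifeye algorithm; the function name and data shape are invented for the example.

```python
from datetime import datetime

# Hypothetical sketch: derive per-hole pace of play from RFID reader timestamps.
# NOT On-Pin's actual Verifeye algorithm -- just one plausible way pings from
# reader stations along the cart paths could become pace metrics.
def pace_per_hole(pings: list[tuple[int, str]]) -> dict[int, float]:
    """pings: (hole_number, ISO timestamp) recorded as a tag passes each station.
    Returns minutes elapsed between consecutive hole stations."""
    times = {hole: datetime.fromisoformat(ts) for hole, ts in pings}
    holes = sorted(times)
    return {
        holes[i + 1]: (times[holes[i + 1]] - times[holes[i]]).total_seconds() / 60
        for i in range(len(holes) - 1)
    }

pings = [(1, "2023-10-21T08:00:00"), (2, "2023-10-21T08:14:00"),
         (3, "2023-10-21T08:31:00")]
print(pace_per_hole(pings))  # minutes taken to reach hole 2 and hole 3
```

A dashboard built on data like this could flag groups whose per-hole times drift above the course's target pace, which is the kind of live view the article says rangers and marshals use to keep play moving.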

Glasson said he is excited by results gleaned through initial product development and testing of Verifeye, with case studies that have demonstrated:

a 10-12 percent increase in golf facility revenue

At The Woodford Club in Versailles, Kentucky, owner Randy Clay added Verifeye last fall. He was so impressed with the value of the dynamic system at his 18-hole, semi-private facility that he created On-Pin International LLC, in conjunction with Glasson Investments Pty. LTD, and nationally acclaimed PGA Professional Bob Baldassari, who will lead the business development to help make Verifeye's array of technologies available to owners and operators at golf facilities throughout North America.

"Verifeye is an emerging technology system that also can complement other pre-existing technologies that a facility may have invested in," Clay said. "While the technology in Verifeye is highly advanced, the goal for any facility that uses Verifeye is simple -- to more completely understand your golfers and determine ways to get them to play more golf and spend more time and money with you. That can happen with greater retention through the predictive analysis that is unique to Verifeye."

For more information on On-Pin International and the new Verifeye system, call On-Pin International at 859-682-6001, and visit http://www.on-pininternational.com.

Go here to see the original:
On-Pin International, On-Pin Analytics: Introduce Verifeye - The First Call

These Refurbished Blink Cameras Are up to 66% Off – Lifehacker

Keeping an eye inside and outside your home is a lot more affordable with Woot's fall sale on Blink devices. They're selling refurbished Blink cameras for up to 66% off, starting at $15, until Oct. 27 at 1 a.m. ET or until supplies last.

Keep in mind that all these cameras are refurbished, so they might come with signs of wear, but they were all serviced and tested. Woot only ships to the 48 contiguous states in the U.S. If you have Amazon Prime, you get free shipping; otherwise, it'll be $6 to ship. Woot products come with an assigned manufacturer warranty or Woot's 90-day warranty program in case you have a problem with a product.

The Blink Mini is currently $15 (normally $35). It's a wired, plug-in camera that is not weather-resistant, so it's meant for indoor use. It has motion activation alerts, a live-view mode, shoots in 1080p, has two-way audio and night vision, and works with Alexa. This is the first and only generation of the Blink Mini, released in 2020. You can read PCMag's full review of the Blink Mini here.

If you would rather have a wireless indoor security camera, consider the 3rd generation Blink Indoor camera. Currently, you can get two Blink Indoor cameras for $59.99 (normally $139.99). The Blink Indoor uses two AA lithium batteries that last two years, so you don't have to worry about changing batteries too often. As the name implies, they're not weather-resistant and are only meant to be used indoors. Like the Blink Mini, you get a 110-degree field of view, motion activation alerts, and a live-view mode; it shoots in 1080p, has two-way audio and night vision, and works with Alexa. The main difference from the Blink Mini is that you can put this camera anywhere in your home without having to worry about where to plug it in.

For those looking for outdoor cameras, the 3rd generation Blink Outdoor camera is your only choice. Currently, you can get two Blink Outdoor cameras for $59.99 (normally $179.99). The only difference between this one and the Blink Indoor camera is that it's weather-resistant. This is already one of the best and most affordable outdoor cameras. And right now, you can get two for less than the price of a new one. You can read PCMag's full review here.

If you want to store the videos from multiple Blink cameras locally, or see them in the Blink app or on your computer, you'll need a Sync Module. The Sync Module 2 ($35) is a hub that connects Blink cameras to Blink's cloud servers, allowing you to store video locally, manually record, share live video footage (but no live-view recording), and control up to 10 Blink devices with the Blink Home Monitor app. Keep in mind that the Blink system is not compatible with Google's or Apple's smart home systems. You will also need a Blink subscription to use all of its features:

You have two options for a Blink subscription plan. The Blink Basic Plan is $30 per year; the Blink Plus Plan runs you $100 a year and adds more features, the most important of which is support for an unlimited number of devices on your account.
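If the Basic plan is billed per camera while Plus covers unlimited cameras (an assumption consistent with the "unlimited number of devices" distinction above, but not spelled out in this article), the break-even point is simple arithmetic:

```python
BASIC_PER_DEVICE = 30   # USD/year; assumed to be billed per camera
PLUS_UNLIMITED = 100    # USD/year; covers unlimited cameras

def cheaper_plan(num_cameras: int) -> str:
    """Pick the cheaper plan for a given number of cameras."""
    basic_total = BASIC_PER_DEVICE * num_cameras
    return "Basic" if basic_total < PLUS_UNLIMITED else "Plus"

for n in range(1, 6):
    print(n, cheaper_plan(n))
```

Under that assumption, Basic wins for up to three cameras ($90/year), and Plus becomes the better deal at four or more.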

Read the rest here:
These Refurbished Blink Cameras Are up to 66% Off - Lifehacker

Check out our latest solutions for AI, cloud, telco, and more at the … – ASUS Edge Up

Since its inception in 2011, the Open Compute Project (OCP) has challenged us to develop hardware that makes the benefits of open source and open collaboration real and accessible for our data center customers. For the 2023 OCP Global Summit, we're ready to share our latest efforts for green and sustainable modular data center operation, as well as our all-new solutions for enterprise-grade generative AI.

Our theme for the OCP Global Summit is Solutions Beyond Limits, and we're eager to show you how ASUS products empower AI, cloud, telco, and more. Read on for a sneak peek at what we have planned for OCP 2023.

ASUS servers combined with Modular Data Center Solution products from partner Rakworx address high-capacity demands while minimizing infrastructure cooling requirements, making them an ideal choice for simplified small and medium deployments of edge and on-premises applications in diverse environments. This exceptional versatility makes the combination the go-to solution for scenarios that demand the utmost efficiency and reliability.

With ASUS server systems, you can choose from a wide range of versatile, resilient, and scalable rack units designed for diverse workloads in data-center environments of all sizes. At the OCP Global Summit 2023, well be showcasing some of our latest designs:

TWSC, a subsidiary of ASUS, provides AI Foundry Service (AFS), featuring a world-leading AI supercomputing cloud platform that helps customers establish tailored generative AI (GAI) solutions. AFS is a one-stop solution for large language models (LLMs), from fine-tuning to deployment. Users can deploy fine-tuned models on Taiwan Computing Cloud (TWCC) or in on-premises data centers with AFS appliances. Enterprises are thus able to deploy their own trustworthy AI on an ASUS GPU server as an integrated appliance in an environment they can fully manage. AFS empowers enterprises to implement GAI solutions effectively and efficiently without starting from scratch, potentially slashing the cost of GAI IT investment and setup by millions of US dollars.

As an infrastructure solution partner, ASUS offers a complete portfolio of products ready to help enterprises increase their agility through digital transformation, boost employee engagement with flexible digital tools, and protect their assets through enterprise-grade security and comprehensive system protection.

We hope you'll give us the opportunity to demonstrate our latest innovations and optimized designs for the open compute community. Visit us at booth A6 at the OCP Global Summit between October 17 and October 19, 2023 at the San Jose McEnery Convention Center. Click here for information on registering for the event. We look forward to meeting you.

To schedule a meeting or get more information about our demo products, contact your local ASUS representative or email us at ESBU_Sales@asus.com.

See the original post:
Check out our latest solutions for AI, cloud, telco, and more at the ... - ASUS Edge Up

Ten key cloud industry trends in 2023 – IT World Canada

The cloud computing industry is on the cusp of a major transformation, driven by artificial intelligence (AI), multicloud adoption, and industry-specific cloud solutions. These trends are shaping a more complex cloud landscape, but they also offer new opportunities for businesses to innovate and grow.

According to a report by Forrester Research, the AI boom is creating chaos as every cloud provider promotes new services. This abundance of options is challenging enterprises to develop coherent AI strategies. However, embedded AI is also making operations smarter, with technologies like observability, predictive analytics, conversational interfaces and automation. Both cloud vendors and enterprises are infusing AI across monitoring, management and support functions.

WebAssembly (WASM) is emerging as the future of cloud-native applications. WASM is a low-level bytecode that can be executed on any platform, including cloud servers, edge devices, and web browsers. This makes it ideal for developing portable and scalable applications.

With the release of a new server environment standard, WASM is now positioned as a universal application platform. This means that developers can build WASM applications once and deploy them anywhere, without having to worry about underlying hardware or operating system differences.

Also, governments around the world are increasingly enacting digital sovereignty regulations that restrict where data can be stored and processed. This is complicating cloud strategies for enterprises, which must now re-evaluate their supply chain reliance and ensure that their cloud providers comply with all applicable regulations.

In addition, as more workloads shift to the cloud, hardware vendors are pivoting to subscription models to offset declining product sales. This is a win-win for both vendors and customers, as it allows vendors to generate recurring revenue and customers to avoid upfront capital costs.

Cloud cost optimization, also known as FinOps, is seeing renewed interest as businesses seek to rein in their cloud spending. FinOps is a discipline that combines financial management with cloud computing to help businesses optimize their cloud costs.

Multicloud networking also remains an underserved issue, but the convergence of multicloud with zero trust edge (ZTE) architectures offers new hope. ZTE is a security-focused networking approach that can help businesses connect their data centers, clouds, and remote locations securely.

The cloud security ecosystem is advancing to meet the needs of cloud-native enterprises. Cloud vendors and security providers are offering new and innovative solutions to protect data, applications, and workloads in the cloud. Industry-specific clouds are gaining ground as businesses seek specialized solutions to address their unique vertical problems. These clouds are typically led by global systems integrators (GSIs) and offer a range of features and services tailored to specific industries.

Managed service providers (MSPs) are shifting their focus to advisory-led transformations as businesses move to the cloud. MSPs are now helping businesses to develop and implement cloud strategies, as well as to manage their cloud environments once they are in place.

The cloud computing industry is undergoing a major transformation, driven by AI, multicloud, and industry-specific cloud solutions. These trends are creating a more complex cloud landscape, but they also offer new opportunities for businesses to innovate and grow.

The sources for this piece include an article in DataCenterKnowledge.

View original post here:
Ten key cloud industry trends in 2023 - IT World Canada

A proactive approach to cybersecurity in Asean – The Manila Times

THE acceleration in the adoption of cloud technology has revolutionized the business landscape, and, in doing so, significantly altered the cybersecurity ecosystem.

The vast potential of cloud technology, such as its scalability, adaptability and cost-effectiveness, has not gone unnoticed by nefarious entities seeking opportunities for exploitation. As businesses across Asean continue their transition to the cloud, they are increasingly confronted with escalating incidents of data breaches, ransomware attacks, and insider threats.

Therefore, it's vital for organizations to devise and implement a robust cloud-specific incident response plan. Such a plan could help minimize the impact of security incidents, accelerate recovery time, and ensure optimal data protection in this rapidly evolving digital space.

Cloud Incident Response (IR) today needs to grapple with a radically different set of challenges, including data volume, accessibility, and the speed at which threats could multiply within cloud architectures. The interplay of various components, such as virtualization, storage, workloads, and cloud management software, intensifies the complexity of securing cloud environments.

That being said, cloud IR cannot be done in isolation from the company's overall incident response activities and business continuity plans. When possible, cloud security tools should plug into the same security operations center (SOC), security orchestration, automation, and response (SOAR), and communication tools already being used to secure other parts of the company. Using the same infrastructure ensures that suspicious and threatening cloud activities receive an immediate and appropriate response.

Creating an effective response plan involves understanding and managing the unique cloud platforms, being fully aware of data storage and access, and adeptly handling the dynamic nature of the cloud. Specifically:

Managing the cloud platform. The administrative console, the control center of each cloud platform, facilitates the creation of new identities, service deployment, updates and configurations impacting all cloud-hosted assets. This becomes an attractive target for threat actors, considering it offers direct access to the cloud infrastructure and user identities.

Understanding data in the cloud. The cloud hosts data, apps and components on external servers, making it crucial to maintain correct configurations and timely updates. This is vital not just to prevent external threats, but also to manage internal vulnerabilities, such as misconfigurations, given the inherent complexity and size of cloud networks.

Handling a dynamic cloud. The cloud is a dynamic space requiring security teams to remain agile and maintain visibility across all services and apps. A lack of familiarity with the environment could lead to an overwhelming volume of data, potentially slowing down threat-hunting, triage, and incident investigation processes.

Cloud computing presents new security challenges requiring a more robust and nuanced incident response plan, focused on cloud-specific risks. This includes identifying, analyzing and responding to security incidents within a cloud environment to maintain data confidentiality, integrity and availability. Such a plan could shield businesses from financial loss, protect their reputation, and maintain regulatory compliance.

Establishing a well-defined, routinely tested, and updated plan could effectively reduce the impact of security incidents and foster swift recovery after an attack. It should comprise procedures for responding to various incidents, like data breaches, DDoS attacks, and malware infections, including steps for incident containment, investigation and recovery using tools that are already being deployed by the company.

Mastering cloud IR begins with a thorough risk assessment, identifying potential threats, vulnerabilities and risks to the cloud environment. Security teams must thoroughly understand their cloud infrastructure to effectively defend it, considering factors like data sensitivity, legal requirements, access controls, encryption, network security and third-party risks.

Data and tool availability is a key factor in accelerating a security team's progress during an active security event. Deploying real-time monitoring of cloud resources, network traffic analysis, user activity tracking, intrusion detection systems and automated alerts could ensure swift incident identification and response.
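As an illustration only (not any particular vendor's API), a minimal sketch of an automated-alert rule over a stream of user-activity events might look like the following; the event fields, the `console_login` action name, and the allowlisted network prefix are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str
    source_ip: str

# Hypothetical rule: flag console logins originating outside an
# allowlisted internal network range.
ALLOWED_PREFIX = "10.0."

def suspicious(events):
    """Yield events that should trigger an automated alert."""
    for e in events:
        if e.action == "console_login" and not e.source_ip.startswith(ALLOWED_PREFIX):
            yield e

events = [
    Event("alice", "console_login", "10.0.1.5"),      # internal: allowed
    Event("mallory", "console_login", "203.0.113.9"),  # external: alert
    Event("bob", "list_buckets", "198.51.100.2"),      # not a login: ignored
]
alerts = list(suspicious(events))
print([e.user for e in alerts])  # → ['mallory']
```

Real deployments would feed such rules from cloud audit logs and wire the output into the SOAR and alerting pipeline discussed above, but the shape of the logic (match a condition, emit an alert) is the same.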

Cloud IR demands efficiency and effective communication. Having pre-set processes and playbooks, defining roles and responsibilities, and maintaining clear communication between team members are essential elements of a Cloud IR strategy. Regular drills and simulations to test the IR plan and improve upon it are vital for optimal incident response.

In conclusion, as businesses in the Asean region increasingly embrace cloud technologies, the need for a well-defined cloud IR plan has never been more crucial. By efficiently identifying signs of cloud-based threats, mitigating breaches and limiting or eliminating damage, organizations could secure their cloud infrastructures, enhance their response processes, and reduce time to resolution.

Evan Davidson is the vice president for Asia Pacific and Japan at SentinelOne, an American cybersecurity company that uses AI-based protection for enterprises.

View post:
A proactive approach to cybersecurity in Asean - The Manila Times

Axiado and Wiwynn Forge Alliance to Pioneer Cutting-Edge OCP Recognized Servers – Yahoo Finance

Initial Collaboration Project Features Wiwynn's OCP Accepted Yosemite V3-Based Server Platform Powered by Axiado's Trusted Control/Compute Unit

SAN JOSE, Calif. and TAIPEI, Taiwan, Oct. 10, 2023 /PRNewswire/ -- 2023 OCP Global Summit -- Axiado Corporation, an AI-enhanced hardware cybersecurity company, and Wiwynn, a leading cloud IT infrastructure provider that designs and manufactures advanced server and storage solutions for data centers, today unveiled their partnership to revolutionize the landscape of server technology. This collaboration opens the door to a new era in data center security by developing servers powered by Axiado's innovative Trusted Control/Compute Units (TCUs).


"Platforms such as Wiwynn's OCP Accepted Yosemite V3, a high-density, multi-sled system, provide a launching pad for players that seek to innovate and leverage the momentum behind highly modular systems that are essential to meet changing workload requirements," said Bijan Nowroozi, CTO of OCP. "Wiwynn and Axiado jointly push the boundary with new features in platform security based on an approved OCP specification."

Leveraging the Open Compute Project's (OCP) open-source multi-node "Yosemite V3" server specification, the collaboration aims to develop a groundbreaking solution that combines the strengths of both companies. The forthcoming TCU-based servers will not only leverage the capabilities of the preexisting service platform but also contribute to the seamless and efficient management of data centers with its integrated platform root of trust (PRoT) and hardware-anchored, AI-driven platform security. The partnership signifies a convergence of security and performance to address the evolving needs of the cloud service provider (CSP) industry.

Axiado's TCUs incorporate a blend of AI, data collection, and software within a compact, power-efficient system-on-chip (SoC). This single-chip solution includes real-time and proactive AI capabilities for preemptive threat detection and comprehensive protection through a dedicated coprocessor. It also seamlessly integrates the root of trust (RoT), baseboard management controller (BMC), trusted platform module (TPM), and complex programmable logic device (CPLD) functions.


"The collaboration between Wiwynn and Axiado signifies a critical leap forward for the cloud service provider and enterprise markets," said Steven Lu, Executive Vice President at Wiwynn. "This shift toward modular systems aligns perfectly with the industry's trajectory and reinforces Wiwynn's position as a Tier 1 player."

"The integration of Axiado's TCUs into Wiwynn's OCP-recognized servers opens doors to an architecture based upon OCP-approved specifications, enabled by hardware-anchored, AI-driven solutions," said Gopi Sirineni, President and CEO at Axiado. "By harnessing the capabilities of Axiado's TCUs, we're poised to streamline the deployment of next-generation systems crucial for the future of data centers."

As Wiwynn continues to embrace transformative solutions like Axiado's TCUs, the company's presence within the CSP market will be solidified, while concurrently expanding Axiado's reach across a global customer base.

OCP Summit 2023: Learn more about Axiado's and Wiwynn's joint work at the OCP Summit 2023 on October 17-19, where the two companies will display live demos of platform security innovations. Engage with experts, gain insights into the future of data center security and performance, and explore how this collaboration is shaping the industry's trajectory toward a more secure and efficient computing landscape.

About Axiado

Axiado is a cybersecurity semiconductor company deploying a novel, AI-driven approach to platform security against ransomware, supply chain, side-channel, and other cyberattacks in the growing ecosystem of cloud data centers, 5G networks, and other disaggregated compute networks. The company is developing a new class of processor called the trusted control/compute unit (TCU) that redefines security from the ground up: its hardware-anchored, AI-driven security technologies include a Secure Vault root-of-trust/cryptography core and a per-platform Secure AI preemptive threat detection engine. Axiado is a San Jose-based company with a mission to protect the users of everyday technologies from digital threats. For more information, go to axiado.com or follow us on LinkedIn.

About Wiwynn

Wiwynn is an innovative cloud IT infrastructure provider of high-quality computing and storage products, plus rack solutions for leading data centers. We are committed to the vision of "unleash the power of digitalization; ignite the innovation of sustainability." The company aggressively invests in next-generation technologies to provide the best total cost of ownership (TCO) and workload- and energy-optimized IT solutions from cloud to edge. For more information, please visit the Wiwynn website, Facebook, and LinkedIn, or contact productinfo@wiwynn.com.


View original content to download multimedia:https://www.prnewswire.com/news-releases/axiado-and-wiwynn-forge-alliance-to-pioneer-cutting-edge-ocp-recognized-servers-301951646.html

SOURCE Axiado

See the original post here:
Axiado and Wiwynn Forge Alliance to Pioneer Cutting-Edge OCP Recognized Servers - Yahoo Finance

Nutanix, Cisco say buyers will get the best of them both – The Register

Cisco has become Nutanix's closest hardware partner, meaning integration of the hyperconverged upstart's stack with Cisco's UCS servers will be stronger, sooner, as their partnership gathers steam.

The two announced a "strategic partnership" in August, when Cisco revealed it would resell Nutanix's stack on its UCS servers. A few weeks later, the real reason for the partnership became plain: Cisco discontinued its Hyperflex hyperconverged stack and anointed Nutanix as its preferred vendor of such software.

The Register chatted to reps of the happy couple about their relationship this week. Nutanix's senior veep of product management, Thomas Cornely, told us that his company and Cisco have a "much deeper tech partnership in terms of early engagement" compared to other hardware makers.

That matters because while Nutanix started life bundling its stack with hardware, it backed away from that position years ago and now partners with server makers. That arrangement means it needs to tune its stack to work with each vendor's hardware. Earlier access to Cisco kit and closer work should make UCS an appealing prospect.

So might planned integration between Nutanix's stack and Cisco's Intersight cloud manager, a work in progress that Jeremy Foster, Cisco's senior veep and general manager for networking and compute, said could mean "we can bring a network in a way that has never been integrated before."

The two haven't had time for the fruits of those labors to ripen, but Cornely and Foster told The Register they believe the team-up is already functional, as each vendor has partners familiar with both parties' technologies.

Those partners, they opined, are excited to sell Nutanix on UCS and capable of doing so. Other partners will be trained in due course.

Foster said in future, Cisco hopes the tie-up will help the networking giant to sell UCS servers for more storage workloads. Cisco has dabbled in storage over the years, but stopped well short of producing dedicated hardware. The UCS S-Series storage servers, which can house up to 56 disk drives, are probably the closest thing to a storage array Cisco has ever built.

For now, Cisco and Nutanix's arrangement focuses on the X-Series blade server. But Cornely told The Register that Nutanix certainly likes the look of the S-Series and sees it as a good fit for the vendor's recent disaggregation of compute and storage nodes.

Foster forecast the evolving Ciscanix relationship will soon be championed by global system integrators. He also thinks managed services providers are a fine target for the alliance, with Nutanix-as-a-service or Nutanix-powered clouds in prospect.

View post:
Nutanix, Cisco say buyers will get the best of them both - The Register