Is Cloud DX Poised To Innovate Its Way Into The $250 Billion Telehealth Opportunity? – Benzinga

With the outbreak of the COVID-19 pandemic in 2020, telehealth usage reportedly soared to unprecedented levels, as both consumers and health care providers sought ways to safely access and deliver healthcare. Regulatory changes enacted by the government during this period also played a role by enabling increased access to telehealth and greater ease of reimbursements.

Taking stock of the potential, McKinsey estimated in May 2020 that up to $250 billion of U.S. healthcare dollars could potentially be shifted to telehealth care.

Investment in virtual health has continued to accelerate post-pandemic, per Rock Health's 2021 digital healthcare funding report. Venture capital investment in the digital healthcare sector totaled $14.7 billion in the first half of 2021, which is more than all of the investment in 2020 ($14.6 billion) and nearly twice the investment in 2019 ($7.7 billion).

In addition, the total revenue of the top 60 virtual healthcare players also reportedly increased to $5.5 billion in 2020, from around $3 billion the year before.

With the continued interest in telehealth, a favorable regulatory environment and strong investment in this sector, telehealth is expected to remain a robust option for healthcare in the future.

Companies like Teladoc Health Inc. TDOC, GoodRx Holdings Inc. GDRX and Dialogue Health Technologies Inc. CARE aim to disrupt the healthcare industry and take advantage of the immense potential opportunity presented by the telehealth segment.

But despite the relevance demonstrated by telehealth services during the pandemic, it is believed that there is a need for innovative product designs and digital solutions with seamless capabilities to meet consumer preferences.

Cloud DX Inc. CDX, a virtual care platform provider headquartered in Kitchener, Ontario, with offices in Brooklyn, New York, claims it has carved a unique niche for itself in the highly regulated digital healthcare industry by providing sophisticated hardware and software solutions to advanced healthcare providers for remote patient monitoring.

The company asserts that innovation is the cornerstone of its operation, one which sets it apart from other players in the telehealth space. Speaking with Benzinga, Cloud DX CEO and founder Robert Kaul said that Cloud DX could best be defined as a software platform that, through its innovative technologies, enables the proprietary or third-party hardware devices used by patients to make their virtual care experience better. Dedicated to innovation, the company also makes its own hardware in cases where its primary focus is to collect more data for its system to use to produce better outcomes for patients. Typically, at-home medical tools or hardware do not provide clinical-level data, which is often why physicians prefer Cloud DX's proprietary devices.

The company boasts core competencies in biomedical hardware engineering, cloud-based medical device architecture, and algorithm-based result generation. The company claims it is pushing the boundaries of medical device technology with smart sensors, ease of use, cloud diagnostics, artificial intelligence (AI), and state-of-the-art design.

For example, its Pulsewave, a unique pulse acquisition device, records up to 4,000 data points from a patient's radial artery pulse and securely transmits the raw pulse signal to cloud diagnostics servers, which display nearly instant results on heart rate, blood pressure, pulse variability, average breathing rate and a proprietary total anomaly score that can have significant potential for identifying cardiac diseases, according to the company.

Its smartphone app AcuScreen is capable of detecting numerous respiratory illnesses, including tuberculosis, from the sound of a person coughing, and its VITALITI continuous vital sign monitor (currently undergoing clinical trial evaluation), a highly advanced wearable, will measure ECG, heart rate, oxygen saturation, respiration, core body temperature, blood pressure, movement, steps and posture.

According to Cloud DX, these competencies coupled with its positive regulatory approval experience and internationally ISO-certified quality management enable it to create medically accurate, consumer-vital platforms that position it to be a front runner in clinical-grade data collection.

An added advantage of its Connected Health platform, according to the company, is the ability to integrate with many Electronic Medical Record (EMR) systems, improving efficiency and return on investment (ROI).

Cloud DX maintains that, through innovation, collaboration and integration, its platform has the ability to unify the clinical and home monitoring experience, delivering futuristic, connected healthcare solutions.

In April this year, Cloud DX announced its partnership with Sheridan College on a project involving the company's eXtended Reality division, Cloud XR, to further develop its Clinic of the Future, an augmented reality (AR) platform.

With exciting new medical metaverse products in the pipeline, a strong patent portfolio, solid partners like Medtronic Plc. MDT, and a sales strategy that's driving rapid adoption among global healthcare providers, Cloud DX believes it is well positioned for success in the highly competitive digital healthcare arena.

This post contains sponsored advertising content. This content is for informational purposes only and is not intended to be investing advice.

GoTab Unveils EasyTab, A New Feature that Helps Servers and Bartenders Seamlessly Bridge Mobile Order and Pay-at-Table with Traditional Service – PR…

Restaurant commerce platform makes tab management - from order to close - fast, easy and hassle-free

ARLINGTON, Va., July 18, 2022 /PRNewswire/ -- Already at the forefront of contactless ordering and payment technology, restaurant commerce platform GoTab announced today that it is launching a distinctive new feature: EasyTab. This feature, which leverages guest mobile devices, bridges traditional service and empowers guests by making opening, closing, reordering, and transferring tabs from bar to table fast, easy and hassle-free.

Designed to make the dine-in ordering experience even more convenient for servers and guests, EasyTab is one more feature that makes GoTab the most flexible tab-based ordering solution available on the market. Bridging traditional service models with guest-led ordering, EasyTab helps GoTab operators introduce contactless ordering to their guests, guiding them over the hurdle of using their mobile device to place their food and drink orders. EasyTab is changing the way technology supports restaurant operations, creating more positive, profitable experiences, specifically in restaurants with multiple dining areas and those that cater to large groups or events.

"Operators are struggling with the double-edged sword of an influx of guests that are eager to return to in-venue dining and entertainment, while having to operate with less staff. At the same time, they can be reluctant to introduce mobile order and pay due to the resistance of guests and staff to learning new operating models. With EasyTab, we effectively solve the problem. The result is an elegant transition from traditional ordering through a server, to mobile ordering and payment, and back again, depending on the guest's individual preference," says Tim McLaughlin, GoTab CEO and Co-Founder.

How EasyTab Works

Guests can open a tab the traditional way through a server or bartender. When prompted to keep the tab open, the server (or guest) simply dips or swipes the card provided for payment and enters the guest's mobile number. A link to the open tab is provided via text, and the guest accesses the tab on their device. Now they can order, re-order, and pay throughout their visit, from anywhere in the venue. And because their payment method is automatically applied to their open tab, they can split or settle their check without having to wait for a physical check or head to the counter to pay. Click here to learn more: https://vimeo.com/727438052/1c24be5764
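To make that flow concrete, here is a purely hypothetical sketch of an EasyTab-style tab object; the class, field names, pricing, and SMS step are illustrative assumptions, not GoTab's actual API or data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tab:
    """Illustrative EasyTab-style tab; every name and value here is hypothetical."""
    guest_phone: str
    card_token: str                      # card dipped/swiped at the bar and kept for settlement
    items: List[str] = field(default_factory=list)
    location: str = "bar"
    is_open: bool = True

    def text_link(self) -> str:
        # Stand-in for the SMS that gives the guest a link to their open tab.
        return f"Text to {self.guest_phone}: your tab is at https://tabs.example/abc123"

    def reorder(self, item: str, location: str) -> None:
        # The tab follows the guest anywhere in the venue; no closing and reopening.
        self.location = location
        self.items.append(item)

    def close(self, split_between: int = 1, price_each: float = 12.0) -> float:
        # Settle against the stored card; an even split is shown for illustration only.
        self.is_open = False
        return round(price_each * len(self.items) / split_between, 2)

tab = Tab(guest_phone="+1-555-0123", card_token="tok_demo")
tab.reorder("IPA", location="bar")
print(tab.text_link())
tab.reorder("Nachos", location="patio")
print(tab.close(split_between=2))        # each guest pays half when closing from their phone
```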

Works Seamlessly with the GoTab All-in-One POS

Unlike other POS systems that make guests close their tab when they move from the bar to the table, GoTab tabs stay open no matter where guests venture. Where on-premises dining used to rule the scene, restaurants have adapted to business models that also include online ordering and third-party marketplace platforms, increasing their revenue streams more than ever before. The EasyTab feature from the GoTab POS takes the guesswork out of managing complex omni-channel front-of-house operations while integrating back-of-house operations.

GoTab's all-in-one, cloud-based POS makes the systems restaurants use work more effectively, and reduces the top pain points restaurants face, from staffing shortages to managing third-party platforms and time spent on multiple transactions. The company's customizable, scalable POS moves restaurants toward a more frictionless experience. The POS can run on nearly any existing hardware (iOS, Android, or PC), removing cost and time barriers restaurants face when looking to switch to innovative, customizable solutions. From quick-service to mid-size and fine dining restaurants, GoTab's POS anticipates the needs of restaurant staff and guests, so restaurants can focus more time on anticipating the needs of their guests, leading to enhanced guest experience and higher profitability.

EasyTab + All-in-One POS supports Back of House and Payment Processing

Using GoTab's Kitchen Display System and integrated printers, servers always know where to locate guests and deliver the food and beverage orders. Meanwhile, guests are free to move about, and reorder at their convenience without ever having to flag a server, close or reopen their tab. When friends join the party, guests can share their tab using GoTab's native features for tab sharing (via text or QR code specific to their tab). When everyone is ready to leave, they can expedite the payment process by closing out and splitting the tab on their own on their mobile device. GoTab makes the payment process easy and seamless by clearly displaying all charges, fees and tip recommendations.

EasyTab is now available for all GoTab customers. Click here to learn more and request a demo.

About GoTab, Inc.

GoTab, Inc., a Restaurant Commerce Platform (RCP), is helping large- and mid-sized restaurants, breweries, bars, hotels and other venues run lean, profitable operations while making guests even more satisfied. It integrates with popular point-of-sale (POS) systems and allows patrons to order and pay through a server, order and pay directly from their own mobile phones, or blend the two experiences all on one tab, through its easy-to-use mobile POS, contactless ordering, payment features, and kitchen management systems (KMS). The guest never has to download a mobile app or create a password. Operators get flexible features that can be rapidly applied to access new revenue streams via dine-in, take-out and delivery, ghost kitchens, retail groceries, and more. Founded in 2016, GoTab processes over $250M in transactions per year, with operations across 35 U.S. states and growing. For more information, consult our media kit, request a demo here or learn more at https://gotab.io/en

Media Contact: Madison McGillicuddy, [emailprotected], (203) 268-8269

SOURCE GoTab

KAIST Shows Off DirectCXL Disaggregated Memory Prototype – The Next Platform

The hyperscalers and cloud builders are not the only ones having fun with the CXL protocol and its ability to create tiered, disaggregated, and composable main memory for systems. HPC centers are getting in on the action, too, and in this case, we are specifically talking about the Korea Advanced Institute of Science and Technology.

Researchers at KAIST's CAMELab have joined the ranks of Meta Platforms (Facebook), with its Transparent Page Placement protocol and Chameleon memory tracking, and Microsoft, with its zNUMA memory project, in creating actual hardware and software to do memory disaggregation and composition using the CXL 2.0 protocol atop the PCI-Express bus and a PCI-Express switching complex, in what amounts to a memory server that it calls DirectCXL. The DirectCXL proof of concept was talked about in a paper that was presented at the USENIX Annual Technical Conference last week, in a brochure that you can browse through here, and in a short video you can watch here. (This sure looks like startup potential to us.)

We expect to see many more such prototypes and POCs in the coming weeks and months, and it is exciting to see people experimenting with the possibilities of CXL memory pooling. Back in March, we reported on the research into CXL memory that Pacific Northwest National Laboratory and memory maker Micron Technology are undertaking to accelerate HPC and AI workloads, and Intel and Marvell are both keen on seeing CXL memory break open the memory hierarchy in systems and across clusters to drive up memory utilization and therefore drive down aggregate memory costs in systems. There is a lot of stranded memory out there, and Microsoft did a great job quantifying what we all know to be true instinctively with its zNUMA research, which was done in conjunction with Carnegie Mellon University. Facebook is working with the University of Michigan, as it often does on memory and storage research.

Given the HPC roots of KAIST, the researchers who put together the DirectCXL prototype focused on comparing the CXL memory pooling to direct memory access across systems using the Remote Direct Memory Access (RDMA) protocol. They used a pretty vintage Mellanox SwitchX FDR InfiniBand and ConnectX-3 interconnect running at 56 Gb/sec as a benchmark against the CXL effort, and the latencies did get lower for InfiniBand. But they have certainly stopped getting lower in the past several generations and the expectation is that PCI-Express latencies have the potential to go lower and, we think, even surpass RDMA over InfiniBand or Ethernet in the long run. The more protocol you can eliminate, the better.

RDMA, of course, is best known as the means by which InfiniBand networking originally got its legendary low latency, allowing machines to directly put data into each other's main memory over the network without going through operating system kernels and drivers. RDMA has been part of the InfiniBand protocol for so long that it was virtually synonymous with InfiniBand until the protocol was ported to Ethernet with the RDMA over Converged Ethernet (RoCE) protocol. Interesting fact: RDMA is actually based on work done in 1995 by researchers at Cornell University (including Werner Vogels, long-time chief technology officer at Amazon Web Services) and Thorsten von Eicken (best known to our readers as the founder and chief technology officer at RightScale) that predates the creation of InfiniBand by about four years.

Here is what the DirectCXL memory cluster looks like:

On the right-hand side, and shown in greater detail in the feature image at the top of this story, are four memory boards, which have FPGAs creating the PCI-Express links and running the CXL.memory protocol for load/store memory addressing between the memory server and hosts attached to it over PCI-Express links. In the middle of the system are four server hosts, and on the far left is a PCI-Express switch that links the four CXL memory servers to these hosts.

To put the DirectCXL memory to the test, KAIST employed Facebook's Deep Learning Recommendation Model (DLRM) for personalization on the server nodes, first using just RDMA over InfiniBand and then using the DirectCXL memory as extra capacity to store memory and share it over the PCI-Express bus. On this test, the CXL memory approach was quite a bit faster than RDMA, as you can see:

On this baby cluster, the tensor initialization phase of the DLRM application was 2.71X faster on the DirectCXL memory than using RDMA over the FDR InfiniBand interconnect, the inference phase where the recommender actually comes up with recommendations based on user profiles ran 2.83X faster, and the overall performance of the recommender from first to last was 3.32X faster.

This chart shows how local DRAM, DirectCXL, and RDMA over InfiniBand stack up, and the performance of CXL versus RDMA for various workloads:

Here's the neat bit about the KAIST work at CAMELab. No operating system currently supports CXL memory addressing (and by that we mean neither commercial-grade Linux nor Windows Server does), so KAIST created the DirectCXL software stack to allow hosts to reach out and directly address the remote CXL memory using load/store operations. There is no moving of data to the hosts for processing; data is processed from that remote location, just as would happen in a multi-socket system with the NUMA protocol. And there is a whole lot less complexity to this DirectCXL driver than Intel created with its Optane persistent memory.

"Direct access of CXL devices, which is a similar concept to the memory-mapped file management of the Persistent Memory Development Toolkit (PMDK)," the KAIST researchers write in the paper. "However, it is much simpler and more flexible for namespace management than PMDK. For example, PMDK's namespace is very much the same idea as NVMe namespace, managed by file systems or DAX with a fixed size. In contrast, our cxl-namespace is more similar to the conventional memory segment, which is directly exposed to the application without a file system employment."
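For readers who want to picture what "directly exposed to the application" means, here is a minimal user-space sketch, assuming the cxl-namespace shows up as a memory-mappable device node; the device path and mapping size are assumptions for illustration, not the interface the paper actually ships.

```python
import mmap
import os

# Assumption: the DirectCXL driver exposes a cxl-namespace as a device file that
# supports mmap, loosely analogous to a /dev/dax character device. The path below
# is hypothetical.
DEVICE_PATH = "/dev/cxl-namespace0"
MAP_SIZE = 4096

fd = os.open(DEVICE_PATH, os.O_RDWR)
try:
    region = mmap.mmap(fd, MAP_SIZE)   # remote CXL memory mapped into our address space
    # From here on, plain loads and stores reach the memory server: no file system,
    # no RDMA verbs, and no explicit copy of the data back into a local buffer.
    region[0:8] = (42).to_bytes(8, "little")
    print(int.from_bytes(region[0:8], "little"))
    region.close()
finally:
    os.close(fd)
```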

We are not sure what is happening with research papers these days, but people are cramming a lot of small charts across two columns, and it makes it tough to read. But this set of charts, which we have enlarged, shows some salient differences between the DirectCXL and RDMA approaches:

The top left chart is the interesting one as far as we are concerned. To read 64 bytes of data, RDMA needs to do two direct memory operations, which means it has twice the PCI-Express transfer and memory latency, and then the InfiniBand protocol takes up 2,129 cycles of a total of 2,705 processor cycles during the RDMA. The DirectCXL read of the 64 bytes of data takes only 328 cycles, and one reason it can do this is that the DirectCXL protocol converts load/store requests from the last level cache in the processor to CXL flits, while RDMA has to use the DMA protocol to read and write data to memory.
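As a quick sanity check on those figures, the arithmetic implied by the paragraph above works out as follows; the cycle counts are the ones reported for the 64-byte read.

```python
# Cycle counts for a 64-byte read, as reported for DirectCXL versus RDMA.
rdma_total = 2705          # total processor cycles for the RDMA read
rdma_protocol = 2129       # cycles spent in the InfiniBand protocol alone
directcxl_total = 328      # cycles for the same read over DirectCXL

print(f"Protocol share of the RDMA read: {rdma_protocol / rdma_total:.0%}")       # ~79%
print(f"DirectCXL advantage for this read: {rdma_total / directcxl_total:.1f}x")  # ~8.2x
```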

What Is Containerization and How It Can Help Your Applications Get to the Market Faster. – TechGenix

Pack it up, wrap it up, ship your container! (Source: Unsplash)

Containerization has been a significant development within the IT sphere since Docker, the first commercially successful containerization tool, was released in 2013. We always hear people in IT saying, "Well, it works on my machine!" But this doesn't guarantee the app will work on other people's computers. And this is where containerization comes into play. Everyone in IT is always talking about containers and containerization because it can save time, money, and effort.

But what does it mean exactly? What does it do and what are its benefits? I'll answer all these questions in this article. I'll also show you how it works, along with the different types of containerization and services. Let's get started!

Containerization is where an application runs in a container isolated from the hardware. This container houses a separate environment specific to the application inside it. Everything the application needs to run is encapsulated and isolated inside its container. For example, binaries, libraries, configuration files, and dependencies all live in containers. This means you don't need to manually configure them on your machine to run the application.

Additionally, you don't need to configure a container multiple times. When you run a containerized application on your local machine, it'll run as expected. This makes your applications easily portable. As a result, you won't have to worry about whether the apps will run on other people's machines.

But how exactly does this happen? Let's jump into how containerization technology works.

Let's break down a container's layers.

Think of a containerized application as the top layer of a multi-tier cake. Now, we'll work our way up from the bottom.

To sum up, each container is an executable software package. This package also runs on top of a host OS. A host may even support many containers concurrently.

Sometimes, you may need thousands of containers. For example, this happens in the case of a complex microservices architecture. Generally, this architecture uses numerous containerized application delivery controllers (ADCs). This configuration works so well because the containers run fewer resource-isolated processes. And you can't access these processes outside the container.

But why should you use containerization technology? Let's look at its benefits.

Containerization technology offers many benefits. One of them, which I mentioned earlier, is portability. You don't need to worry that the application won't run because the environment isn't the same. You also can deliver containerized apps easily to users in a virtual workspace. But let's take a look at 4 other benefits of containerization:

Containerization cuts down on overhead costs. For example, organizations can reduce some of their server and licensing costs. Containers enable greater server efficiency and cost-effectiveness. In other words, you don't need to buy and maintain physical hardware. Instead, you can run containers in the cloud or on VMs.

Containerization offers more agility to the software development life cycle. It also enables DevOps teams to quickly spin up and spin down containers. In turn, this increases developer productivity and efficiency.

Encapsulation is another benefit. How so? Suppose one container fails or gets infected with a virus. These problems won't spread to the kernel or to the other containers. That's because each container is encapsulated. You can simply delete that instance and create a new one.

You can also orchestrate containers with Kubernetes. It's possible to automate rollouts and rollbacks, orchestrate storage systems, perform load balancing, and restart any failing containers. Kubernetes is also compatible with other container tools. That's why it is so popular! But you also can use other container orchestration tools, such as OpenShift, Docker Swarm, and Rancher.
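As a hedged illustration of that self-healing and scaling behavior, here is a minimal Deployment created with the official Kubernetes Python client; the image, labels, replica count, and probe are placeholder assumptions, not a recommended production setup.

```python
from kubernetes import client, config

config.load_kube_config()   # assumes a local kubeconfig pointing at a reachable cluster

container = client.V1Container(
    name="web",
    image="nginx:1.25",      # placeholder image
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/", port=80),
        period_seconds=10,   # Kubernetes restarts the container if this probe keeps failing
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,          # the controller keeps three healthy copies running at all times
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```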

Clearly then, containerization technology offers many, many benefits. But how does it differ from virtualization? Let's find out!

Both VMs and containers provide an execution environment, but they're still different. To simplify matters, I've created this table below.

Now, you know the difference between virtualization and containerization. But where can you use containers? And why should you use them? Let's see.

Besides applications, you also can containerize some services. This can facilitate their delivery in their containers. Let's take a look at all the services that you can containerize.

This is a big one and perhaps the most used. Previously, software development used a monolithic code base. This meant including everything in one repo. But this method was hard to manage. Instead, it's more efficient to break services (features or any data sent via third-party APIs) down into separate parts. After that, we can inject them into the application. Generally, separate development teams own these microservices, and they communicate with the main app via APIs.

Databases can be containerized to provide applications with a dedicated database. As a result, you won't need to connect all apps to a monolithic database. This makes the connection to the database dedicated and easier to manage, all from within the container.

Web servers are quickly configurable and deployable with just a few commands on the CLI. It's also better for development to separate the server from the host. And you can achieve that with the container, which will encapsulate the server.

You also can run containers within a VM (virtual machine). This helps maximize security, talk to selected services, or max out the physical hardware. It's almost like a picture within a picture within another picture. Containerizing VMs lets you use the hardware to its maximum.

An application delivery controller manages an application's performance and security. If you containerize ADCs, layer 4-7 services become more available in DevOps environments. These services supply data storage, manipulation, and communication. This contributes to the overall efficiency of development.

Next, let's take a look at some of the top containerization providers.

If you want to use containerization technology, you'll need the help of a third-party solution. To this end, I've compiled this list of the top 4 vendors on the market. (Note: I classified these in alphabetical order, not from best to worst.)

ECR is an Amazon Web Services product that stores, manages, and deploys Docker images. These images can then be deployed to managed clusters of Amazon EC2 (compute) instances. Amazon ECR also hosts images with high availability and scalable architecture. In turn, your team can easily deploy containers for your applications.

The pricing for AWS tools varies based on the number of tools you use and the subscription rates. Consult AWS for actual prices.

Mesos is an open-source cluster manager. Like Kubernetes, it manages the running containers. You also can integrate your own monitoring systems to keep an eye on your containers. Additionally, Mesos excels at running numerous clusters in large environments.

Mesos is an open-source tool.

AKS is Microsoft Azure's container orchestration service based on the open-source Kubernetes system. If your organization is using Azure, then you definitely need to use AKS. In fact, it easily integrates Kubernetes into Azure. Your development team can use AKS to deploy, scale, and manage Docker containers and container-based applications across a cluster of container hosts.

Azure services are also subscription-based. Consult Azure for the latest pricing for these services.

This Google container orchestration tool creates a managed, production-ready environment to deploy your applications. It facilitates the deployment, updating, and management of apps and services. This also gives you quick app development and iteration.

Google Cloud Services are also subscription-based. Consult Google for updated pricing.

These are some of the top vendors for containerization. But did you notice we've been talking a lot about Kubernetes and Docker? Let's talk more about these tools and see why they go together like PB and J!

The Docker Engine is perhaps the most well-known container tool worldwide. It's the main component of container architecture. Additionally, Docker is a Linux kernel-based open-source tool. It's responsible for creating containers inside an OS.

Kubernetes usually works together with Docker. It specifically orchestrates all the Docker containers running in various nodes and clusters. This provides maximum application availability.

Developers generally create containers from Docker images. These have a read-only status, but Docker creates a container by adding a read-write file system. It creates a network interface to allow communication between the container and a local host. Then, it adds an IP address and executes the indicated process.
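A minimal sketch of that image-to-container step using the Docker SDK for Python (docker-py); it assumes a local Docker Engine is running, and the nginx image and container name are placeholders.

```python
import docker

client = docker.from_env()                      # talk to the local Docker Engine

# The image is the read-only template; running it adds a writable layer,
# a network interface, and an IP address for the new container.
client.images.pull("nginx", tag="1.25")
container = client.containers.run("nginx:1.25", detach=True, name="demo-web")

container.reload()                              # refresh attributes so network settings are populated
print(container.name, container.attrs["NetworkSettings"]["IPAddress"])

container.stop()
container.remove()
```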

Finally, let's take a look at some of the options you have to get started using containers.

You need to keep these 7 points in mind when shopping around for a containerization platform.

These are some of the things to consider before selecting a vendor. Let's wrap up!

To sum up, containerization offers many benefits. Primarily, it saves your IT operations money and time. It also makes IT jobs a lot easier. However, you should consider many things before picking the right tools.

Often, the best combination is Docker and Kubernetes. But depending on your environment, you might want to opt for AWS, Azure, Google, or open-source tools. I don't recommend that only one person make this decision. Your development and DevOps teams need to come together and choose the best solution.

Do you have more questions? Are you looking for more information? Check out the FAQ and Resources sections below!

Docker is a containerization tool released in 2013. It revolutionized how applications and services are handled virtually. Docker has also made it much easier for developers to port their applications to different systems and machines. That's because Docker creates images of the application and environment. Then, it places them inside a container that can run on any machine.

Kubernetes is an open-source container orchestration tool. It helps manage and deploy applications on premises or in the cloud. Kubernetes also operates at the container level, not on the hardware level. It offers features such as deployment, scaling, and load balancing. It also allows you to integrate your own logging and monitoring.

Virtualization lets you create a virtual machine or isolated environment. This helps you use environments to run more than one project on one machine. Isolating environments even stops variable conflicts between dependencies. It also allows for a cleaner, less buggy development process.

Containerization is a form of virtualization. But instead of running a VM, you can create many containers and run many applications in their own environments. Containerization lets you transfer programs between teams and developers. It also allows you to take advantage of your hardware by hosting many applications from one server. Additionally, you can run all kinds of environments on one server without conflicts.

Network orchestration creates an abstraction between the administrator and cloud-based network solutions. Administrators can easily provision resources and infrastructure dynamically across multiple networks. Orchestration tools are very useful if you have multiple applications running in containers. The more containers you have, the harder it is to manage without the proper orchestration software.

Learn about the differences and similarities between IaaS, Virtualization, and Containerization.

Learn about Docker and Kubernetes in this comparison guide.

Learn how Docker brought containerization to the forefront of software development.

Learn how Azure makes it easier to handle containers and the benefits it brings.

Learn about all the Kubernetes networking trends coming down the road in 2022.

DeepMind details AI work with YouTube on video compression and AutoChapters – 9to5Google

Besides research, Alphabet's artificial intelligence lab is tasked with applying its various innovations to help improve Google products. DeepMind today detailed three specific areas where AI research helped enhance the YouTube experience.

Since 2018, DeepMind has worked with YouTube on a label quality model (LQM) that more accurately identifies which videos meet advertiser-friendly guidelines and can display ads.

Calling YouTube one of its key partners, DeepMind starts with how its MuZero AI model helps optimize video compression in the open-source VP9 codec. More details and examples can be found here.

By learning the dynamics of video encoding and determining how best to allocate bits, our MuZero Rate-Controller (MuZero-RC) is able to reduce bitrate without quality degradation.

Since launching to production on a portion of YouTube's live traffic, we've demonstrated an average 4% bitrate reduction across a large, diverse set of videos.

Most recently, DeepMind is behind AutoChapters, which are available for 8 million videos today. The plan is to scale this feature to more than 80M auto-generated chapters over the next year.

Collaborating with the YouTube Search team, we developed AutoChapters. First we use a transformer model that generates the chapter segments and timestamps in a two-step process. Then, a multimodal model capable of processing text, visual, and audio data helps generate the chapter titles.
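A hedged sketch of that two-step shape, with placeholder functions standing in for the segmentation transformer and the multimodal titling model; neither model is public, so everything below is an assumption made purely to illustrate the pipeline structure.

```python
from typing import List, Tuple

def segment_video(transcript: List[Tuple[float, str]]) -> List[Tuple[float, float]]:
    """Placeholder for step 1: a model that proposes chapter boundaries.
    Here we simply cut a new chapter every 120 seconds for illustration."""
    end = transcript[-1][0]
    bounds = [i * 120.0 for i in range(int(end // 120) + 1)] + [end]
    return list(zip(bounds[:-1], bounds[1:]))

def title_chapter(start: float, end: float, transcript: List[Tuple[float, str]]) -> str:
    """Placeholder for step 2: a multimodal model that names each chapter.
    Here we reuse the first words spoken inside the segment."""
    words = [w for t, w in transcript if start <= t < end]
    return " ".join(words[:4]) or "Untitled chapter"

transcript = [(0.0, "Welcome"), (5.0, "to"), (130.0, "the"), (135.0, "demo"), (250.0, "thanks")]
for start, end in segment_video(transcript):
    print(f"{start:>6.1f}s - {end:>6.1f}s  {title_chapter(start, end, transcript)}")
```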

DeepMind has previously worked on improving Google Maps ETA predictions, Play Store recommendations, and data center cooling.

This Week’s Awesome Tech Stories From Around the Web (Through July 16) – Singularity Hub

ARTIFICIAL INTELLIGENCE

Meet Plato, an AI That Gains Intuition Like a Human Baby
Monisha Ravisetti | CNET
In collaboration with AI research laboratory DeepMind in the UK, this team developed an artificial intelligence system that learned intuitive physics, that is, a commonsense understanding of how our universe's mechanics work, just like a human baby. "Current artificial intelligence systems pale in their understanding of intuitive physics, in comparison to even very young children," the study authors wrote in their paper. "Here we address this gap between humans and machines by drawing on the field of developmental psychology."

150,000 Qubits Printed on a Chip
Charles Q. Choi | IEEE Spectrum
Quantum computers can theoretically solve problems no classical computer ever could, even given billions of years, but only if they possess many components known as qubits. Now scientists have fabricated more than 150,000 silicon-based qubits on a chip that they may be able to link together with light, to help form powerful quantum computers connected by a quantum internet.

Editing Cholesterol Genes Could Stop the Biggest Killer on Earth
Antonio Regalado | MIT Technology Review
A volunteer in New Zealand has become the first person to undergo DNA editing in order to lower their blood cholesterol, a step that may foreshadow wide use of the technology to prevent heart attacks. "Of all the different genome editing ongoing in the clinic, this one could have the most profound impact because of the number of people who could benefit," says Eric Topol, a cardiologist and researcher at Scripps Research.

Inside a Radical New Project to Democratize AI
Melissa Heikkilä | MIT Technology Review
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3, and they're giving it out for free. The researchers hope developing an open-access LLM that performs as well as other leading models will lead to long-lasting changes in the culture of AI development and help democratize access to cutting-edge AI technology for researchers around the world.

Advanced EV Batteries Move From Labs to Mass Production
Jack Ewing | The New York Times
For years, scientists in laboratories from Silicon Valley to Boston have been searching for an elusive potion of chemicals, minerals and metals that would allow electric vehicles to recharge in minutes and travel hundreds of miles between charges, all for a much lower cost than batteries available now. [Now a few of those scientists and their companies] are building factories to produce next-generation battery cells, allowing carmakers to begin road testing the technologies and determine whether they are safe and reliable.

7 Spectacular Lessons From James Webb's First Deep-Field Image
Ethan Siegel | Big Think
With merely 12.5 hours of exposure time in its first deep-field image, the James Webb Space Telescope has truly ushered in an entirely new era in astronomy and astrophysics. Despite devoting just 1/50th of the time that went into Hubble's deepest image of the universe, the Hubble eXtreme Deep Field, JWST has revealed details we've never seen before. Here are seven spectacular lessons we can learn from its first deep-field image, along with tremendous reasons to be excited for all the amazing science to come!

GM Announces Plans to Build Coast-to-Coast Network of 2,000 EV Chargers at Truck Stops
Andrew J. Hawkins | The Verge
GM and Pilot Company say the new network will include 2,000 DC fast chargers installed at up to 500 truck stops and travel centers, capable of offering speeds of up to 350kW. The chargers will be in addition to the 3,250 chargers that GM is currently installing with EVgo, which the automaker has said will be completed by the end of 2025. The automaker has said it would spend $750 million in total on EV charging infrastructure.

Can Planting a Trillion New Trees Save the World?
Zach St. George | The New York Times
The idea that planting trees can effectively and simultaneously cure a host of the world's most pressing maladies has become increasingly popular in recent years, bolstered by a series of widely cited scientific studies and by the inspiring and marketable goal, memorably proposed by a charismatic 13-year-old, of planting one trillion trees. Nearly everyone agrees that planting trees can be a useful, wholesome activity. The problem is that, in practice, planting trees is more complicated than it sounds.

Is It a Bird? Is It a Plane? No, It's a Flying Ferry
Nicole Kobie | Wired
Three feet above the waves, the Candela P-12 sprints across Lake Mälaren near Stockholm, Sweden. With only its hydrofoils cutting through the water, the boat leaves virtually no wake, noise, or emissions, a sea change from the hulking diesel-powered ferries that currently haul commuters through the archipelago that makes up the Swedish capital.

Struggle To Stay Asleep At Night? This Supplement Is For You – mindbodygreen.com

The hum of the air conditioning, the creak of a door, a rogue dog snore: these tiny, innocuous sounds are enough to jolt light sleepers awake in the middle of the night. It's unclear what causes some people to be lighter sleepers than others, but experts suspect it's partially genetic. Whatever the cause, those who wake up a lot in the middle of the night tend to miss out on deeper sleep stages like slow-wave sleep and REM sleep, which are essential for rest and recovery.

For all those light sleepers out there frustrated by middle-of-the-night wakeups, there are a few ways to tune out distractions (that don't involve trading in your dog). Investing in gadgets that make your room darker and less noisy, such as blackout curtains, an eye mask, and a sound machine, can be a good place to start. Going to bed slightly earlier in the evening, keeping bedtime and wake-up time consistent, and fine-tuning your nightly routine to be more relaxing will also help.

I want to be beautiful like the ocean – The Michigan Daily

Deep endless blue for thousands of miles. I used to think I could see the other side of the world when I looked at the ocean and that spot on the horizon where it met the orange sky. I wondered where it was. Perhaps Italy or France or India or Australia. All realistic options to me at the time since I refused to learn geography as a child. I used to think about swimming to the other side, just keep going even after hitting the plastic rope that was used to mark off the areas the lifeguard deemed safe to swim. Then, swimming under the rope and continuing on for miles and miles. They wouldn't notice, and if they did, I'd be too far for them to stop me, especially since crossing those plastic lines would be far too much of a hassle. I craved the silence; far from the people on the beach, far from the sailboats and the jet skis, far enough where it felt like I was too close to the other side to turn back. I wanted my mind to be empty, purely empty, for a brief moment. Something about the silence and brown noise of the ocean was alluring. The ocean captivated me. It drew me in further and further until I couldn't help but think about the ocean wherever I was, about the fuzziness my brain would feel from the silence, about the mystery of what was on the other side.

Everything about the ocean is enchanting. How cold it feels on my skin on a warm day. The sound of the waves on the shore and how quickly it calms my nerves. People drive hours just to lay on the sand and watch it. And no matter how far I roll my pants up, the ocean leaves them soaked. The ocean connects us all. The people living on the other side, who I may never get a chance to meet, see the same blue as I do, the same water, the same floating seaweed. It's a connection that makes me feel like I am not alone in this world. They feel the same joy and pain and anger and grief that I do. The ocean also brings this feeling of insignificance. How I am just one of billions in this world. And there's something comforting in that, knowing the power and size this world has, and knowing how the never-ending dark thoughts questioning my worth and existence that pile in my head take up virtually no space beyond my mind. It was naive to think that the ocean would actually travel to me, that it would be the same ocean that carried seaweed stuck on a nine-year-old's leg, the same ocean carried in a pail to carve out a river through a sand castle. Nevertheless, it's nice to imagine the inherent shared connection between us all. The ocean is homely and meaningful in that way, holding the souls and ashes of thousands of people, including my ancestors, their stories, their dreams, their deepest thoughts. The place for them to finally rest and be at peace, releasing them of the harshness of life and all their worries. To give them peace is to give them a never-ending space for their ashes to spread, while releasing their soul, cleansing it to fill them with purity as they become one with higher power, and stopping any more pain brought onto the soul by ceasing the possibility of another reincarnated life in this world. Finally returning back to where it all began.

I was jealous of the ocean, which sounds crazy but is ultimately true. It is free, it flows in whichever direction it pleases. The waves collapse on each other, pushing themselves closer and closer to where they want to go. The only boundaries are the beaches and land it meets. The ocean is stubborn, and would sometimes push further and further onto the land, collapsing those seemingly untouchable boundaries and teaching us that it truly knows no bounds. It is unapologetic for the strength it uses to take control of the land. It reminds us that it is raging and wild, one of the few things that man cannot tame and control no matter how many times he tries. It is out of our reach, too strong to be held down and too light to chase after, free of pain and worry, just at peace.

Yet, I found the ocean intimidating. Deep endless blue for thousands of miles, too strong to be held down, truly having no bounds. It was scary and mysterious. If I swam, what would pull me down? The pile of self-doubting dark thoughts sitting heavy in my head? Or whatever hid deep underneath, under the foamy white waves, past the murky green water, where the ocean is truly blue? But I wasn't just scared of what could've physically happened to me. I wondered what the silence would do, what the freedom would do. Would it get to me? Would I ever come back to the shore? Or would I live amongst the mystery of whatever lived deep underneath? Who would it make me? Or what would it turn me into? I was scared of how powerful the ocean was, how vast and glorious, how dark and mysterious it was. These thoughts flood my mind often, slowly drowning me in curiosity and fear.

But this curiosity and fear that the ocean brought me only made it more enticing. It was the good kind of fear, the kind that leaves you with a slightly faster heartbeat and a tiny pit in your stomach and a smile on your face. It's a fear that leaves you wanting more, where you forever seek it and chase after it. And I did. I spent my time at beaches, staring at the water, wanting to go deeper and deeper but stopping myself. I spent it at pools in the deep end, trying to swim to the bottom until the force of the water pushed me back up. But nothing gave me the same feeling as the thought of swimming under the plastic rope in the dark blue ocean.

I swim past the rope into a sea of meaning that holds stories and dreams and souls bigger than my own. Beyond this rope, I could gain strength, unapologetically crashing through my boundaries onto undiscovered land. I could be as mysterious, sharing parts of me with only myself, as independent and happy being alone, as stubborn and uncontrollable, and mostly, as beautiful.

MiC Columnist Roshni Mohan can be contacted at romohan@umich.edu

The imperative need for machine learning in the public sector – VentureBeat

The sheer number of backlogs and delays across the public sector is unsettling for an industry designed to serve constituents. Making the news last summer was the four-month wait to receive passports, up substantially from the pre-pandemic norm of a 6-8 week turnaround time. Most recently, the Internal Revenue Service (IRS) announced it entered the 2022 tax season with 15 times the usual amount of filing backlogs, alongside its plan for moving forward.

These frequently publicized backlogs don't exist due to a lack of effort. The sector has made strides with technological advancements over the last decade. Yet legacy technology and outdated processes still plague some of our nation's most prominent departments. Today's agencies must adopt digital transformation efforts designed to reduce data backlogs, improve citizen response times and drive better agency outcomes.

By embracing machine learning (ML) solutions and incorporating advancements in natural language processing (NLP), agencies can make backlogs a thing of the past.

Whether tax documents or passport applications, processing items manually takes time and is prone to errors on the sending and receiving sides. For example, a sender may mistakenly check an incorrect box or the receiver may interpret the number 5 as the letter S. This creates unforeseen processing delays or, worse, inaccurate outcomes.

But managing the growing government document and data backlog problem is not as simple and clean-cut as uploading information to processing systems. The sheer number of documents and citizens' information entering agencies in varied unstructured data formats and states, often with poor readability, makes it nearly impossible to reliably and efficiently extract data for downstream decision-making.

Embracing artificial intelligence (AI) and machine learning in daily government operations, just as other industries have done in recent years, can provide the intelligence, agility and edge needed to streamline processes and enable end-to-end automation of document-centric processes.

Government agencies must understand that real change and lasting success will not come with quick patchworks built upon legacy optical character recognition (OCR) or alternative automation solutions, given the vast amount of inbound data.

Bridging the physical and digital worlds can be attained with intelligent document processing (IDP), which leverages proprietary ML models and human intelligence to classify and convert complex, human-readable document formats. PDFs, images, emails and scanned forms can all be converted into structured, machine-readable information using IDP. It does so with greater accuracy and efficiency than legacy alternatives or manual approaches.

In the case of the IRS, inundated with millions of documents such as 1099 forms and individuals' W-2s, sophisticated ML models and IDP can automatically identify the digitized document, extract printed and handwritten text, and structure it into a machine-readable format. This automated approach speeds up processing times, incorporates human support where needed and is highly effective and accurate.
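A rough sketch of that classify-extract-structure flow, assuming scanned forms arrive as images; it uses the open-source pytesseract OCR wrapper as a stand-in for the proprietary models described above, and the keyword heuristic and field patterns are illustrative, not a real W-2 schema or any agency's actual pipeline.

```python
import re
from PIL import Image
import pytesseract   # open-source OCR stand-in for proprietary extraction models

def process_form(path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(path))

    # Step 1: identify the digitized document (keyword heuristic as a placeholder).
    doc_type = "W-2" if "Wage and Tax Statement" in text else "unknown"

    # Step 2: extract fields and structure them into a machine-readable record.
    ein = re.search(r"\b\d{2}-\d{7}\b", text)                      # employer ID pattern
    wages = re.search(r"Wages[^\d]*([\d,]+\.\d{2})", text)          # illustrative wages pattern

    return {
        "document_type": doc_type,
        "employer_ein": ein.group(0) if ein else None,
        "wages": wages.group(1) if wages else None,
        # Route anything the automation cannot read confidently to a human reviewer.
        "needs_human_review": doc_type == "unknown" or not (ein and wages),
    }

print(process_form("scanned_w2.png"))   # hypothetical input file
```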

Alongside automation and IDP, introducing ML and NLP technologies can significantly support the sector's quest to improve processes and reduce backlogs. NLP is an area of computer science that processes and understands text and spoken words like humans do, traditionally grounded in computational linguistics, statistics and data science.

The field has experienced significant advancements, like the introduction of complex language models that contain more than 100 billion parameters. These models could power many complex text processing tasks, such as classification, speech recognition and machine translation. These advancements could support even greater data extraction in a world overrun by documents.
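As a small, hedged example of the kind of text-processing task such models power, here is document routing with a zero-shot classifier from the Hugging Face transformers library; the sample text and candidate labels are made up for illustration.

```python
from transformers import pipeline

# Zero-shot classification: route incoming citizen correspondence without
# training a task-specific model first. The labels below are illustrative.
classifier = pipeline("zero-shot-classification")

text = ("I moved last month and need to update the mailing address "
        "on my unemployment assistance claim.")
labels = ["unemployment assistance", "Medicaid insurance",
          "passport application", "tax filing"]

result = classifier(text, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))   # top label and its score
```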

Looking ahead, NLP is on course to reach the level of text understanding capability similar to that of a human knowledge worker, thanks to technological advancements driven by deep learning. Similar advancements in deep learning also enable the computer to understand and process other human-readable content such as images.

For the public sector specifically, this could be images included in disability claims or other forms or applications consisting of more than just text. These advancements could also improve downstream stages of public sector processes, such as ML-powered decision-making for agencies determining unemployment assistance, Medicaid insurance and other invaluable government services.

Though we've seen a handful of promising digital transformation improvements, the call for systemic change has yet to be fully answered.

To move forward today, agencies must go beyond patching and investing in various legacy systems. Patchwork and investments in outdated processes fail to support new use cases, are fragile to change and cannot handle unexpected surges in volume. Instead, introducing a flexible solution that can take the most complex, difficult-to-read documents from input to outcome should be a no-brainer.

Why? Citizens deserve more out of the agencies who serve them.

CF Su is VP of machine learning at Hyperscience.

This robot dog just taught itself to walk – MIT Technology Review

The team's algorithm, called Dreamer, uses past experiences to build up a model of the surrounding world. Dreamer also allows the robot to conduct trial-and-error calculations in a computer program, as opposed to the real world, by predicting the potential future outcomes of its actions. This allows it to learn faster than it could purely by doing. Once the robot had learned to walk, it kept learning to adapt to unexpected situations, such as resisting being toppled by a stick.
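The core idea of planning inside a learned model can be sketched in a few lines; the toy one-dimensional dynamics, goal, and planner below are assumptions for illustration only, not DeepMind's Dreamer implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "world": the true next state is state + action, observed with noise.
def real_step(state, action):
    return state + action + rng.normal(scale=0.05)

# 1) Learn a dynamics model from past experience (here: a linear least-squares fit).
states, actions, next_states = [], [], []
s = 0.0
for _ in range(200):
    a = rng.uniform(-1, 1)
    s_next = real_step(s, a)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

X = np.column_stack([states, actions])
coef, *_ = np.linalg.lstsq(X, np.array(next_states), rcond=None)

def imagined_step(state, action):
    return coef[0] * state + coef[1] * action   # the learned "world model"

# 2) Plan by trial and error inside the model instead of on the real robot:
#    pick the candidate action whose imagined outcome lands closest to a goal state.
def plan(state, goal=1.0, candidates=21):
    options = np.linspace(-1, 1, candidates)
    outcomes = [imagined_step(state, a) for a in options]
    return options[int(np.argmin([abs(o - goal) for o in outcomes]))]

s = 0.0
for _ in range(5):
    s = real_step(s, plan(s))
    print(round(s, 3))   # the state walks toward the goal using model-based planning
```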

Teaching robots through trial and error is a difficult problem, made even harder by the long training times such teaching requires, says Lerrel Pinto, an assistant professor of computer science at New York University, who specializes in robotics and machine learning. "Dreamer shows that deep reinforcement learning and world models are able to teach robots new skills in a really short amount of time," he says.

Jonathan Hurst, a professor of robotics at Oregon State University, says the findings, which have not yet been peer-reviewed, make it clear that reinforcement learning will be a cornerstone tool in the future of robot control.

Removing the simulator from robot training has many perks. The algorithm could be useful for teaching robots how to learn skills in the real world and adapt to situations like hardware failures, Hafner says. For example, a robot could learn to walk with a malfunctioning motor in one leg.

The approach could also have huge potential for more complicated things like autonomous driving, which require complex and expensive simulators, says Stefano Albrecht, an assistant professor of artificial intelligence at the University of Edinburgh. "A new generation of reinforcement-learning algorithms could super quickly pick up in the real world how the environment works," Albrecht says.
