
NXP Invests in Au-Zone to Enhance Machine Learning Capabilities – Mobile ID World

NXP is hoping to improve its machine learning offerings after making a strategic investment in Au-Zone Technologies. The exclusive arrangement specifically concerns Au-Zone's DeepView ML Tool Suite, which will be used to bolster NXP's eIQ Machine Learning software development environment and lead to the creation of new Edge machine learning products.

In that regard, the DeepView Suite comes with a graphical user interface (GUI) and workflows that will make it easier to import datasets, and to train neural network models for Edge devices. DeepView's run-time inference engine will give eIQ developers more insight into system memory usage, data movement, and other performance metrics in real time, which will in turn allow them to optimize their model before deploying it in a System-on-Chip (SoC) solution.

"This partnership will accelerate the deployment of embedded Machine Learning features," said Au-Zone CEO Brad Scott. "This will serve as a catalyst to deliver more advanced Machine Learning technologies and turnkey solutions as [Original Equipment Manufacturers] continue to transition inferencing to the Edge."

In other news, NXP also revealed that it will be integrating Arm's Ethos-U65 microNPU (neural processing unit) into its own i.MX applications processors. The Ethos-U65 is comparable to the Ethos-U55 in terms of power efficiency, but extends its utility to Cortex-A SoCs. The microNPU is compatible with the Cortex-M core featured in NXP's i.MX SoCs (including the i.MX 8M Plus), and will allow NXP to expand its Industrial and IoT Edge portfolio.

"NXP's scalable applications processors deliver a broad ecosystem for our customers to quickly deliver innovative systems," added NXP SVP and Edge Processing GM Ron Martino. "Through these partnerships, our goal is to increase the efficiency of our processors while simultaneously increasing our customers' productivity and reducing their time to market."

Both Au-Zone and Arm have collaborated with NXP on other projects in the past. In the meantime, NXP has announced that Facebook's Glow Neural Network compiler is now available through its own eIQ development environment. It has also released the i.MX RT106F and i.MX RT106L MCUs to support the development of applications with face and voice recognition.

Here is the original post:
NXP Invests in Au-Zone to Enhance Machine Learning Capabilities - Mobile ID World


NXP Announces Expansion of its Scalable Machine Learning Portfolio and Capabilities – GlobeNewswire


NXP expands scalable machine learning capabilities

EINDHOVEN, The Netherlands, Oct. 19, 2020 (GLOBE NEWSWIRE) -- NXP Semiconductors N.V. (NASDAQ: NXPI) today announced that it is enhancing its machine learning development environment and product portfolio. Through an investment, NXP has established an exclusive, strategic partnership with Canada-based Au-Zone Technologies to expand NXP's eIQ Machine Learning (ML) software development environment with easy-to-use ML tools and expand its offering of silicon-optimized inference engines for Edge ML.

Additionally, NXP announced that it has been working with Arm as the lead technology partner in evolving Arm Ethos-U microNPU (Neural Processing Unit) architecture to support applications processors. NXP will integrate the Ethos-U65 microNPU into its next generation of i.MX applications processors to deliver energy-efficient, cost-effective ML solutions for the fast-growing Industrial and IoT Edge.

"NXP's scalable applications processors deliver an efficient product platform and a broad ecosystem for our customers to quickly deliver innovative systems," said Ron Martino, Senior Vice President and General Manager of Edge Processing at NXP Semiconductors. "Through these partnerships with both Arm and Au-Zone, in addition to technology developments within NXP, our goal is to continuously increase the efficiency of our processors while simultaneously increasing our customers' productivity and reducing their time to market. NXP's vision is to help our customers achieve lower cost of ownership, maintain high levels of security with critical data, and to stay safe with enhanced forms of human-machine interaction."

Enabling Machine Learning for All

Au-Zone's DeepView ML Tool Suite will augment eIQ with an intuitive, graphical user interface (GUI) and workflow, enabling developers of all experience levels to import datasets and models, rapidly train, and deploy NN models and ML workloads across the NXP Edge processing portfolio. To meet the demanding requirements of today's industrial and IoT applications, NXP's eIQ-DeepView ML Tool Suite will provide developers with advanced features to prune, quantize, validate, and deploy public or proprietary NN models on NXP devices. Its on-target, graph-level profiling capability will provide developers with unique, run-time insights to optimize NN model architectures, system parameters, and run-time performance. By adding Au-Zone's DeepView run-time inference engine to complement open source inference technologies in NXP eIQ, users will be able to quickly deploy and evaluate ML workloads and performance across NXP devices with minimal effort. A key feature of this run-time inference engine is that it optimizes the system memory usage and data movement uniquely for each SoC architecture.
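
The DeepView and eIQ APIs are not shown in this release, so as a rough illustration of what the "quantize and deploy" step of such a workflow typically looks like, here is a minimal post-training quantization sketch using a generic toolchain (TensorFlow Lite). The model, file name, and calibration data are placeholders and this is not NXP's or Au-Zone's actual tooling.

```python
# A generic post-training quantization sketch (illustrative only, not the
# DeepView/eIQ API). A real workflow would start from a trained network and
# calibration data drawn from the training distribution.
import numpy as np
import tensorflow as tf

# A tiny stand-in model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_data():
    # Calibration samples used to choose quantization ranges (assumed here).
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable quantization
converter.representative_dataset = representative_data
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)  # the artifact a device-side runtime would load
```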

"Au-Zone is incredibly excited to announce this investment and strategic partnership with NXP, especially with its exciting roadmap for additional ML accelerated devices," said Brad Scott, CEO of Au-Zone. "We created DeepView™ to provide developers with intuitive tools and inferencing technology, so this partnership represents a great union of world class silicon, run-time inference engine technology, and a development environment that will further accelerate the deployment of embedded ML features. This partnership builds on a decade of engineering collaboration with NXP and will serve as a catalyst to deliver more advanced Machine Learning technologies and turnkey solutions as OEMs continue to transition inferencing to the Edge."

Expanding Machine Learning Acceleration

To accelerate machine learning in a wider range of Edge applications, NXP will expand its popular i.MX applications processors for the Industrial and IoT Edge with the integration of the Arm Ethos-U65 microNPU, complementing the previously announced i.MX 8M Plus applications processor with integrated NPU. The NXP and Arm technology partnership focused on defining the system-level aspects of this microNPU, which supports up to 1 TOPS (512 parallel multiply-accumulate operations at 1 GHz). The Ethos-U65 maintains the MCU-class power efficiency of the Ethos-U55 while extending its applicability to higher-performance Cortex-A-based system-on-chip (SoC) designs. The Ethos-U65 microNPU works in concert with the Cortex-M core already present in NXP's i.MX families of heterogeneous SoCs, resulting in improved efficiency.
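
As a quick sanity check on the throughput figure quoted above: a multiply-accumulate (MAC) is conventionally counted as two operations (one multiply plus one accumulate), an assumption on our part, under which 512 parallel MACs at 1 GHz work out to roughly 1 TOPS.

```python
# Back-of-the-envelope check of the "up to 1 TOPS" figure, assuming the common
# convention of counting one multiply-accumulate (MAC) as two operations.
macs_per_cycle = 512            # parallel MAC units quoted for the Ethos-U65
clock_hz = 1_000_000_000        # 1 GHz
ops_per_mac = 2                 # one multiply + one accumulate

tops = macs_per_cycle * clock_hz * ops_per_mac / 1e12
print(f"Peak throughput ~ {tops:.3f} TOPS")   # prints ~ 1.024 TOPS
```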

"There has been a surge of AI and ML across industrial and IoT applications driving demand for more on-device ML capabilities," said Dennis Laudick, Vice President of Marketing, Machine Learning Group, at Arm. "The Ethos-U65 will power a new wave of edge AI, providing NXP customers with secure, reliable, and smart on-device intelligence."

Availability

The Arm Ethos-U65 will be available in future NXP i.MX applications processors. The eIQ-DeepView ML Tool Suite and DeepView run-time inference engine, integrated into eIQ, will be available in Q1 2021. The end-to-end software enablement, from training, validating and deploying existing or new neural network models for the i.MX 8M Plus and other NXP SoCs, as well as future devices integrating the Ethos-U55 and U65, will be accessible through NXP's eIQ Machine Learning software development environment. To learn more, read our blog and register for the joint NXP and Arm webinar on November 10.

About NXP Semiconductors

NXP Semiconductors N.V. enables secure connections for a smarter world, advancing solutions that make lives easier, better, and safer. As the world leader in secure connectivity solutions for embedded applications, NXP is driving innovation in the automotive, industrial & IoT, mobile, and communication infrastructure markets. Built on more than 60 years of combined experience and expertise, the company has approximately 29,000 employees in more than 30 countries and posted revenue of $8.88 billion in 2019. Find out more at http://www.nxp.com.

NXP, the NXP logo and EdgeVerse are trademarks of NXP B.V. All other product or service names are the property of their respective owners. Amazon Web Services and all related logos and motion marks are trademarks of Amazon.com, Inc. or its affiliates. The Bluetooth word mark and logos are registered trademarks owned by Bluetooth SIG, Inc. and any use of such marks by NXP Semiconductors is under license. All rights reserved. 2020 NXP B.V.

For more information, please contact:

NXP-IoT, NXP-Smart Home, NXP-Corp

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/ea5038f6-c957-4d81-866f-e613cbe439f6

Continue reading here:
NXP Announces Expansion of its Scalable Machine Learning Portfolio and Capabilities - GlobeNewswire


Factories of The Future Are Using Machine Learning Analytics to Optimize Assets – Embedded Computing Design

From food to cars to complex manufacturing machinery, quality is a top concern of manufacturers. Factors such as safety, efficiency, and reliability affect product quality and ultimately influence customer satisfaction. Sourcing, design, testing, and inspection all play a crucial role in ensuring products meet the bar when it comes to quality. Product inspections at early stages in the production cycle help reduce risks and cost. While inspections can be conducted at any point throughout the production process, the goal is to identify, contain and resolve issues as quickly as possible.

Many manufacturers are increasingly looking to their smart, connected machines to help with anomaly detection. These assets can alert end users of such anomalies to ensure accelerated interventions, helping maintain quality and uptime. Using advanced analytics, assets can collect user feedback on alert accuracy and improve over time. This enables higher output and lowers labor costs because of the reduction in time spent resolving issues.

Rockwell Automation saw an opportunity to introduce a product inspection much earlier in one of their production processes. The process involves screening conductive paste onto a circuit board and then placing ball grid arrays (BGAs), which serve as the contact point for parts added later in the process. The board then travels through several placement machines where parts are added, increasing the value of the board significantly with each part, and finally through the oven where the board is set.

The lack of early inspection often resulted in a significant amount of time spent fixing errors that occurred very early in the process. If the BGAs are not properly connected to the board, it can take 30 minutes per part to correct the work. With 12 BGAs on a board, employees were sometimes spending upwards of 6 hours replacing parts and re-working the boards. The inspection, an automated optical inspection, did not occur until after the board had gone through three placement machines and the oven. Rockwell Automation knew that to avoid the potential of more than 6 hours of rework for faulty BGA placement, they needed to catch the issue much sooner.

The company implemented an advanced analytics solution that scores the conductive paste profile prior to the board going through a single placement machine. Using high-speed edge computing and machine learning, the solution creates and executes a 3D model of the paste profile within 7 seconds. It then predicts whether the board will meet the quality bar or have defects. If alerted to a poor paste profile, operators can immediately stop production, remove the bad board, wash it, and send it back through the production line, a process that takes less than two minutes.

Even in the pilot process, Rockwell Automation has seen impressive results. By catching issues early and in real time, they have gained back hours of productivity and kept the quality bar high. Errors that used to take 6 hours to resolve can now be identified and fixed within minutes. The solution allowed Rockwell Automation to determine paste issues right away; with machine learning, a rework takes them only two minutes. And, given that the value of the board comes from adding various parts, there has been a cost-saving benefit in resolving issues before any parts have been placed, reducing scrap and other waste.

For Rockwell Automation, these new capabilities have yielded almost immediate results in the form of time and cost savings. Rockwell Automation sees massive potential for real-time analytics to improve their circuit board production across their facilities, and they are looking towards additional use cases of advanced analytics and machine learning to bring even more intelligence into their operations.

Continued here:
Factories of The Future Are Using Machine Learning Analytics to Optimize Assets - Embedded Computing Design


Lantronix Brings Advanced AI and Machine Learning to Smart Cameras With New Open-Q 610 SOM Based on the Powerful Qualcomm QCS610 System on Chip (SOC)…

IRVINE, Calif., Oct. 15, 2020 (GLOBE NEWSWIRE) -- Lantronix Inc. (NASDAQ: LTRX), a global provider of Software as a Service (SaaS), engineering services and hardware for Edge Computing, the Internet of Things (IoT) and Remote Environment Management (REM), today announced the availability of its new Lantronix Open-Q 610 SOM based on the powerful Qualcomm QCS610 System on Chip (SOC). This micro System on Module (SOM) is designed for connected visual intelligence applications with high-resolution camera capabilities, on-device artificial intelligence (AI) processing and a native Ethernet interface.

"Our long and successful relationship with Qualcomm Technologies enables us to deliver powerful micro SOM solutions that can accelerate IoT design and implementation, empowering innovators to create IoT applications that go beyond hardware and enable their wildest dreams," said Paul Pickle, CEO of Lantronix.

The new Lantronix ultra-compact (50mm x 25mm), production-ready Open-Q 610 SOM is based on the powerful Qualcomm QCS610 SOC, the latest in the Qualcomm Vision Intelligence Platform lineup targeting smart cameras with edge computing. Delivering up to 50 percent better AI performance than the previous generation, as well as image signal processing and sensor processing capabilities, it is designed to bring smart camera technology, including powerful artificial intelligence and machine learning features formerly only available to high-end devices, into mid-tier camera segments, including smart cities, commercial and enterprise, homes and vehicles.

Bringing Advanced AI and Machine Learning to Smart Camera Applications

Created to bring advanced artificial intelligence and machine learning capabilities to smart cameras in multiple vertical markets, the Open-Q 610 SOM is designed for developers seeking to innovate new products utilizing the latest vision and AI edge capabilities, such as smart connected cameras, video conference systems, machine vision and robotics. With the Open-Q 610 SOM, developers gain a pre-tested, pre-certified, production-ready computing module that reduces risk and expedites innovative product development.

The Open-Q 610 SOM provides the core computing capabilities for:

Connectivity solutions include Wi-Fi/BT, Gigabit Ethernet, multiple USB ports and three-camera interfaces.

"The Lantronix Open-Q 610 SOM provides advanced artificial intelligence and machine learning capabilities that enable developers to innovate new product designs, including smart connected cameras, video conference systems, machine vision and robotics," said Jonathan Shipman, VP of Strategy at Lantronix Inc. "Lantronix micro SOMs and solutions enable IoT device makers to jumpstart new product development and accelerate time-to-market by shortening the design cycle, reducing development risk and simplifying the manufacturing process."

Open-Q 610 Development Kit

The companion Open-Q 610 Development Kit is a full-featured platform with available software tools, documentation and optional accessories. It delivers everything required to immediately begin evaluation and initial product development.

The development kit integrates the production-ready Open-Q 610 SOM with a carrier board, providing numerous expansion and connectivity options to support development and testing of peripherals and applications. The development kit, along with the available documentation, also provides a proven reference design for custom carrier boards, providing a low-risk fast track to market for new products.

In addition to production-ready SOMs, development platforms and tools, Lantronix offers turnkey product development services, driver and application software development and technical support.

For more information, visit Open-Q 610 SOM and Open Q 610 SOM Development kit.

About Lantronix

Lantronix Inc. is a global provider of software as a service (SaaS), engineering services and hardware for Edge Computing, the Internet of Things (IoT) and Remote Environment Management (REM). Lantronix enables its customers to provide reliable and secure solutions while accelerating their time to market. Lantronix's products and services dramatically simplify operations through the creation, development, deployment and management of customer projects at scale while providing quality, reliability and security.

Lantronix's portfolio of services and products addresses each layer of the IoT Stack, including Collect, Connect, Compute, Control and Comprehend, enabling its customers to deploy successful IoT and REM solutions. Lantronix's services and products deliver a holistic approach, addressing its customers' needs by integrating a SaaS management platform with custom application development layered on top of external and embedded hardware, enabling intelligent edge computing, secure communications (wired, Wi-Fi and cellular), location and positional tracking and environmental sensing and reporting.

With three decades of proven experience in creating robust industry and customer-specific solutions, Lantronix is an innovator in enabling its customers to build new business models, leverage greater efficiencies and realize the possibilities of IoT and REM. Lantronix's solutions are deployed inside millions of machines at data centers, offices and remote sites serving a wide range of industries, including energy, agriculture, medical, security, manufacturing, distribution, transportation, retail, financial, environmental, infrastructure and government.

For more information, visit http://www.lantronix.com. Learn more at the Lantronix blog, http://www.lantronix.com/blog, featuring industry discussion and updates. To follow Lantronix on Twitter, please visit http://www.twitter.com/Lantronix. View our video library on YouTube at http://www.youtube.com/user/LantronixInc or connect with us on LinkedIn at http://www.linkedin.com/company/lantronix

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995: Any statements set forth in this news release that are not entirely historical and factual in nature, including without limitation statements related to our solutions, technologies and products as well as the advanced Lantronix Open-Q 610 SOM, are forward-looking statements. These forward-looking statements are based on our current expectations and are subject to substantial risks and uncertainties that could cause our actual results, future business, financial condition, or performance to differ materially from our historical results or those expressed or implied in any forward-looking statement contained in this news release. The potential risks and uncertainties include, but are not limited to, such factors as the effects of negative or worsening regional and worldwide economic conditions or market instability on our business, including effects on purchasing decisions by our customers; the impact of the COVID-19 outbreak on our employees, supply and distribution chains, and the global economy; cybersecurity risks; changes in applicable U.S. and foreign government laws, regulations, and tariffs; our ability to successfully implement our acquisitions strategy or integrate acquired companies; difficulties and costs of protecting patents and other proprietary rights; the level of our indebtedness, our ability to service our indebtedness and the restrictions in our debt agreements; and any additional factors included in our Annual Report on Form 10-K for the fiscal year ended June 30, 2019, filed with the Securities and Exchange Commission (the SEC) on September 11, 2019, including in the section entitled Risk Factors in Item 1A of Part I of such report, as well as in our other public filings with the SEC. Additional risk factors may be identified from time to time in our future filings. The forward-looking statements included in this release speak only as of the date hereof, and we do not undertake any obligation to update these forward-looking statements to reflect subsequent events or circumstances.

Lantronix Media Contact: Gail Kathryn Miller, Corporate Marketing & Communications Manager, media@lantronix.com, 949-453-7158

Lantronix Analyst and Investor Contact: Jeremy Whitaker, Chief Financial Officer, investors@lantronix.com, 949-450-7241

Lantronix Sales: sales@lantronix.com; Americas +1 (800) 422-7055 (US and Canada) or +1 949-453-3990; Europe, Middle East and Africa +31 (0)76 52 36 744; Asia Pacific +852 3428-2338; China +86 21-6237-8868; Japan +81 (0) 50-1354-6201; India +91 994-551-2488

2020 Lantronix, Inc. All rights reserved. Lantronix is a registered trademark, and EMG and SLC are trademarks of Lantronix Inc. Other trademarks and trade names are those of their respective owners.

Qualcomm is a trademark or registered trademark of Qualcomm Incorporated.

Qualcomm Vision Intelligence Platform and Qualcomm QCS610 are products of Qualcomm Technologies, Inc. and/or its subsidiaries.

Originally posted here:
Lantronix Brings Advanced AI and Machine Learning to Smart Cameras With New Open-Q 610 SOM Based on the Powerful Qualcomm QCS610 System on Chip (SOC)...


EMA Webinar to Uncover How Machine Learning and Predictive Analytics Can Improve Workload Automation Outcomes – PR Web

Empowering Workload Automation with Intelligence Webinar

BOULDER, Colo. (PRWEB) October 20, 2020

Enterprise Management Associates (EMA), a leading IT and data management research and consulting firm, today announced it will host a webinar titled "Empowering Workload Automation with Intelligence," featuring Dan Twing, President and COO at EMA, and Jennifer Chisik, Head of Product for Automic Automation Intelligence at Broadcom.

Automation is critical to modern IT operations, and workload automation (WLA) software is one of the critical tools required to run an effective IT operation. Large enterprises often have multiple WLA solutions, which makes it difficult to create a holistic, end-to-end view of workload health and outcomes, and to see and resolve problems as they develop. Adding intelligence to automation solutions with machine learning, drawing on data from a wide range of sources and disparate tools, empowers automation to predict outcomes, identify problems as they develop, and provide prescriptive suggestions to resolve developing problems quickly.

During this webinar, Twing and Chisik will discuss changes to the operating model for IT and benefits of automation intelligence, as well as:

The webinar is Wednesday, October 28 at 11:00 a.m. Eastern. Registration is available at https://info.enterprisemanagement.com/empowering-workload-automation-with-intelligence-webinar-pr.

About EMA

Founded in 1996, EMA is a leading industry analyst firm that specializes in providing deep insight across the full spectrum of IT and data management technologies. EMA analysts leverage a unique combination of practical experience, insight into industry best practices, and in-depth knowledge of current and planned vendor solutions to help their clients achieve their goals. Learn more about EMA research, analysis, and consulting services for enterprise line of business users, IT professionals and IT vendors at https://www.enterprisemanagement.com.


Visit link:
EMA Webinar to Uncover How Machine Learning and Predictive Analytics Can Improve Workload Automation Outcomes - PR Web


How To Choose The Best Machine Learning Algorithm For A Particular Problem? – Analytics India Magazine

How do you know which machine learning algorithm to choose for your problem? Why not simply try all the machine learning algorithms, or at least the ones we expect to give good accuracy? Because applying each and every algorithm takes a lot of time, it is better to use a technique for identifying the right algorithm up front.

Choosing the right algorithm is closely tied to the problem statement, and it can save both money and time. So, it is important to know what type of problem we are dealing with.

In this article, we will discuss key techniques for choosing the right machine learning algorithm for a particular task: how plotting dataset properties can guide the choice of model, and how the size of the dataset should factor into the decision.

The first dataset is taken from Kaggle; you can find it here. It contains information about diabetic patients and whether or not each patient will have an onset of diabetes. It has 767 rows and 9 columns: each row represents a patient and each column a recorded attribute.

Practical Implementation:

First of all, we will import the required libraries.

After that, we will proceed by reading the CSV file.

By applying the pair plot we will be able to understand which algorithm to choose.
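
The article's original code listings are not reproduced on this page, so here is a minimal sketch of the three steps just described (importing libraries, reading the CSV, and drawing the pair plot). The file name diabetes.csv and the Outcome target column follow the usual layout of the Kaggle Pima Indians Diabetes dataset and are assumptions.

```python
# A minimal sketch: import libraries, read the CSV, and draw a pair plot.
# "diabetes.csv" and the "Outcome" column are assumed names.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("diabetes.csv")
print(df.shape)  # roughly (767, 9), per the description above

# Colour each point by the target class so overlap between classes is visible.
sns.pairplot(df, hue="Outcome")
plt.show()
```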

From the plot, we can see that there is a lot of overlap between the data points. KNN should be preferred because it works on the principle of Euclidean distance. If KNN does not perform as expected, we can use the Decision Tree or Random Forest algorithm instead.

A Decision Tree or Random Forest works on the principle of non-linear classification, so we can use one of them when some of the data points overlap with each other.

Many algorithms work on the assumption that classes can be separated by a straight line. In such cases, Logistic Regression or a Support Vector Machine should be preferred, since they separate the data points by drawing a boundary that divides the target classes. Linear regression algorithms likewise assume that data trends follow a straight line. These algorithms perform well for the present case.
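
As a small sketch (not from the original article) of the decision process just described, one could try KNN first and fall back to a non-linear ensemble if its accuracy disappoints. The file and column names are again assumptions.

```python
# Try KNN first; if it underperforms, fall back to a non-linear model.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("diabetes.csv")                   # assumed file name
X, y = df.drop(columns="Outcome"), df["Outcome"]   # assumed target column
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("KNN accuracy:", knn.score(X_test, y_test))

# If KNN disappoints, a non-linear ensemble is the natural next step.
rf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Random Forest accuracy:", rf.score(X_test, y_test))
```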

Next, import the various algorithm classifiers so we can check training times on both the small and the large dataset.

Split the data into train and test. Now we can proceed by applying Decision Tree, Logistic Regression, Random Forest and Support Vector Machine algorithms to check the training time for a classification problem.

Now, we will fit several machine learning models on this dataset and check the training time taken by these models.
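
A minimal sketch of that timing comparison is shown below (the article's own listing is not reproduced here). The classifiers match those named above; the default hyperparameters are an assumption.

```python
# Compare training times of several classifiers on the small diabetes dataset.
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

df = pd.read_csv("diabetes.csv")                   # assumed file name
X, y = df.drop(columns="Outcome"), df["Outcome"]   # assumed target column
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

models = {
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(),
    "SVM": SVC(),
}

for name, model in models.items():
    start = time.time()
    model.fit(X_train, y_train)
    print(f"{name}: trained in {time.time() - start:.3f} s, "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```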

From the above results, we can conclude that the Decision Tree takes much less time than the other algorithms on the small dataset. Hence, it is recommended to use a low-bias/high-variance classifier like a decision tree in this case.

The second dataset is also taken from Kaggle; you can find it here. It contains credit card transactions that occurred over two days, with the feature 'Class' as the target variable: it takes the value 1 in case of fraud and 0 otherwise. It has 284,807 rows and 31 columns.

Train-Test Split

Now, we will fit the same machine learning models on this second dataset and check the training time taken by each.
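
The same kind of timing loop, sketched for the larger credit-card-fraud dataset: creditcard.csv and the Class target column follow the usual Kaggle layout and are assumptions, and SVM is omitted from this sketch because kernel SVMs scale poorly to hundreds of thousands of rows.

```python
# Compare training times on the large credit-card-fraud dataset (a sketch).
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

fraud = pd.read_csv("creditcard.csv")               # 284,807 rows, 31 columns
X, y = fraud.drop(columns="Class"), fraud["Class"]  # "Class": 1 = fraud, 0 = not
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

for name, model in {
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_jobs=-1),
}.items():
    start = time.time()
    model.fit(X_train, y_train)
    print(f"{name}: trained in {time.time() - start:.1f} s")
```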

With a huge dataset, the depth of the Decision Tree grows; it effectively implements many nested if-else statements, which increases complexity and training time. Both Random Forest and XGBoost build on the Decision Tree algorithm, so they also take more time. The results show that Logistic Regression outperforms the others.

This concludes my analysis of how to select the correct machine learning algorithm. Furthermore, it is always advisable to try at least two algorithms when addressing a problem statement, as this provides a good reference point for comparison.

Read more here:
How To Choose The Best Machine Learning Algorithm For A Particular Problem? - Analytics India Magazine


AI and Machine Learning Technologies Expected to Play a Key Role in Expanding Multi Billion Dollar Digital Banking Sector: Report – Crowdfund Insider

The global digital banking platform market size is expected to reach $10.87 billion by 2027, which means it's expanding at about a 13.6% CAGR (compound annual growth rate), according to estimates from Allied Market Research (AMR).

A release that summarizes the findings of the report notes:

"Growing adoption of online banking over traditional banking drives the growth of the global digital banking platforms market. North America contributed the highest share in 2019, and will maintain its dominance throughout the forecast period. During the Covid-19 pandemic, users are preferring digital banking platforms such as internet banking to avoid physical contact with individuals and prevent transmission of coronavirus."

But the report also mentioned that compliance and online data security issues may begin to limit the growth of the virtual banking market. Despite cybersecurity issues, the digital banking sector is on track to grow steadily in the coming years due to advancements in related technologies such as artificial intelligence (AI) and machine learning (ML).

AI and ML are used to make intelligent decisions about key business and banking processes. They may also be used to analyze large amounts of data in order to determine the creditworthiness of an application. Additionally, AI can help detect suspicious or potentially fraudulent transactions by using context clues or by looking for certain patterns in the way payments or transactions are made.

The report further noted that the Reserve Bank of India (RBI) has confirmed that around twice as many transactions were made via digital banking platforms in April 2020 as in March 2020, around the time the Coronavirus pandemic began.

Virtual banks across the globe appear to be doing quite well and continue to launch new products and promotional offers. Digital banking group Varo Bank has launched Varo Advance, a new product that instantly advances up to $100 to qualifying customers. As noted in a release, the new offer is designed to help customers proactively manage their finances: "Varo Advance offers instant access to up to $100 cash right in the Varo Bank app."

US digital banking platform Greenwood recently revealed that there have been over 100,000 sign-ups just days after its debut.

As covered, emerging digital technology breakthroughs in AI and IoT are fundamentally changing consumers' banking experience, according to a new report. Meanwhile, another report found that consumers in European countries like Germany are not downloading new digital banking apps as much as expected (in a post-COVID world). However, they're still using the virtual banking apps they've already installed a lot more than before, the report revealed.

See the original post here:
AI and Machine Learning Technologies Expected to Play a Key Role in Expanding Multi Billion Dollar Digital Banking Sector: Report - Crowdfund Insider


EXCLUSIVE: Amazon AI executive explains three things every business needs to address before using machine lear – Business Insider India

The first of which is data readiness. This includes having sufficient data that's been cleaned up for ML use. The second is the anticipated business impact: what problem needs to be solved to either boost business efficiency or improve customer experience.


If the basics aren't in place, the talent hired to build the ML model will spend the majority of their time just filtering through data rather than building use cases.

"We often notice what we call data hugging. There are multiple teams in every company working on different projects, and they get really attached to their data and they don't want to share their data with other groups," explained Anbazhagan.

This kind of data hugging can work in the short run, allowing individual teams to roll out their own little ML models, but in the long run, companies need to have an expansive view of all the data across their organisation.

Picking a business problem to solve and using ML

This is the tricky part. According to Anbazhagan, some companies tend to either overshoot or undershoot. Ideally, the chosen business problem should have a huge impact, but it should also be something that the ML model can solve within six to 10 months.

"Some companies over-index on the business impact and pick some really complicated problem for which there may not be a lot of data," he explained.

Data readiness, business impact, and ML applicability are the three parameters that need to line up for a business to have a successful and robust ML strategy at the helm.

Addressing the ML skill gap

Despite the growing demand for engineers and data scientists that specialise in ML, there is still a shortage in the market.

This makes recruiting talent for ML applications difficult and expensive. Anbazhagan believes companies would be better served by developing internal talent instead and creating a community around reskilling as ML continues to grow.



Continue reading here:
EXCLUSIVE: Amazon AI executive explains three things every business needs to address before using machine lear - Business Insider India


Photoshops AI neural filters can tweak age and expression with a few clicks – The Verge

Artificial intelligence is changing the world of image editing and manipulation, and Adobe doesn't want to be left behind. Today, the company is releasing an update to Photoshop, version 22.0, that comes with a host of AI-powered features, some new, some already shared with the public. These include a sky replacement tool, improved AI edge selection, and, the star of the show, a suite of image-editing tools that Adobe calls neural filters.

These filters include a number of simple overlays and effects but also tools that allow for deeper edits, particularly to portraits. With neural filters, Photoshop can adjust a subject's age and facial expression, amplifying or reducing feelings like joy, surprise, or anger with simple sliders. You can remove someone's glasses or smooth out their spots. One of the weirder filters even lets you transfer makeup from one person to another. And it's all done in just a few clicks, with the output easily tweaked or reversed entirely.

"This is where I feel we can now say that Photoshop is the world's most advanced AI application," Maria Yap, Adobe's vice president of digital imaging, told The Verge. "We're creating things in images that weren't there before."

To achieve these effects, Adobe is harnessing the power of generative adversarial networks, or GANs, a type of machine learning technique that's proved particularly adept at generating visual imagery. Some of the processing is done locally and some in the cloud, depending on the computational demands of each individual tool, but each filter takes just seconds to apply. (The demo we saw was done on an old MacBook Pro and was perfectly fast enough.)

Many of these filters are familiar to those who follow AI image editing. They're the sort of tools that have been turning up in papers and demos for years. But it's always significant when techniques like these go from bleeding-edge experiments, shared on Twitter among those in the know, to headline features in consumer juggernauts like Photoshop.

As always with these sorts of features, the proof will be in the editing, and the actual utility of neural filters will depend on how Photoshop's many users react to them. But in a virtual demo The Verge saw, the new tools delivered fast and good quality results (though we didn't see the facial expression adjustment tool). These AI-powered edits weren't flawless, and most professional retouchers would want to step in and make some adjustments of their own afterwards, but they seemed like they would speed up many editing tasks.

AI tools like this work by learning from past examples. So, to create the neural filter that's used to smooth away skin blemishes, for example, Adobe collected thousands of before and after shots of edits made by professional photographers, feeding this data into their algorithms. The GANs operate like a paired student and teacher, with one part trying to copy these examples while the other tries to distinguish between this output and the training data. Eventually, when even the GAN is getting confused trying to tell the difference between the two, the training process is complete.
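
To make the student-and-teacher description concrete, here is a minimal, purely illustrative GAN training loop in PyTorch on toy vectors. It is not Adobe's model or training code; the "professional edit" data is simulated.

```python
# A toy generator/discriminator ("student"/"teacher") loop, for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 16  # size of a toy "edit" vector standing in for an image

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
discriminator = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_examples(n):
    # Stand-in for "before and after" edits made by professional retouchers.
    return torch.randn(n, dim) * 0.5 + 2.0

for step in range(500):
    real = real_examples(64)
    fake = generator(torch.randn(64, 8))

    # The "teacher" (discriminator) learns to tell real edits from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # The "student" (generator) learns to produce output the teacher accepts.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Training is considered done when the discriminator can no longer tell the two apart.
```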

"Basically, we're training the GAN to make the same corrections a professional retoucher would do," Alexandru Costin, Adobe's vice president of engineering for Creative Cloud, told The Verge.

It sounds straightforward, but there are lots of ways this training can go wrong. A big one is biased data. The algorithms only know the world you show them, so if you only show them images of, say, white faces, they won't be able to make edits for anyone whose complexion doesn't fit within this narrow range. This sort of bias is why facial recognition systems often perform worse on women and people of color. These faces just aren't in the training data.

Costin says Adobe is acutely aware of this problem. If it trained its algorithms on too many white faces, he says, its neural filters might end up pushing AI-edited portraits toward whiter complexions (a problem we've seen in the past with other ML applications).

"One of the biggest challenges we have is preserving the skin tone," says Costin. "This is a very sensitive area." To help root out this bias, Adobe has set up review teams and an AI ethics committee that test the algorithms every time a major update is made. "We do a very thorough review of every ML feature, to look at this criteria and try and raise the bar."

But one key advantage Adobe has over other teams building AI image-editing tools is its catalog of stock photography: a huge array of images that span different ages, races, and genders. This, says Costin, made it easy for Adobe's researchers to balance their datasets to try to minimize bias. "We complemented our training data with Adobe stock photos," says Costin, "and that allowed us to have a good as possible, distributed training set."

Of course, all this is no guarantee that biased results won't appear somewhere, especially when the neural filters get out of beta testing and into the hands of the general public. For that reason, each time a filter is applied, Photoshop will ask users whether they're happy with the results, and, if they're not, give them the option of reporting inappropriate content. If users choose, they can also send their before and after images anonymously to Adobe for further study. In that way, the company hopes to not only remove bias, but also expand its training data even further, pushing its neural filters to greater levels of fidelity.

This sort of speedy update based on real-world usage is common in the fast-moving world of AI research. Often, when a new machine learning technique is published (usually on a site named arXiv, an open-access collection of scientific papers that haven't yet been published in a journal), other researchers will read it, adopt it, and adapt it within days, sharing results and tips with one another on social media.

Some AI-focused competitors to Photoshop distinguish themselves by embracing this sort of culture. A program like Runway ML, for example, not only allows users to train machine learning filters using their own data (something that Photoshop does not), but it operates a user-generated marketplace that makes it easy for people to share and experiment with the latest tools. If a designer or illustrator sees something cool floating around on Twitter, they want to start playing with it immediately rather than wait for it to trickle into Photoshop.

As a widely used product with customers who value stability, Adobe can't truly compete with this sort of speed, but with neural filters, the company is dipping a toe into these fast-moving waters. While two of the filters are presented as finished features, six are labeled as beta tools, and eight more are only listed as names, with users having to request access. You can see a full list of the different filters and their respective tiers below:

Featured Neural Filters: Skin Smoothing, Style Transfer
Beta Neural Filters: Smart Portrait, Makeup Transfer, Depth-Aware Haze, Colorize, Super Zoom, JPEG Artifacts Removal
Future Neural Filters: Photo Restoration, Dust and Scratches, Noise Reduction, Face Cleanup, Photo to Sketch, Sketch to Portrait, Pencil Artwork, Face to Caricature

Yap says this sort of approach is new to Photoshop but will hopefully let Adobe temper users' expectations about AI tools, giving them the license to update the tools more quickly. "We've built this framework that allows us to bring models [to users] faster, from research to Photoshop," says Yap. "Traditionally when we do features, like sky replacement, they're really deeply integrated into the product and so take a longer time to mature." With neural filters, that update cycle will ideally be much faster.

"It's this pace that we're trying to bring into Photoshop," says Costin. "And it will come at the cost of the feature not being perfect when we launch, but we're counting on our community of users to tell us how good it is [...] and then we will take in that data and refine it and improve it."

In other words: the flywheel of AI progress, wherein more users create more data that creates better tools, is coming to Photoshop. Tweaking someone's age is just the start.

Read the original:
Photoshops AI neural filters can tweak age and expression with a few clicks - The Verge


Cryptocurrency Is Just a Minor Threat to the State- CoinDesk – CoinDesk

Are cryptocurrencies a new form of money and, if so, do they threaten state power?

Our friend Nic Carter has recently commented on these questions in dialogue with the Federal Reserve Bank of New York. We would like to add our perspective and thoughts on this, as we believe there is value to be derived from discussing these matters in depth. For better and worse, we believe that blockchains such as Bitcoin, Ethereum and Handshake (in which I am involved) have features that make them a novel threat to the powers that states derive from currency issuance, but only a very marginal threat. This fairly mild conclusion flows from more controversial premises.

Steven McKie is a founding partner and managing director at Amentum Capital, developer on HandyMiner and HandyBrowser for Handshake and host of the BlockChannel podcast. A version of this article first appeared on Amentum's blog.

The New York Fed writers name three kinds of money: fiat money, money with intrinsic or commodity value and claim-backed money. Without getting lost in the weeds, we think this overcomplicates things. All money that we can think of falls into two categories: either it has intrinsic value (like edible grains) or it doesn't. If it doesn't, then its value comes from the supposition that someone else values it.

This mysterious someone else might be totally unspecified, as when we suppose someone will pay us for gold; or it might include a specific party, such as a state, that promises to take the money in exchange for, e.g., discharging tax obligations. Bitcoin, like gold in the post-gold-standard era, falls into the former category. It has no intrinsic value and nobody in particular has promised to exchange anything for it. We just guess that someone will.

But we should not be surprised that the world's most popular kinds of money are the ones that states explicitly promise to honor. For states, such promises are an extremely important instrument of their power. For example, by only accepting dollars as tax payment, the United States obliges its hundreds of millions of people to make sure they have dollars handy. Because of this, everyone in the world knows they can sell their dollars to someone (i.e., to U.S. residents). Moreover, everyone knows that by accumulating dollars they gain certain leverage over the United States. This situation enables the United States to print its own money and, in so doing, project its power around the world.

The power to print money also gives states another kind of power: It enables them to maximize their productivity. By increasing the money supply, they can pull more people on the margins of the economy into the productive process. But this comes at the cost of the scarcity of money and, because it puts the newly minted money directly into the pockets of the less-powerful, tends to decrease the power of those who have already accumulated a lot of money. Hence, artificial constraints of the money supply, like the gold standard, are often associated with extremely conservative politics. Constraining the money supply hurts productivity, but it preserves social hierarchies.

This is where the more benign hopes of transcending nation-states mix with the darker fantasies of so-called bitcoin maximalists. On the one hand, a meaningful alternative to national currencies could allow people in abusive regimes not to rely on their governments' worthless promises. On the other hand, a mechanistically fixed supply of money could put an unequal social hierarchy beyond the reach of democratic power, as the gold standard once did.

Bitcoin, in this respect, is very much like gold. And like gold, it poses no active threat to state currencies or state power. For the value of state currencies as described above is predicated upon the actual, practical power of states. Throughout modern history, the preeminent reserve currency has been the coin of the worlds preeminent military power. Only if states lose their status as the main global powers are their currencies likely to follow suit.

Cryptocurrencies are only playing around the margins of this reality. Still, they can play an interesting role because they have features that prior non-state currencies did not. For example, they can facilitate coordination and communication between their holders. Imagine if all the holders of gold could, for example, vote on whether to mine more. Moreover, some cryptocurrencies have intrinsic value, such as ether (paying for the use of a distributed network), or HNS (paying for domain names on a decentralized registry).

The ongoing improvements in global cooperation that happen in the bitcoin/crypto private sector derive from the many players that ensure a proof-of-work (PoW) system remains secure.

The production of hashrate involves many intricacies, such as power and chipmaker pricing negotiation, manufacturing, international sales and marketing, mining pools and hashpower secondary markets. All of these play a part in hardening relationships locally and internationally.

Therefore, a properly secured chain works its way into regional regulations and labor, becoming a localized economic staple over time as it approaches scale. The second-order effects that come from that embedded chain of incentives include a public blockchain that is secure not just technically but socially and politically. The most secure chains possessing such widespread economies of scale become powerful economic instruments of finance and political and social progress (albeit slowly, but each new major public chain hastens this emergent process, thankfully).

In essence, though these systems may at first seem adversarial to state power by their very design, if you look more closely you'll see they inherently (slowly) improve diplomacy via scalable trustless cooperation and international business over time.

To understand more about the alchemy of PoW hashpower and how it naturally derives incentives for international business cooperation, see this ongoing series from Anicca Research. The trustless systems we deploy globally have powerful consequences, and it's important that we as an industry understand how to continually scale the positive aspects of decentralized monetary systems without amplifying the negative effects, such as centralized financial influence.

States are not wrong to be somewhat threatened by these hard-to-assess possibilities. If many people decide they would rather hold cryptocurrencies than state-backed currencies, it will diminish states' abilities to project power through their coins.

But states still have the armies, the police and, on a good day anyway, democratic legitimacy. All of that still matters, and will for a long time.

View original post here:
Cryptocurrency Is Just a Minor Threat to the State- CoinDesk - CoinDesk
