
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias – The Verge

It's a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

It's not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: "This image speaks volumes about the dangers of bias in AI."

But what's causing these outputs, and what do they really tell us about AI bias?

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the "zoom and enhance" tropes you see in TV and film but, unlike in Hollywood, real software can't just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you're probably familiar with its work. It's the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com; faces so realistic they're often used to generate fake social media profiles.

What PULSE does is use StyleGAN to imagine the high-res version of pixelated inputs. It does this not by enhancing the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.

This means each low-res image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It's also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution. It's not that the algorithm is finding new detail in the image as in the "zoom and enhance" trope; it's instead inventing new faces that, when downscaled, match the input data.
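To make that many-to-one relationship concrete, here is a toy numpy sketch (not PULSE's actual code): a block-averaging downscaler, a random search standing in for StyleGAN's latent-space search, and a direct demonstration that many distinct high-res images collapse to the same low-res input.

```python
import numpy as np

def downscale(img, factor=4):
    """Block-average an image; this is the many-to-one step PULSE inverts."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
low_res = rng.random((2, 2))  # stand-in for the pixelated input photo

# Stand-in for searching StyleGAN's latent space: sample candidate
# high-res "faces" and keep the one whose downscaled version best
# matches the low-res input.
candidates = rng.random((500, 8, 8))
errors = [np.abs(downscale(c) - low_res).mean() for c in candidates]
best = candidates[int(np.argmin(errors))]

# Any pattern whose 4x4 block means are zero can be added without
# changing the downscaled result: many different faces, one input.
delta = 0.05 * np.tile([[1, -1], [-1, 1]], (4, 4))
alt = best + delta
assert np.allclose(downscale(best), downscale(alt))
assert not np.allclose(best, alt)
```

The two assertions at the end are the whole point: `best` and `alt` are different high-res images, yet they are indistinguishable once pixelated, so neither is "the" correct enhancement.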

This sort of work has been theoretically possible for a few years now, but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That's when the racial disparities started to leap out.

PULSE's creators say the trend is clear: when using the algorithm to scale up pixelated images, the algorithm more often generates faces with Caucasian features.

"It does appear that PULSE is producing white faces much more frequently than faces of people of color," wrote the algorithm's creators on GitHub. "This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of."

In other words, because of the data StyleGAN was trained on, when it's trying to come up with a face that looks like the pixelated input image, it defaults to white features.

This problem is extremely common in machine learning, and it's one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it's white men who dominate AI research.

But exactly what the Obama example reveals about bias and how the problems it represents might be fixed are complicated questions. Indeed, they're so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.

On a technical level, some experts aren't sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image.

"These faces were generated using the same concept and the same StyleGAN model but different search methods to Pulse," says Klingemann, who says we can't really judge an algorithm from just a few samples. "There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally correct," he told The Verge.

(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it's not like huge technical flaws have stopped police from adopting technology in the past.)

But regardless of the cause, the outputs of the algorithm seem biased, something that the researchers didn't notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.

Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. "Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems," says Raji. "People of color are not outliers. We're not edge cases authors can just forget."

The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook's chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that "ML systems are biased when data is biased," and adding that this sort of bias is a far more serious problem "in a deployed product than in an academic paper." The implication being: let's not worry too much about this particular example.

Many researchers, Raji among them, took issue with LeCun's framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using "correct" data does not deal with the larger injustices.

Others noted that even from the point of view of a purely technical fix, "fair" datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, "fair" datasets can still create biased systems. (In a later thread on Twitter, LeCun acknowledged there were multiple causes for AI bias.)
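A quick back-of-envelope sketch (hypothetical numbers, pure Python) shows why a demographically "representative" sample still produces shakier estimates for the smaller group: the uncertainty of any per-group statistic shrinks only as the square root of that group's sample size.

```python
import math

# Roughly UK-like split: ~87% majority, ~13% minority (hypothetical counts).
n_majority, n_minority = 870, 130
sigma = 1.0  # assume the same within-group variability for both groups

# Standard error of a per-group estimate scales as sigma / sqrt(n),
# so a faithful mirror of the population still learns the minority
# group's statistics far less precisely.
se_majority = sigma / math.sqrt(n_majority)
se_minority = sigma / math.sqrt(n_minority)

assert se_minority / se_majority > 2.5  # roughly 2.6x noisier estimates
```

Under these assumed numbers, the model's view of the minority group is about 2.6 times noisier than its view of the majority, even though nothing about the sampling was "unfair."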

Raji tells The Verge she was also surprised by LeCun's suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.

"Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize," says Raji. "I literally cannot understand how someone in that position doesn't acknowledge the role that research has in setting up norms for engineering deployments."

When contacted by The Verge about these comments, LeCun noted that he'd helped set up a number of groups, inside and outside of Facebook, that focus on AI fairness and safety, including the Partnership on AI. "I absolutely never, ever said or even hinted at the fact that research does not play a role in setting up norms," he told The Verge.

Many commercial AI systems, though, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.

In this sense, then, the value of the Obama image isn't that it exposes a single flaw in a single algorithm; it's that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides, however, is that the problem of bias goes far deeper than any dataset or algorithm. It's a pervasive issue that requires much more than technical fixes.

As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: "In case it needed to be said explicitly - This isn't a call for diversity in datasets or improved accuracy in performance - it's a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place."

Update, Wednesday, June 24: This piece has been updated to include additional comment from Yann LeCun.

Read the original here:
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias - The Verge


AI and Machine Learning Are Changing Everything. Here’s How You Can Get In On The Fun – ExtremeTech

This site may earn affiliate commissions from the links on this page. Terms of use.

There isn't a new story every week about an interesting new application of artificial intelligence and machine learning happening out there somewhere. There are actually at least five of those stories. Maybe 10. Sometimes, even more.

Like how UK officials are using AI to spot invasive plant species and stop them before they cause expensive damage to roads. Or how artificial intelligence is playing a key role in the fight against COVID-19. Or even in the ultimate in mind-bending Black Mirror-type ideas, how AI is actually being used to help to build and manage other AIs.

Scariness aside, the power of artificial intelligence and machine learning to revolutionize the planet is taking hold in virtually every industry imaginable. With implications like that, it isn't hard to understand how a computer science type trained in AI practices can become a key member of any business, with a paycheck to match.

The skills to get into this exploding field can be had in training like The Ultimate Artificial Intelligence Scientist Certification Bundle ($34.99, over 90 percent off).

The collection features four courses and almost 80 hours of content, introducing interested students to the skills, tools and processes needed to not only understand AI, but apply that knowledge to any given field. With nearly 200,000 positive reviews offered from more than a million students who have taken the courses, it's clear why these Super Data Science-taught training sessions attract so many followers.

The coursework begins at the heart of AI and machine learning with the Python A-Z course.

Python is the language most prominently linked to the development of such techniques; students follow step-by-step tutorials to understand how Python coding works, then apply that training to actual real-world exercises. Even learners who had never delved into AI's inner workings said the course left them fascinated and eager to learn more about data science.

With the basic underpinnings in hand, students move to Machine Learning A-Z, where more advanced theories and algorithms take on practical shape with a true user's guide to crafting your own thinking computers. Students get a true feel for machine learning from professional data scientists, who help even complex ideas like dimensionality reduction become relatable.

In Deep Learning A-Z, large data sets work hand-in-hand with programming fundamentals to help students unlock AI principles in some exciting projects. Students work with artificial neural networks and put them into practice to see how machines can actually think for themselves.

Finally, Tensorflow 2.0: A Complete Guide on the Brand New Tensorflow takes a closer look at TensorFlow, one of the most powerful tools AI experts use to craft working networks. Actual TensorFlow exercises will explain how to build models and construct large-scale neural networks so machines can understand all the information they're processing, then use that data to define their own solutions to problems.

Regularly priced at $200 per course, you can pick up all four courses now for just $34.99.

Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.


See the original post here:
AI and Machine Learning Are Changing Everything. Here's How You Can Get In On The Fun - ExtremeTech


SLAM + Machine Learning Ushers in the "Age of Perception" – Robotics Business Review

The recent crisis has increased focus on autonomous robots being used for practical benefit. We've seen robots cleaning hospitals, delivering food and medicines and even assessing patients. These are all amazing use cases, and clearly illustrate the ways in which robots will play a greater role in our lives from now on.

However, for all their benefits, currently the ability for a robot to autonomously map its surroundings and successfully locate itself is still quite limited. Robots are getting better at doing specific things in planned, consistent environments; but dynamic, untrained situations remain a challenge.

Age of Perception

What excites me is the next generation of SLAM (Simultaneous Localization and Mapping) that will allow robot designers to create robots much more capable of autonomous operation in a broad range of scenarios. It is already under development and attracting investment and interest across the industry.

We are calling it the Age of Perception, and it combines recent advances in machine and deep learning to enhance SLAM. Increasing the richness of maps with semantic scene understanding improves localization, mapping quality and robustness.

Simplifying Maps

Currently, most SLAM solutions take raw data from sensors and use probabilistic algorithms to calculate the location and a map of the surroundings of the robot. LIDAR is most commonly used, but increasingly lower-cost cameras are providing rich data streams for enhanced maps. Whatever sensors are used, the data creates maps made up of millions of 3-dimensional reference points. These allow the robot to calculate its location.

The problem is that these clouds of 3D points have no meaning; they are just a spatial reference for the robot to calculate its position. Constantly processing all of these millions of points is also a heavy load on the robot's processors and memory. By inserting machine learning into the processing pipeline we can both improve the utility of these maps and simplify them.

Panoptic Segmentation

Panoptic segmentation techniques use machine learning to categorize collections of pixels from camera feeds into recognizable objects. For example, the millions of pixels representing a wall can be categorized as a single object. In addition, we can use machine learning to predict the geometry and the shape of these pixels in the 3D world. So, millions of 3D points representing a wall can all be summarized into a single plane. Millions of 3D points representing a chair can all be summarized into a shape model with a small number of parameters. Breaking scenes down into distinct objects in 2D and 3D lowers the overhead on processors and memory.
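The wall-to-plane compression described above can be sketched in a few lines of numpy (a toy illustration with simulated points, not a SLAM system): thousands of noisy 3D points lying on a wall-like plane are replaced by just three fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 5,000 noisy 3D points sampled from a wall-like plane
# z = 2x - y + 3 (coefficients chosen arbitrarily for the demo).
xy = rng.uniform(-5, 5, size=(5000, 2))
z = 2 * xy[:, 0] - xy[:, 1] + 3 + rng.normal(0, 0.01, size=5000)

# Least-squares plane fit z ~ a*x + b*y + c: three parameters now
# summarize what previously took 5,000 raw reference points.
A = np.column_stack([xy, np.ones(len(xy))])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

assert abs(a - 2) < 0.01 and abs(b + 1) < 0.01 and abs(c - 3) < 0.01
```

The same idea scales up: once pixels are grouped into a "wall" object, fitting a compact geometric model to its points is what slashes the processing and memory load.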


Adding Understanding

As well as simplification of maps, this approach provides the foundation of greater understanding of the scenes the robot's sensors capture. With machine learning we are able to categorize individual objects within the scene and then write code that determines how they should be handled.

The first goal of this emerging capability is to be able to remove moving objects, including people, from maps. In order to navigate effectively, robots need to reference static elements of a scene; things that will not move, and so can be used as a reliable locating point. Machine learning can be used to teach autonomous robots which elements of a scene to use for location, and which to disregard as parts of the map or classify them as obstacles to avoid. Combining the panoptic segmentation of objects in a scene with underlying map and location data will soon deliver massive increases in accuracy and capability of robotic SLAM.

Perceiving Objects

The next exciting step will be to build on this categorization to add a level of understanding of individual objects. Machine learning, working as part of the SLAM system, will allow a robot to learn to distinguish the walls and floors of a room from the furniture and other objects within it. Storing these elements as individual objects means that adding or removing a chair will not necessitate the complete redrawing of the map.

This combination of benefits is the key to massive advances in the capability of autonomous robots. Robots do not generalize well in untrained situations; changes, particularly rapid movement, disrupt maps and add significant computational load. Machine learning creates a layer of abstraction that improves the stability of maps. The greater efficiency it allows in processing data creates the overhead to add more sensors and more data that can increase the granularity and information that can be included in maps.


Natural Interaction

Linking location, mapping and perception will allow robots to understand more about their surroundings and operate in more useful ways. For example, a robot that can perceive the difference between a hall and a kitchen can undertake more complex sets of instructions. Being able to identify and categorize objects such as chairs, desks, and cabinets will improve this still further. Instructing a robot to go to a specific room to get a specific thing will become much simpler.

The real revolution in robotics will come when robots start interacting more with people in more natural ways: robots that learn from multiple situations and combine that knowledge into a model that allows them to take on new, untrained tasks based on maps and objects preserved in memory. Creating those models and abstractions demands complete integration of all three layers of SLAM. Thanks to the efforts of those who are leading the industry in these areas, I believe that the Age of Perception is just around the corner.

Editor's Note: Robotics Business Review would like to thank SLAMcore for permission to reprint the original article (found HERE).

Visit link:
SLAM + Machine Learning Ushers in the "Age of Perception" - Robotics Business Review


Google's new ML Kit SDK keeps all machine learning on the device – SlashGear

Smartphones today have become so powerful that sometimes even mid-range handsets can support some fancy machine learning and AI applications. Most of those, however, still rely on cloud-hosted neural networks, machine learning models, and processing, which has both privacy and efficiency drawbacks. Contrary to what most would expect, Google has been moving to offload much of that machine learning activity from the cloud to the device, and its new machine learning development tool is the latest step in that direction.

Google's machine learning SDK, ML Kit, has been around for two years now, but it has largely been tied to its Firebase mobile and web development platform. Like many Google products, this creates a dependency on a cloud platform that entails not just some latency due to network bandwidth but also risks leaking potentially private data in transit.

While Google is still leaving that ML Kit + Firebase combo available, it is now also launching a standalone software development kit or SDK for both Android and iOS app developers that focuses on on-device machine learning. Since everything happens locally, the user's privacy is protected and the app can function almost in real-time regardless of the speed of the Internet connection. In fact, an ML-using app can even work offline for that matter.

The implications of this new SDK can be quite significant but it still depends on developers switching from the Firebase version to the standalone SDK. To give them a hand, Google created a code lab that combines the new ML Kit with its CameraX app in order to translate text in real-time without connecting to the Internet.

This can definitely help boost confidence in AI-based apps if the user no longer has to worry about privacy or network problems. Of course, Google would probably prefer that developers keep using the Firebase connection, which it even describes as getting "the best of both products."

Excerpt from:
Google's new ML Kit SDK keeps all machine learning on the device - SlashGear


Machine Learning vs Predictive Analytics: Are they same? – Analytics Insight

Artificial Intelligence (AI) has been trending in headlines for quite some time for all exciting reasons. While it is not a new buzzword in the technical nor business world, it is successfully transforming industries around the globe. To date, enterprises, firms and start-ups are racing to adopt AI in their business culture. This emerging technology has blessed us with improved computing and analysis of data, cloud-based services and many more. The applications are so vast that, business leaders might find themselves caught up in confusion on what to implement for their business practices and get maximized ROI.

Well, as per the most preferred options, machine learning and predictive analytics are used to cater to such needs. Thanks to them, companies can extract relevant insights about their clients, market and businesses with a fraction of operational costs. Although they are both centered on effectual data processing, machine learning (ML) and predictive analytics are sometimes used interchangeably. Predictive analysis works on the lines of machine learning, yet they are different terms with varied potential.

Machine Learning is an AI methodology where algorithms are given data and asked to process it without predetermined rules. This allows the machine learning models to make assumptions, test them and learn autonomously, without being explicitly programmed. It is accomplished by feeding the model with data and information in the form of observations and real-world interactions. For example, machine learning is used for understanding the difference between spam, malicious comments, and positive comments on Reddit by studying a given dataset of comments existing on the social community discussion page.

There are two types of machine learning: supervised and unsupervised.

Supervised or assisted machine learning requires an operator to feed pre-defined patterns, known behaviors, and inputs from human operators to help models learn more accurately. It helps the machine model comprehend the kind of output desired and allows the operator to gain control of the process. On the other hand, unsupervised or unassisted machine learning depends on the machine's ability to identify those patterns and behaviors from data streams, as no training data is provided. One instance of its application is employing it for intelligent profiling to find parallels between a restaurant chain's most valuable customers.
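The distinction can be shown in a short numpy sketch (toy data, hypothetical "customer segments"): the supervised model is handed labels, while the unsupervised one, here a minimal k-means, must discover the same structure on its own.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated groups of 2D points (say, two customer segments).
group_a = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
group_b = rng.normal([5.0, 5.0], 0.5, size=(100, 2))
data = np.vstack([group_a, group_b])
labels = np.array([0] * 100 + [1] * 100)

# Supervised: labels are given, so "training" is just computing each
# class mean (a nearest-centroid classifier).
centroids = np.array([data[labels == k].mean(axis=0) for k in (0, 1)])

# Unsupervised (k-means): no labels; alternate assign/update steps
# until the algorithm discovers the clusters on its own.
centers = np.array([data.min(axis=0), data.max(axis=0)])  # crude init
for _ in range(10):
    assign = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([data[assign == k].mean(axis=0) for k in (0, 1)])

# With clean separation, both approaches recover the same two means.
order = np.argsort(centers[:, 0])
assert np.allclose(centers[order], centroids)
```

The takeaway: when the structure is obvious, both paths converge, but only the supervised version knows what the groups mean; the unsupervised one merely knows they exist.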

Predictive analytics, in contrast, refers to the process of analyzing historical data, as well as existing external data, to find patterns and behaviors. Although an advanced form of AI analytics, it existed much before the birth of AI. Mathematician Alan Turing harnessed it to decode encrypted German messages (the Enigma code) during World War II.

It also automates forecasting with substantial accuracy so that business firms can focus on other crucial daily tasks. However, since the patterns remain the same in most cases, predictive analytics is more static and less adaptive than machine learning. Therefore, any change to the analysis model or parameters must be done manually by data scientists. Its common adopters are banks and Fintech industries. There, these analytics tools are used to detect and reduce fraud, determine market risk, identify prospects, and more.
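The "static model, manual refit" style of predictive analytics can be illustrated with a minimal forecast (hypothetical revenue figures): fit a fixed-form trend line to historical data and extrapolate one period ahead. The model will not adapt to new behavior unless an analyst refits it.

```python
import numpy as np

# Twelve months of historical revenue (hypothetical numbers), trending up.
months = np.arange(12)
revenue = np.array([101, 104, 111, 114, 121, 124, 131, 135, 140, 146, 150, 155])

# Classic predictive analytics: fit a pre-defined model form (a straight
# line) to past data, then extrapolate to forecast the next month.
slope, intercept = np.polyfit(months, revenue, 1)
forecast_next = slope * 12 + intercept  # month 13's predicted revenue

assert 4.5 < slope < 5.5        # ~5 units of growth per month
assert 155 < forecast_next < 165
```

A machine learning system, by contrast, could keep updating itself as new months arrive; here the line's slope and intercept stay frozen until someone reruns the fit.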

One cannot possibly decide which of the two is the better option for business, as their use cases are not the same. For example, one of the business applications of machine learning is to measure real-time employee satisfaction, while predictive analytics is better suited for fields like marketing campaign optimization. Strategies based on predictive analysis can empower brands to identify, engage, and secure suitable markets for their services and products, and boost the efficiency and ROI of marketing campaigns. This is possible as the analysis is focused on data streams that require specific pre-defined parameters. The software can display foresight on KPIs, which include revenue, churn rate, conversion rate, and other metrics.

As mentioned earlier, it is an indispensable asset in Fintech and banking sectors. It is also used to gain insight into their customers' buying habits.

Machine learning is competent in scanning business assets to locate security risks and origins of possible threats, thereby playing a significant role in cyber-security. It further helps in increasing the value of user-generated content (UGC) by skimming out the bad, spamming, and hate content. Also, by observing and understanding customer behavior, it can determine the success of an advertisement's performance and speed up product discovery.

Apart from their apparent difference, both these branches of AI hold immense and impressive possibilities. They can be adjusted to match a project's scale, and accordingly include tools that align most in achieving the project goals. Companies must act quickly, lest they risk being trampled by their rivals who have already implemented them. Also, it is important to remember that all predictive analytics methods are not part of machine learning.

Continued here:
Machine Learning vs Predictive Analytics: Are they same? - Analytics Insight


AI experts say research into algorithms that claim to predict criminality must end – The Verge

A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual's criminality using algorithms trained on data like facial scans and criminal statistics.

Such work is not only "scientifically illiterate," says the Coalition for Critical Technology, but "perpetuates a cycle of prejudice against Black people and people of color." Numerous studies show the justice system treats these groups more harshly than white people, so any software trained on this data simply amplifies and entrenches societal bias and racism.

"Let's be clear: there is no way to develop a system that can predict or identify criminality that is not racially biased, because the category of criminality itself is racially biased," write the group. "Research of this nature and its accompanying claims to accuracy rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral."

An open letter written by the Coalition was drafted in response to news that Springer, the world's largest publisher of academic books, planned to publish just such a study. The letter, which has now been signed by 1,700 experts, calls on Springer to rescind the paper and for other academic publishers to refrain from publishing similar work in the future.

"At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature," write the group. "The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world."

In the study in question, titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," researchers claimed to have created a facial recognition system that was "capable of predicting whether someone is likely going to be a criminal ... with 80 percent accuracy and no racial bias," according to a now-deleted press release. The paper's authors included PhD student and former NYPD police officer Jonathan W. Korn.

In response to the open letter, Springer said it would not publish the paper, according to MIT Technology Review. "The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings," said the company. "After a thorough peer review process the paper was rejected."

However, as the Coalition for Critical Technology makes clear, this incident is only one example in a wider trend within data science and machine learning, where researchers use socially-contingent data to try and predict or classify complex human behavior.

In one notable example from 2016, researchers from Shanghai Jiao Tong University claimed to have created an algorithm that could also predict criminality from facial features. The study was criticized and refuted, with researchers from Google and Princeton publishing a lengthy rebuttal warning that AI researchers were revisiting the pseudoscience of physiognomy, a discipline founded in the 19th century by Cesare Lombroso, who claimed he could identify "born criminals" by measuring the dimensions of their faces.

"When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism," wrote the researchers. "Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development."

The 2016 paper also demonstrated how easy it is for AI practitioners to fool themselves into thinking they've found an objective system of measuring criminality. The researchers from Google and Princeton noted that, based on the data shared in the paper, all the non-criminals appeared to be smiling and wearing collared shirts and suits, while none of the (frowning) criminals were. It's possible this simple and misleading visual tell was guiding the algorithm's supposedly sophisticated analysis.
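A deliberately crude, made-up sketch shows how such a shortcut fools its builders: in a sample where every "non-criminal" photo happens to be smiling, a rule that only checks the smile scores perfectly, while learning nothing about criminality at all.

```python
# Hypothetical toy data, NOT the study's data.
# Features: (smiling, wearing_collar); label 1 = "criminal" in the flawed sample.
samples = [((1, 1), 0), ((1, 1), 0), ((1, 0), 0),
           ((0, 0), 1), ((0, 0), 1), ((0, 1), 1)]

# A "model" that only looks at the smile feature: frowning => criminal.
rule = lambda features: 1 - features[0]

train_acc = sum(rule(x) == y for x, y in samples) / len(samples)
assert train_acc == 1.0  # perfect accuracy on the confounded sample

# One smiling photo labeled "criminal" would break the shortcut instantly:
assert rule((1, 0)) == 0  # the rule calls any smiling face "non-criminal"
```

The 100 percent training accuracy is exactly the kind of number that ends up in a press release, even though the model has only learned a facial expression that correlated with the labels by construction.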

The Coalition for Critical Technologys letter comes at a time when movements around the world are highlighting issues of racial justice, triggered by the killing of George Floyd by law enforcement. These protests have also seen major tech companies pull back on their use of facial recognition systems, which research by Black academics has shown is racially biased.

The letter's authors and signatories call on the AI community to reconsider how it evaluates the "goodness" of its work, thinking not just about metrics like accuracy and precision, but about the social effect such technology can have on the world. "If machine learning is to bring about the social good touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and the attendant oppressions) that make their work possible," write the authors.

Read more from the original source:
AI experts say research into algorithms that claim to predict criminality must end - The Verge


Machine Learning as a Service Market How the Industry Will Witness Substantial Growth in the Upcoming years to 2023 – Cole of Duty

Premium Market Insights delivers well-researched industry-wide information on the Machine Learning as a Service market. It studies the market's essential aspects such as top participants, expansion strategies, business models, and other market features to gain improved market insights. Additionally, it focuses on the latest advancements in the sector and technological development, executive tools, and tactics that can enhance the performance of the sectors.

Request Sample Copy of Machine Learning as a Service Market at: https://www.premiummarketinsights.com/sample/AMR00013469

Top Key Players:

Key players that operate in the machine learning as a service market are Google Inc., SAS Institute Inc., FICO, Hewlett Packard Enterprise, Yottamine Analytics, Amazon Web Services, BigML, Inc., Microsoft Corporation, Predictron Labs Ltd., and IBM Corporation

Scope of the Report

The research on the Machine Learning as a Service market concentrates on extracting valuable data on swelling investment pockets, significant growth opportunities, and major market vendors to help business owners understand what their competitors are doing best to stay ahead of the competition. The research also segments the Machine Learning as a Service market on the basis of end user, product type, application, and demography for the forecast period 2019–2023. Detailed analysis of critical aspects such as impacting factors and the competitive landscape is showcased with the help of vital resources, which include charts, tables, and infographics.

For more clarity on the real potential of the Machine Learning as a Service market for the forecast period 2019–2023, the study provides vital intelligence on major opportunities, threats, and challenges posed by the industry. Additionally, a strong emphasis is laid on the weaknesses and strengths of a few prominent players operating in the same market. Quantitative assessment of the recent momentum brought about by events such as collaborations, mergers and acquisitions, product launches, and technology innovation empowers product owners, marketing professionals, and business analysts to make profitable decisions that reduce costs and increase their customer base.

Limited-Time Discount Available! Get your discounted copy at: https://www.premiummarketinsights.com/discount/AMR00013469

Our reports will help clients solve the following issues:

Insecurity about the future:

Our research and insights help our clients anticipate upcoming revenue compartments and growth ranges. This helps our clients invest in or divest their assets.

Understanding market opinions:

It is extremely vital to have an impartial understanding of market opinions for a strategy. Our insights provide a keen view of market sentiment. We maintain this reconnaissance by engaging with Key Opinion Leaders across the value chain of each industry we track.

Understanding the most reliable investment centers:

Our research ranks the investment centers of a market by considering their future demands, returns, and profit margins. Our clients can focus on the most prominent investment centers by procuring our market research.

Evaluating potential business partners:

Our research and insights help our clients in identifying compatible business partners.

The research provides answers to the following key questions:

Interested in purchasing this Report? Click here @ https://www.premiummarketinsights.com/inquiry/AMR00013469

Geographically, this report focuses on product sales, value, market share, and growth opportunity in key regions such as the United States, Europe, China, Japan, Southeast Asia, and India.

About Premium market insights:

Premiummarketinsights.com is a one-stop shop for market research reports and solutions for companies across the globe. We support our clients' decision-making by helping them choose the most relevant and cost-effective research reports and solutions from various publishers. We provide best-in-class customer service, and our customer support team is always available to help with your research queries.

Contact Us:

Sameer Joshi | Call: US: +1-646-491-9876, APAC: +91-2067274191 | Email: [emailprotected]


If AI is going to help us in a crisis, we need a new kind of ethics – MIT Technology Review

What opportunities have we missed by not having these procedures in place?

It's easy to overhype what's possible, and AI was probably never going to play a huge role in this crisis. Machine-learning systems are not mature enough.

But there are a handful of cases in which AI is being tested for medical diagnosis or for resource allocation across hospitals. We might have been able to use those sorts of systems more widely, reducing some of the load on health care, had they been designed from the start with ethics in mind.

With resource allocation in particular, you are deciding which patients are highest priority. You need an ethical framework built in before you use AI to help with those kinds of decisions.

So is "ethics for urgency" simply a call to make existing AI ethics better?

That's part of it. The fact that we don't have robust, practical processes for AI ethics makes things more difficult in a crisis scenario. But in times like this you also have greater need for transparency. People talk a lot about the lack of transparency with machine-learning systems as "black boxes." But there is another kind of transparency, concerning how the systems are used.

This is especially important in a crisis, when governments and organizations are making urgent decisions that involve trade-offs. Whose health do you prioritize? How do you save lives without destroying the economy? If an AI is being used in public decision-making, transparency is more important than ever.

What needs to change?

We need to think about ethics differently. It shouldn't be something that happens on the side or afterwards, something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design.

I sometimes feel "ethics" is the wrong word. What we're saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they're building, whether they're doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise?

Some of this has started already. We are working with some early-career AI researchers, talking to them about how to bring this way of thinking to their work. It's a bit of an experiment, to see what happens. But even NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work.

You've said that we need people with technical expertise at all levels of AI design and use. Why is that?

I'm not saying that technical expertise is the be-all and end-all of ethics, but it's a perspective that needs to be represented. And I don't want to sound like I'm saying all the responsibility is on researchers, because a lot of the important decisions about how AI gets used are made further up the chain, by industry or by governments.

But I worry that the people who are making those decisions don't always fully understand the ways it might go wrong. So you need to involve people with technical expertise. Our intuitions about what AI can and can't do are not very reliable.

What you need at all levels of AI development are people who really understand the details of machine learning working with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by "privacy" may be very different from what a lawyer means by it, and you can end up with people talking past each other. That's why it's important for these different groups to get used to working together.

You're pushing for a pretty big institutional and cultural overhaul. What makes you think people will want to do this rather than set up ethics boards or oversight committees, which always make me sigh a bit because they tend to be toothless?

Yeah, I also sigh. But I think this crisis is forcing people to see the importance of practical solutions. Maybe instead of saying, "Oh, let's have this oversight board and that oversight board," people will be saying, "We need to get this done, and we need to get it done properly."


Here’s Why Enterprise AI Is Being Drafted to Fight Stimulus Fraud – EnterpriseAI

Prosecutors see so much fraud in the federal government's Paycheck Protection Program that, without an enterprise AI approach, they admit there are too many scams to count, let alone stop. Organized crime is scheming to take a growing cut of the emergency spending in the CARES Act. The rules of stimulus programs are constantly changing, making it hard to know who should and shouldn't obtain that financing or how they should spend it.

This sounds like a job for enterprise artificial intelligence, and banks are indeed turning to it for help. But what qualifies as AI in quelling stimulus fraud, and how exactly would it work? If it works at all.

Rules engines slipping under the waves

It is clear that common approaches, often billed as machine learning and sometimes as artificial intelligence, fail to address today's stimulus fraud-fighting needs.

A bank's anti-fraud officer has added new rules to their system for flagging activity that looks suspicious, often based on dated government law-enforcement data. They've introduced party- and account-level monitoring. They have tuned their system as often as they can. But under the pressure of massive numbers of stimulus program checks, their alert backlog is increasing, their investigators are fatiguing, and their risk is escalating.

Buried under unmanageable volumes of false positives, risk officers are unable to identify false negatives. These are the worst: the existing bank customer, for instance, who has always stayed out of the spotlight but who, under the loosened know-your-customer rules of the stimulus program, presses their advantage.

Bank officers are also striving to meet the budgetary cost-cutting measures imposed on them as their institutions try to keep compliance costs under control. These officers do the only thing they can: attempt to tune their thresholds once again, only to recognize that they can no longer tune their way out of trouble. K-means clustering, the safe go-to, does not provide the accuracy or uplift bank officers need.

Starting with basics

Simply put, anti-fraud teams need alerts to be more accurate and false positives to be rare. Accurate alerting gives investigators valuable context, so they can focus on what matters most: genuinely suspicious behavior.

An augmented anti-fraud process applies intelligence at key leverage points to produce significantly more accurate alerts. It is designed in three parts: system optimization, emerging-behavior detection, and new-entity risk detection. This allows you to take advantage of just what you need, when you need it; that is, you get only what you need to improve the parts of your process that are weakest.

Known knowns, unknown unknowns, and the rest

Optimizing a system is best done by focusing on improving its effectiveness at discovering known knowns. The key is to optimize an existing system with greater segmentation accuracy across all parties and to improve the speed, accuracy, and effectiveness of your periodic threshold-tuning process.

Emerging-behavior identification should focus on unknown knowns and on keeping your system relevant. Introduce dynamic, intelligent tuning and visibility into emerging behaviors, and retire the periodic tuning projects that are so costly, cumbersome, and immediately outdated.

New-entity risk detection means discovering net-new, unknown-unknown risks and vulnerabilities previously missed or not thought about. Identify and be alerted to new risks, not just at the loan, account, or customer level, but for any context, party, or hierarchy, and not just for stopping fraud, but for cyber, surveillance, conduct, trafficking, liquidity exposure, credit risk, and beyond.

Segmenting for success

The false-positive problem in fraud detection is primarily a function of poor segmentation of the input data. Even sophisticated financial services institutions using machine learning to detect fraud can suffer from low accuracy and high false-negative rates. This is because open-source machine learning techniques analyze data in large groups and cannot get specific enough to correctly surface genuinely suspicious behavior.

A typical segmentation process produces uneven groups, which means thresholds must be set artificially low, resulting in a significant number of false positives. Smart segmentation is the crucial first step for a system to accurately detect suspicious patterns without needlessly flagging expected ones. The process falls short when institutions only sort static account information using predetermined rules.

A good enterprise AI approach should ingest the greatest volume and variety of data available (about customers, counterparties, and transactions) and then apply objective machine learning to create the most refined and up-to-date segments possible. Topological data analysis is perhaps one of the best tools for this, given its ability to handle many variables, but it is not well known, even in the artificial intelligence field.

The crucial point is that enterprise-grade anti-fraud AI needs to be able to assign and reassign parties to segments based on their actual behavior, revealed in their real transactions and true inter-relationships over time. An intelligent segmentation process should deliver far more granular and uniform groups, resulting in higher thresholds and fewer false positives. In addition, these granular groups should catch false negatives.
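The link between segmentation and false positives can be made concrete with a toy example. This is a minimal sketch under assumed data: two synthetic customer segments and a simple mean-plus-three-sigma flagging rule, neither of which reflects any vendor's actual method (the article points to topological data analysis rather than hand-made segments):

```python
import random
import statistics

random.seed(0)

# Hypothetical transaction amounts for two customer segments:
# retail customers (small transfers) and small businesses (larger payments).
retail = [random.gauss(100, 20) for _ in range(5000)]
business = [random.gauss(5000, 800) for _ in range(5000)]
amounts = retail + business

def threshold(xs):
    """Flag anything above mean + 3 standard deviations of the group."""
    return statistics.fmean(xs) + 3 * statistics.pstdev(xs)

# To catch anomalies among retail customers, a single global threshold must
# be set low (here, 3 sigma above the retail mean), so it also flags nearly
# all perfectly normal business activity.
global_thr = threshold(retail)
global_fp = sum(a > global_thr for a in amounts) / len(amounts)

# Per-segment thresholds sit higher for the business segment, so far fewer
# normal transactions are flagged.
seg_flags = sum(sum(a > threshold(seg) for a in seg) for seg in (retail, business))
seg_fp = seg_flags / len(amounts)

print(f"global-threshold flag rate: {global_fp:.3f}")  # roughly half the book
print(f"per-segment flag rate:      {seg_fp:.3f}")     # a fraction of a percent
assert seg_fp < global_fp
```

The point of the sketch is only the mechanism: more uniform groups permit higher, segment-appropriate thresholds, which is exactly the false-positive reduction the article describes.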

Paying dividends

The unknown questions that the data and proper enterprise AI can answer will create new opportunities and growth areas, too. High-performance enterprise AI cuts the time it takes to produce insights, grows along with datasets, explores automatically and without bias, incorporates new data into older analyses, and can actually reduce hardware costs.

Bank clients won't necessarily appreciate these secondary machine learning benefits at first. They are measures that help managers detect and track patterns of fraud, not marketing tools. But they can provide winning insights and defensive alerts that will protect a company's brand, public relations, and image.

About the Author

Simon Moss is CEO of Symphony AyasdiAI, an enterprise artificial intelligence company serving financial services and other industries.


Cryptocurrency trading vs. forex: The similarities and differences – AZ Big Media

The concept of trading cryptocurrencies is coming up fast on the outside of forex trading as a popular way of investing funds in financial markets. Some people believe that the mechanics of crypto trading are similar to trading fiat currencies like the US dollar or the British pound.

But although there are some undeniable areas of overlap, there are plenty of areas where cryptocurrency trading differs from conventional foreign exchange trading too. Let us take a closer look.


Unlike the foreign exchange markets, which are only accessible 24 hours a day, five days a week, cryptocurrency markets are open 24/7. There is always an opportunity to buy or sell a cryptocurrency, regardless of which cryptocurrency exchange you use. Think of cryptos as a byproduct of today's digital society. Just like the always-on, always-connected digital world, cryptocurrency price moves wait for no one.

The Daily Hodl's report into forex and crypto trading found that forex liquidity is still far greater than that of even the biggest crypto assets like Bitcoin. In 2016, some $5 trillion was traded daily in the forex markets. Compare that with just $1 billion in the Bitcoin markets and it's easy to see that cash flow still reigns supreme in the traditional forex markets, for now.

Cryptos tend to be much more volatile than fiat currencies

As cryptocurrency markets are much, much newer than conventional forex markets, they tend to be considerably more volatile. With little history to go by, the markets can fluctuate enormously in the space of 24 hours based on economic or political news. For instance, the price of Bitcoin crashed by 20% in under an hour back in March.

One attribute that's similar in both the crypto and forex markets is that price activity is driven largely by supply and demand. When there is heightened demand for Bitcoin or the US dollar, its price will go up and, similarly, it will fall when supply exceeds demand.

Both types of trading can be automated

There is software that can be used to automate the execution of trades in both the forex and crypto markets. This software can automatically set entry and exit points in the market, as well as stop-loss points, to ensure that you manage your risk. One widely promoted crypto trading robot, Bitcoin Trader, was said to be backed by Peter Jones and is claimed to yield daily returns of as much as 400%.

Risk management is vital to be profitable in both markets

It's impossible to know which way the cryptocurrency and forex markets will move with each trade you open. That's why both forms of trading require rock-solid risk management to maintain profitability. You may have sound fundamental and technical analysis awareness but, without stop losses to protect your positions, you could face far greater losses than expected if the markets don't move the way you anticipate.
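In practice, stop-loss discipline usually comes down to sizing each position against a fixed risk budget. A minimal sketch; the `position_size` helper and all dollar figures are hypothetical illustrations, not taken from any particular trading platform:

```python
# Position sizing from a fixed risk budget: risk at most a set fraction of
# account equity on any single trade, given a chosen entry and stop price.
def position_size(equity: float, risk_pct: float,
                  entry: float, stop: float) -> float:
    """Units to buy so that a fill at `stop` loses `risk_pct` of equity."""
    risk_per_unit = abs(entry - stop)
    if risk_per_unit == 0:
        raise ValueError("stop must differ from entry")
    return (equity * risk_pct) / risk_per_unit

# Example: $10,000 account, 1% risk budget, buy BTC at $9,000, stop at $8,550.
size = position_size(10_000, 0.01, 9_000, 8_550)
print(round(size, 4))  # 0.2222 BTC: losing $450 per BTC on 0.2222 BTC is $100, i.e. 1%
```

Whatever way the market moves, a stop placed this way caps the worst-case loss on the trade at the chosen fraction of the account, which is the "rock-solid risk management" the paragraph above refers to.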

Put simply, there are pros and cons to trading either market. Given that crypto trading is more volatile than forex, it's possible to trade both simultaneously, with the slower-paced forex markets offering lower-risk opportunities and cryptos giving you a chance to generate those higher returns.
