
Return of the IT architects: how edge computing is unlocking value for global organisations – ITProPortal

With 2020 on the horizon, it's clear that digital transformation has now outgrown its buzzword status and is a fact of life for many businesses. Reports suggest that over 90 per cent of organisations in the US and UK are planning or currently undertaking these projects to gain a competitive advantage. However, the mundane reality is that many firms still aren't getting much more than incremental improvements.

One innovation set to shake things up and open the door to a whole new world of innovation-fuelled growth is edge computing. Yet many firms are held back by complexity, skills shortages and legacy database technology. The answer may lie with edge-ready database tools that support rapidly evolving developer demands, and a return to prominence for IT architects, who increasingly are the key to making edge computing a reality.

At the heart of digital transformation is the ability to give end users and customers unique experiences that help foster loyalty and ultimately drive operational efficiencies and profits. It couldn't happen without cloud computing and the low-cost, on-demand, highly scalable compute power that it provides to developer teams. However, there's a problem: digital transformation has become its own worst enemy. As the number of websites, applications, IoT devices and online services grows, so does the volume of data. According to Cisco, global datacentre traffic will triple from 2017 to 2021, to reach nearly 21 zettabytes annually.

The problem with all this data is that it's clogging up the pipes that carry it to and from the cloud and physical datacentres. Organisations want access to more and more data to uncover customer insight, and want to present their users with more and more data to improve their mission-critical and revenue-generating services and maintain a competitive edge. However, passing this ever-increasing amount of data back and forth between the edge and the core results in skyrocketing bandwidth costs and a larger exposure to the impact of latency and network outages. On top of all that are the security and compliance challenges of storing and transporting the data. This is where edge computing comes in.

Edge computing is a distributed computing model, where data processing and storage is carried out on the periphery of the network, closer to where it's actually needed. This minimises the need to send it back and forth to a centralised server or cloud, reducing bandwidth usage and latency. Crucially, it allows for much faster decision making than a more traditional centralised computing model can afford, which makes all the difference when it's applied to something time-sensitive: self-driving cars, for instance. If a self-driving car relied on cloud computing alone, it would need to send data up to the cloud or server and wait for the decision to be sent back down, regardless of whether the data was time-sensitive or not. With edge computing, the car can act on urgent items locally, so if a hazard were detected, the car can immediately make the decision to stop without waiting for data or instructions to come back from the central server.
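
To make that pattern concrete, here is a minimal sketch (not from the article) of local-first decision logic at an edge node: time-critical events are acted on immediately at the edge, while routine telemetry is queued for later synchronisation with the cloud. The class and field names are purely illustrative assumptions.

```python
# Illustrative edge-computing decision pattern: act locally on urgent events,
# queue everything else for the cloud. Names here are assumptions, not a real API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeNode:
    cloud_queue: List[dict] = field(default_factory=list)

    def handle_event(self, event: dict) -> str:
        # Safety-critical events are decided locally, with no cloud round trip.
        if event.get("hazard_detected"):
            return "BRAKE"
        # Non-urgent telemetry is buffered and synced when bandwidth allows.
        self.cloud_queue.append(event)
        return "QUEUED_FOR_CLOUD"

node = EdgeNode()
print(node.handle_event({"hazard_detected": True}))                    # -> BRAKE
print(node.handle_event({"speed_kmh": 48, "hazard_detected": False}))  # -> QUEUED_FOR_CLOUD
```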

The benefits of edge don't stop there. Take Ryanair's experience, for example. As one of the world's biggest airlines, it handles the data demands of more than three million mobile users via its app. Edge computing allowed it to cut network bandwidth from its cloud provider by as much as 80 per cent after implementing edge-enabled database technology. Plus, since it had to pay for each byte transferred to and from the cloud, this amounted to a massive cut in operating expenses.

Edge also helps businesses avoid the threat of IT outages. This type of disruption can be particularly severe: Gartner estimates that a service outage can cost up to $5,600 per minute. Centralised computing networks may not be to blame when an outage occurs, but they make addressing them more difficult. If everything on the network relies on the core to function, any downtime will be felt across the board. With edge computing, the outage is usually limited to the device in question, while the rest of the network continues uninterrupted. It's evidence like this that led IDC to predict back in 2016 that by this year at least 40 per cent of IoT-created data will be stored, processed, analysed and acted upon close to or at the network edge.

The truth is that IT systems have undergone massive architectural change over the past decade as the emergence of cloud computing centralised compute, data and storage capabilities. As infrastructure also moved to the cloud, many saw this as the death knell for the traditional IT architect. Well, now they're very much back in demand, as edge computing forces a rethink of the old architectural assumptions around cloud computing. Data is on the move again, from the centre to the edge, requiring an accompanying shift in infrastructure and the skills to understand how to manage this evolution.

Edge computing really comes into its own when you start to look at some of the companies already using it to drive value in a variety of use cases from retail to healthcare.

One such company is SyncThink: a neuro-technology firm that uses eye-tracking metrics and devices to help improve medical assessments of traumatic brain injuries. The firm required an offline mode for environments like sporting stadiums, where doctors sometimes need to conduct urgent assessments of athletes but bandwidth is often patchy because of heavy mobile usage by fans. Edge computing, supported by a mobile-ready NoSQL database, allowed the firm to offer offline capabilities and then seamlessly sync with the Azure cloud when sufficient bandwidth becomes available.

Also in healthcare, medical tech firm Becton Dickinson tapped the power of edge computing to optimise treatment for Type-2 diabetes sufferers. Medical devices and a patient app automatically collect real-time data on a patient's insulin and glucose levels, activities, meals, and location, and then provide them with customised alerts and recommendations. Once again, the value of edge is in offering offline capabilities, to ensure the consistency of collected data, with secure synchronisation offered once connectivity is available again.

Another standout example is UK delivery service Doddle. More than 80 of its locations around the country suffered from patchy mobile coverage at peak times when customers saturated the network. The answer was an edge computing set-up to ensure its customers and employees always have access to its app-based services.

Yet with new opportunities come new complexities for organisations. Although nearly 15 per cent of European IT leaders in 2018 claimed to be already using edge computing, an even bigger number (21 per cent) admitted it would take them more than five years to do so. Alongside the complexity of using multiple technologies (43 per cent), respondents cited reliance on legacy database technology (37 per cent), a lack of resources (36 per cent), and a lack of skills (33 per cent) as key barriers to adopting new digital services.

It's no exaggeration to say that IT architects have a critical role today, sitting between the C-suite and development teams to unlock value from edge computing and help everyone to realise their ambitions.

Perry Krug, Architect, Office of the CTO, Couchbase

Read more:
Return of the IT architects: how edge computing is unlocking value for global organisations - ITProPortal

Read More..

LinkShadow to Showcase Machine Learning Based Threat Analytics Technology at RSA Conference 2020 – PRNewswire

ATHENS, Ga., Feb. 7, 2020 /PRNewswire/ -- LinkShadow, Next-Generation Cybersecurity Analytics, announces its presence at the prestigious RSA Conference 2020 in San Francisco from February 24-28.

LinkShadow offers a wide spectrum of cybersecurity solutions that focus on overcoming the critical challenges of this era of smart cyberattacks. These products include ThreatScore Quadrant, Identity Intelligence, Asset AutoDiscovery, TrafficScene Visualizer & AttackScape Viewer, CXO Dashboards and Threat Shadow. When combined with state-of-the-art machine-learning capabilities, LinkShadow delivers solutions that include Behavioral Analytics, Threat Intelligence, Insider Threat Management, Privileged Users Analytics, Network Security Optimization, Application Security Visibility, Risk Scoring and Prioritization, Machine Learning and Statistical Analysis and, finally, Anomaly Detection and Predictive Analytics.

At RSA Conference, LinkShadow expert teams will be sharing valuable insights on how this dynamic platform can empower organizations and help improve their defenses against advanced cyberattacks.

Duncan Hume, Vice President USA, LinkShadow, commented: "Undoubtedly, RSA Conference is the perfect platform to showcase this unique technology, and we plan to make the best of this opportunity. While you are there, meet the technical teams for a demo session and learn how LinkShadow's best-in-class threat-hunting capabilities, powered by intense and extensive machine learning algorithms, can help organizations become cyber-resilient."

To schedule a personalized demo or arrange a meeting at LinkShadow (Booth No. 5487, North Hall), register now: https://www.linkshadow.com/events/RSA-Conference

About LinkShadow

LinkShadow is a U.S.-registered company with regional offices in the Middle East. It is pioneered by a team of highly skilled solution architects, product specialists and programmers with a vision to formulate a next-generation cybersecurity solution that provides unparalleled detection of even the most sophisticated threats. LinkShadow was built with the vision of enhancing organizations' defenses against advanced cyberattacks, zero-day malware and ransomware, while simultaneously gaining rapid insight into the effectiveness of their existing security investments. For more information, visit http://www.linkshadow.com.

Raji John | Head of Client Services, eMediaLink | T: +971 4 279 4091 | E: raji@emedialinkme.net

Related Links

Website

Registration page

SOURCE LinkShadow

https://www.linkshadow.com

Read more here:
LinkShadow to Showcase Machine Learning Based Threat Analytics Technology at RSA Conference 2020 - PRNewswire

Read More..

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning can address one of the biggest problems facing machine learning today: a lot of real-world data is not generated in the same way as the data that we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can utilize causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is randomized controlled trials (RCTs) or A/B tests. In RCTs, we can split a population of individuals into two groups: treatment and control, administering treatment to one group and nothing (or a placebo) to the other, and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data, studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another, as shown below.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x, an outcome y, and a covariate z. We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: a higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e. P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the equation (or hypothetical model):

y = 1·x + 5·z + ε

Where ε is the error term, that is, the deviation of y from its expected value given the values of x and z; it depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
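
To make the setup concrete, here is a minimal simulation sketch of that data-generating process. It is not from the original article, and the sample size and noise level are assumptions.

```python
# Simulate the hypothetical server data: memory usage z drives both the
# treatment (high request load, x) and the outcome (response time, y).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.uniform(0, 1, n)        # covariate: memory usage
x = rng.binomial(1, 1 - z)      # treatment: P(x = 1) = 1 - z
eps = rng.normal(0, 0.1, n)     # error term ε (assumed noise level)
y = 1 * x + 5 * z + eps         # response time: y = 1·x + 5·z + ε

print(y[:5])
```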

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x·Y_i(1) + (1-x)·Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over the values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to the coefficient in front of x in the response-time equation above (i.e. 1).

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of our hypothetical servers with x=1, we get an estimate of the ATE of 0.177. This happens because our treatment and control groups are not inherently directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x = 1 | z = z_i),  z_i ∈ [0, 1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression, which models e_i as 1 / (1 + exp(-(β_0 + β_1·z_i))).

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
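
The following is a minimal, self-contained sketch of that workflow (not the article's actual code): estimate propensity scores with logistic regression, match each treated server to a control server on those scores, and compare the matched estimate with the naive difference in means. Because the simulation parameters are assumptions, the exact numbers will differ from the article's 0.177 and 1.008 figures.

```python
# Naive vs propensity-score-matched ATE on simulated server data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 10_000
z = rng.uniform(0, 1, n)
x = rng.binomial(1, 1 - z)
y = 1 * x + 5 * z + rng.normal(0, 0.1, n)

# Naive estimate: biased because memory usage z differs between the groups.
naive_ate = y[x == 1].mean() - y[x == 0].mean()

# Propensity scores e_i = P(x = 1 | z = z_i) via logistic regression.
e = LogisticRegression().fit(z.reshape(-1, 1), x).predict_proba(z.reshape(-1, 1))[:, 1]

# 1-nearest-neighbour matching of each treated unit to a control unit on e_i.
treated, control = np.where(x == 1)[0], np.where(x == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(e[control].reshape(-1, 1))
_, idx = nn.kneighbors(e[treated].reshape(-1, 1))
matched_ate = (y[treated] - y[control[idx[:, 0]]]).mean()

print(f"naive ATE:   {naive_ate:.3f}")    # far from the true effect of 1
print(f"matched ATE: {matched_ate:.3f}")  # much closer to the true effect of 1
```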

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, this means an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the why before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

Read more:
Overview of causal inference in machine learning - Ericsson

Read More..

How Will Machine Learning Serve the Hotel Industry in 2020 and Beyond? – CIOReview

Machine learning will help the hotel industry remain tech-savvy while also helping hotels save money, improve service, and grow more efficient.

Fremont, CA: Artificial intelligence (AI) implementation grew tremendously last year alone, to the point that any business that does not consider the implications of machine learning (ML) will find itself in multiple binds. Companies must now ask themselves how they will utilize machine learning to reap its benefits while staying in business, and hotels are no exception. Trying to catch up with this technology only after realizing that the competition is outperforming them is potentially dangerous. And while hotels may believe that robotic housekeepers and facial-recognition kiosks are the main applications of ML, the technology can do much more. Here is how ML serves the hotel industry while helping hotels save money, improve service, and grow more efficient.

Energy and water are two of the most important cost factors in running a hotel, and few would say no to a technology that controls the use of these two critical resources without affecting guest comfort. Every dollar saved on energy and water can impact the bottom line of the business in a big way. Hotels can track actual energy consumption against predictive models, allowing them to manage performance against competitors. Hotel brands can also link in-room energy use to the property management system (PMS) so that when a room is empty, the heater and other electrical appliances automatically turn off.

ML also helps brands hire suitable candidates, including highly qualified candidates who might otherwise have been overlooked for not fulfilling traditional expectations. ML algorithms can be used to create gamification-based assessments that test candidates against recruiting personas. Further, ML maximizes the value of premium inventory and increases guest satisfaction by offering guests personalized upgrades based on their previous stays, at a price the guest is ready to pay, during the booking and pre-arrival period. Using ML technology, hotel brands can create offers at any point during the guest stay, including at the front desk. Thus, the future of sustainability in the hospitality industry relies on ML.

Link:
How Will Machine Learning Serve the Hotel Industry in 2020 and Beyond? - CIOReview

Read More..

European Central Bank Partners with Digital Innovation Platform Reply to Offer AI and Machine Learning Coding Marathon – Crowdfund Insider

The European Central Bank (ECB) has partnered with Reply, a platform focused on digital innovation, in order to offer a 48-hour coding marathon, which will focus on teaching participants how to apply the latest artificial intelligence (AI) and machine learning (ML) algorithms.

The marathon is scheduled to take place during the final days of February 2020 at the ECB in Frankfurt, Germany. The supervisory data hackathon will have over 80 participants from the ECB, Reply and various other organizations.

Participants will be using AI and ML techniques to gain a better understanding and quicker insights into the large amounts of supervisory data gathered by the ECB from various banks and other financial institutions via regular reporting methods for risk analysis purposes.

Program participants will have to turn in projects in the areas of data quality, interlinkages in supervisory reporting and risk indicators, before the event takes place. The best submissions will be worked on for a 48-hour period by multidisciplinary teams.

Last month, the Bank of England (BoE) and the UK's financial regulator, the Financial Conduct Authority (FCA), announced that they would be running a public/private forum that would cover the relevant technical and public policy issues related to bank adoption of artificial intelligence (AI) and machine learning (ML) technologies and software.

A survey conducted by the BoE last year revealed that ML tools are being used in around two-thirds, or 66%, of the UK's financial institutions, with the technology expected to enter a new stage of development and maturity that could lead to more advanced deployments in the future.

Read the original post:
European Central Bank Partners with Digital Innovation Platform Reply to Offer AI and Machine Learning Coding Marathon - Crowdfund Insider

Read More..

Machine Learning Patentability In 2019: 5 Cases Analyzed And Lessons Learned Part 1 – Mondaq News Alerts


This article is the first of a five-part series of articles dealing with what patentability of machine learning looks like in 2019. This article begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four following articles will each describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases discussed deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

The US patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 focuses on several things, including whether the invention is classified as patent-eligible subject matter. As a general rule, an invention is considered to be patent-eligible subject matter if it "falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter)." [1] This, on its own, is an easy hurdle to overcome. However, there are exceptions (judicial exceptions). These include (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face 101 issues based on the "abstract idea" exception, and not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; in the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and in the Federal Circuit on appeals. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the "abstract idea" category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an "abstract idea" in the district courts and the Federal Circuit, however, the courts apply the "Alice/Mayo" test, and not the 2019 PEG. The definition of "abstract idea" was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow. [2]

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea. [3] Previously, the USPTO synthesized and organized, for its examiners to compare to an applicant's claims, the facts and holdings of each Federal Circuit case that deals with section 101. However, the large and still-growing number of cases, and the confusion arising from "similar subject matter [being] described both as abstract and not abstract in different cases," [4] led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure. [5] The new examination structure, described below, is more patent-applicant friendly than the prior structure, [6] thereby having the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim "recites" a judicial exception, including laws of nature, natural phenomena, and abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) "mathematical concepts" (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) "certain methods of organizing human activity" (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) "mental processes" (concepts performed in the human mind: encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully encompassing. The Examiners are directed that "[i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea," they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation "recites" one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is "directed to" the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of "the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception."

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine if there is an "inventive concept": that "the additional elements recited in the claims provide[] 'significantly more' than the recited judicial exception." This step attempts to distinguish between whether the elements combined with the judicial exception (1) "add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field"; or alternatively (2) "simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality." Furthermore, the 2019 PEG indicates that where "an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible."

In summary, the 2019 PEG provides an approach for the Examiners to apply, involving steps and prongs, to determine if a claim is patent-ineligible based on being an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining if an exception applies (e.g., abstract idea); then, if an exception applies, proceeds to determining if an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies the 2019 PEG in its other Section-101 decisions, including CBM reviews and PGRs. [7] However, the 2019 PEG only applies to the Examiners and PTAB (the Examiners and the PTAB are both part of the USPTO), and does not apply to district courts or to the Federal Circuit.

Case 1: Appeal 2018-007443 [8] (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether there is likely to be equipment failure in a system. The Examiner contended that the claims fit into the judicial exception of "abstract idea" because "monitoring the operation of machines is a fundamental economic practice." The Examiner explained that "the limitations in the claims that set forth the abstract idea are: 'a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data.'" The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find 'monitoring the operation of machines,' as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like "neural networks" in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts. [9] For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk, and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTABstated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites 'an output device that transforms the composite prediction output into human-readable form.'

. . . .

In other words, the 'classifying' steps of claim 1 and 'modules' of claim 8, when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned 'classifying' steps recited in claim 1 and function of the 'modules' recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because "the specific mathematical algorithm or formula is not explicitly recited in the claims." Requiring that a mathematical concept be "explicitly recited" seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are "explicitly recited"; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea, but are integrated into a practical application. The PTAB stated:

"Appellant's claims address a problem specificallyusing several artificial intelligence classification technologiesto monitor the operation of machines and to predict preventativemaintenance needs and equipment failure."

The PTAB seems to say that because the claims solve a problem using the abstract idea, they are integrated into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention. The opinion actually does not even specifically mention that there are additional elements. Instead, the PTAB's conclusion might have been that, based on a totality of the circumstances, it believed that the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements "in some other meaningful way"), but did not expressly do so.

This case illustrates:

(1) the monitoring of machines was held to not be an abstract idea, in this context;
(2) the recitation of AI components such as "neural networks" in the claims did not seem to hurt for arguing any of the three categories of abstract ideas;
(3) complexity of the algorithms implemented can help with the "mental processes" category of abstract ideas; and
(4) the PTAB might not always explicitly state how the rule for "practical application" applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101-rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the "mental processes" category of abstract ideas, on an application for a "probabilistic programming compiler" that performs the seemingly 101-vulnerable function of "generat[ing] data-parallel inference code."

Footnotes

[1] MPEP 2106.04.

[2] Accordingly, the USPTO must follow both the Federal Circuit's case law that interprets Title 35 of the United States Code, and must follow the 2019 PEG. The 2019 PEG is not the same as the Federal Circuit's standard: the 2019 PEG does not involve distinguishing case law (the USPTO, in its 2019 PEG, has declared the Federal Circuit's case law to be too clouded to be practically applied by the Examiners. 84 Fed. Reg. 52.). The USPTO practically could not, and actually did not, synthesize the holdings of each of the Federal Circuit opinions regarding Section 101 into the standard of the 2019 PEG. Therefore, logically, the only way to ensure that the 2019 PEG does not impinge on the statutory rights (provided by 35 U.S.C.) of patent applicants, as interpreted by the Federal Circuit, is for the 2019 PEG to define the scope of the 101 judicial exceptions more narrowly than the statutory requirement. However, assuming there are instances where the 2019 PEG defines the 101 judicial exceptions more broadly than the statutory standard (if the USPTO rejects claims that the Federal Circuit would not have), that patent applicant may have additional arguments for eligibility.

[3] 84 Fed. Reg. 50, 52.

[4] Id.

[5] The USPTO also, on October 17, 2019, issued an update to the 2019 PEG. The October update is consistent with the 2019 PEG, and merely provides clarification of some of the terms used in the 2019 PEG, and clarification as to the scope of the 2019 PEG. October 2019 Update: Subject Matter Eligibility (October 17, 2019), https://www.uspto.gov/sites/default/files/documents/peg_oct_2019_update.pdf.

6See "Frequently Asked Questions (FAQs) on the 2019Revised Patent Subject Matter Eligibility Guidance ('2019PEG')", C-6 (https://www.uspto.gov/sites/default/files/documents/faqs_on_2019peg_20190107.pdf)("Any claim considered patent eligible under the currentversion of the MPEP and subsequent guidance should be consideredpatent eligible under the 2019 PEG. Because the claim at issue wasconsidered eligible under the current version of the MPEP, theExaminer should not make a rejection under 101 in view ofthe 2019 PEG.").

[7] See American Express v. Signature Systems, CBM2018-00035 (Oct. 30, 2019); Supercell Oy v. Gree, Inc., PGR2018-00061 (Oct. 15, 2019).

[8] https://e-foia.uspto.gov/Foia/RetrievePdf?system=BPAI&flNm=fd2018007443-10-10-2019-0.

[9] Notably, the "mental process" category, and not the "certain methods of organizing human activity" category, is the one that focuses on the complexity of the process. Furthermore, as shown in the following paragraph, the "mental process" category was separately discussed by the PTAB, again mentioning the algorithms. Accordingly, the PTAB is likely not mentioning the algorithms for the purpose of describing the complexity of the method.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

View original post here:
Machine Learning Patentability In 2019: 5 Cases Analyzed And Lessons Learned Part 1 - Mondaq News Alerts

Read More..

New cybersecurity system protects networks with LIDAR, no not that LiDAR – C4ISRNet

When it comes to identifying early cyber threats, it's important to have laser-like precision. Mapping out a threat environment can be done with a range of approaches, and a team of researchers from Purdue University created a new system for just such applications. They are calling that approach LIDAR, or lifelong, intelligent, diverse, agile and robust.

This is not to be confused with LiDAR, for Light Detection and Ranging, a kind of remote sensing system that uses laser pulses to measure distances from the sensor. The light-specific LiDAR, sometimes also written LIDAR, is a valuable tool for remote sensing and mapping, and features prominently in the awareness tools of self-driving vehicles.

Purdue's LIDAR, instead, is a kind of architecture for network security. It can adapt to threats, thanks in part to its ability to learn in three ways. These include supervised machine learning, where an algorithm looks at unusual features in the system and compares them to known attacks. An unsupervised machine learning component looks through the whole system for anything unusual, not just unusual features that resemble attacks. These two machine-learning components are mediated by a rules-based supervisor.
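
The Purdue team has not published its implementation details in this article, but the three-part design they describe can be sketched with off-the-shelf components. The following is an illustrative Python sketch (not the actual LIDAR system) using scikit-learn stand-ins: a supervised model scores traffic against known attacks, an unsupervised detector flags anything unusual, and a simple rule-based layer mediates between the two. The feature layout and thresholds are assumptions.

```python
# Hybrid detection sketch: supervised + unsupervised learners, rule-based supervisor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(42)

# Toy training data: rows of network features, labels 1 = known attack, 0 = benign.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 2).astype(int)

supervised = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
unsupervised = IsolationForest(random_state=0).fit(X_train)

def assess(event: np.ndarray) -> str:
    """Rule-based supervisor combining both learners for one event (shape (4,))."""
    p_known_attack = supervised.predict_proba(event.reshape(1, -1))[0, 1]
    is_outlier = unsupervised.predict(event.reshape(1, -1))[0] == -1
    if p_known_attack > 0.8:
        return "block: matches a known attack pattern"
    if is_outlier and p_known_attack > 0.3:
        return "quarantine: unusual and somewhat attack-like"
    if is_outlier:
        return "alert: unusual but not attack-like, route to honeypot/analyst"
    return "allow"

print(assess(rng.normal(size=4)))
```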

"One of the fascinating things about LIDAR is that the rule-based learning component really serves as the brain for the operation," said Aly El Gamal, an assistant professor of electrical and computer engineering in Purdue's College of Engineering. "That component takes the information from the other two parts and decides the validity of a potential attack and necessary steps to move forward."

By knowing existing attacks, matching to detected threats, and learning from experience, this LIDAR system can potentially offer a long-term solution based on how the machines themselves become more capable over time.

Aiding the security approach, said the researchers, is the use of a novel curiosity-driven honeypot, which can, like a carnivorous pitcher plant, lure attackers and then trap them where they will do no harm. Once attackers are trapped, it is possible the learning algorithm can incorporate new information about the threat and adapt to prevent future attacks from making it through.

The research team behind this LIDAR approach is looking to patent the technology for commercialization. In the process, they may also want to settle on a less-confusing moniker. Otherwise, we may stumble into a future where users securing a network of LiDAR sensors with LIDAR have to enact an entire "Who's on First?" routine every time they update their cybersecurity.

See more here:
New cybersecurity system protects networks with LIDAR, no not that LiDAR - C4ISRNet

Read More..

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if a house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on that last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma and, in doing so, aims to do for the art world what Zillow did for real estate.

"If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market?" Waters said. "It's similar for art too. We want to price the entire market and give transparency."


Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super-famous works, lesser-known items, those privately held and artworks publicly displayed. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auctions. Because this model includes an artist's entire collection, and not just those works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces an old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones people are the most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

"With a limited data set, it's just harder to generalize," Waters said.

We talked to Waters about how he compiled, cleaned and created Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, which are books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have ever been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the data set range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the final price.

"It was unbelievable," Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

"I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven."


A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, market conditions, the condition of the work and changes in availability make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like what artist painted which painting on what medium, and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dalí famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.


"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.

Other structured data focuses on the artist herself, denoting details like when the creator was born or if they were alive during the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in her paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
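
As a rough illustration of that kind of title analysis, here is a minimal Python sketch (not Artnome's actual pipeline) that turns a handful of titles into word-frequency features. The titles and the bag-of-words approach are assumptions for demonstration only.

```python
# Count word frequencies across an artist's painting titles.
from sklearn.feature_extraction.text import CountVectorizer

titles = [
    "White Flower", "Black Iris", "White Canadian Barn No. 2",
    "Red Canna", "Sky Above Clouds IV",
]

vectorizer = CountVectorizer(lowercase=True)
counts = vectorizer.fit_transform(titles)

# Total frequency of each word across the titles, e.g. how often "white" appears.
totals = dict(zip(vectorizer.get_feature_names_out(), counts.sum(axis=0).A1))
print(sorted(totals.items(), key=lambda kv: -kv[1])[:5])
```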

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist, are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data, which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, number of corner points and whether faces are pictured.

Waters created a pre-trained convolutional neural network to look for these variables, modeling the project after the ResNet-50 model, which famously won the ImageNet Large Scale Visual Recognition Challenge in 2015, a competition built on a dataset of more than 14 million labeled images.
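
Artnome's exact network and preprocessing are not public, but the general pattern of using a pretrained ResNet-50 as a feature extractor can be sketched as follows, assuming PyTorch and torchvision. The file name is hypothetical.

```python
# Extract a 2048-dimensional embedding from a painting image with a pretrained ResNet-50.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True)  # newer torchvision versions use the weights= argument
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])  # drop the classifier head
feature_extractor.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet channel statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("painting.jpg").convert("RGB")  # hypothetical input file
with torch.no_grad():
    features = feature_extractor(preprocess(image).unsqueeze(0))
print(features.squeeze().shape)  # torch.Size([2048])
```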

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an "edge score."

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent Van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the paintings, it's not hard to see you're looking at self-portraits of Van Gogh, by Van Gogh.

Including unstructured data in Artnome's system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.


"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes both paintings with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system discovers, Waters said, adding an error to its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.


The model also lacks information on the condition of the painting, which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the sales prices listed. He's also had to make sure all sales prices are listed in dollars, converting those listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.


"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, information is input into the machine learning system, which Waters structured as a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest model keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that basically decides the most important aspects of a painting.
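
As a rough, hedged sketch of that idea (not Artnome's production model), the snippet below fits a random-forest price model on synthetic data and uses permutation importance as the "which features matter most" metric described above. The feature names and data-generating rule are assumptions for illustration.

```python
# Random-forest price model with permutation importance on synthetic auction-like data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "area_cm2": rng.uniform(100, 20_000, n),
    "artist_alive": rng.integers(0, 2, n),
    "sp500_level": rng.uniform(2_000, 3_500, n),
    "dominant_hue": rng.uniform(0, 360, n),
})
# Synthetic log-price: depends on size, market level and whether the artist is alive.
log_price = (0.0002 * X["area_cm2"] + 0.001 * X["sp500_level"]
             - 0.5 * X["artist_alive"] + rng.normal(0, 0.5, n))

X_train, X_test, y_train, y_test = train_test_split(X, log_price, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out accuracy drops when each feature is shuffled.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda kv: -kv[1]):
    print(f"{name:>13}: {score:.3f}")
```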

Waters doesn't weigh the data he puts into the model. Instead, he lets the machine learning system tell him what's important, with the model weighing factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black box estimator," Waters said.

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important this data set and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.


"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."


Original post:
Artnome Wants to Predict the Price of a Masterpiece. The Problem? There's Only One. - Built In

Read More..

The FCA and Bank of England step into the AI and machine learning debate – Lexology

On 23 January 2020, the Financial Conduct Authority (FCA) and the Bank of England (BofE) announced that they will be establishing the Financial Services Artificial Intelligence Public-Private Forum (AIPPF).

The aim of the AIPPF will be to progress the regulators' dialogue with the public and private sectors to better understand the relevant technical and public policy issues related to the adoption of artificial intelligence (AI) and machine learning (ML). It will gather views on potential areas where principles, guidance or good practice examples could support the safe adoption of such technologies, and explore whether ongoing industry input could be useful and what form this could take. The AIPPF will also share information and understand the practical challenges of using AI and ML within the financial services sector, as well as the barriers to deployment and any potential risks or trade-offs.

Participating in the AIPPF will be by invitation only, with the final selection taken at the discretion of both the BofE and the FCA. Firms that are active in the development of AI and the use of ML will be prioritised over public authorities and academics. It will be co-chaired by Sir Dave Ramsden, deputy governor for markets and banking at the BofE, and Christopher Woolard, Executive Director of Strategy and Competition at the FCA.

This comes at a time when financial services institutions such as banks and fund managers rely heavily on technology to facilitate increased regulatory reporting that is both timely and accurate, and when the stakes can be high: institutions face multi-million pound penalties for failing to meet their reporting obligations, despite having invested time and money in systems intended to do just that. The regulators themselves are also looking to reduce the onus on their own supervisors as they continue to receive ever-increasing data sets each week from financial services firms, as explored in the BofE's discussion paper, "Transforming data collection from the UK financial sector", published on 7 January 2020.

The BofE's discussion paper marked the start of its process of working closely with firms to ensure that it and the FCA improve data collection and ease the administrative and financial burden on firms of delivering that data; the announcement of the AIPPF marks the next step in this process.

In a similar way, the European Commission (EC) established its High-Level Expert Group on Artificial Intelligence in October 2019, to support the implementation of its AI vision, namely that AI must be trustworthy and human-centric. Notable recommendations made by the group that are relevant to the banking sector included developing and supporting AI-specific cybersecurity infrastructures, upskilling and reskilling the current workforce, and developing legally compliant and ethical data management and sharing initiatives in Europe. The EC also has plans to set out rules relating to AI over the next five years (as from June 2019).

There is a clear indication that the regulators both in and outside the UK are taking a keener interest in how regulated entities deploy AI and machine learning. Given the direction of regulatory travel planned by the EC, we expect this emerging trend to result in similar regulatory guidance and possible rules in the UK over the coming years.

Go here to see the original:
The FCA and Bank of England step into the AI and machine learning debate - Lexology

Read More..

Here’s what happens when you apply machine learning to enhance the Lumires’ 1896 movie "Arrival of a Train at La Ciotat" – Boing Boing


First, take a look at this 1895 short movie "L'Arrivée d'un Train à La Ciotat" ("Arrival of a Train at La Ciotat"), from the Lumière Brothers. This film was upscaled to 4K and 60 frames per second using a variety of neural networks and other enhancement techniques. The result can be seen in the video below:

The Spot has an article about how it was done:

[YouTuber Denis Shiryaev] used a mix of neural networks from Gigapixel AI and a technique called depth-aware video frame interpolation to not only upscale the resolution of the video, but also increase its frame rate to something that looks a lot smoother to the human eye.
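
For a sense of where those extra frames go, here is a deliberately crude Python sketch that doubles a clip's frame rate by linearly blending each pair of consecutive frames with OpenCV. This is not the depth-aware interpolation network the restoration used; it is only a baseline illustration, and the file names are hypothetical.

```python
# Naive frame-rate doubling by blending consecutive frames (crude stand-in for
# depth-aware video frame interpolation).
import cv2

reader = cv2.VideoCapture("train_1896.mp4")  # hypothetical input file
fps = reader.get(cv2.CAP_PROP_FPS)
ok, prev = reader.read()
h, w = prev.shape[:2]
writer = cv2.VideoWriter("train_2x.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps * 2, (w, h))

while True:
    ok, frame = reader.read()
    if not ok:
        break
    midpoint = cv2.addWeighted(prev, 0.5, frame, 0.5, 0)  # naive in-between frame
    writer.write(prev)
    writer.write(midpoint)
    prev = frame

writer.write(prev)
reader.release()
writer.release()
```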


Read more:
Here's what happens when you apply machine learning to enhance the Lumières' 1896 movie "Arrival of a Train at La Ciotat" - Boing Boing

Read More..