
Ekinops acquires SixSq and steps up its presence in Edge Computing – PRNewswire

PARIS, Nov. 2, 2021 /PRNewswire/ -- EKINOPS (Euronext Paris - FR0011466069 EKI), a leading supplier of telecommunications solutions for telecom operators and enterprises, is announcing the acquisition of the start-up SixSq, a software-as-a-service (SaaS) provider for Edge Computing.

Based in Geneva, Switzerland, SixSq has developed an ultra-innovative solution allowing businesses to take full advantage of the added value of Edge Computing. Complementing Cloud computing, the SixSq solution enables smart data processing directly on the company's site.

The SixSq range comprises the Nuvla.io marketplace, which hosts all types of business applications in container format, and the NuvlaBox software, which converts enterprise routers or other open hardware platforms capable of processing data into smart edge systems.

The Nuvla.io marketplace makes all the applications it hosts in the Cloud available to every NuvlaBox deployed in the field inside enterprises, similar to the App Store or Play Store for consumers. In this way, SixSq makes it possible for all software vendors to reach the enterprise market and sell their innovative software applications.

With this acquisition, Ekinops is stepping up its strategy to provide greater added value to its customers. "After enriching OneOS6 middleware with SD-WAN and SBC solutions, it is now possible to extend it to all types of applications through the integration of NuvlaBox into OneOS6 and access to the Nuvla.io marketplace. The possibilities are infinite!" said Didier Brédy, CEO of Ekinops. "We are looking forward to presenting this opportunity to our telecom operator customers. It is a new way for them to monetize their presence at enterprise branch sites through our OneOS6 routers."

"Our solution, already productized, offers a unique value proposition to various verticals such as industry, mass retail and telecoms. For us, joining Ekinops is an enormous accelerator," said Marc-Elian Bgin, co-founder and CEO of SixSq. "Thanks to Ekinops' support, we now have the firepower to rapidly move into the B2B market focusing on large accounts and telecom operators. We have already identified opportunities. The market has been waiting for this type of solution, so the timing is perfect."

A key step in the development of Ekinops' software business

The alliance is a major step forward in Ekinops' strategy of moving upmarket and developing its software business.

Ekinops and SixSq solutions are already integrated through the Ekinops virtualization offering (OneOS6-LIM). Next, the goal is to integrate NuvlaBox software directly into OneOS6 middleware. All OneOS6 routers will be able to run "container" business applications, downloaded via the Nuvla.io marketplace. Combined with its new 5G routers, Ekinops will make artificial intelligence available to all companies and use cases, particularly for the Internet of Things, Industry 4.0 and smart retail.
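
On the technical side, the announcement describes edge routers pulling containerized business applications from a marketplace and running them on-site. As a rough, generic illustration of that pattern (using the Docker SDK for Python with a hypothetical registry, image name and port, not the Nuvla.io or NuvlaBox API), such a deployment could look like this:

```python
# Generic illustration only: pulling and running a containerized analytics app on
# an edge device with the Docker SDK for Python. The registry, image name and
# port are hypothetical placeholders; this is not the Nuvla.io/NuvlaBox API.
import docker

client = docker.from_env()  # talk to the local Docker daemon on the edge device

# Pull the application image (stand-in for a marketplace download).
client.images.pull("registry.example.com/video-analytics", tag="latest")

# Run it locally so data is processed on-site instead of in a remote cloud.
container = client.containers.run(
    "registry.example.com/video-analytics:latest",
    detach=True,
    ports={"8080/tcp": 8080},           # expose the app's local dashboard
    restart_policy={"Name": "always"},  # survive reboots of the edge box
)
print("edge app started:", container.short_id)
```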

SixSq is expected to contribute €1m to €2m to revenue in 2022 from its software business. The company is targeting triple-digit growth in the coming years.

In three years' time, Ekinops aims to generate at least 20% to 30% of its revenue through software and services (vs. 12% in H1 2021).

Transaction details

The transaction consists of the acquisition by Ekinops of 100% of the capital of SixSq SA, which will be consolidated by the Group from November 1, 2021.

This transaction also includes financing from Ekinops to SixSq, which will allow the company to significantly increase its sales operations and R&D activities at its Geneva headquarters.

The transaction will have a non-material impact on Ekinops' 2021 financial statements.

About Edge Computing

With the advent of our "all digital" society, the amount of data generated by individuals, employees, machines and everyday devices is experiencing exponential growth. These extensive data sets also need to be processed in near real time. Cloud computing, long used to store and process data, is no longer able to address all our needs. We need to process video in real time, take immediate decisions and process data with a lot less latency. And these requirements apply to numerous fields, including healthcare, smart cities and retail.

Supplementing the capacities of the Cloud, Edge Computing provides users with computing power as close as possible to data sources. This is true for points of presence of telecom operators closest to users (known as "near Edge Computing") and user sites ("Far Edge Computing", enabled by SixSq's technology).

Edge Computing has a host of advantages, including:

EKINOPS Contact: Didier Brédy, Chairman and CEO, [emailprotected]

Investors: Mathieu Omnes, Investor relations, Tel.: +33 (0)1 53 67 36 92, [emailprotected]

Press: Amaury Dugast, Press relations, Tel.: +33 (0)1 53 67 36 74, [emailprotected]

SOURCE Ekinops

See the article here:
Ekinops acquires SixSq and steps up its presence in Edge Computing - PRNewswire

Read More..

Secrets to success with cybersecurity hiring and retention – Healthcare IT News

There has been a dearth of cybersecurity professionals to protect healthcare provider organizations for some time and the problem is only getting worse.

That's one of the most pressing trends when it comes to recruiting and retaining cybersecurity talent. And it will also be a major topic addressed during "Team Building, Growing and Retaining Talent: The Secret to Success," a panel discussion at the HIMSS Cybersecurity Forum, a digital event coming December 6-7.

The panel will explore current trends in the information security job marketplace, culture cultivation strategies, assessing what future hiring and training requirements will look like, as well as challenges around retaining talent.

Healthcare IT News sat down with panelist James L. Angle, product manager, IT services, information security, at Livonia, Michigan-based health system Trinity Health, to get a preview of the discussion. Angle has a doctorate in business administration with a specialization in computer and information security.

Q. What are a couple trends you see in today's information security job marketplace?

A. First and foremost, the biggest trend in the cybersecurity marketplace is the lack of talented cybersecurity professionals. The gap keeps getting wider with each new threat that materializes.

As threats like ransomware evolve and become more sophisticated, employers realize they need more help, and this puts a strain on the limited number of cybersecurity professionals. As demand goes up, so do salaries, and this makes it more difficult for small to mid-sized healthcare organizations to compete for available talent.

Another trend is that the attack surface of healthcare organizations is expanding and changing with the move to cloud computing. In the past, organizations built a strong perimeter defense to keep unauthorized people out.

James L. Angle, Trinity Health

This approach is no longer viable, as cloud computing places the organization's data outside the perimeter. This requires a different skill set to manage the threat, which means more cybersecurity professionals with these skills are needed. This exacerbates the problem.

These two issues are driving employers to ask for cybersecurity professionals with multiple skill sets to cover their requirements. If you look at job announcements, you will see employers asking for someone who is an expert with perimeter security, endpoint security, cloud computing, and governance, risk and compliance (GRC).

The problem is that most security professionals do not have multiple skill sets. While they have a basic knowledge of all these skills, they do not have the expertise in all of them.

Q. How do you cultivate a good information security culture in healthcare?

A. First, hire the right person for the right job. By that I mean don't hire someone with a soft skill like GRC to be your firewall administrator, or a firewall administrator to be your security architect. These are vastly different skills, and each takes training to become proficient.

You should cross-train all your personnel, but don't hire people for jobs they are not qualified for. You are only setting them up for failure.

Provide training for your cybersecurity staff. The threat is evolving and getting more sophisticated every day, so they must keep up with the changes.

Also, if you have people with IT skills who want to learn cybersecurity, encourage them by setting up in-house training and help them develop the skills. Most security people I know would like to help others develop cybersecurity skills, and could help train others.

Another important thing for developing good security practices is for leadership to talk about cybersecurity and lead by example.

Q. What are a couple challenges around retaining information security talent, and how do you overcome these challenges?

A. There are two big challenges around retaining cybersecurity professionals. The first is the shortage of cybersecurity professionals. This shortage means that some organizations will attempt to hire workers from other companies. This drives up salaries and makes it harder for healthcare organizations to hire and keep talent.

The second and most important aspect is how cybersecurity professionals are treated by their organizations. Let's face it, no one likes having to practice good security. Long passwords, blocked websites and many problems that arise are blamed on security. This leads to security professionals being treated as if they were an impediment to productivity, rather than an asset.

James Angle, along with Vugar Zeynalov, CISO at Cleveland Clinic Health Systems, and Steve Martano, partner in the cybersecurity practice at Artico Search, will explain more in the session, "Team Building, Growing and Retaining Talent: The Secret to Success." It's scheduled to air from 3:10-3:40 p.m. ET on Tuesday, December 7.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

See more here:
Secrets to success with cybersecurity hiring and retention - Healthcare IT News

Read More..

Microsoft Rides Cloud Computing Boost to Nearly Overtake Apple as Most Valuable Company | Technology News – Gadgets 360

A surge in Microsoft's shares nearly unseated Apple Inc as the world's most valuable company on Wednesday, a day before the iPhone maker reports its quarterly results.

Fuelled by strong quarterly growth in its Azure cloud-computing business, Microsoft's shares jumped 4.2 percent to end at a record $323.17 (roughly Rs. 24,225), elevating the software maker's market capitalisation to $2.426 trillion (roughly Rs. 1,81,86,020 crore), just short of Apple's $2.461 trillion (roughly Rs. 1,84,45,070 crore) valuation, according to Refinitiv data.

Apple's shares dipped 0.3 percent ahead of its report due after the bell on Thursday, with investors focused on how the global supply-chain crisis is challenging the company's ability to meet demand for its iPhone models.

Microsoft's stock has rallied 45 percent this year, with pandemic-induced demand for its cloud-based services driving sales. Shares of Apple have climbed 12 percent in 2021.

Apple's stock market value overtook Microsoft's in 2010 as the iPhone made it the world's premier consumer technology company. The two companies have taken turns as Wall Street's most valuable company in recent years, with Apple holding the title since mid-2020.

In its report late on Tuesday, Microsoft forecast a strong end to the calendar year thanks to its booming cloud business, but it warned that supply-chain woes will continue to dog key units, such as those producing its Surface laptops and Xbox gaming consoles.

Analysts on average expect Apple to report September-quarter revenue up 31 percent to $84.8 billion (roughly Rs. 6,35,560 crore) and adjusted earnings per share of $1.24 (roughly Rs. 90), according to Refinitiv.

Link:
Microsoft Rides Cloud Computing Boost to Nearly Overtake Apple as Most Valuable Company | Technology News - Gadgets 360

Read More..

Qualcomm is researching machine learning at the edge – Stacey on IoT

Regular newsletter readers know that I am beyond excited about machine learning (ML) at the edge. Running algorithms on gateways or even on sensors instead of sending data to the cloud to be analyzed can save time, bandwidth costs, and energy, and can protect people's privacy.

So far, ML at the edge has only involved inference, the process of running incoming data against an existing model to see if it matches. Training the algorithm still takes place in the cloud. But Qualcomm has been researching ways to make the training of ML algorithms less energy-intensive, which means it could happen at the edge.

Bringing ML to edge devices means user data stays on the device, which boosts privacy; it also reduces the energy and costs associated with moving data around. It can also lead to highly personalized services. These are all good things. So what has Qualcomm discovered?

In an interview with me, Qualcomm's Joseph Soriaga, senior director of technology, broke down the company's research into four different categories. But first, let's talk about what it takes to train an ML model.

Training usually happens in the cloud because it requires a computer to analyze a lot of data and hold much of that data in memory while computing probabilities to assess if the data matches whatever goal the algorithm is trying to meet. So to train a model to identify cats, you have to give it a lot of pictures of cats; the computer then tries to figure out what makes a cat. As it refines its understanding, it will produce calculations that a data scientist can assess and refine further by weighting different elements of the assessment more heavily in favor of elements that make something look like a cat.
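
As a minimal sketch of that cycle in code (a PyTorch loop over placeholder tensors rather than real cat photos), training boils down to predicting, measuring the error, and adjusting the weights:

```python
# Minimal sketch of a supervised training loop (PyTorch, placeholder data).
# Real cat/not-cat training would use image tensors and a convolutional network;
# the point is just the cycle of predict -> measure error -> adjust weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(256, 64)        # stand-in for image features
labels = torch.randint(0, 2, (256,))   # 1 = "cat", 0 = "not cat"

for epoch in range(10):
    logits = model(features)           # forward pass: current guesses
    loss = loss_fn(logits, labels)     # how wrong the guesses are
    optimizer.zero_grad()
    loss.backward()                    # compute gradients
    optimizer.step()                   # nudge the weights to reduce the error
```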

It requires a lot of computational heft, memory, and bandwidth to build a good model. The edge doesn't historically have a lot of computing power or memory available, which is why edge devices perform inference and don't learn while in operation. Soriaga and his team have come up with methods that can enable personalization and adaptation of existing models at the edge, which is a step in the right direction.

One method is called few-shot learning, which is designed for situations where a researcher wants to tweak an algorithm to better meet the needs of outliers. Soriaga offered up an example involving wake word detection. For customers who have an accent or a hard time saying a wake word, using this method to improve accuracy can boost detection rates by 30%. Because there is a limited and clear data set, and labels, it's possible to train existing models without consuming much power or computing resources.
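
The general shape of that kind of adaptation can be sketched as follows; this is a generic illustration (a frozen pretrained encoder plus a small trainable head, in PyTorch, on placeholder data), not Qualcomm's actual few-shot method:

```python
# Generic few-shot adaptation sketch (PyTorch): keep a pretrained feature
# extractor frozen and fine-tune only a small head on a handful of labeled
# examples, e.g. a few recordings of one user saying the wake word.
# Illustration of the general idea only, not Qualcomm's implementation.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(40, 128), nn.ReLU())  # stand-in for a pretrained audio encoder
head = nn.Linear(128, 2)                                 # wake word vs. not

for p in backbone.parameters():
    p.requires_grad = False            # frozen: no expensive full retraining

optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

few_shot_x = torch.randn(8, 40)        # 8 labeled clips from the user
few_shot_y = torch.randint(0, 2, (8,))

for step in range(50):
    logits = head(backbone(few_shot_x))
    loss = loss_fn(logits, few_shot_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```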

Another method for training at the edge is continuous learning with unlabeled data. Here, an existing model gets updated with new data coming into the edge device over time. But because the data is unlabeled, and because the edge data may be over-personalized, a data scientist has to be aware of those limits when trying to adapt the model.

My favorite research topic is federated device learning, where you might use the prior two methods to tweak algorithms locally and then send the tweaked models back to the cloud or share them with other edge devices. Qualcomm, for example, has explored how to identify people based on biometrics. Recognizing someone based on their face, fingerprint, or voice could involve sending all of those data points to the cloud, but it would be far more secure to have an algorithm that can be trained locally for each user.

So the trained algorithm built in the cloud might recognize how to differentiate a face, but locally, it would have to match with an individual face. That individual face data would stay private, but the features that make it a face would get sent back to help adjust the initial algorithm. Then that tweaked version of the algorithm would get sent back to the edge devices, where some noise would get added to the face data to ensure privacy, but also to ensure that over time the cloud-based algorithm gets better without sharing that person's data.
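
A toy sketch of that federated pattern (local training on private data, noise added to the update, averaging on a server) might look like this in PyTorch; it is a simplified illustration, not Qualcomm's protocol:

```python
# Toy sketch of federated averaging: each device fine-tunes a copy of the model
# on its own private batch, adds noise to its weight update, and only the noisy
# update is averaged into the shared model. Raw user data never leaves a device.
# Simplified illustration, not Qualcomm's actual protocol.
import copy
import torch
import torch.nn as nn

global_model = nn.Linear(16, 2)

def local_update(model, x, y, noise_scale=0.01):
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=0.1)
    loss = nn.functional.cross_entropy(local(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Share only the noisy difference between local and global weights.
    return {name: (lp.detach() - gp.detach()) + noise_scale * torch.randn_like(lp)
            for (name, lp), (_, gp) in zip(local.named_parameters(),
                                           model.named_parameters())}

# Each simulated device trains on its own private data.
updates = [local_update(global_model, torch.randn(32, 16), torch.randint(0, 2, (32,)))
           for _ in range(5)]

# The server averages the updates and applies them to the shared model.
with torch.no_grad():
    for name, param in global_model.named_parameters():
        param += torch.stack([u[name] for u in updates]).mean(dim=0)
```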

This approach provides large sets of face or voice data without having to scrape it from social media or photo sites without permission. Federating the learning over many devices also means data scientists can get a lot of inputs while the raw data doesn't ever leave the device.

Finally, we also need ways to reduce the computational complexity associated with building algorithms from scratch. I'm not going to go too in-depth here, because there's a lot of math, but here's where you can find more information. Broadly speaking, the solution to traditional training in the cloud is to make training on less compute-heavy devices easier.

Qualcomm researchers have decided that one way to do that is to avoid using backpropagation to figure out how to weigh certain elements when building a model. Instead, data scientists can use quantized training to reduce the complexity associated with backpropagation and use more efficient models. Qualcomm's researchers came up with something called in-hindsight range estimation to efficiently adapt models for edge devices. If you are keen on understanding this, then click through to the research paper. But the money statement is that using this method was as accurate as traditional training methods and resulted in a 79% reduction in memory transfer. That reduction translates into needing less memory and compute power.
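
For intuition only, the sketch below shows plain min/max int8 quantization, storing values as 8-bit integers plus a scale instead of 32-bit floats, which is the basic memory-saving idea behind quantized training; it is not Qualcomm's in-hindsight range estimation method:

```python
# Plain min/max int8 quantization for intuition: storing values as 8-bit integers
# plus a scale and offset instead of 32-bit floats cuts memory movement roughly 4x.
# This is NOT Qualcomm's in-hindsight range estimation method.
import torch

def quantize_int8(t):
    scale = (t.max() - t.min()) / 255.0
    zero_point = t.min()
    q = torch.round((t - zero_point) / scale).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return q.to(torch.float32) * scale + zero_point

weights = torch.randn(1024, 1024)        # ~4 MB stored as float32
q, scale, zp = quantize_int8(weights)    # ~1 MB stored as uint8 (plus two scalars)
approx = dequantize(q, scale, zp)
print("max rounding error:", (weights - approx).abs().max().item())
```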

This research is very exciting because training at the edge has long been the dream, but a dream that has been so hard to turn into reality. As regulations promote more privacy and security for the IoT, all while demanding reduced energy consumption, edge-based training is moving from a wish-we-had-it option to a need-to-have option. I'm hoping R&D keeps up.

Read the rest here:
Qualcomm is researching machine learning at the edge - Stacey on IoT

Read More..

TrainerRoad Announces Release of Adaptive Training Platform, Making Machine Learning-Powered Training Available to Cyclists – Outside Business Journal

Get access to everything we publish when you sign up for Outside+.

RENO, Nevada (November 2, 2021) TrainerRoad, cycling's most complete and effective training system and the market leader in making athletes faster, announced today the official release of their Adaptive Training system, making TrainerRoad the world's leading machine learning-driven training platform for cyclists.

TrainerRoad's Adaptive Training system uses machine learning, science-based coaching principles, and an unprecedented data set to train athletes as individuals rather than offering cookie-cutter programs that don't account for variability in training. With Adaptive Training, TrainerRoad is able to recommend the workout each athlete needs at the right time to reach their goals.

"The full integration of Adaptive Training is the next step in the ongoing development of TrainerRoad's data-driven training ecosystem," TrainerRoad Communications Director Jonathan Lee said. "Thanks to a successful beta testing period, we've optimized the Adaptive Training experience and created a tool which puts the power of a seasoned coach in the hands of each TrainerRoad athlete. With every input (and we now have tens of millions on the TrainerRoad platform), Adaptive Training evolves and becomes better at making intelligent recommendations for individual athletes."

Starting today, all TrainerRoad athletes have access to this powerful tool. If an athlete is targeting a specific training goal or event, TrainerRoad's Plan Builder will quickly create a custom plan, and Adaptive Training will offer intelligent adjustments throughout training to maximize athlete success. For those who prefer the freedom of picking workouts as they go, TrainNow uses Adaptive Training's insights to automatically recommend workouts based on their current abilities. The more an athlete uses TrainerRoad, the better and more finely tuned their training becomes.

"Adaptive Training is our most capable training system to date, but that doesn't mean innovation stops here," Lee said. "TrainerRoad is built on a foundation of always striving to improve and get better. We continue to push ourselves forward and develop the best tools possible to improve athlete performance."

New athletes looking to increase their fitness and get faster with TrainerRoad can sign up today risk-free and receive a full refund within the first 30 days if they're not 100% satisfied. To sign up or for more information on TrainerRoad, visit http://www.TrainerRoad.com.

Click here for Adaptive Training Media Kit

TrainerRoad is the leading training system for cyclists and triathletes who want to get faster. Athletes in over 150 countries use TrainerRoad's training calendar, apps, workouts, training plans and analysis tools to elevate their performance. Additionally, TrainerRoad's forum, blog, and podcasts are trusted educational resources for athletes around the world. Learn more at http://www.TrainerRoad.com.

The rest is here:
TrainerRoad Announces Release of Adaptive Training Platform, Making Machine Learning-Powered Training Available to Cyclists - Outside Business Journal

Read More..

Turn your tech skills into machine learning expertise with this book and class bundle – TechRepublic

Now that you've got mid-level tech skills, it's time to direct them into one of the tech industry's most in-demand fields.

Image: Chinnawat Ngamsom, Getty Images/iStockphoto

If your tech skills have reached the intermediate level and you're ready to turbocharge your career, then check out the Pay What You Want: The Comprehensive Machine Learning Bundle and learn the latest commercial machine learning methods.

The bundle consists of six books and four courses, and you get to choose what you want to pay. Here's the way it works.

Four of the books focus on Python. "Machine Learning for the Web" explores how to use Python to make better predictions, while "Python Machine Learning" will teach you how to generate the most useful data insights by using Python to build extremely powerful machine learning algorithms.

In "Advanced Machine Learning with Python" you can learn how to master Python's latest machine learning techniques and use them to solve problems in data science. Then you can find out how to quickly build potent machine learning models and implement predictive applications on a large scale in "Large Scale Machine Learning with Python."

In the two remaining books, you can learn how to use test-driven development to control machine learning algorithms in "Test Driven Machine Learning" and how to use Apache Spark to create a range of machine learning projects in "Apache Spark Machine Learning Blueprints."

The "Step-by-Step Machine Learning with Python" course covers the most effective tools and techniques for machine learning. And you'll find out how to perform real-world machine learning tasks in the "Python Machine Learning Solutions" course. The "Machine Learning with Open CV and Python" class can teach you how to use Python to analyze and understand data.

The final course is "Machine Learning with TensorFlow." It explains how to use Google's TensorFlow library to solve machine learning issues.

These books and courses are offered by Packt Publishing, which has created more than 3,000 books and videos full of actionable information for IT professionals, from optimizing skills in existing tools to emerging technology. Thousands of these bundles have already been sold.

Don't miss this chance to get a great machine learning bargain. Get the Pay What You Want: The Comprehensive Machine Learning Bundle today (normally $843.92).

Prices subject to change.

Read the original here:
Turn your tech skills into machine learning expertise with this book and class bundle - TechRepublic

Read More..

Psychologists use machine learning algorithm to pinpoint top predictors of cheating in a relationship – PsyPost

According to a study published in the Journal of Sex Research, relationship characteristics like relationship satisfaction, relationship length, and romantic love are among the top predictors of cheating within a relationship. The researchers used a machine learning algorithm to pinpoint the top predictors of infidelity among over 95 different variables.

While a host of studies have investigated predictors of infidelity, the research has largely revealed mixed and often contradictory findings. Study authors Laura M. Vowels and her colleagues aimed to improve on these inconsistencies by using machine learning models. This approach would allow them to compare the relative predictability of various relationship factors within the same analyses.
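
As a generic illustration of that kind of analysis (ranking many candidate predictors within a single model), here is a short scikit-learn sketch on synthetic data; the variable names are stand-ins, and this is not the authors' actual pipeline or data:

```python
# Generic illustration of ranking many predictors with one model: a random forest
# on synthetic data, scored by feature importance. The study used its own
# explainable-ML pipeline and real survey measures; the variable names below are
# stand-ins, not the authors' data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                                          # stand-ins for survey measures
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0    # toy outcome

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

names = ["relationship_satisfaction", "relationship_length",
         "romantic_love", "sexual_desire", "age"]
for name, score in sorted(zip(names, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:26s} {score:.3f}")
```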

"The research topic was actually suggested by my co-author, Dr. Kristen Mark, who was interested in understanding predictors of infidelity better. She has previously published several articles on infidelity and is interested in the topic," explained Vowels, a principal researcher for Blueheart.io and postdoctoral researcher at the University of Lausanne.

Vowels and her team pooled data from two different studies. The first data set came from a study of 891 adults, the majority of whom were married or cohabitating with a partner (63%). Around 54% of the sample identified as straight, 21% identified as bisexual, 11% identified as gay, and 7% identified as lesbian. A second data set was collected from both members of 202 mixed-sex couples who had been together for an average of 9 years, the majority of whom were straight (93%).

Data from the two studies included many of the same variables, such as demographic measures like age, race, sexual orientation, and education, in addition to assessments of participants' sexual behavior, sexual satisfaction, relationship satisfaction, and attachment styles. Both studies also included a measure of in-person infidelity (having interacted sexually with someone other than one's current partner) and online infidelity (having interacted sexually with someone other than one's current partner on the internet).

Using machine learning techniques, the researchers analyzed the data sets together, first for all respondents and then separately for men and women. They then identified the top ten predictors for in-person cheating and for online cheating. Across both samples and among both men and women, higher relationship satisfaction predicted a lower likelihood of in-person cheating. By contrast, higher desire for solo sexual activity, higher desire for sex with one's partner, and being in a longer relationship predicted a higher likelihood of in-person cheating. In the second data set only, greater sexual satisfaction and romantic love predicted a lower likelihood of in-person infidelity.

When it came to online cheating, greater sexual desire and being in a longer relationship predicted a higher likelihood of cheating. Never having had anal sex with one's current partner decreased the likelihood of cheating online, a finding the authors say likely reflects more conservative attitudes toward sexuality. In the second data set only, higher relationship and sexual satisfaction also predicted a lower likelihood of cheating.

"Overall, I would say that there isn't one specific thing that would predict infidelity. However, relationship-related variables were more predictive of infidelity compared to individual variables like personality. Therefore, preventing infidelity might be more successful by maintaining a good and healthy relationship rather than thinking about specific characteristics of the person," Vowels told PsyPost.

Consistent with previous studies, relationship characteristics like romantic love and sexual satisfaction surfaced as top predictors of infidelity across both samples. The researchers say this suggests that the strongest predictors for cheating are often found within the relationship, noting that "addressing relationship issues may buffer against the likelihood of one partner going out of the relationship to seek fulfillment."

"These results suggest that intervening in relationships when difficulties first arise may be the best way to prevent future infidelity. Furthermore, because sexual desire was one of the most robust predictors of infidelity, discussing sexual needs and desires and finding ways to meet those needs in relationships may also decrease the risk of infidelity," the authors report.

The researchers emphasize that their analysis involved predicting past experiences of infidelity from an array of present-day assessments. They say that this design may have affected their findings, since couples who had previously dealt with cheating within the relationship may have worked through it by the time they completed the survey.

"The study was exploratory in nature and didn't include all the potential predictors," Vowels explained. "It also predicted infidelity in the past rather than current or future infidelity, so there are certain elements like relationship satisfaction that might have changed since the infidelity occurred. I think in the future it would be useful to look into other variables and also look at recent infidelity because that would make the measure of infidelity more reliable."

The study, "Is Infidelity Predictable? Using Explainable Machine Learning to Identify the Most Important Predictors of Infidelity," was authored by Laura M. Vowels, Matthew J. Vowels, and Kristen P. Mark.

See the original post here:
Psychologists use machine learning algorithm to pinpoint top predictors of cheating in a relationship - PsyPost

Read More..

MIT: Forcing ML Models to Avoid Shortcuts (and Use More Data) for Better Predictions – insideHPC

CAMBRIDGE, Mass. If your Uber driver takes a shortcut, you might get to your destination faster. But if a machine learning model takes a shortcut, it might fail in unexpected ways.

In machine learning, a shortcut solution occurs when the model relies on a simple characteristic of a dataset to make a decision, rather than learning the true essence of the data, which can lead to inaccurate predictions. For example, a model might learn to identify images of cows by focusing on the green grass that appears in the photos, rather than the more complex shapes and patterns of the cows.

A new study by researchers at MIT explores the problem of shortcuts in a popular machine-learning method and proposes a solution that can prevent shortcuts by forcing the model to use more data in its decision-making.

By removing the simpler characteristics the model is focusing on, the researchers force it to focus on more complex features of the data that it hadn't been considering. Then, by asking the model to solve the same task two ways (once using those simpler features, and then also using the complex features it has now learned to identify), they reduce the tendency for shortcut solutions and boost the performance of the model.

One potential application of this work is to enhance the effectiveness of machine learning models that are used to identify disease in medical images. Shortcut solutions in this context could lead to false diagnoses and have dangerous implications for patients.

"It is still difficult to tell why deep networks make the decisions that they do, and in particular, which parts of the data these networks choose to focus upon when making a decision. If we can understand how shortcuts work in further detail, we can go even farther to answer some of the fundamental but very practical questions that are really important to people who are trying to deploy these networks," says Joshua Robinson, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Robinson wrote the paper with his advisors, senior author Suvrit Sra, the Esther and Harold E. Edgerton Career Development Associate Professor in the Department of Electrical Engineering and Computer Science (EECS) and a core member of the Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems; and Stefanie Jegelka, the X-Consortium Career Development Associate Professor in EECS and a member of CSAIL and IDSS; as well as University of Pittsburgh assistant professor Kayhan Batmanghelich and PhD students Li Sun and Ke Yu. The research will be presented at the Conference on Neural Information Processing Systems in December.

The long road to understanding shortcuts

The researchers focused their study on contrastive learning, which is a powerful form of self-supervised machine learning. In self-supervised machine learning, a model is trained using raw data that do not have label descriptions from humans. It can therefore be used successfully for a larger variety of data.

A self-supervised learning model learns useful representations of data, which are used as inputs for different tasks, like image classification. But if the model takes shortcuts and fails to capture important information, these tasks won't be able to use that information either.

For example, if a self-supervised learning model is trained to classify pneumonia in X-rays from a number of hospitals, but it learns to make predictions based on a tag that identifies the hospital the scan came from (because some hospitals have more pneumonia cases than others), the model won't perform well when it is given data from a new hospital.

For contrastive learning models, an encoder algorithm is trained to discriminate between pairs of similar inputs and pairs of dissimilar inputs. This process encodes rich and complex data, like images, in a way that the contrastive learning model can interpret.
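
A minimal sketch of a standard contrastive objective of this kind (an InfoNCE-style loss in PyTorch on placeholder embeddings) is shown below; it illustrates the general setup rather than the paper's exact training procedure:

```python
# Minimal InfoNCE-style contrastive loss (PyTorch): embeddings of two views of the
# same image should score higher against each other than against the other images
# in the batch. Generic illustration, not the paper's exact training setup.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # similarity of every pair in the batch
    targets = torch.arange(z1.size(0))       # matching (similar) pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

encoder = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 32))
view1 = torch.randn(16, 128)                 # stand-ins for two augmented views
view2 = view1 + 0.1 * torch.randn(16, 128)   # of the same 16 images
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
```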

The researchers tested contrastive learning encoders with a series of images and found that, during this training procedure, they also fall prey to shortcut solutions. The encoders tend to focus on the simplest features of an image to decide which pairs of inputs are similar and which are dissimilar. Ideally, the encoder should focus on all the useful characteristics of the data when making a decision, Jegelka says.

So, the team made it harder to tell the difference between the similar and dissimilar pairs, and found that this changes which features the encoder will look at to make a decision.

"If you make the task of discriminating between similar and dissimilar items harder and harder, then your system is forced to learn more meaningful information in the data, because without learning that it cannot solve the task," she says.

But increasing this difficulty resulted in a tradeoff: the encoder got better at focusing on some features of the data but became worse at focusing on others. "It almost seemed to forget the simpler features," Robinson says.

To avoid this tradeoff, the researchers asked the encoder to discriminate between the pairs the same way it had originally, using the simpler features, and also after the researchers removed the information it had already learned. Solving the task both ways simultaneously caused the encoder to improve across all features.

Their method, called implicit feature modification, adaptively modifies samples to remove the simpler features the encoder is using to discriminate between the pairs. The technique does not rely on human input, which is important because real-world data sets can have hundreds of different features that could combine in complex ways, Sra explains.

From Cars to COPD

The researchers ran one test of this method using images of vehicles. They used implicit feature modification to adjust the color, orientation, and vehicle type to make it harder for the encoder to discriminate between similar and dissimilar pairs of images. The encoder improved its accuracy across all three features (texture, shape, and color) simultaneously.

To see if the method would stand up to more complex data, the researchers also tested it with samples from a medical image database of chronic obstructive pulmonary disease (COPD). Again, the method led to simultaneous improvements across all features they evaluated.

While this work takes some important steps forward in understanding the causes of shortcut solutions and working to solve them, the researchers say that continuing to refine these methods and applying them to other types of self-supervised learning will be key to future advancements.

"This ties into some of the biggest questions about deep learning systems, like 'Why do they fail?' and 'Can we know in advance the situations where your model will fail?' There is still a lot farther to go if you want to understand shortcut learning in its full generality," Robinson says.

This research is supported by the National Science Foundation, National Institutes of Health, and the Pennsylvania Department of Health's SAP SE Commonwealth Universal Research Enhancement (CURE) program.

Written by Adam Zewe, MIT News Office

Paper: Can contrastive learning avoid shortcut solutions?

Original post:
MIT: Forcing ML Models to Avoid Shortcuts (and Use More Data) for Better Predictions - insideHPC

Read More..

Top Machine Learning Tools Used By Experts In 2021 – Analytics Insight

The amount of data generated on a day-to-day basis is humongous, so much so that the term coined for such a large volume of data is big data. Big data is usually raw and cannot be used to meet business objectives. Thus, transforming this data into a form that is easy to understand is important. This is exactly where machine learning comes into play. With machine learning in place, it is possible to understand customer demands, their behavioral patterns and a lot more, thereby enabling a business to meet its objectives. For this very purpose, companies and experts rely on certain machine learning tools. Here is our pick of the top machine learning tools used by experts in 2021. Have a look!

Keras is a free and open-source Python library popularly used for machine learning. Designed by Google engineer François Chollet, Keras acts as an interface for the TensorFlow library. In addition to being user-friendly, this machine learning tool is quick, easy to use and runs on both CPU and GPU. Keras is written in Python and functions as an API for neural networks.
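
A typical use of the Keras API looks like the short sketch below, with placeholder data and shapes:

```python
# Typical Keras usage: define a small network, compile it, and fit it.
# Data and shapes here are placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(500, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")   # toy binary target
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
```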

Yet another widely used machine learning tool across the globe is KNIME. It is easy to learn, free and ideal for data reporting, analytics, and integration platforms. One of the many remarkable features of this machine learning tool is that it can integrate codes of programming languages like Java, JavaScript, R, Python, C, and C++.

WEKA, designed at the University of Waikato in New Zealand, is a tried-and-tested solution for open-source machine learning. This machine learning tool is considered ideal for research, teaching, and creating powerful applications. It is written in Java and supports platforms like Linux, Mac OS and Windows. It is extensively used for teaching and research purposes, and also for industrial applications, for the sole reason that the algorithms employed are easy to understand.

Shogun, an open-source and free-to-use software library for machine learning, is quite easily accessible for businesses of all backgrounds and sizes. Shogun's solution is written entirely in C++. One can access it in other development languages, including R, Python, Ruby, Scala, and more. From regression and classification to hidden Markov models, this machine learning tool has got you covered.

If you are a beginner, there cannot be a better machine learning tool to start with than Rapid Miner, because it doesn't require any programming skills in the first place. This machine learning tool is considered ideal for text mining, data preparation, and predictive analytics. Designed for business leaders, data scientists, and forward-thinking organisations, Rapid Miner has surely grabbed attention for all the right reasons.

TensorFlow is yet another machine learning tool that has gained immense popularity in no time. This open-source framework blends neural network models with other machine learning strategies. With its ability to run on both CPU and GPU, TensorFlow has made it onto the list of favourite machine learning tools.
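
For a feel of TensorFlow below the Keras layer, here is a small sketch that fits a linear model with tf.GradientTape on toy data:

```python
# Low-level TensorFlow: fit a simple linear model to toy data by computing
# gradients with tf.GradientTape and applying them with an optimizer.
import tensorflow as tf

w = tf.Variable(tf.random.normal([3, 1]))
b = tf.Variable(tf.zeros([1]))
x = tf.random.normal([200, 3])
y = x @ tf.constant([[2.0], [-1.0], [0.5]]) + 0.3   # toy linear target

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
for step in range(100):
    with tf.GradientTape() as tape:
        pred = x @ w + b
        loss = tf.reduce_mean(tf.square(pred - y))
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))
```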

Go here to read the rest:
Top Machine Learning Tools Used By Experts In 2021 - Analytics Insight

Read More..

New exhibition to investigate the history of AI & machine learning in art. – FAD magazine

Gazelli Art House to present Code of Arms, a group exhibition investigating the history of artificial intelligence (AI) and machine learning in art. The exploration of implementing code and AI in art in the 1970s and 80s comes at a time of rapid change in our understanding and appreciation of computer art.

The exhibition brings together pioneer artists in computer and generative art such as Georg Nees (b.1926), Frieder Nake (b.1938), Manfred Mohr (b.1938) and Vera Molnar (b.1924), and iconic artists employing AI in their practice such as Harold Cohen (b.1928), Lynn Hershman Leeson (b.1941), and Mario Klingemann (b.1970).

Code of Arms follows the evolution of the medium through the works of exhibited artists. Harold Cohen's painting Aspect (1964), a work shown at the Whitechapel Gallery in 1965 (Harold Cohen: Paintings 1960-1965), marks the artist's earliest point of enquiry, unfolding his scientific and artistic genius. Cohen, who was most famous for creating the computer program AARON, a predecessor of contemporary AI technologies, implemented the program in his work from the 80s onwards, as seen in the drawings from this period in the exhibition.

Many of the early computer artworks explored geometric forms and structure, employing technology that was still in its infancy. Plotter drawings carried out on a flatbed precision plotter and early printouts by Manfred Mohr, Georg Nees, Frieder Nake and Vera Molnar from the mid-1960s through the 1980s are an excellent representation of that period: the artists focused on the visual forms rather than addressing the underlying meaning and ethics of using computers in their art. The artists saw machines as an external force that would allow them to explore the visual aspect of the works and experiment with the form in an objective manner. Coming from different backgrounds, they worked alongside each other and made an immense contribution to early computer art.

Initially working as an abstract expressionist artist, Manfred Mohr (b. 1938) was inspired by Max Bense's information aesthetics, which defined his approach to the creative process from the 1960s onwards. Encouraged by the computer music composer Pierre Barbaud, whom he met in 1967, Mohr programmed his first computer drawings in 1969. On display are Mohr's plotter drawings of the 70s and 80s alongside a generative software piece from 2015.

Georg Nees (1926-2016) was a German academic who showed one of the world's first computer graphics created with a digital computer in 1965. In 1970, at the 35th Venice Biennale, he presented his sculptures and architectural designs, which he continued to work on through the 1980s, as seen in his drawings in this exhibition.

Frieder Nake (b. 1938) was actively pursuing computer art in the 1960s. With over 300 works produced and shown at various exhibitions (including Cybernetic Serendipity at the ICA, London in 1968), Nake brought his background in computer science and mathematics into his art practice. At The Great Temptation exhibition at the ZKM in 2005, Nees said: "There it was, the great temptation for me, for once not to represent something technical with this machine but rather something useless: geometrical patterns." Alongside his iconic 60s plotter drawings, Nake's recent body of work (Sets of Straight Lines, 2018) will be on view as a reminder of the artist's ability to transform and move away from geometric abstraction.

Vera Molnar (b. 1924) is a Hungarian-French artist who is considered a pioneer of computer and generative art. Having created combinational images since 1959, Molnar's first non-representational images (abstract geometric and systematic paintings) were produced in 1946. Her plotter drawings from the 80s are displayed alongside her later canvas and works on paper (Double Signe Sans Signification, 2005; Deux Angles Droits, 2006), demonstrating the artist's consistency and dedication to the process over three decades.

The exhibition moves on to explore relationships between digital technologies and humans through works by Lynn Hershman Leeson (b.1941), an American artist and filmmaker working in moving image, collage, drawing and new media. The artist, who has recently been the focus of a solo exhibition at the New Museum, New York, will show a series of her rare drawings from the 60s and 70s, as well as her seminal work Agent Ruby, commissioned by SFMoMA (2001), an algorithmic work that interacts with online users through a website, shaping the AI's memory, knowledge and moods. Leeson is known for the first interactive piece using Videodisc (Lorna, 1983), and Deep Contact (1984), the first artwork to incorporate a touch screen.

Mario Klingemann brings neural networks, code and algorithms into the contemporary context. The artist investigates the systems of today's society, employing deep learning, generative and evolutionary art, glitch art, and data classification. The exhibition features his recent digital artwork Memories of Passersby I (Solitaire Version), 2018, and the prints Morgan le Fay and Cobalamime from 2017.

Mark Westall

Mark Westall is the Founder and Editor of FAD magazine, Founder and co-publisher of Art of Conversation, and founder of the platform @worldoffad.

Read more here:
New exhibition to investigate the history of AI & machine learning in art. - FAD magazine

Read More..