
Head to Head Survey: Snap (NYSE:SNAP) and Infinite Group (NYSE:IMCI) – Trent Times

Infinite Group (OTCMKTS:IMCI) and Snap (NYSE:SNAP) are both computer and technology companies, but which is the better business? We will contrast the two companies based on the strength of their institutional ownership, analyst recommendations, profitability, valuation, earnings, risk and dividends.

Insider and Institutional Ownership

0.0% of Infinite Group shares are held by institutional investors. Comparatively, 32.8% of Snap shares are held by institutional investors. 39.4% of Infinite Group shares are held by company insiders. Strong institutional ownership is an indication that hedge funds, large money managers and endowments believe a company will outperform the market over the long term.

Risk & Volatility

Infinite Group has a beta of 3.54, meaning that its share price is 254% more volatile than the S&P 500. Comparatively, Snap has a beta of 1.09, meaning that its share price is 9% more volatile than the S&P 500.
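The volatility percentages above follow directly from the betas: a beta measures a stock's returns against the market's, and "percent more volatile" is just (beta − 1) × 100. A minimal sketch (function names and sample return series are invented for illustration):

```python
# Illustrative only: beta is the covariance of the stock's returns with the
# market's returns, divided by the variance of the market's returns.

def beta(stock_returns, market_returns):
    n = len(market_returns)
    ms = sum(stock_returns) / n
    mm = sum(market_returns) / n
    cov = sum((s - ms) * (m - mm)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - mm) ** 2 for m in market_returns) / n
    return cov / var

def pct_more_volatile(b):
    # A beta of 3.54 reads as (3.54 - 1) * 100 = 254% more volatile.
    return round((b - 1) * 100)
```

For example, a stock whose returns are always exactly twice the market's has a beta of 2.0, i.e. 100% more volatile than the market.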

Profitability

This table compares Infinite Group's and Snap's net margins, return on equity and return on assets.

Analyst Ratings

This is a summary of current ratings and recommendations for Infinite Group and Snap, as reported by MarketBeat.com.

Snap has a consensus price target of $17.18, indicating a potential upside of 7.19%. Given Snap's higher probable upside, analysts clearly believe Snap is more favorable than Infinite Group.

Valuation & Earnings

This table compares Infinite Group's and Snap's gross revenue, earnings per share (EPS) and valuation.

Infinite Group has higher earnings, but lower revenue than Snap.

Summary

Infinite Group beats Snap on 6 of the 11 factors compared between the two stocks.

Infinite Group Company Profile

Infinite Group, Inc. provides managed information technology (IT) and virtualization services, and develops and provides cybersecurity tools and solutions to private businesses and government agencies in the United States. It offers Nodeware, an automated network vulnerability management system that assesses vulnerabilities in a computer network using scanning technology. The company also provides cloud computing services, including public and private cloud architecture, hybrid cloud hosting, server virtualization, and desktop virtualization solutions; and level 2 Microsoft and Hewlett Packard server, and software-based managed services through its partnership with Hewlett Packard Enterprise Company. In addition, it sells third party software licenses, as well as offers virtualization support services. The company was formerly known as Infinite Machines Corp. and changed its name to Infinite Group, Inc. in January 1998. Infinite Group, Inc. was founded in 1986 and is headquartered in Pittsford, New York.

Snap Company Profile

Snap Inc. operates as a camera company in the United States and internationally. The company offers Snapchat, a camera application that helps people communicate through short videos and images. It also provides Camera, a tool to personalize and add context to Snaps; Friends Page, which allows creating and watching stories, chatting with groups, making voice and video calls, and communicating through a range of contextual stickers and Bitmojis; and Discover, which surfaces stories and shows from publishers, creators, and the community based on a user's subscriptions and interests. In addition, the company offers Snap Map, a live map showing individual location, nearby friends, popular stories, and a heatmap of recent snaps; Memories, which allows users to save the Snaps they create in a searchable personal collection and to create Snaps and stories from their saved Snaps and camera roll; and Spectacles, a hardware product that connects with Snapchat and captures video from a human perspective. The company was formerly known as Snapchat, Inc. and changed its name to Snap Inc. in September 2016. Snap Inc. was founded in 2010 and is headquartered in Santa Monica, California.



Artificial intelligence is helping us talk to animals (yes, really) – Wired.co.uk

Each time any of us uses a tool such as Gmail, where there's a powerful agent to help correct our spelling and suggest sentence endings, there's an AI machine in the background, steadily getting better at understanding language. Sentence structures are parsed, word choices understood, idioms recognised.

That exact capability could, in 2020, grant us the ability to speak with other large animals. Really. Maybe even faster than brain-computer interfaces will take the stage.

Our AI-enhanced abilities to decode languages have reached a point where they could start to parse languages not spoken by anyone alive. Recently, researchers from MIT and Google applied these abilities to the ancient scripts Linear B and Ugaritic (a precursor of Hebrew) with reasonable success (no luck so far with the older, as-yet undeciphered Linear A).

First, word-to-word relations for a specific language are mapped using vast databases of text. The system searches texts to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Researchers estimate that languages, all languages, can best be described as having 600 independent dimensions of relationships, where each word-word relationship can be seen as a vector in this space. This vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.
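The co-occurrence counting described here can be sketched in a few lines; the corpus and window size below are invented for illustration, not taken from the research:

```python
# For each word, count how often every other word appears within a small
# window around it. Real systems do this over billions of tokens; here a
# tiny toy corpus stands in.
from collections import defaultdict

def cooccurrence(tokens, window=2):
    counts = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[w][tokens[j]] += 1
    return counts

corpus = "the king spoke and the queen spoke".split()
counts = cooccurrence(corpus)
```

The resulting table of counts is the raw "signature" that embedding methods then compress into a few hundred dimensions.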

These vectors obey some simple rules. For example: king − man + woman = queen. Any sentence can be described as a set of vectors that in turn form a trajectory through the word space.
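The king − man + woman ≈ queen arithmetic can be demonstrated with toy, hand-made vectors; these are not learned embeddings, and the numbers are invented purely for the example:

```python
# Toy 3-dimensional "embeddings" (hand-made). Real embeddings have
# hundreds of dimensions and are learned from co-occurrence statistics.
import math

vecs = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.1, 0.9, 0.0],
    "woman": [0.1, 0.1, 0.9],
    "queen": [0.9, 0.0, 1.0],
    "apple": [0.0, 0.5, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# king - man + woman, component-wise
target = [k - m + w for k, m, w in
          zip(vecs["king"], vecs["man"], vecs["woman"])]

# Nearest remaining word to the target vector, by cosine similarity
best = max((w for w in vecs if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(target, vecs[w]))
```

With these toy numbers the nearest neighbour of the target vector is "queen", which is exactly the analogy behaviour the article describes.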

These relationships persist even when a language has multiple words for related concepts: the famed near-100 words Inuits have for snow will all sit in similar regions of the space, because each time someone talks about snow, it will be in a similar linguistic context.

Take a leap. Imagine that whale songs are communicating in a word-like structure. Then, what if the relationships that whales have for their ideas have dimensional relationships similar to those we see in human languages?

That means we should be able to map key elements of whale songs to dimensional spaces, and thus to comprehend what whales are talking about, and perhaps to talk to and hear back from them. Remember: some whales have brain volumes three times larger than adult humans', larger cortical areas, and lower but comparable neuron counts. African elephants have three times as many neurons as humans, but in very different distributions than are seen in our own brains. It seems reasonable to assume that the other large mammals on earth, at the very least, have thinking, communicating and learning attributes we can connect with.

What are the key elements of whale songs and of elephant sounds? Phonemes? Blocks of repeated sounds? Tones? Nobody knows yet, but at least the journey has begun. Projects such as the Earth Species Project aim to put the tools of our time, particularly artificial intelligence and all that we have learned in using computers to understand our own languages, to the awesome task of hearing what animals have to say to each other, and to us.

There is something deeply comforting in the thought that AI language tools could do something so beautiful, going beyond completing our emails and putting ads in front of us to knitting together all thinking species. That, we perhaps can all agree, is a better and perhaps nearer-term ideal to reach than brain-computer communications. The beauty of communicating with other species will then be joined to the market appeal of talking to our pet dogs. (Cats may remain beyond reach.)

Mary Lou Jepsen is the founder and CEO of Openwater. John Ryan, her husband, is a former partner at Monitor Group




Shocking ways AI technology will revolutionise every day industries in YOUR lifetime – Express.co.uk

Science fiction has helped shape society's understanding and expectations of advanced AI technology. However, Shadow Robot Company director Rich Walker argued artificial intelligence could be used in industries we would not expect. Speaking to Express.co.uk, he explained that AI technology could be introduced in sectors such as estate agency or booking services.

He added that massive leaps in AI capabilities in recent years had raised expectations of what people believe artificial intelligence can be used for.

He said: "AI technology has really been promising a lot for a very long time."

"In the last few years we have really started to see some very impressive and surprising successes."

"Self-driving cars have gone from a complete pipe dream to the question of when we are going to see one, because surely we can get one now."


"I think what will happen in the next couple of years is we will see some areas that we weren't expecting suddenly being done by AI."

"Everyone will be like, yes, of course, we could have artificial intelligence in this industry."

"Maybe it will be an estate agency or train booking."

"Something that is a complicated, annoying problem."


Artificial Intelligence Identifies Previously Unknown Features Associated with Cancer Recurrence – Imaging Technology News

December 27, 2019. Artificial intelligence (AI) technology developed by the RIKEN Center for Advanced Intelligence Project (AIP) in Japan has successfully found features in pathology images from human cancer patients, without annotation, that could be understood by human doctors. Further, the AI identified features relevant to cancer prognosis that were not previously noted by pathologists, leading to higher accuracy in predicting prostate cancer recurrence compared to pathologist-based diagnosis. Combining the predictions made by the AI with predictions by human pathologists led to an even greater accuracy.

According to Yoichiro Yamamoto, M.D., Ph.D., the first author of the study published in Nature Communications, "This technology could contribute to personalized medicine by making highly accurate prediction of cancer recurrence possible by acquiring new knowledge from images. It could also contribute to understanding how AI can be used safely in medicine by helping to resolve the issue of AI being seen as a 'black box.'"

The research group led by Yamamoto and Go Kimura, in collaboration with a number of university hospitals in Japan, adopted an approach called "unsupervised learning." As long as humans teach the AI, it is not possible to acquire knowledge beyond what is currently known. Rather than being "taught" medical knowledge, the AI was asked to learn using unsupervised deep neural networks, known as autoencoders, without being given any medical knowledge. The researchers developed a method for translating the features found by the AI, which are initially only numbers, into high-resolution images that can be understood by humans.

To perform this feat the group acquired 13,188 whole-mount pathology slide images of the prostate from Nippon Medical School Hospital (NMSH). The amount of data was enormous, equivalent to approximately 86 billion image patches (sub-images divided for deep neural networks), and the computation was performed on AIP's powerful RAIDEN supercomputer.

The AI learned using pathology images without diagnostic annotation from 11 million image patches. Features found by the AI included cancer diagnostic criteria that have been used worldwide, such as the Gleason score, but also features involving the stroma (the connective tissue supporting an organ) in non-cancer areas that experts were not aware of. In order to evaluate these AI-found features, the research group verified the performance of recurrence prediction using the remaining cases from NMSH (internal validation). The group found that the features discovered by the AI were more accurate (AUC=0.820) than predictions made based on the human-established cancer criteria developed by pathologists, the Gleason score (AUC=0.744). Furthermore, combining both AI-found features and the human-established criteria predicted recurrence more accurately than either method alone (AUC=0.842). The group confirmed the results using another dataset of 2,276 whole-mount pathology images (10 billion image patches) from St. Marianna University Hospital and Aichi Medical University Hospital (external validation).
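The AUC figures quoted here measure how well a score ranks recurrence cases above non-recurrence cases: an AUC of 0.820 means a randomly chosen recurrence case outranks a randomly chosen non-recurrence case 82% of the time. A dependency-free sketch of the metric, with made-up labels and scores (not the study's data):

```python
# AUC as the probability that a random positive (recurrence) case is
# scored higher than a random negative case; ties count as half.

def auc(labels, scores):
    pairs = 0
    wins = 0.0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                wins += 1
            elif si == sj:
                wins += 0.5
    return wins / pairs

# Invented example: 1 = recurrence, 0 = no recurrence
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.4, 0.75, 0.8, 0.2]
result = auc(labels, scores)
```

A perfect ranker scores 1.0; a coin flip scores 0.5, which is why 0.820 versus 0.744 is a meaningful gap.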

"I was very happy," said Yamamoto, "to discover that the AI was able to identify cancer on its own from unannotated pathology images. I was extremely surprised to see that AI found features that can be used to predict recurrence that pathologists had not identified."

He continued, "We have shown that AI can automatically acquire human-understandable knowledge from diagnostic annotation-free histopathology images. This 'newborn' knowledge could be useful for patients by allowing highly-accurate predictions of cancer recurrence. What is very nice is that we found that combining the AI's predictions with those of a pathologist increased the accuracy even further, showing that AI can be used hand-in-hand with doctors to improve medical care. In addition, the AI can be used as a tool to discover characteristics of diseases that have not been noted so far, and since it does not require human knowledge, it could be used in other fields outside medicine."

For more information: www.riken.jp/en/research/labs/aip/


Quantum leap: Why we first need to focus on the ethical challenges of artificial intelligence – Economic Times

By Vivek Wadhwa

AI has the potential to be as transformative to the world as electricity, by helping us understand the patterns of information around us. But it is not close to living up to the hype. The super-intelligent machines and runaway AI that we fear are far from reality; what we have today is a rudimentary technology that requires lots of training. What's more, the phrase "artificial intelligence" might be a misnomer, because human intelligence and spirit amount to much more than what bits and bytes can encapsulate.

I encourage readers to go back to the ancient wisdoms of their faith to understand the role of the soul and the deeper self. This is what shapes our consciousness and makes us human, what we are always striving to evolve and perfect. Can this be uploaded to the cloud or duplicated with computer algorithms? I don't think so.

What about the predictions that AI will enable machines to have human-like feelings and emotions? This, too, is hype. Love, hate and compassion aren't things that can be codified. That is not to say a machine interaction can't seem human; we humans are gullible, after all. According to Amazon, more than 1 million people asked their Alexa-powered devices to marry them in 2017 alone. I doubt those marriages, should Alexa agree, would last very long!

Today's AI systems do their best to replicate the functioning of the human brain's neural networks, but their emulations are very limited. They use a technique called deep learning: after you tell a machine exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
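The learn-from-labelled-examples loop described here can be sketched as a minimal perceptron. This is an illustration of supervised learning in general, with invented toy data, not the deep networks the author has in mind:

```python
# A minimal perceptron: given labelled examples, it nudges its weights
# whenever a prediction disagrees with a label, and repeats until the
# patterns in the data are captured.

def train(examples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred              # zero when the prediction is right
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Linearly separable toy data: label 1 whenever the first feature is larger.
X = [(2, 1), (3, 0), (1, 2), (0, 3), (4, 1), (1, 4)]
y = [1, 1, 0, 0, 1, 0]
w, b = train(X, y)
```

The model only ever captures the pattern present in its labelled examples, which is exactly the limitation the next paragraph describes.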

Herein lies a problem, though: an AI system is only as good as the data it receives. It is able to interpret data only within the narrow confines of the supplied context. It doesn't understand what it has analysed, so it is unable to apply its analysis to other scenarios. And it can't distinguish causation from correlation.

AI shines in performing tasks that match patterns in order to obtain objective outcomes. Examples of what it does well include playing chess, driving a car on a street and identifying a cancer lesion in a mammogram. These systems can be incredibly helpful extensions of how humans work, and with more data they will keep improving. Although an AI machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists. And it won't be able to empathise with a patient in the way that a doctor does.

This is where AI presents its greatest risk, and what we really need to worry about: the use of AI in tasks that may have objective outcomes but incorporate what we would normally call judgement. Some such tasks exercise much influence over people's lives. Granting a loan, admitting a student to a university, or deciding whether children should be separated from their birth parents due to suspicions of abuse all fall into this category. Such judgements are highly susceptible to human biases, but they are biases that only humans themselves have the ability to detect.

And AI throws up many ethical dilemmas around how we use technology. It is being used to create killing machines for the battlefield, with drones that can recognise faces and attack people. China is using AI for mass surveillance, wielding its analytical capabilities to assign each citizen a social credit score based on their behaviour. In America, AI is mostly being built by white people and Asians, so it amplifies their inbuilt biases and misreads African Americans. It can lead to outcomes that prefer males over females for jobs and give men higher loan amounts than women. One of the biggest problems we are facing with Facebook and YouTube is that you are shown more and more of the same thing based on your past views, which creates filter bubbles and a hotbed of misinformation. That's all thanks to AI.

Rather than worrying about super-intelligence, we need to focus on the ethical issues around how we should be using this technology. Should it be used to recognise the faces of students who are protesting against the Citizenship (Amendment) Act? Should India install cameras and systems like China has? These are the types of questions the country needs to be asking.

The writer is a distinguished fellow and professor at Carnegie Mellon University's College of Engineering, Silicon Valley.

This story is part of the 'Tech that can change your life in the next decade' package.


In 2020, lets stop AI ethics-washing and actually do something – MIT Technology Review

Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018, from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.

A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?


But talk is just that: it's not enough. For all the lip service paid to these issues, many organizations' AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We're falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers (content moderators, data labelers, transcribers) who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI, from community groups, policymakers, and tech employees themselves. Several cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field's runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve.

So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn't lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.

AI, in other words, is meant to help humanity prosper. Let's not forget.



The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online – Forbes


One of the most fascinating examples of social innovation I've been tracking recently is the We Counter Hate platform by Seattle-based agency Wunderman Thompson Seattle (formerly POSSIBLE), which sought to reduce hate speech on Twitter by turning retweets of hateful messages into donations for a good cause.

Here's how it worked: using machine learning, it first identified hateful speech on the platform. A human moderator then selected the most offensive and most dangerous tweets and attached an undeletable reply, which informed recipients that if they retweeted the message, a donation would be committed to an anti-hate group. In a beautiful twist, this non-profit was Life After Hate, a group that helps members of extremist groups leave and transition to mainstream life.

Unfortunately (and ironically), on the very day I reached out to the team, Twitter decided to allow users to hide replies in their feeds in an effort to empower people faced with bullying and harassment, eliminating the reply function that was the main mechanism behind #WeCounterHate's power and that had enabled it to remove more than 20M potential hate-speech impressions.

Undeterred, I caught up with some members of the core team, Shawn Herron, Jason Carmel and Matt Gilmore, to find out more about their journey.

(From left to right) Shawn Herron, Experience Technology Director @ Wunderman Thompson; Matt Gilmore, Creative Director @ Wunderman Thompson; Jason Carmel, Chief Data Officer @ Wunderman Thompson

Afdhel Aziz: Gentlemen, welcome. How did the idea for WeCounterHate come about?

Shawn Herron: It started when we caught wind of what the citizens of the town of Wunsiedel, Germany, were doing to combat the extremists who were descending on their town every year to hold a rally and march through the streets. The townspeople had devised a peaceful way to upend the extremists' efforts by turning their hateful march into an involuntary walk-a-thon that benefited EXIT Deutschland, an organization that helps people escape extremist groups. For every meter the neo-Nazis marched, 10 euros would be donated to EXIT Deutschland. The question became: how can we scale something like that, so anyone, anywhere, could fight against hate in a meaningful way?

Jason Carmel: We knew that, to create scale, it had to be digital in nature, and Twitter seemed like the perfect problem in need of a solution. We figured if we could reduce hate on a platform of that magnitude, even by a small percentage, it could have a big impact. We started by developing an innovative machine-learning and natural-language processing technology that could identify and classify hate speech.
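The kind of text classification Carmel describes can be illustrated with a toy Naive Bayes classifier. The words, labels and training data below are invented for the example; the actual We Counter Hate model was far more sophisticated:

```python
# Toy Naive Bayes text classifier with add-one smoothing: counts words
# per label, then scores new text by log-probability under each label.
from collections import Counter
import math

def train_nb(docs):
    # docs: list of (label, text) pairs
    word_counts = {}            # label -> Counter of words
    doc_counts = Counter()      # label -> number of documents
    for label, text in docs:
        doc_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, doc_counts

def classify(word_counts, doc_counts, text):
    total_docs = sum(doc_counts.values())
    vocab = len({w for c in word_counts.values() for w in c})
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        total_words = sum(counts.values())
        score = math.log(doc_counts[label] / total_docs)
        for w in text.lower().split():
            score += math.log((counts[w] + 1) / (total_words + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data with neutral stand-in labels
docs = [
    ("flagged", "you people are awful"),
    ("flagged", "awful terrible people"),
    ("ok", "have a great day"),
    ("ok", "great work everyone"),
]
wc, dc = train_nb(docs)
```

In practice, as the team notes below, the hard part is not the classifier itself but the hidden context, coded language and evolving vocabulary the training data must capture.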

Matt Gilmore: But we still needed the mechanic, a catch-22, that would present those looking to spread hate on the platform with a no-win decision. That's when we stumbled onto the fact that Twitter didn't allow people to delete comments on their tweets; the only way to remove a comment was to delete the post entirely. That mechanic gave us a way to put a permanent marker, in the form of an image and message, on tweets containing hate speech. It's that permanent marker that let those looking to retweet, and spread hate, know that doing so would benefit an organization they're opposed to, Life After Hate. No matter what they chose to do, love wins.

Aziz: Fascinating. So, what led you to the partnership with Life After Hate and how did that work?

Carmel: Staffed and founded by former hate group members and violent extremists, Life After Hate is a non-profit that helps people in extremist groups break from that hate-filled lifestyle. They offer a welcoming way out that's free of judgement. We collaborated with them in training the AI that's used to identify hate speech in near real time on Twitter. With the benefit of their knowledge, our AI can even find hidden forms of hate speech (coded language, secret emoji combinations) in a vast sea of tweets. Their expertise was crucial in aligning the language we used when countering hate, making it more compassionate and matter-of-fact, rather than confrontational.

Herron: Additionally, their partnership just made perfect sense on a conceptual level, as the beneficiary of the effort. If you're one of those people looking to spread hate on Twitter, you're much less likely to hit retweet knowing that you'll be benefiting an organization you're opposed to.

Aziz: Was it hard to wade through that much hate speech? What surprised you?

Herron: Being exposed to all the hate-filled tweets was easily the most difficult part of the whole thing. The human brain is not wired to read and see the kinds of messages we encountered for long periods of time. At the end of the countering process, after the AI identified hate, we always relied on a human moderator to validate it before countering/tagging it. We broke up the shifts between many volunteers, but it was always quite difficult when it was your shift.

Carmel: We learned that identifying hate speech was much easier than categorizing it. Our initial understanding of hate speech, especially before Life After Hate helped us, was really just the movie version of hate speech and missed a lot of hidden context. We were also surprised at how much the language would evolve relative to current events. It was definitely something we had to stay on top of.

We were surprised by how broad a spectrum of people the hate was coming from. We went in thinking we'd just encounter a bunch of thugs, but many of these people held themselves out as academics, comedians, or historians. The brands of hate some of them shared were nuanced and, in an insidious way, very compelling.

We were caught off guard by the amount of time and effort those who disliked our platform would take to slam or discredit it. A lot of these people are quite savvy and would go to great lengths to attempt to undermine our efforts. Outside of the things we dealt with in Twitter, one YouTube hate-fluencer made a video, close to an hour long, that wove all sorts of intricate theories and conspiracies about our platform.

Gilmore: We were also surprised by how wrong our instincts were. When we first started, the things we were seeing made us angry and frustrated. We wanted to come after these hateful people in an aggressive way. We wanted to fight back. Life After Hate was essential in helping course-correct our tone and message. They helped us understand (and we'd like more people to know) the power of empathy combined with education, and its ability to remove walls rather than build them between people. It can be difficult to take this approach, but it ultimately gets everyone to a better place.

Aziz: I love that idea empathy with education. What were the results of the work youve done so far? How did you measure success?

Carmel: The WeCounterHate platform radically outperformed expectations in identifying hate speech (a 91% success rate relative to a human moderator), as we continued to improve the model over the course of the project.

When @WeCounterHate replied to a tweet containing hate, it reduced the spread of that hate by an average of 54%. Furthermore, 19% of the "hatefluencers" deleted their original tweet outright once it had been countered.

By our estimates, the Hate Tweets we countered were shared roughly 20 million fewer times compared to similar Hate Tweets by the same authors that weren't countered.
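The 54% spread-reduction figure above is a comparison of average sharing between countered and uncountered hate tweets. As a rough illustration only, here is a minimal sketch of that calculation; the function name, data, and numbers are hypothetical and not from the actual WeCounterHate dataset.

```python
# Hypothetical sketch: estimating spread reduction by comparing average
# retweet counts of countered vs. uncountered hate tweets.
# All numbers below are illustrative toy data.

def spread_reduction(countered, uncountered):
    """Percent reduction in average retweets for countered tweets
    relative to uncountered tweets by comparable authors."""
    avg_countered = sum(countered) / len(countered)
    avg_uncountered = sum(uncountered) / len(uncountered)
    return (1 - avg_countered / avg_uncountered) * 100

# Toy example: countered tweets average 23 retweets, uncountered average 50
countered_retweets = [20, 25, 24]
uncountered_retweets = [40, 55, 55]
print(round(spread_reduction(countered_retweets, uncountered_retweets)))  # -> 54
```

In practice, a study like this would also need to control for author, topic, and timing, since raw retweet averages conflate many factors.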

Matt: It was a pretty mind-bending exercise for people working in an ad agency, who have spent our entire careers trying to gain exposure for the work we do on behalf of clients, to suddenly be trying to reduce impressions. We even began referring to WCH as the world's first reverse-media plan, designed to reduce impressions by stopping retweets.

Aziz: So now that the project has ended, how do you hope to take this idea forward in an open source way?

Herron: Our hope was to counter hate speech online while collecting insightful data about how hate speech propagates. Going forward, hopefully this data will allow experts in the field to address the hate speech problem at a more systemic level. Our goal is to publicly open source the archived data that has been gathered, hopefully next quarter (Q1 2020).

I love this idea on so many different levels. The ingenuity of finding a way to counteract hate speech without resorting to censorship. The partnership with Life After Hate to improve the sophistication of the detection. And the potential for this same model to be applied to so many different problems in the world (anyone want to build a version for climate change deniers?). It proves that the creativity of the advertising world can truly be turned into a force for good, and for that I salute the team for showing us this powerful act of moral imagination.

Read this article:

The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online - Forbes


The skills needed to land the hottest tech job of 2020 – Business Insider Nordic

Artificial intelligence is one of the hottest topics in corporate America. So it's no surprise that companies are rushing to find the talent to support the push to adopt the advanced tech.

Demand for AI specialists grew 74% in the last five years and is expected to be one of the most highly sought-after roles in 2020, according to a new study from LinkedIn. Among the necessary skills for the position are machine learning and natural language processing.

But it's not just AI experts that are in high demand. Cloud engineers, developers, cybersecurity experts, and data scientists also made the list. Alongside the individuals needed to support the technology, companies are also seeking leaders, like a chief transformation officer and chief culture officer, to oversee the adoption. Even non-tech positions like managing the customer experience, a key focus for many digital overhauls, are hot positions for 2020.

Those projections indicate just how aggressively organizations are trying to adopt more sophisticated technology, but also the major problem they face in navigating the skills gap and the tight labor market.

A struggle, however, will be finding the talent to fill the vacancies. One way companies are tackling that challenge is by upskilling their current employees.

Jeff McMillan, the chief data and analytics officer for Morgan Stanley's wealth management division, runs an internal AI boot camp that covers the basics of the technology. And Microsoft and others are working with online educational platforms like OpenClassrooms to craft comprehensive curricula that give existing workers the chance to train for new jobs within the organization.

With tech-heavy skills in such short supply, some experts even suggest that corporations should appoint a "chief reskilling" officer to manage the push to reskill employees. "What this new role will be doing is future thinking, future strategy, future alignment with talent and people," Jason Wingard, the dean of the School of Professional Studies at Columbia University, previously told Business Insider.

While investments in larger, enterprise-wide AI projects could slip in 2020, the push to adopt the tech will remain fervent, creating a lucrative job market for those who have the skills to support the shift.

See original here:

The skills needed to land the hottest tech job of 2020 - Business Insider Nordic


In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. The decade now ending saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of their autonomy will no longer be just thought experiments but time-sensitive problems.

One area to keep an eye on going into the new decade will be partially defined by this question: what kind of legal status will A.I.s be granted as their capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would humans.

The logic is that the A.I.s of the future could have just as much agency and potential to cause disruption as any other non-robotic being. Francois Piccione, a policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in the coming years over who is legally responsible for the actions of A.I., whether it be their owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and on the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.

See the article here:

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test - Inverse


Samsung to announce its Neon artificial intelligence project at CES 2020 – Firstpost

tech2 News Staff | Dec 26, 2019 17:21:10 IST

Samsung has been teasing Neon for quite a while on social media. It appears to be an artificial intelligence (AI) project by its research arm and the company will be announcing more details about it during CES 2020 in January.

Samsung Neon AI project. Image: Neon

Neon hasn't really revealed any details. It's being developed under Samsung Technology & Advanced Research Labs (STAR Labs). STAR Labs could be a reference to the Scientific and Technological Advanced Research Laboratories (S.T.A.R. Labs) from DC Comics, but we can't confirm that. Samsung's research division is led by Pranav Mistry, who earlier worked on the Samsung Galaxy Gear and is now the President and CEO of STAR Labs.

The company has set up a website with a landing page that doesn't really mention any details. It only has a message saying, "Have you ever met an Artificial?" It has been continuously posting images on Twitter and Instagram, including a couple of videos. These images contain the same message in different languages as well, indicating that the AI has multilingual functionality. Mistry has also been teasing Neon on his own Twitter account.

This won't be Samsung's first venture into AI, since it already has the Bixby digital assistant. However, Bixby never really took off. CES 2020 begins on 7 January, and we'll get to know more about Neon during the expo.


Go here to read the rest:

Samsung to announce its Neon artificial intelligence project at CES 2020 - Firstpost
