
This Crypto is Positioned for an Insane Rally as it Reaches Key Resistance – Ethereum World News

The month of February has been great for altcoins, with many smaller cryptos seeing intense upwards momentum that has allowed some to set fresh all-time highs, while others post significant gains against Bitcoin.

This upwards momentum seen amongst many altcoins has led investors and analysts alike to grow increasingly keen on finding the next altcoin that will see a face-ripping rally, and some analysts believe that Algorand could be this crypto.

One top trader is noting that ALGO recently rallied up to a key level that bulls are currently attempting to close above, suggesting that it could soon rally.

At the time of writing, Algorand is trading up just under 1% at its current price of $0.45, which marks a notable climb from weekly lows of $0.30, and only a slight decline from highs of $0.49 that were set just a couple of days ago.

This volatility has come about after the token saw a quiet start to the year, with its slow uptrend turning parabolic in late January when it rallied from $0.25 to its recent highs.

This uptrend has shown signs of being similar to those seen by Chainlink, Ethereum, and Tezos at the early stages of their intense multi-week parabolic runs.

This has led some analysts, including The Crypto Dog, to question whether ALGO is the next XTZ.

It does appear that the crypto is bound to see further near-term gains, as its buyers are currently attempting to hold it above the upper boundary of a wide trading range that it has been caught within for months.

Bagsy, a prominent trader, spoke about Algorand in a recent tweet, telling his followers that it appears to be ready to run assuming that its bulls are able to hold it above this level before its daily close.

"ALGO update: No pullback, bulls fighting to close above former range high. This looks about ready to run," he explained.

If Bitcoin continues to remain in a firm uptrend, it is probable that smaller cryptos will have further room to run, and the technical strength currently surrounding ALGO seems to suggest it will see some intense momentum.

Originally posted here:
This Crypto is Positioned for an Insane Rally as it Reaches Key Resistance - Ethereum World News

Read More..

5-year altcoin exchange says goodbye: It will close in 10 days – Somag News

Launched in October 2015, the cryptocurrency exchange Trade Satoshi will close on March 1, after nearly five years of activity.

In an official statement released today, the exchange said that the tradesatoshi.com board had decided to shut down the platform because it is no longer economically feasible to continue providing the necessary security, support, and technology.

Users were warned not to deposit funds on the exchange, and all services will be terminated by March 1, 2020.

Commenting on the matter, Binance CEO Changpeng Zhao said: "Running an exchange is probably one of the most difficult businesses to sustain. One of the hardest aspects is economy of scale. Without it, you cannot invest in security and many other areas, and there is no point in continuing. I hope everyone's funds are safe in this case."

Trade Satoshi was an exchange that stood out for listing alternative cryptocurrencies rather than focusing on Bitcoin.

View original post here:
5-year altcoin exchange says goodbye: It will close in 10 days - Somag News

Read More..

Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

As the fuel that powers their ongoing digital transformation efforts, businesses everywhere are looking for ways to derive as much insight as possible from their data. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the citizen data scientist has recently arisen to help close the skills gap. Filling a complementary role rather than acting as a direct replacement, citizen data scientists lack specific advanced data science expertise. However, they are capable of generating models using state-of-the-art diagnostic and predictive analytics. And this capability is partly due to the advent of accessible new technologies such as automated machine learning (AutoML) that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.
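To make those steps concrete, here is a minimal sketch of a conventional, hand-assembled pipeline in Python with scikit-learn. The dataset, the selected features, and the parameter grid are illustrative assumptions; the grid search at the end stands in for the hyper-parameter tuning that AutoML tools automate.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Illustrative dataset standing in for real business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-processing, feature selection, and algorithm as explicit pipeline stages.
pipe = Pipeline([
    ("scale", StandardScaler()),                    # data pre-processing
    ("select", SelectKBest(f_classif)),             # feature selection
    ("model", LogisticRegression(max_iter=1000)),   # algorithm selection
])

# Hyper-parameter tuning: the step AutoML automates, done here by grid search.
grid = GridSearchCV(pipe, {
    "select__k": [5, 10, 20],
    "model__C": [0.1, 1.0, 10.0],
}, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```

An AutoML system essentially searches over choices like `select__k` and `model__C` (and over whole model families) automatically, rather than requiring a human to enumerate them.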

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.


Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded the generation of a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML on a large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly began to engage with graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the BigTech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of.

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, developed by researchers at Texas A&M University, which implements the NAS approach.
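As an illustration of how such a library packages NAS behind a handful of calls, here is a minimal sketch based on AutoKeras's documented high-level API; the dataset and the trial budget are arbitrary choices for demonstration.

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

# A benchmark dataset stands in for real image data.
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# max_trials bounds how many candidate architectures the search evaluates.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=2)   # architecture search plus training
print(clf.evaluate(x_test, y_test))

model = clf.export_model()            # best model found, as a plain Keras model
```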

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML starts to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify those projects whose invaluable insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa

Read more:
Why 2020 will be the Year of Automated Machine Learning - Gigabit Magazine - Technology News, Magazine and Website

Read More..

Machine Learning: Real-life applications and its significance in Data Science – Techstory

Do you know how Google Maps predicts traffic? Are you amused by how Amazon Prime or Netflix suggests just the movie you would want to watch? We all know it must be some approach of Artificial Intelligence. Machine Learning involves algorithms and statistical models to perform tasks. This same approach is used to find faces on Facebook and to detect cancer, too. A Machine Learning course can educate in the development and application of such models.

Artificial Intelligence mimics human intelligence. Machine Learning is one of the significant branches of it. There is an ongoing and increasing need for its development.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives. That is why the role of the data scientist is in such demand at present. An aspiring data scientist can learn to develop and apply such algorithms by pursuing a Machine Learning certification.

Machine learning, as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires prior mathematical knowledge, but a Machine Learning online course would suggest otherwise: contrary to the conventional bottom-up approach to studying, a top-down approach is involved here. An aspiring data scientist, a business person, or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.

Microsoft's research lab uses Machine Learning to study cancer. This helps in individualized oncological treatment and the generation of detailed progress reports. The data engineers apply pattern recognition, Natural Language Processing, and computer vision algorithms to work through large data sets. This aids oncologists in conducting precise and breakthrough tests.

Likewise, machine learning is applied in biomedical engineering. This has led to automation of diagnostic tools. Such tools are used in detecting neurological and psychiatric disorders of many sorts.

We all have had a conversation with Siri or Alexa. They use speech recognition to take in our requests. Machine Learning is applied here to auto-generate responses based on previous data. Hello Barbie is the Siri version for kids to play with. It uses advanced analytics, machine learning, and Natural Language Processing to respond. This was the first AI-enabled toy, and it could lead to more such inventions.

Google uses Machine Learning statistical models to acquire inputs for Maps. The statistical models collect details such as the distance from start point to endpoint, trip duration, and bus schedules. Such historical data is stored and reused. Machine Learning algorithms are developed with the objective of data prediction: they recognise patterns among such inputs and predict approximate time delays.
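As a toy sketch of that pattern (synthetic data, not Google's actual system), a regression model can learn how delay varies with trip features such as distance and time of day:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000

# Synthetic trip history: distance (km), scheduled duration (min), hour of day.
distance = rng.uniform(1, 30, n)
scheduled = distance * 2 + rng.normal(0, 2, n)
hour = rng.integers(0, 24, n)

# In this toy world, delays appear mainly at rush hour and grow with distance.
rush = np.isin(hour, [8, 9, 17, 18]).astype(float)
delay = 0.3 * distance * rush + rng.normal(0, 1, n)

X = np.column_stack([distance, scheduled, hour])
model = GradientBoostingRegressor().fit(X, delay)

# Estimated delay for a 10 km trip scheduled to take 20 minutes, at 5 pm.
print(model.predict([[10, 20, 17]]))
```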

Another well-known Google application, Google Translate, involves Machine Learning. Deep learning aids in learning language rules through recorded conversations. Neural networks such as long short-term memory (LSTM) networks aid in retaining and updating long-term information, while recurrent neural networks learn the sequential structure of language. Even bilingual processing is feasible nowadays.
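The long-term memory property attributed to LSTMs can be shown with a deliberately small sketch: the network must output the first symbol of a sequence after reading the whole thing, which forces information to survive across every time step. A real translation system is vastly larger; this only illustrates the mechanism, and all shapes and sizes here are arbitrary.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab, seq_len = 20, 12
rng = np.random.default_rng(1)

# Task: recall the FIRST symbol after reading the full sequence.
X = rng.integers(1, vocab, size=(2000, seq_len))
y = X[:, 0]

model = Sequential([
    Embedding(vocab, 16),
    LSTM(32),   # the cell state carries the early symbol across all steps
    Dense(vocab, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=15, verbose=0)
print(model.evaluate(X, y, verbose=0))  # should beat the 1/vocab random baseline
```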

Facebook uses image recognition and computer vision to detect what is in images. Such images are fed as inputs, and the statistical models developed using Machine Learning map any information associated with them. Facebook generates automated captions for images, meant to provide descriptions for visually impaired people. This innovation of Facebook's has nudged data engineers to come up with other such valuable real-time applications.

The aim here, for a service like Netflix, is to increase the likelihood of a customer watching a recommended movie. This is achieved by studying thumbnails: every available movie has separate thumbnails, each assigned individual numerical values, and an algorithm is developed to study these thumbnails and derive recommendation results through pattern recognition among the numerical data.

Tesla uses computer vision, data prediction, and path planning for autonomous driving. The machine learning practices applied make the innovation stand out. The deep neural networks work with training data and generate instructions, and manoeuvres such as changing lanes are executed based on imitation learning.

Gmail, Yahoo Mail, and Outlook engage machine learning techniques such as neural networks. These networks detect patterns in historical data, training on received examples of spam and phishing messages. It is noted that these spam filters provide 99.9 percent accuracy.
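A minimal sketch of the idea, using a tiny toy corpus and scikit-learn (real filters train on millions of labelled messages, and production pipelines are far more involved):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy labelled corpus: 1 = spam, 0 = legitimate.
emails = [
    "win a free prize now", "claim your reward money",
    "meeting moved to 3pm", "quarterly report attached",
    "urgent click this link", "lunch tomorrow?",
]
labels = [1, 1, 0, 0, 1, 0]

# Text features feeding a small neural network, as the article describes.
clf = make_pipeline(TfidfVectorizer(),
                    MLPClassifier(max_iter=2000, random_state=0))
clf.fit(emails, labels)

# Expected: spam (1) for the first message, legitimate (0) for the second.
print(clf.predict(["free money click now", "see attached report"]))
```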

As people grow more health-conscious, the development of fitness-monitoring applications is on the rise. Being on top of the market, Fitbit ensures its productivity by employing machine learning methods. The trained machine learning models predict user activities. This is achieved through data pre-processing, data processing, and data partitioning. There remains room to extend the application to additional purposes.

The above-mentioned applications are just the tip of the iceberg. Machine learning, being a subset of Artificial Intelligence, finds its place in many other streams of daily activity.


Go here to read the rest:
Machine Learning: Real-life applications and it's significance in Data Science - Techstory

Read More..

This AI Researcher Thinks We Have It All Wrong – Forbes

Dr. Luis Perez-Breva

Luis Perez-Breva is an MIT professor and the faculty director of innovation teams at the MIT School of Engineering. He is also an entrepreneur and part of The Martin Trust Center for MIT Entrepreneurship. Luis works on how we can use technology to make our lives better and on how we can get new technology out into the world. On a recent AI Today podcast, Professor Perez-Breva managed to get us to think deeply about our understanding of both artificial intelligence and machine learning.

Are we too focused on data?

Anyone who has been following artificial intelligence and machine learning knows the vital centrality of data. Without data, we can't train machine learning models. And without machine learning models, we don't have a way for systems to learn from experience. Surely, data needs to be the center of our attention to make AI systems a reality.

However, Dr. Perez-Breva thinks that we are overly focused on data, and perhaps that extensive focus is causing goals for machine learning and AI to go astray. According to Luis, so much focus is put into obtaining data that we judge how good a machine learning system is by how much data was collected, how large the neural network is, and how much training data was used. When you collect a lot of data, you are using that data to build systems that are primarily driven by statistics. Luis says that we latch onto statistics when we feed AI so much data, and that we ascribe intelligence to systems when, in reality, all we have done is create large probabilistic systems that, by virtue of large data sets, exhibit things we ascribe to intelligence. He says that when our systems aren't learning as we want, the primary gut reaction is to give these AI systems more data so that we don't have to think as much about the hard parts of generalization and intelligence.

Many would argue that there are some areas where you do need data to help teach AI. Computers are better able to learn image recognition and similar tasks by having more data. The more data, the better the networks, and the more accurate the results. On the podcast, Luis asked whether deep learning is great enough that this works or if we have a big enough data set that image recognition now works. Basically: is it the algorithm or just the sheer quantity of data that is making this work?

Rather, what Luis argues is that if we can find a better way to structure the system as a whole, then the AI system should be able to reason through problems, even with very limited data. Luis compares using machine learning in every application to the retail world. He talks about how physical stores are seeing the success of online stores and trying to copy it. One of the ways they are doing this is by using apps to navigate stores. Luis mentioned that he visited a Target where he had to use his phone to navigate the store, which was harder than being able to look at signs. Having a human to ask questions of is both faster and part of the experience of being in a brick-and-mortar retail location. Luis says he would much rather have a human to interact with at one of these locations than a computer.

Is the problem deep learning?

He compares this to machine learning by saying that machine learning has a very narrow application, and that if you try to apply machine learning to every aspect of AI, you will end up with issues like the one he experienced at Target. Basically, this means looking at neural networks as a hammer and every AI problem as a nail. No one technology or solution works for every application. Perhaps deep learning only works because of vast quantities of data? Maybe there's a better algorithm that can generalize better, apply knowledge learned in one domain to another, and use smaller amounts of data to get much better quality insights.

People have tried recently to automate many of the jobs that people do. Throughout history, Luis says, technology has killed businesses when it tries to replace humans. Technology and businesses are successful when they expand on what humans can do. Attempting to replace humans is a difficult task and one that is going to lead companies down the road to failure. As humans, he points out, we crave human interaction. Even the generation that is constantly on its devices greatly desires human interaction.

Luis also makes the point that many people mistakenly confuse automation and AI. Automation is using a computer to carry out specific tasks; it is not the creation of intelligence. This is something many have noted on several occasions. Indeed, it's the fear of automation and of fictional superintelligence that has many people worried about AI. Dr. Perez-Breva makes the point that many ascribe human characteristics to machines, but this should not be the case with AI systems.

Rather, he sees AI systems as more akin to a new species with a different mode of intelligence than humans. His opinion is that researchers are very far from creating an AI similar to what you will find in books and movies. He blames movies for giving people the impression of robots (AI) killing people and being dangerous technologies. While there are good robots in movies, there are few of them, and they get pushed to the side by bad robots. He points out that we need to move away from pushing these images of bad robots. Our focus needs to be on how artificial intelligence can help humans grow, and it would be beneficial if the movie-making industry could help with this. As such, AI should be thought of as a new intelligent species we're trying to create, not something that is meant to replace us.

A positive AI future

Despite negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments in AI that it would be difficult for them to simply stop using it or halt its development.

As a final question in the interview, Luis was asked where he sees the artificial intelligence industry going. Prefacing his answer with the point from the earlier discussion that people are investing in machine learning and not true artificial intelligence, Luis said that he is happy with the investment that businesses are making in what they call AI. He believes these investments will keep the technology's development going for a minimum of four years.

Once we can stop comparing humans to artificial intelligence, Luis believes that we will see great advancements in what AI can do. He believes that AI has the power to work alongside humans to unlock knowledge and tasks that we weren't previously able to do. And he doesn't believe the point when this happens is that far away; we are getting closer to it every day.

Many of Luis's ideas run contrary to the popular beliefs of many people interested in the world of artificial intelligence. At the same time, he presents these ideas in a very logical and thought-provoking manner. Only time will tell what is right and where his ideas go.

Original post:
This AI Researcher Thinks We Have It All Wrong - Forbes

Read More..

Grok combines Machine Learning and the Human Brain to build smarter AIOps – Diginomica

A few weeks ago I wrote a piece here about Moogsoft, which has been making waves in the service assurance space by applying artificial intelligence and machine learning to the arcane task of keeping critical IT up and running and lessening the business impact of service interruptions. It's a hot area for startups, and I've since gotten article pitches from several other AIOps firms at varying levels of development.

The most intriguing of these is a company called Grok, formed by a partnership between Numenta and Avik Partners. Numenta is a pioneering AI research firm co-founded by Jeff Hawkins and Donna Dubinsky, who are famous for having started two classic mobile computing companies, Palm and Handspring. Avik is a company formed by brothers Casey and Josh Kindiger, two veteran entrepreneurs who have successfully started and grown multiple technology companies in service assurance and automation over the past two decades, most recently Resolve Systems.

Josh Kindiger told me in a telephone interview how the partnership came about:

Numenta is primarily a research entity started by Jeff and Donna about 15 years ago to support Jeff's ideas about the intersection of neuroscience and data science. About five years ago, they developed an algorithm called HTM and a product called Grok for AWS, which monitors servers on a network for anomalies. They weren't interested in developing a company around it, but we came along and saw a way to link our deep domain experience in the service management and automation areas with their technology. So, we licensed the name and the technology and built part of our Grok AIOps platform around it.

Jeff Hawkins has spent most of his post-Palm and Handspring years trying to figure out how the human brain works and then reverse engineering that knowledge into structures that machines can replicate. His model or theory, called hierarchical temporal memory (HTM), was originally described in his 2004 book On Intelligence written with Sandra Blakeslee. HTM is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. For a little light reading, I recommend a peer-reviewed paper called A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.

Grok AIOps also uses traditional machine learning, alongside HTM. Said Kindiger:

When I came in, the focus was purely on anomaly detection, and I immediately engaged with a lot of my old customers (large Fortune 500 companies and very large service providers) and quickly found out that while anomaly detection was extremely important, that first signal wasn't going to be enough. So, we transformed Grok into a platform. And essentially what we do is apply the correct algorithm, whether it's HTM or something else, to the proper streams of events, logs, and performance metrics. Grok can enable predictive, self-healing operations within minutes.

The Grok AIOps platform uses multiple layers of intelligence to identify issues and support their resolution:

Anomaly detection

The HTM algorithm has proven exceptionally good at detecting and predicting anomalies and reducing noise, often up to 90%, by providing the critical context needed to identify incidents before they happen. It can detect anomalies in signals beyond low and high thresholds, such as signal frequency changes that reflect changes in the behavior of the underlying systems. Said Kindiger:

We believe HTM is the leading anomaly detection engine in the market. In fact, it has consistently been the best performing anomaly detection algorithm in the industry resulting in less noise, less false positives and more accurate detection. It is not only best at detecting an anomaly with the smallest amount of noise but it also scales, which is the biggest challenge.
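HTM itself models temporal structure in a far richer way, but the contrast with fixed thresholds can be illustrated with a deliberately simple stand-in: a streaming detector that scores each new value against a rolling window of recent behavior instead of static low/high limits. Everything below is an illustrative sketch, not Grok's or Numenta's implementation.

```python
import math
from collections import deque

class StreamingAnomalyDetector:
    """Scores values against a rolling window of recent behavior.

    A simplified stand-in for illustration only; HTM learns temporal
    patterns rather than just a rolling mean and spread.
    """

    def __init__(self, window=100, threshold=4.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def score(self, x):
        if len(self.values) < 10:          # warm-up: not enough history yet
            self.values.append(x)
            return 0.0
        mean = sum(self.values) / len(self.values)
        var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
        z = abs(x - mean) / (math.sqrt(var) + 1e-9)
        self.values.append(x)
        return z

detector = StreamingAnomalyDetector()
stream = [10 + math.sin(t / 5.0) for t in range(200)] + [55.0]
for t, metric in enumerate(stream):
    if detector.score(metric) > detector.threshold:
        print(f"anomaly at t={t}: {metric:.1f}")   # flags only the final spike
```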

Anomaly clustering

To help reduce noise, Grok clusters anomalies that belong together through the same event or cause.

Event and log clustering

Grok ingests all the events and logs from the integrated monitors and then applies event- and log-clustering algorithms to them, including pattern recognition and dynamic time warping, which also reduce noise.
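Dynamic time warping is what lets two metric or log-rate series with the same shape but shifted timing score as similar; a minimal distance function (an illustrative sketch, not Grok's implementation) looks like this:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences.

    Lets sequences stretch or compress in time, so identical bursts
    occurring at slightly different moments still compare as close.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # step both
    return dp[n][m]

# The same burst shape, shifted by one step, scores as close;
# a flat series scores as far away.
burst_a = [0, 0, 1, 5, 9, 5, 1, 0, 0]
burst_b = [0, 1, 5, 9, 5, 1, 0, 0, 0]
flat    = [2] * 9
print(dtw_distance(burst_a, burst_b))  # small
print(dtw_distance(burst_a, flat))     # large
```

Pairwise distances like these can then feed a standard clustering algorithm to group events and logs that tell the same story.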

IT operations have become almost impossible for humans alone to manage. Many companies struggle to meet the high demand due to increased cloud complexity. Distributed apps make it difficult to track where problems occur during an IT incident. Every minute of downtime directly impacts the bottom line.

In this environment, the relatively new solution to reduce this burden of IT management, dubbed AIOps, looks like a much needed lifeline to stay afloat. AIOps translates to "Algorithmic IT Operations" and its premise is that algorithms, not humans or traditional statistics, will help to make smarter IT decisions and help ensure application efficiency. AIOps platforms reduce the need for human intervention by using ML to set alerts and automation to resolve issues. Over time, AIOps platforms can learn patterns of behavior within distributed cloud systems and predict disasters before they happen.

Grok detects latent issues with cloud apps and services and triggers automations to troubleshoot these problems before requiring further human intervention. Its technology is solid, its owners have lots of experience in the service assurance and automation spaces, and who can resist the story of the first commercial use of an algorithm modeled on the human brain?

View post:
Grok combines Machine Learning and the Human Brain to build smarter AIOps - Diginomica

Read More..

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 – The Register

Microsoft has announced a new application, Dynamics 365 Project Operations, as well as additional AI-driven features for its Dynamics 365 range.

If you are averse to buzzwords, look away now. Microsoft Business Applications President James Phillips announced the new features in a post which promises "AI-driven insights," a "holistic 360-degree view of a customer," "personalized customer experiences across every touchpoint," and "real-time actionable insights."

Dynamics 365 is Microsoft's cloud-based suite of business applications covering sales, marketing, customer service, field service, human resources, finance, supply chain management and more. There are even mixed-reality offerings for product visualisation and remote assistance.

Dynamics is a growing business for Microsoft, thanks in part to integration with Office 365, even though some of the applications are quirky and awkward to use in places. Licensing is complex too and can be expensive.

Keeping up with what is new is a challenge. If you have a few hours to spare, you could read the 546-page 2019 Release Wave 2 [PDF] document, for features which have mostly been delivered, or the 405-page 2020 Release Wave 1 [PDF], about what is coming from April to September this year.

Many of the new features are small tweaks, but the company is also putting its energy into connecting data, both from internal business sources and from third parties, to drive AI analytics.

The updated Dynamics 365 Customer Insights includes "data sources such as demographics and interests, firmographics, market trends, and product and service usage data," says Phillips. AI is also used in new forecasting features in Dynamics 365 Sales and in Dynamics 365 Finance Insights, coming in preview in May.

Dynamics 365 Project Operations

The company is also introducing a new application, Dynamics 365 Project Operations, with general availability promised for October 1, 2020. This looks like a business-oriented take on project management, with the ability to generate quotes, track progress, allocate resources, and generate invoices.

Microsoft already offers project management through its Project products, though this is part of Office rather than Dynamics. What can you do with Project Operations that you could not do before with a combination of Project and Dynamics 365?

There is not a lot of detail in the overview, but rest assured that it has AI-powered business insights and seamless interoperability with Microsoft Teams, so it must be great, right? More will no doubt be revealed at the May Business Applications Summit in Dallas, Texas.


See original here:
Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 - The Register

Read More..

Removing the robot factor from AI – Gigabit Magazine – Technology News, Magazine and Website

AI and machine learning have something of an image problem.

They've never been quite so widely discussed as topics, or, arguably, their potential so widely debated. This is, to some extent, part of the problem. Artificial Intelligence can, still, be anything, achieve anything. But until its results are put into practice for people, it remains a misunderstood concept, especially to the layperson.

While well-established industry thought leaders are rightly championing the fact that AI has the potential to be transformative and capable of a wide range of solutions, the lack of context for most people is fuelling fears that it is simply going to replace people's roles and take over tasks wholesale. It also ignores the fact that AI applications have been quietly assisting people's jobs, in a light-touch manner, for some time now, and people are still in those roles.

Many people are imagining AI to be something it is not. Given the technology is still in a fast-development phase, some people think it is helpful to consider the tech as a type of plug-and-play, black-box technology. Some believe this helps people to put it into the context of how it will work and what it will deliver for businesses. In our opinion, this limits a true understanding of its potential and what it could be delivering for companies day in, day out.

The hyperbole is also not helping. The statements "we use AI" and "our product is AI-driven" have already become well-worn by enthusiastic salespeople and marketeers. While there's a great sales case to be made by that exciting assertion, it's rarely speaking the truth about the situation. What is really meant by the current use of "artificial intelligence"? Arguably, AI is not yet a thing in its own right; i.e. the capability of machines to do the things which people do instinctively, which machines instinctively do not. Instead of being excited by hearing the phrase "we do AI!", people should see it as a red flag to dig deeper into the technology and the AI capability in question.


Machine learning, similarly, doesn't benefit from sci-fi associations or big sales-patter bravado. In its simplest form, while machine learning sounds like a defined and independent process, it is actually a technique to deliver AI functions. It's maths, essentially, applied alongside data, processing power and technology to deliver an AI capability. Machine learning models don't execute actions or do anything themselves, unless people put them to use. They are still human tools, to be deployed by someone to undertake a specific action.

The tools and models are only as good as the human knowledge and skills programming them. People, especially in the legal sector autologyx works with, are smart, adaptable and vastly knowledgeable. They can quickly shift from one case to another, and have their own methods and processes for approaching problem-solving in the workplace. Where AI is coming in to lift the load is on lengthy, detailed, and highly repetitive tasks such as contract renewals. Humans can get understandably bored when reviewing vast volumes of highly repetitive contracts to change just a few clauses and update the document. A machine learning solution does not get bored, and performs consistently with a high degree of accuracy, freeing those legal teams up to work on more interesting, varied, or complicated casework.

Together, AI, machine learning and automation are the arms and armour businesses across a range of sectors need to acquire to adapt and continue to compete in the future. The future of the legal industry, for instance, is still a human one where knowledge of people will continue to be an asset. AI in that sector is more focused on codifying and leveraging that intelligence; while the machine and AI models learn and grow from people, those people will continue to grow and expand their knowledge within the sector too. Today, AI and ML technologies are only as good as the people power programming them.

As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, put it: "AI is neither good nor evil. It's a tool. A technology for us to use. How we choose to apply it is entirely up to us."

By Ben Stoneham, founder and CEO, autologyx

Follow this link:
Removing the robot factor from AI - Gigabit Magazine - Technology News, Magazine and Website

Read More..

Zoho reinvents its cloud storage with WorkDrive – GeekWire

Work as we know it has evolved. The modern workplace is more flexible, dynamic, and team-centric. Employees collaborate across multiple locations, and every team member has an individual sense of responsibility.

Keeping pace with the changing workplace means the tools we use need to evolve too. This is why Zoho introduces WorkDrive, designed to fit right into a team-driven work environment.

WorkDrive has been designed from the bottom up to serve the needs of the team as a cohesive work unit. Team-wide collaboration, sharing, and distribution of content, from draft to final copy, is built into the product explicitly. A range of collaboration tools, such as drafting, notifications, commenting, and activity tracking, enables joint work.

That joint work happens best when the system gets out of the way and lets people do their jobs. To minimize bottlenecks and empower employees, WorkDrive prioritizes self-management. By setting up policies governing permissions and access control at the outset, a team can define the parameters for ongoing collaborative work, requiring no additional supervision and avoiding continuous intervention and management by system admins.

With the structure of the team in place, WorkDrive then equips employees with the tools they need to get their work done. More than just a file storage platform, WorkDrive comes with a full-featured cloud Office Suite that includes a spreadsheet app (Zoho Sheet), a document editor (Zoho Writer), and presentation software (Zoho Show). With these sophisticated tools, WorkDrive teams can create complex and attractive business documents, like financial plans, contracts, and marketing presentations, by working together from start to finish.

WorkDrive offers a desktop app that lets you sync files to multiple computers, edit them offline, and perform complete or partial syncs back to the cloud. This, along with native iOS and Android mobile apps, enables work and engagement from any device or location, on the ground or in the air.

Past approaches to cloud storage and sharing were built around an individual user. WorkDrive, however, went about reinventing this category. Rather than tacking collaboration features onto a product designed for personal use, WorkDrive has been built from the ground up to provide all organizations with a true content collaboration platform in every sense.

Not only is WorkDrive integrated with Zoho's extensive suite of business applications, but it also works well with the Microsoft Office suite, Gmail, and other external apps through Zapier. This makes WorkDrive an ideal solution for any industry or work process.

WorkDrive is set up for modern teams. Whether that team consists of all the employees of a small business or a subset of specialists at a much larger enterprise, WorkDrive ensures that the right information is available to the right people, at the right time.

Continued here:
Zoho reinvents its cloud storage with WorkDrive - GeekWire

Read More..

Cloud-enabled threats are on the rise, sensitive data is moving between cloud apps – Help Net Security

44% of malicious threats are cloud enabled, meaning that cybercriminals see the cloud as an effective method for subverting detection, according to Netskope.

We are seeing increasingly complex threat techniques being used across cloud applications, spanning from cloud phishing and malware delivery, to cloud command and control and ultimately cloud data exfiltration, said Ray Canzanese, Threat Research Director at Netskope.

Our research shows the sophistication and scale of the cloud enabled kill chain increasing, requiring security defenses that understand thousands of cloud apps to keep pace with attackers and block cloud threats. For these reasons, any enterprise using the cloud needs to modernize and extend their security architectures.

89% of enterprise users are in the cloud, actively using at least one cloud app every day. Cloud storage, collaboration, and webmail apps are among the most popular in use.

Enterprises also use a variety of apps in those categories, 142 on average, indicating that while enterprises may officially sanction a handful of apps, users tend to gravitate toward a much wider set in their day-to-day activities. Overall, the average enterprise uses over 2,400 distinct cloud services and apps.

44% of threats are cloud-based. Attackers are moving to the cloud to blend in, increase success rates and evade detections.

Attackers launch attacks through cloud services and apps using familiar techniques including scams, phishing, malware delivery, command and control, formjacking, chatbots, and data exfiltration. Of these, the two most popular cloud threat techniques are phishing and malware delivery.

Over 50% of data policy violations come from cloud storage, collaboration, and webmail apps, and the violations detected primarily involve DLP rules and policies related to privacy, healthcare, and finance.

This shows that users are moving sensitive data across multiple dimensions among a wide variety of cloud services and apps, including personal instances and unmanaged apps in violation of organizational policies.

20% of users move data laterally between cloud apps, such as copying a document from OneDrive to Google Drive or sharing it via Slack. More importantly, the data crosses many boundaries: moving between cloud app suites, between managed and unmanaged apps, between app categories, and between app risk levels.

Moreover, 37% of the data that users move across cloud apps is sensitive. In total, lateral data movement has been tracked among 2,481 different cloud services and apps, indicating the scale and the variety of cloud use across which sensitive information is being dispersed.

One-third of enterprise users work remotely on any given day, across more than eight locations on average, accessing both public and private apps in the cloud. This trend has contributed to the inversion of the traditional network, with users, data, and apps now on the outside.

It also shows increasing demand on legacy VPNs and questions the availability of defenses to protect remote workers.

Original post:
Cloud-enabled threats are on the rise, sensitive data is moving between cloud apps - Help Net Security

Read More..