
Why tokens are not the same thing as Bitcoin – CoinGeek

Let's talk about the way the mainstream media talk about digital assets. A recent article we saw from a Forbes contributor about the Chainlink token highlighted some of the problems that have led the public to misunderstand Bitcoin, and naive investors to lose their savings.

The problems aren't just with the way media articles present information, but also with the infrastructure that supports it. There are exchanges that trade tokens alongside Bitcoin and other native blockchain assets, and data sites that list and compare them as if they're the same thing.

The article was titled "This Bitcoin Challenger Is Suddenly Soaring And Fast Approaching Its All-Time High Price." However, this isn't a unique example; there have been thousands of articles like this over the years.

You can probably see a few problems in the headline already. First, there's yet another reference to a Bitcoin challenger (of which there have been thousands now). Then there's the focus on price, and an implied call to jump in before it's too late. But first, let's clear up the tokens/assets issue.

Bitcoin, altcoins and tokens

Bitcoin, altcoins and tokens are all digital assets, but there are big differences in what they are, what they're intended for, and what they do. There's nothing inherently wrong with tokens, but not knowing the difference is one of the reasons digital assets have gained such a bad reputation in investor circles.

The digital asset space includes Bitcoin, altcoins, and tokens. Bitcoin is the first and only legitimate asset here: it's the only one capable of scaling to meet the entire world's demands. And when we say Bitcoin, we mean BSV.

An altcoin is a digital asset that is not Bitcoin, but is still the native asset on its blockchain. Even these assets differ: only those based on proof-of-work (POW) transaction processing contain the necessary economic incentives in their protocols to function effectively. Those based on proof-of-stake (POS) or hybrid POW/POS processing algorithms have warped incentives and are open to manipulation.

Tokens are another thing entirely. They exist as smart contracts on another blockchain, for example ERC20 standard tokens on Ethereum, EOS or similar. Their ownership can be transferred or traded like any asset, and the records of those transfers are kept on a blockchain, but their nature makes them fundamentally different from Bitcoin and altcoins. If the base blockchain fails for some reason, so do all the contracts and tokens that live on it.
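To make that dependency concrete, here is a minimal, hypothetical sketch in Python (not Solidity, and not any real contract's code) of what an ERC20-style token boils down to: a balance table managed by a program that only works as long as its host chain does. The HostChain and TokenContract names are illustrative assumptions.

```python
# Illustrative sketch only: an ERC20-style token reduced to its essentials.
# A real token is a smart contract executed by the host chain (e.g. Ethereum);
# here the "host chain" is just a flag we can switch off to show the dependency.

class HostChain:
    def __init__(self, name):
        self.name = name
        self.alive = True          # if the base chain fails, contracts on it fail too

class TokenContract:
    def __init__(self, chain, symbol, supply, issuer):
        self.chain = chain
        self.symbol = symbol
        self.balances = {issuer: supply}   # token ownership is contract state, not chain-native coins

    def transfer(self, sender, recipient, amount):
        if not self.chain.alive:
            raise RuntimeError(f"{self.symbol} cannot move: host chain {self.chain.name} is down")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

ethereum = HostChain("Ethereum")
token = TokenContract(ethereum, "TOKEN", 1_000_000, "issuer")
token.transfer("issuer", "alice", 100)   # works while the host chain works
ethereum.alive = False
# token.transfer("alice", "bob", 10)     # would raise: the token dies with its host chain
```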

Then there are assets that are native to their own protocols but use a different, non-blockchain-based system entirely. The best-known examples of this are Ripple XRP and Stellar Lumens.

Chainlink's tradeable asset is called Link, and it's a token based on an Ethereum contract. So it's clear that Link, whatever its utility, is no Bitcoin challenger, as it depends on Ethereum to exist. Tether (USDT), the popular stablecoin nominally (but not actually) backed by USD reserves, is in fact a number of different tokens living on multiple blockchains.

(Some) tokens are OK

As mentioned above, there's nothing inherently wrong with the concept of tokens. The problems arise when they're presented as something equal to Bitcoin or other native blockchain assets, and/or traded as speculative assets regardless of their actual utility.

The Forbes article mentions: "Chainlink's recent gains have also been attributed to the world's largest bitcoin and cryptocurrency exchange by volume, Binance." Binance, and other similar exchanges, became popular by listing hundreds, if not thousands, of tokens alongside Bitcoin. Market data sites usually compare their prices, market caps and trading volumes in one big list without any explanation of the difference. These practices have led to speculative bubbles, price pumps and money-losing drops like the ICO frenzy of 2017-18.

For the record, we're not suggesting Chainlink itself is a bad idea, or that the Link token has no value or utility. It's a decentralized oracle network that facilitates connectivity through the use of external adapters, which connect oracles to any API endpoint. This would allow smart contracts of any kind to hook into data sources, extending their functionality.

In fact, developers at Google Cloud have built trial demonstrations with Chainlink, using its nodes to listen for API calls, execute the job requested, and make payments to the source. Google Cloud Developer Advocate Allen Day wrote that Chainlink could be used in prediction markets, hedging against blockchain platform risk, and enabling commit/reveals across Ethereum using submarine sends.

It could be a pretty useful and feasible project, if it lived on Bitcoin BSV. Unless it's capable of migrating at some stage, it could face future issues with Ethereum's scalability and the uncertainty surrounding its protocol.

Don't worry about all that, just pump the price

The Forbes article we saw, written by a crypto and blockchain contributor, devotes only two lines to what Chainlink actually does, and only mentions the Google trials briefly in its final sentence.

The rest of the article is devoted to the Link token's price, percentage gains, and comparisons to BTC's price performance in the past few months. It's a strange parallel to draw, given the two assets are completely different things, but it's clear where the author's priorities lie and what he'd like readers to do.

Poor communication of the differences in digital asset type, structure, purpose, and utility has caused all of them to be mislabeled "cryptocurrencies." It has led many an uninformed investor to leap blindly into the ocean of unregulated exchanges, with a false belief that a Bitcoin challenger is ready to roll, and that buying in is the path to quick riches.

Usually it isn't; without that knowledge, it's more like an invitation to keep holding an expensive yet worthless bag long after all the speculators have moved on to the next target. Unfortunately, mainstream media sites use their famous brands to draw traffic and dollars, feeding the speculation beast another meal.

Exchanges and data sites could help by listing public proof-of-work assets separately from proof-of-stake ones, and tokens in a category of their own. But since their income depends on not highlighting these differences, we won't hold our breath waiting for current offerings to do that.

Only Bitcoin BSV can provide the scalability and stability future utility tokens will need to serve enterprise-class applications. The tokens themselves need to be useful, and trading should be only for usage, not price pumping. And if they haven't already learned the hard way, readers would be better served by paying attention to trusted news written by experienced blockchain reporters. These articles will at least examine the technology and won't try to sell anything.

New to Bitcoin? Check out CoinGeek's Bitcoin for Beginners section, the ultimate resource guide to learn more about Bitcoin, as originally envisioned by Satoshi Nakamoto, and blockchain.

Basic Attention Token Price Negative Move To Reach $0.15 Support Level – The Coin Republic

Source: Coin360

The 7-day chart shows the price correction faced by the altcoin, which brought the price down from $0.18 to the major support level of $0.15. The crypto asset has since shown positive movement, avoiding any further fall below that major support.

An upward move from the major support level indicates that BAT still has the potential to build sustainable bullish momentum in the coming days. The altcoin started the year at $0.18 and reached a year-high of $0.30 before the major downtrend hit the crypto market.

Over the past few days, BAT pushed to break the crucial resistance level of $0.20 and succeeded. A break below $0.15 would indicate that the altcoin is vulnerable to a slide toward the critical support area of $0.10. However, judging by the price recovery of the past two days, the broader market pullback does not look severe, and a further significant drop can likely be avoided.

Source: Tradingview

The technical chart shows the bearish momentum the altcoin faced over the previous month, followed by a price recovery. Basic Attention Token has started to show signs of positive momentum, visible at the tip of the downtrend line.

The technical indicators and oscillators also favor bullish momentum. The MACD has finally reached the bullish zone after lingering in bearish territory. The RSI dropped to around 40 but, with its current positive turn, strongly favors BAT bulls. The CCI has also shown positive divergence out of the oversold region, indicating that the altcoin can build long-term bullish momentum in the crypto market.
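For readers unfamiliar with these oscillators, the sketch below shows one common way to compute a 14-period RSI from a series of closing prices. It uses a simple average of gains and losses; charting platforms such as TradingView typically apply Wilder's smoothing, so exact readings will differ, and the price series here is made up purely for illustration.

```python
# Simple-average RSI sketch; real charting tools usually apply Wilder's smoothing.
def rsi(prices, period=14):
    if len(prices) < period + 1:
        raise ValueError("need at least period + 1 prices")
    changes = [b - a for a, b in zip(prices[:-1], prices[1:])]
    recent = changes[-period:]
    avg_gain = sum(c for c in recent if c > 0) / period
    avg_loss = sum(-c for c in recent if c < 0) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# Hypothetical closes drifting from $0.18 down toward $0.15, then bouncing:
closes = [0.180, 0.178, 0.175, 0.172, 0.170, 0.168, 0.165, 0.162,
          0.158, 0.155, 0.152, 0.150, 0.152, 0.155, 0.158]
print(round(rsi(closes), 1))   # about 21 for this made-up series: depressed by the downtrend
```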

Resistance level: $0.18 and $0.20

Support level: $0.15 and $0.12

Dogecoin Price Sustaining On $0.0020 Level With Overall Bullish Market – The Coin Republic

Source: Coinmarketcap

Dogecoin has been consolidating over the past week: on the 7-day chart it started at $0.0018 and topped out at $0.0020, a level that has been acting as key resistance for the Shiba Inu-themed coin. Market forces, however, gave Dogecoin the boost it needed to break the $0.0020 resistance.

The current price sits at $0.002034, with a market capitalization of $250,486,942 and a traded volume of $163,545,377. The current circulating supply is 124,654,460,995 DOGE.
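As a quick sanity check on figures like these, market capitalization is simply price multiplied by circulating supply; the small gap between the quoted cap and the product below presumably reflects the price having moved between snapshots.

```python
price = 0.002034                  # quoted DOGE price in USD
supply = 124_654_460_995          # quoted circulating supply
print(f"${price * supply:,.0f}")  # ~$253,547,174 vs the quoted $250,486,942
```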

The short-term prediction for Dogecoin is bullish. The only barrier for the altcoin will be the resistance level of $0.0023. It showed its potential at the start of February, when it crossed the $0.0030 resistance level and touched its year-high of $0.0035.

Source: Tradingview

The Dogecoin price chart reflects the bearish momentum built up since the start of March. The downtrend was significant for the altcoin, as can easily be seen on the graph.

DOGE has been consolidating in the range between $0.0015 and $0.0020. Yesterday's significant uptrend broke that bracket, with the crypto asset climbing to $0.0022.

The technical indicators and oscillators somewhat favor the bulls. The MACD has climbed back into the bullish zone after dipping into bearish territory, which is certainly a good sign for Dogecoin holders.

The 24-hour RSI is also trending positive and is approaching the bullish level of 60, which likewise favors the bulls.

The CCI has managed a positive divergence while avoiding the oversold region, indicating there is still some buying interest left in the market and that the profit-booking stage among investors is still some way off.

Resistance level: $0.0023 and $0.0025

Support level: $0.0015 and $0.0014

Litecoin LTC price holds steady at $42, but analyst warns of a potential correction – Cryptopolitan

Another day of mostly sideways movement sees Bitcoin holding onto $7,000, up 0.90 percent over 24 hours. Litecoin joins Ethereum, Bitcoin SV, and EOS as the only top-10 coins in the red. At the time of writing, the Litecoin LTC price has lost 0.33% and is trading at $42.35.

After seeing a sharp increase yesterday in which the LTC price jumped from about $39 to $42, Litecoin has since managed to hold on to much of those gains. After looking at the technicals, David Smith has stated that Litecoin has been trading in a tight range today between $41.80 and $43.15. He, therefore, suggests trading the confirmed breakout from one of the boundaries.

Litecoin LTC price chart by Trading View

Alex Clay believes the Litecoin LTC price is set to continue its descent. As per his analysis, the price has found strong support at the $41.92 mark. However, with the price having formed a descending triangle pattern, the trader states that a confirmed breakout below this support level would make it prudent to take a short position.

Litecoin LTC price chart by Trading View

In Grayscale's most recent quarterly report, the cryptocurrency asset management firm reported that it holds almost 2% of the circulating supply of Bitcoin. This figure is up from the 0.1% recorded at the end of 2019. The large uptick in ownership is attributed to increasing institutional interest in the Grayscale Bitcoin Trust (GBTC). As per the report, the firm's ten crypto funds drew in over $500 million in investments, its best quarter on record.

Disclaimer: The information provided is not trading advice but an informative analysis of the price movement. Cryptopolitan.com holds no liability towards any investments based on the information provided on this page.

AI Could Save the World, If It Doesn't Ruin the Environment First – PCMag Portugal


When Mohammad Haft-Javaherian, a student at the Massachusetts Institute of Technology, attended MIT's Green AI Hackathon in January, it was out of curiosity to learn about the capabilities of a new supercomputer cluster being showcased at the event. But what he had planned as a one-hour exploration of a cool new server drew him into a three-day competition to create energy-efficient artificial-intelligence programs.

The experience resulted in a revelation for Haft-Javaherian, who researches the use of AI in healthcare: "The clusters I use every day to build models with the goal of improving healthcare have carbon footprints," Haft-Javaherian says.

The processors used in the development of artificial intelligence algorithms consume a lot of electricity. And in the past few years, as AI usage has grown, its energy consumption and carbon emissions have become an environmental concern.

"I changed my plan and stayed for the whole hackathon to work on my project with a different objective: to improve my models in terms of energy consumption and efficiency," says Haft-Javaherian, who walked away with a $1,000 prize from the hackathon. He now considers carbon emissions an important factor when developing new AI systems.

But unlike Haft-Javaherian, many developers and researchers overlook or remain oblivious to the environmental costs of their AI projects. In the age of cloud-computing services, developers can rent online servers with dozens of CPUs and strong graphics processors (GPUs) in a matter of minutes and quickly develop powerful artificial intelligence models. And as their computational needs rise, they can add more processors and GPUs with a few clicks (as long as they can foot the bill), not knowing that with every added processor, they're contributing to the pollution of our green planet.

The recent surge in AI's power consumption is largely caused by the rise in popularity of deep learning, a branch of artificial-intelligence algorithms that depends on processing vast amounts of data. "Modern machine-learning algorithms use deep neural networks, which are very large mathematical models with hundreds of millions, or even billions, of parameters," says Kate Saenko, associate professor at the Department of Computer Science at Boston University and director of the Computer Vision and Learning Group.

These many parameters enable neural networks to solve complicated problems such as classifying images, recognizing faces and voices, and generating coherent and convincing text. But before they can perform these tasks with optimal accuracy, neural networks need to undergo training, which involves tuning their parameters by performing complicated calculations on huge numbers of examples.

"To make matters worse, the network does not learn immediately after seeing the training examples once; it must be shown examples many times before its parameters become good enough to achieve optimal accuracy," Saenko says.
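A toy example makes the repeated-pass point concrete. The sketch below (plain NumPy, purely illustrative and not any production training code) fits a single-parameter linear model by gradient descent: every epoch re-processes the whole dataset, so total compute grows roughly with parameters × examples × epochs, which is why billion-parameter networks trained over many passes on huge corpora consume so much energy.

```python
# Toy illustration: each epoch re-computes gradients over every training example,
# so total compute grows with (parameters x examples x epochs).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000)             # 1,000 training examples
y = 3.0 * x + rng.normal(0, 0.1, size=1000)   # target: slope ~3 plus noise

w = 0.0                                       # a single trainable parameter
lr = 0.5
for epoch in range(100):                      # the data is revisited 100 times
    grad = np.mean(2 * (w * x - y) * x)       # gradient of mean squared error w.r.t. w
    w -= lr * grad                            # one parameter update per pass

print(round(w, 2))  # close to 3.0; a real network repeats this for billions of parameters
```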

All this computation requires a lot of electricity. According to a study by researchers at the University of Massachusetts, Amherst, the electricity consumed during the training of a transformer, a type of deep-learning algorithm, can emit more than 626,000 pounds of carbon dioxide, nearly five times the emissions of an average American car. Another study found that AlphaZero, Google's Go- and chess-playing AI system, generated 192,000 pounds of CO2 during training.

To be fair, not all AI systems are this costly. Transformers are used in a fraction of deep-learning models, mostly in advanced natural-language processing systems such as OpenAI's GPT-2 and BERT, which was recently integrated into Google's search engine. And few AI labs have the financial resources to develop and train expensive AI models such as AlphaZero.

Also, after a deep-learning model is trained, using it requires much less power. "For a trained network to make predictions, it needs to look at the input data only once, and it is only one example rather than a whole large database. So inference is much cheaper to do computationally," Saenko says.

Many deep-learning models can be deployed on smaller devices after being trained on large servers. Many applications of edge AI now run on mobile devices, drones, laptops, and IoT (Internet of Things) devices. But even small deep-learning models consume a lot of energy compared with other software. And given the expansion of deep-learning applications, the cumulative costs of the compute resources being allocated to training neural networks are developing into a problem.

"We're only starting to appreciate how energy-intensive current AI techniques are. If you consider how rapidly AI is growing, you can see that we're heading in an unsustainable direction," says John Cohn, a research scientist with IBM who co-led the Green AI hackathon at MIT.

According to one estimate, by 2030, more than 6 percent of the world's energy may be consumed by data centers. "I don't think it will come to that, though I do think exercises like our hackathon show how creative developers can be when given feedback about the choices they're making. Their solutions will be far more efficient," Cohn says.

"CPUs, GPUs, and cloud servers were not designed for AI work. They have been repurposed for it and, as a result, are less efficient than processors that were designed specifically for AI work," says Andrew Feldman, CEO and cofounder of Cerebras Systems. He compares the usage of heavy-duty generic processors for AI to using an 18-wheel truck to take the kids to soccer practice.

Cerebras is one of a handful of companies that are creating specialized hardware for AI algorithms. Last year, it came out of stealth with the release of the CS-1, a huge processor with 1.2 trillion transistors, 18 gigabytes of on-chip memory, and 400,000 processing cores. Effectively, this allows the CS-1, the largest computer chip ever made, to house an entire deep learning model without the need to communicate with other components.

"When building a chip, it is important to note that communication on-chip is fast and low-power, while communication across chips is slow and very power-hungry," Feldman says. "By building a very large chip, Cerebras keeps the computation and the communication on a single chip, dramatically reducing overall power consumed. GPUs, on the other hand, cluster many chips together through complex switches. This requires frequent communication off-chip, through switches and back to other chips. This process is slow, inefficient, and very power-hungry."

The CS-1 uses a tenth of the power and space of a rack of GPUs that would provide the equivalent computation power.

Satori, the new supercomputer that IBM built for MIT and showcased at the Green AI hackathon, has also been designed to perform energy-efficient AI training. Satori was recently rated as one of the world's greenest supercomputers. "Satori is equipped to give energy/carbon feedback to users, which makes it an excellent laboratory for improving the carbon footprint of both AI hardware and software," says IBM's Cohn.

Cohn also believes that the energy sources used to power AI hardware are just as important. Satori is now housed at the Massachusetts Green High Performance Computing Center (MGHPCC), which is powered almost exclusively by renewable energy.

"We recently calculated the cost of a high workload on Satori at MGHPCC compared to the average supercomputer at a data center using the average mix of energy sources. The results are astounding: One year of running the load on Satori would release as much carbon into the air as is stored in about five fully-grown maple trees. Running the same load on the 'average' machine would release the carbon equivalent of about 280 maple trees," Cohn says.

Yannis Paschalidis, the Director of Boston University's Center for Information and Systems Engineering, proposes a better integration of data centers and energy grids, which he describes as demand-response models. "The idea is to coordinate with the grid to reduce or increase consumption on-demand, depending on electricity supply and demand. This helps utilities better manage the grid and integrate more renewables into the production mix," Paschalidis says.

For instance, when renewable energy supplies such as solar and wind power are scarce, data centers can be instructed to reduce consumption by slowing down computation jobs and putting low-priority AI tasks on pause. And when there's an abundance of renewable energy, the data centers can increase consumption by speeding up computations.
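As a hedged illustration of that idea, the sketch below (hypothetical job names and thresholds, not a real grid or scheduler API) pauses or throttles low-priority AI jobs when the renewable share of supply is low and lets them run flat out when it is high.

```python
# Hypothetical demand-response sketch: throttle low-priority AI jobs
# based on how much of the grid's current supply is renewable.

def schedule(jobs, renewable_fraction):
    """Return (job_name, action) pairs for one scheduling interval."""
    decisions = []
    for name, priority in jobs:
        if priority == "high":
            decisions.append((name, "run"))        # never pause urgent workloads
        elif renewable_fraction >= 0.6:
            decisions.append((name, "run_fast"))   # abundant renewables: speed up
        elif renewable_fraction >= 0.3:
            decisions.append((name, "run_slow"))   # moderate supply: slow down
        else:
            decisions.append((name, "pause"))      # scarce renewables: defer
    return decisions

jobs = [("model-retrain", "low"), ("inference-api", "high"), ("hyperparam-sweep", "low")]
print(schedule(jobs, renewable_fraction=0.2))
# [('model-retrain', 'pause'), ('inference-api', 'run'), ('hyperparam-sweep', 'pause')]
```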

The smart integration of power grids and AI data centers, Paschalidis says, will help manage the intermittency of renewable energy sources while also reducing the need to have too much stand-by capacity in dormant electricity plants.

Scientists and researchers are looking for ways to create AI systems that don't need huge amounts of data during training. After all, the human brain, which AI scientists try to replicate, uses a fraction of the data and power that current AI systems use.

During this year's AAAI Conference, Yann LeCun, a deep-learning pioneer, discussed self-supervised learning, deep-learning systems that can learn with much less data. Others, including cognitive scientist Gary Marcus, believe that the way forward is hybrid artificial intelligence, a combination of neural networks and the more classic rule-based approach to AI. Hybrid AI systems have proven to be more data- and energy-efficient than pure neural-network-based systems.

"It's clear that the human brain doesn't require large amounts of labeled data. We can generalize from relatively few examples and figure out the world using common sense. Thus, 'semi-supervised' or 'unsupervised' learning requires far less data and computation, which leads to both faster computation and less energy use," Cohn says.

Google Engineers ‘Mutate’ AI to Make It Evolve Systems Faster Than We Can Code Them – ScienceAlert

Much of the work undertaken by artificial intelligence involves a training process known as machine learning, where AI gets better at a task such as recognising a cat or mapping a route the more it does it. Now that same technique is being used to create new AI systems, without any human intervention.

For years, engineers at Google have been working on a freakishly smart machine learning system known as the AutoML system (or automatic machine learning system), which is already capable of creating AI that outperforms anything we've made.

Now, researchers have tweaked it to incorporate concepts of Darwinian evolution and shown it can build AI programs that continue to improve upon themselves faster than they would if humans were doing the coding.

The new system is called AutoML-Zero, and although it may sound a little alarming, it could lead to the rapid development of smarter systems - for example, neural networks designed to more accurately mimic the human brain with multiple layers and weightings, something human coders have struggled with.

"It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks," write the researchers in their pre-print paper. "We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space."

The original AutoML system is intended to make it easier for apps to leverage machine learning, and already includes plenty of automated features itself, but AutoML-Zero takes the required amount of human input way down.

Using a simple three-step process - setup, predict and learn - it can be thought of as machine learning from scratch.

The system starts off with a selection of 100 algorithms made by randomly combining simple mathematical operations. A sophisticated trial-and-error process then identifies the best performers, which are retained - with some tweaks - for another round of trials. In other words, the neural network is mutating as it goes.

When new code is produced, it's tested on AI tasks - like spotting the difference between a picture of a truck and a picture of a dog - and the best-performing algorithms are then kept for future iteration. Like survival of the fittest.
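A drastically simplified caricature of that evolutionary loop is sketched below in plain Python; it is not the AutoML-Zero codebase. Candidate "programs" are short sequences of basic math operations, fitness is measured against an arbitrary toy target, and the best performers survive each generation with small mutations.

```python
# Tiny caricature of AutoML-Zero's evolutionary search (not Google's code).
# A candidate "program" is a list of simple ops applied to the input x;
# fitness is how well the program approximates a toy target function.
import random

OPS = {"add1": lambda v: v + 1, "double": lambda v: v * 2,
       "square": lambda v: v * v, "neg": lambda v: -v}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program, data):
    # lower error = fitter; the target f(x) = (2x)^2 is chosen arbitrarily
    return -sum((run(program, x) - (2 * x) ** 2) ** 2 for x in data)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

random.seed(1)
data = [0.5, 1.0, 1.5, 2.0]
population = [[random.choice(list(OPS)) for _ in range(3)] for _ in range(100)]
for generation in range(50):
    population.sort(key=lambda p: fitness(p, data), reverse=True)
    survivors = population[:20]                                      # keep the best performers
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print(population[0])  # with luck, ['square', 'double', 'double'], which computes 4*x**2 exactly
```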

And it's fast too: the researchers reckon up to 10,000 possible algorithms can be searched through per second per processor (the more computer processors available for the task, the quicker it can work).

Eventually, this should see artificial intelligence systems become more widely used, and easier to access for programmers with no AI expertise. It might even help us eradicate human bias from AI, because humans are barely involved.

Work to improve AutoML-Zero continues, with the hope that it'll eventually be able to spit out algorithms that mere human programmers would never have thought of. Right now it's only capable of producing simple AI systems, but the researchers think the complexity can be scaled up rather rapidly.

"While most people were taking baby steps, [the researchers] took a giant leap into the unknown," computer scientist Risto Miikkulainen from the University of Texas, Austin, who was not involved in the work, told Edd Gent at Science. "This is one of those papers that could launch a lot of future research."

The research paper has yet to be published in a peer-reviewed journal, but can be viewed online at arXiv.org.

This Artificial Intelligence Extracts Emotions And Shows What People Are Feeling – Forbes

An Italian artificial intelligence (AI) company that specializes in natural language reading and semantics is using its AI tech to extract emotions and sentiment from 63,000 English-language social media posts on Twitter every 24 hours to create a semantic analysis of people's feelings during COVID-19.

Expert System collects the data in the same time frame, 10 am EST (3 pm CET), on the same day of each week. The data is analyzed every 24 hours and interpreted by Sociometrica. The company uses the most frequently used coronavirus-related hashtags to filter the data, such as #coronalockdown, #covid19, #coronavirusuk, #stayathome, #stayhomesavelives, #coronaviruspandemic, #clapforourcarers, and #isolationlife.
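The article does not describe Expert System's internal models, but a heavily simplified illustration of the general approach, filtering posts by tracked hashtags and then tagging emotions by keyword, might look like the sketch below (made-up posts and word lists; real semantic analysis is far more sophisticated than keyword matching).

```python
# Simplified illustration only: filter posts by tracked hashtags, then tag emotions
# by keyword. Expert System's actual semantic/NLP pipeline is far more advanced.
from collections import Counter

HASHTAGS = {"#coronalockdown", "#covid19", "#stayathome", "#clapforourcarers"}
EMOTION_WORDS = {
    "fear": {"afraid", "scared", "worried", "anxious"},
    "gratitude": {"thank", "grateful", "heroes"},
    "hope": {"hope", "better", "recover"},
}

def tag_emotions(posts):
    counts = Counter()
    for post in posts:
        text = post.lower()
        if not any(tag in text for tag in HASHTAGS):
            continue                       # only analyze posts carrying tracked hashtags
        for emotion, words in EMOTION_WORDS.items():
            if any(word in text for word in words):
                counts[emotion] += 1
    return counts

sample = [
    "So worried about my parents right now #covid19",
    "Thank you NHS heroes! #clapforourcarers",
    "Week 4 of #stayathome, hope things get better soon",
]
print(tag_emotions(sample))   # roughly: one post each for fear, gratitude and hope
```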

Expert System and Sociometrica analyze the sentiment of 63,000 social media posts each day to determine the emotional state of the internet in response to COVID-19

Walt Mayo, CEO of Expert System Group, said that social media sentiment analysis shows that communications are dominated by fear and anxiety around the coronavirus crisis, how it is unfolding, and the efforts to combat it.

"We also have seen growing criticism of individual behavior that is considered irresponsible and goes against advice to follow social distancing and other recommendations to flatten the curve," added Mayo. "But we also have seen growing expressions of gratitude toward health care workers and emerging signs of hope more broadly."

Mayo believes it's important to monitor changes in people's sentiment because some of the success of the anti-virus strategy depends on the behavior of individuals. From the data, the general trend shows that fear is the most widespread emotion.

Mayo says that sentiment data from two weeks ago, in early April, indicated that people were afraid because they wanted to return to their normal lives; they were insisting on answers both regarding the progression of the pandemic and actions to combat the virus. "Strong criticism was leveled at those who didn't respect safety distancing rules and other behavior [..] that would prevent the spread of the virus," said Mayo.

"The days preceding Easter were a turning point, with more positive emotions correlated to a growing expression of action around the commitment to the fight against the virus and the courage of doctors and nurses working at the forefront of the fight and the confidence in science," said Mayo.

Data from April 17, 2020, showed that positive emotions, including hope and love expressed towards health care personnel, saw a slight increase from 21.6% to 23.9% in 24 hours.

Artificial intelligence, 3D scanning being used to improve safety at oil and gas sites – The Province

AI researchers Cory Janssen and Nicole Janssen, co-founders of AltaML in Edmonton, March 7, 2019. Ed Kaiser / Postmedia

An Edmonton-based tech company is using artificial intelligence to spot potential safety risks at oil and gas facilities.

Last week, AltaML announced a partnership with engineering and design firm Kleinfelder in which the two companies will pair 3D reality scans of facilities with artificial intelligence (AI) to look for potential problems and risks.

Chris Fletcher, business development manager with AltaML, said the use of AI and machine learning, which is a subset of AI, is meant to be another tool for safety inspectors and plant operators. He said the company's goal is to bring AI to the blue-collar industry.

"The goal is to basically put higher quality safety plant recommendations in front of inspectors so that they can catch the ones that require more attention earlier on and spend less time looking at safety infractions or concerns that didn't need to be looked at in the first place," he said, adding that the end result is facilities becoming safer and more productive.

Fletcher said fireproofing facilities, for example, can be particularly challenging because of the constant monitoring that's needed to watch for degradation. This is where the 3D scanning and AI come in to maintain that monitoring. He said inspectors are able to check plants and facilities even remotely.

"Right now, we're focused on fireproofing and a couple of other small infrastructure assets," he said. "That's what we're doing in the short term. Long term is capturing the whole facility, being able to raise those red flags throughout the whole facility."

Fletcher wouldn't say how many facilities are using this technology.

AltaML began operations about three years ago and has since grown to employ around 65 people. Fletcher said the company was started to take advantage of the data-science and computing-science talent coming out of the University of Alberta. He said the university has top professors, with students coming from all over the world to learn under them, but the problem was that after graduation they wouldn't stick around.

"They ended up going to work for Facebook, Amazon, Apple, Google and so on," Fletcher said. "What our founder Cory Janssen did was he went out, built a partnership with (the university). He hired a good chunk of the data scientists who were out of these courses and basically built a company where our whole mission is to help large enterprise organizations, traditionally blue-collar, deploy AI."

jlabine@postmedia.com

Twitter.com/jefflabine

Artificial intelligence: The thinking machine – Urgent Communications

Artificial intelligence is a bit of a buzz term these days, but what do people really mean when they say AI? And why should local governments care?

First of all, AI is extremely misunderstood. We aren't necessarily talking about HAL from 2001: A Space Odyssey; we're talking about the thinking machines Alan Turing speculated about back in the 1950s. According to the Brookings Institution, AI is generally thought to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention." More simply put, AI uses algorithms to make decisions using real-time data. But unlike more traditional machines that can only respond in predetermined ways, AI can act on data: it can analyze it and respond to it.

The concept has been evolving and the technology has become more sophisticated, but it's still a little nebulous, particularly for folks working in local government. It seems everyone kind of knows what AI is, but no one is exactly sure how they can apply it in their communities.

I spoke with Eyal Feder-Levy, the CEO and Founder of ZenCity, an AI-based tool that helps local government leaders listen to and synthesize conversations going on in their communities on social media, about the implications of AI for local governments, and how they can utilize these new tools in meaningful, beneficial ways. The following is a gently edited transcription of that discussion.

Derek Prall: So, when you say you use AI tools to analyze social media conversations, crunch this data and make it meaningful, can you tell me what that means? How does this work?

Eyal Feder-Levy: Let's use the current situation that local governments are facing as an example. First, I have to say I have nothing but admiration for local government leaders right now. Cities and counties are on the front lines of this global crisis that we're facing; they have to create the policies that will respond to this. They have to shape the information that's going out there. So in this current crisis, cities have a really important job to play. This means they have to constantly know what's working and what isn't working. They need to know if the messaging they're putting out is resonating with people. They need to know if people are worried about child care or tax breaks for their businesses, or whether they're worried about where to buy groceries. What are the things that they are prioritizing that we as local governments need to respond to in order for our communities to survive this crisis?

One of the only channels where we can still hear the population in this social-distancing reality is online. People are talking more than ever on platforms like Facebook, Twitter and Instagram. We're talking about a massive amount of data. If we take a city like Dayton, Ohio, we're seeing somewhere along the lines of tens of thousands of these online conversations in a week. Where AI comes into play is that no one in city hall has the time to go over 80,000 conversations a week and try to make sense out of them. We can't. So it's amazing we have this information, it's amazing we have this data, but we have to find a way to make sense out of it fast. This is where we as a city use AI. These are basically algorithms that break down the data in meaningful ways so it can be acted on.

Prall: Okay, that definitely makes sense, but I want to take a bit of a step back. I think a lot of elected and appointed officials aren't necessarily the most tech-savvy people. When you say artificial intelligence to someone who doesn't consider themselves to be good with technology, how do you talk about what this is as a concept? What is AI?

Feder-Levy: The first thing I want to say about AI is that it's not robots coming to take our jobs; it's not something scary that only mathematicians can understand. It's actually part of our daily lives already. It's embedded in the technological and software tools we use every day. It's something that, if we understand the basic concepts of it, can be a very strong tool to help us automate a lot of things we don't have enough staff to do at a manageable level.

To read the complete article, visit American City & County.

Govt bets on artificial intelligence, data analytics to weed out shell cos – Business Standard

The corporate affairs ministry is betting on artificial intelligence and data analytics as key elements in the fight against the menace of shell companies as it works to put in place an ecosystem that will have "zero tolerance" for non-compliance with regulations.

Continuing its efforts to build a robust corporate governance system and ensure a high level of compliance, the ministry is also in the process of developing an advanced version of the MCA 21 portal.

The portal is used for submission of requisite filings under the companies law and is also a repository of data on corporates in the country.

Corporate Affairs Secretary Injeti Srinivas told PTI that once the third version of MCA 21 becomes fully operational, the portal would make it "almost impossible for a shell company to survive."

Generally, shell companies are those which are not complying with regulations and many such entities are allegedly used for money laundering and other illegal activities.

Noting that the third version of the portal might be fully operational in a year from now, the secretary said the ecosystem would have zero tolerance for non-compliance.

"Surveillance with respect to compliance will be on auto pilot mode with artificial intelligence (AI) and data analytics," he said.

The MCA 21 system was started in 2006, and currently the second version is operational.

There are nearly 12 lakh (1.2 million) active companies in the country. Active companies are those that are in compliance with various regulatory requirements under the Companies Act.

Over the past two to three years, the ministry has been deregistering the names of companies from official records for prolonged non-compliance.

"From the trend I see, after 4.25 lakh shell companies having got struck off, the numbers getting added each year is reducing. This is a clear indication that the earlier scenario of shell companies openly indulging in accommodation entries has become a matter of past," Srinivas said.

Along with weeding out shell companies, the KYC (Know Your Client) drive for directors and companies has encouraged greater compliance.

"Now, more and more companies are becoming compliant. Compliance levels in terms of filings has crossed 80 per cent. The latest fresh start scheme for companies and settlement scheme for LLPs (Limted Liability Partnerships) are expected to further improve compliance levels it should soon cross 90 per cent," he noted.

At the end of February, there were around 19,89,777 registered companies in the country. Out of them, 7,44,014 companies were closed, 41,974 entities were in the process of being struck-off and 2,170 were assigned dormant status, as per data compiled by the ministry.

According to the ministry, there were 11,95,045 active companies as on February 29.
