
No-Code Analytics: The Best Introduction to Data Science – Analytics Insight

No-code analytics has been one of the best additions to the data science space.

Introduction

Although reading books and watching lectures is a great way to learn analytics, it is best to learn by doing. However, getting started can be quite tricky with languages such as Python and R if you do not have a coding background. Not only do you need to know what you are doing in terms of analytical procedures, but you also need to understand the nuances of programming languages, which adds to the list of things to learn just to get started. Therefore, the best middle ground between knowledge acquisition (books, videos, etc.) and conducting advanced analytics (Python, R, etc.) is open-source analytics software. These tools are great for both knowledge acquisition and actually doing analysis, as documentation is built into the software and you can start doing relatively complex tasks with only mouse clicks. Even if you know how to code, the same analysis is usually faster to conduct with these tools.

The term data analytics has become synonymous with programming languages such as Python and R. Although these powerful languages are necessary for conducting advanced analytics with the latest and greatest algorithms, they are not necessary to start analyzing complex datasets! Data analytics software can either be open-source (Orange) or have a free version associated with it (RapidMiner). These tools are great for beginners, as the time it takes to learn the nuances of coding languages can instead be spent on the data analytics process and statistical theory, which is important for Python and R users as well. Think about it: if you woke up one day and knew everything about Python and R, would you still be able to conduct thorough and accurate analysis? Even if your code works and an output is given, the output may be wrong due to a lack of knowledge of the data analytics domain. We live in a beautiful world where very smart people create completely free software so the public can use it without a price barrier. A great website that illustrates the trend of open-source software is alternativeto.net. On this website, you can type in any paid commercial software and it will recommend open-source alternatives that can serve as substitutes. The purpose of this article is to provide the ideal introduction to data analytics for anyone who is interested in this fascinating subject. The software we will be covering can handle analytical tasks such as regression, classification, clustering, dimensionality reduction, association rule mining, deep learning/neural networks, ensemble methods, text mining, genetic algorithms, network analysis, image analytics, time series, bioinformatics, and spectroscopy. Some of the software listed can also be connected to a SQL database. In this article, we will go over the various no-code software packages that are either completely open-source or have a powerful free/academic version associated with them.

RapidMiner was founded in 2007 and is still widely used today. It is used by over 40,000 organizations and has been doing well according to the Gartner Magic Quadrant. The types of analyses that can be done are quite broad, ranging from simple regression to genetic algorithms and deep learning. It is a point-and-click interface where widgets are placed and connected to one another in order to perform analytics. These widgets are essentially pre-written blocks of code that conduct certain operations, and hyperparameters can be tuned in a side panel after clicking a widget. One thing that makes RapidMiner unique is its automated machine learning functionality. With just a couple of clicks, various algorithms will run and output performance metrics so you can compare the results and choose the best model. RapidMiner believes in no black boxes, so it is possible to see how the algorithm works after running the automated machine learning. Other capabilities, such as text mining and big data (e.g., Radoop), are available through the various extensions that it provides. In my opinion, the strongest part of RapidMiner is how rapidly (pun intended) one can learn the theory and underlying mechanisms of how a model works. The documentation is built into the software, so you can right-click on each functionality/algorithm and view a description of it. Each description covers a synopsis, a brief explanation of the overall algorithm, a description of each hyperparameter, and a tutorial on how to use it. The tutorial is extremely useful as you can use it as a template for your own dataset. In the tutorials, widgets are already assembled with sample data, so you are given a working example. Just plug in your own data, make a few changes, and you are good to go! RapidMiner also incorporates a wisdom-of-crowds functionality where statistics are given on hyperparameter tuning and widget creation. For instance, are you trying to determine the number of trees for your random forest? RapidMiner will state something like "50% chose a value between 100 and 149," along with a bar graph that shows what percentage of RapidMiner users chose which values. This streamlines the learning process by showing what the professionals are choosing. Overall, I highly recommend RapidMiner for learning analytics, and it should be one of the first tools someone uses when starting out.

Orange is probably the most visually pleasing software on this list and has some of the best data visualizations. It also has the most features of any completely free, open-source software covered here, which means you can take the knowledge you learn into the corporate world since it is free and open-source for everyone. Interestingly, the software runs on Python, so many of the visualizations should be familiar to Python users. The creators of the software are biostatisticians, so more scientific add-ons are included, such as bioinformatics and spectroscopy. Orange also uses widgets similar to RapidMiner's and can be installed through the Anaconda environment or as stand-alone software.
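Because Orange is built on Python, its workflows can also be driven from a short script instead of the widget canvas. The snippet below is a minimal sketch based on the Orange3 library, not something taken from the article; module paths can shift between releases, so treat the names as assumptions.

```python
# Minimal sketch: scripting Orange (the Orange3 Python library) instead of using
# the widget canvas. Module paths are assumptions and may vary between releases.
import Orange

data = Orange.data.Table("iris")                          # bundled sample dataset
learner = Orange.classification.LogisticRegressionLearner()
model = learner(data)                                     # train on the whole table
predictions = model(data)                                 # predicted class indices
print(predictions[:5])
```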

JASP (Jeffreys's Amazing Statistics Program) is mostly used for traditional statistics in the social sciences but has machine learning functionality as well. It is more of a substitute for SPSS, and the user interface looks very similar. The interesting thing about JASP is that the R language works under the hood, so the data visualizations will look familiar to R users. It is a great way to learn traditional statistics, as you can load workflows for particular statistical techniques; an already-conducted analysis is downloaded along with explanations of why each step is done. Documentation is also built into the software, so you can easily learn about the statistical techniques and how to use them correctly, along with pre-loaded example datasets. Academic papers and books are cited under each statistical technique for further reading, and the relevant R packages are listed for each technique as well. In JASP, it is possible to conduct t-tests, ANOVA, regression, factor analysis, Bayesian statistics, meta-analysis, network analysis, structural equation modeling, and other classical statistical techniques, as well as machine learning.

Voyant Tools specializes in corpus analytics, which deals with text data. To get started with minimal effort, you can pre-load corpus data from Shakespeare's plays and have a dataset ready for analysis. There are a great number of functionalities within the software, and it is unique among the tools covered here in that it takes the form of a dashboard where each tile can be swapped for another form of analysis. Most of the analytical techniques are unique ways of visualizing textual data, and statistical techniques such as topic clustering are also possible.

This one is a little different from the others, as it pertains to obtaining data as opposed to analyzing it. Web scraping is a popular way to obtain data from web pages, since it gives you more control over how the data is collected compared to using secondary data. There are plenty of free web scraping services, but my favorite is DataMiner. With the free version, you can scrape up to 500 pages a month (although some websites, such as Glassdoor, are restricted unless you pay a small monthly fee). It is very intuitive and comes with live customer support to help with your web scraping projects. The software works by clicking on certain parts of the page so that the underlying HTML can be detected. It then finds similar elements on the page and gathers each instance as a row, placing them all in one column. This can be repeated for other areas, and you end up with a nice, personalized dataset.
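For readers curious what a point-and-click scraper is doing behind the scenes, the sketch below shows the same idea in plain Python: pick a repeating HTML pattern and collect one row per match. The URL and CSS selectors are placeholders of my own, not anything DataMiner actually uses.

```python
# Rough sketch of what point-and-click scrapers do under the hood: find every
# element matching a repeating pattern and emit one row per match.
# The URL and selectors below are hypothetical placeholders.
import csv

import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/listings", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = [["title", "price"]]
for card in soup.select("div.listing"):                   # each repeated block
    title = card.select_one("h2.title")
    price = card.select_one("span.price")
    rows.append([
        title.get_text(strip=True) if title else "",
        price.get_text(strip=True) if price else "",
    ])

with open("dataset.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```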

https://dataminer.io/

We live in a fascinating world where talented people are creating software to help newcomers in certain fields, which exponentially increases the collective knowledge of society. There are other great analytical tools that I didn't mention, such as KNIME, Weka, QGIS, and Jamovi, since I'm not as familiar with those, so go out there and explore! Five, ten, one hundred years from now, this list will be outdated, and new types of code-free software will enter the field, each with a core competency. For instance, I can see the future having specific software for each data type (image, audio, etc.) or each type of data mining technique. We also have access to free real-world datasets from websites such as Kaggle, where you can easily start exploring datasets that intrigue you. Datasets can range from Pokémon statistics to healthcare, so the possibilities for analysis are endless!

So, if you are interested in analytics, download a dataset that fascinates you and immediately start conducting advanced analytics with just mouse clicks using the software above. If it hooks you, add a keyboard to your toolkit and move on to more advanced methods using Python and R. The latest and greatest methods of analysis (which can be found on GitHub) can only be run with those languages, so that would be the next step. Then, you can try to replicate scientific papers from paperswithcode.com. I hope this article serves as a good introduction to the field. Welcome to the world of analytics!

Dennis Baloglu graduated from UNLV with Bachelor's degrees in Finance and Marketing along with Master's degrees in Hotel Administration and Management Information Systems. He taught at the UNLV William F. Harrah College of Hospitality for 2 years and is currently an Enterprise Marketing Analyst at MGM Resorts International. He is passionate about the intersection between data analytics and the hospitality industry, as he believes data-driven decision-making and algorithms are the keys to industry success.

Company Designation: MGM Resorts International

Location: Las Vegas, NV

Link to Website: https://sites.google.com/view/dennis-baloglu/home

Social Media: https://www.linkedin.com/in/dennisbaloglu/


The rest is here:

No-Code Analytics The Best Introduction to Data Science - Analytics Insight


Your neighborhood matters: A machine-learning approach to the geospatial and social determinants of health in 9-1-1 activated chest pain – DocWire…


Res Nurs Health. 2021 Nov 24. doi: 10.1002/nur.22199. Online ahead of print.

ABSTRACT

Healthcare disparities in the initial management of patients with acute coronary syndrome (ACS) exist. Yet, the complexity of interactions between demographic, social, economic, and geospatial determinants of health hinders incorporating such predictors in existing risk stratification models. We sought to explore a machine-learning-based approach to study the complex interactions between the geospatial and social determinants of health to explain disparities in ACS likelihood in an urban community. This study identified consecutive patients transported by Pittsburgh emergency medical service for a chief complaint of chest pain or ACS-equivalent symptoms. We extracted demographics, clinical data, and location coordinates from electronic health records. Median income was based on US census data by zip code. A random forest (RF) classifier and a regularized logistic regression model were used to identify the most important predictors of ACS likelihood. Our final sample included 2400 patients (age 59 ± 17 years, 47% Females, 41% Blacks, 15.8% adjudicated ACS). In our RF model (area under the receiver operating characteristic curve of 0.71 ± 0.03), age, prior revascularization, income, distance from hospital, and residential neighborhood were the most important predictors of ACS likelihood. In regularized regression (Akaike information criterion = 1843, Bayesian information criterion = 1912, χ² = 193, df = 10, p < 0.001), residential neighborhood remained a significant and independent predictor of ACS likelihood. Findings from our study suggest that residential neighborhood constitutes an upstream factor to explain the observed healthcare disparity in ACS risk prediction, independent from known demographic, social, and economic determinants of health, which can inform future work on ACS prevention, in-hospital care, and patient discharge.

PMID:34820853 | DOI:10.1002/nur.22199
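As an illustration only, and not the study's code or data, the sketch below mirrors the modeling setup the abstract describes: a random forest and an L2-regularized logistic regression compared by area under the ROC curve on held-out patients, with synthetic stand-ins for the demographic, geospatial, and clinical predictors.

```python
# Illustrative sketch of the abstract's modeling approach (synthetic data, not the
# study's): random forest vs. L2-regularized logistic regression, scored by AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2400                                      # same order as the reported sample size
X = rng.normal(size=(n, 5))                   # stand-ins: age, income, distance, etc.
y = rng.binomial(1, 0.158, size=n)            # ~15.8% adjudicated ACS prevalence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "regularized logistic": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))
```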

Originally posted here:
Your neighborhood matters: A machine-learning approach to the geospatial and social determinants of health in 9-1-1 activated chest pain - DocWire...


Manchin and Cortez Masto kill chances of reforming outdated hardrock mining law – Grist

This story was originally published by High Country News and is reproduced here as part of the Climate Desk collaboration.

Amid the recent skirmishes over revising the reconciliation bill, known as the Build Back Better Plan, lawmakers once again skipped a chance to reform the General Mining Law of 1872.

Under this outdated law, hardrock miners can extract profitable minerals such as gold and silver from public lands without having to pay any federal royalties. Though it has been challenged several times over the past few decades, mainly by Democrats, the law has not been significantly updated in the nearly 150 years since its passage.

In August, a House committee, chaired by Raúl Grijalva, D-Ariz., tried to modernize the legislation by adding language to the reconciliation bill to establish federal royalties of between 4% and 8% on these mines. This would have been the most consequential update that the mining law has received in the nearly 15 decades since President Ulysses S. Grant signed it into existence.

However, hardrock royalty reform never even reached a vote, thanks to Democratic Sens. Catherine Cortez Masto, D-Nev., and Joe Manchin, D-W.V., who made his personal fortune in coal mining. Manchin initially signaled support for the royalty provisions in October when he spoke in front of the Senate Committee on Energy and Natural Resources, stating that he could never imagine that we don't receive royalties on so many things we produce in this country. But he later reversed course and reportedly promised Cortez Masto that he'd block any mining royalties, effectively killing reform before it even reached the full Senate. On Nov. 4, royalty reform was officially out of both the House and Senate bills.

These senators' actions all but guarantee that the U.S. public will continue to miss out on billions of dollars in revenue that could have supported the Build Back Better Plan's priorities, including paid family leave and important climate investments. The bill also would have held companies accountable for cleaning up the abandoned mines that pockmark the West. Instead, mining companies will continue to exploit public land for their own financial gain.

The General Mining Law of 1872 was passed in the wake of the mid-19th century California gold rush as part of a push to encourage white settlement of the West. Previously, prospectors sometimes staked claims to land without the permission of the federal government, let alone that of the Indigenous people who were being dispossessed of the land in question.

In order to regulate the blossoming industry, Congress passed a few early mining laws beginning in 1866. The General Mining Law of 1872 took their place. It established the location system, which permitted individual miners and corporations to stake claims to mineral discoveries on the public domain, on land that had never been in private ownership.

A long list of royalty-free minerals besides gold and silver fall under this location-system regulation, including lithium and copper, which are becoming more valuable due to their use in green energy technologies like solar panels and electric vehicles. The industry has extracted some $300 billion worth of these minerals from public lands since 1872, according to Earthworks. And though mining companies have evolved tremendously since the days of digging with pickaxes and now use some of the largest machinery on earth, the return they make to the American public remains as paltry as ever.

This is why a broad base of critics, from conservation organizations to lawmakers, think it is high time to reform the 1872 law. Currently, the government earns hardrock mining fees for things like registration and annual maintenance, which generated about $71 million in revenue in fiscal year 2019, but it's a small amount compared to the money that would be derived from royalties.


For example, in September, the House Natural Resources Committee proposed a new royalty that would have raised $2 billion over 10 years. And that's likely a conservative estimate: The federal government has no data on the amount or value of the hardrock minerals extracted from public lands, which account for more than 80% of the mineral mines on federal lands, according to the Government Accountability Office.

In contrast, mines operating under the more heavily regulated leasing system, for resources like coal and oil shale, account for just 17% of mining on federal lands, but generate much more revenue through royalties. In fiscal year 2018 alone, they brought in $550 million. Coal is by far the primary revenue generator under leasing-system mining.

The proposed reforms also would have added a reclamation fee for abandoned mines and increased the yearly maintenance fee for claims from $165 to $200 per claim, adding another combined $1 billion in revenue over the next decade.

This money could, among other things, provide funding to address a myriad of environmental and health threats across the Western U.S. caused by past mining. Before the 1970s, for example, companies abandoned mines once work was complete, leaving behind tens of thousands of often-toxic scars on the land that could cost over $50 billion to address.

Attempts to reform the General Mining Law have been going on for years, but a well-funded network of lobbyists and special interest groups has continued to thwart any success. Mining interests regularly spend north of $16 million annually on lobbying; this year, they've already spent over $13 million.

The National Mining Association spent the most in 2021, coming in at $1.5 million, according to data from OpenSecrets, a nonprofit campaign finance and lobbying watchdog organization. Several companies that would be directly impacted by mining law reform have lobbied against it, including Newmont Corp., a gold-mining company that has invested over $800,000 to fight efforts to change the law.

This helps explain why one ongoing effort to reform the law, the Hardrock Mining and Reclamation Act, has stalled in recent years. Democrats have introduced the legislation in Congress at least six times since 2007. The bill's most recent iteration, in 2019, failed amid a major industry-led lobbying blitz. Among those fighting it were mining giant BHP Group and the National Mining Association, which targeted the bill in a $1.2 million lobbying campaign.

And mining industry lobbyists have power beyond their financial influence: They are also intricately linked to the government. According to OpenSecrets, nearly 65% of the industry's lobbyists previously worked in the government, many in positions related to mining.

The lobbying campaigns help illuminate why Manchin, who said in October that it was time to bring the outdated law into the 21st century, was willing to suddenly reverse course. According to OpenSecrets, he received more campaign donations from the mining industry than anyone else in Congress, raising nearly $50,000 from the industry in the current fundraising cycle. Cortez Masto's campaign also benefited: Both the National Mining Association trade group and Barrick Gold Corp., one of Nevada's largest mining companies, have recently donated to her campaign.

Nevada's economy depends on gold mining; nearly $8.2 billion worth of the metal was extracted in the state in 2020. Cortez Masto's predecessor, former Nevada Democrat Harry Reid, was against any challenges to the 1872 Mining Law, calling them, in a 2009 op-ed, ill-conceived reform efforts that would have hurt rural Nevada. It seems that Cortez Masto is picking up right where Reid left off, protecting the industry in an attempt to keep rural voters.

Neither Manchin nor Cortez Masto responded to requests for comment.

This story was produced in collaboration with the Project on Government Oversight, a nonpartisan independent watchdog that investigates and exposes waste, corruption and abuse of power.

Read more here:

Manchin and Cortez Masto kill chances of reforming outdated hardrock mining law - Grist


Ripple seeing ‘good progress’ in SEC case over XRP, outcome expected next year – CNBC

Fintech company Ripple is making great strides in its legal feud with the U.S. Securities and Exchange Commission, CEO Brad Garlinghouse told CNBC on Monday.

Garlinghouse said he expects the case, which centers on XRP, the world's seventh-biggest cryptocurrency, will likely reach a conclusion next year.

"We're seeing pretty good progress despite a slow-moving judicial process," he told CNBC's Dan Murphy.

"Clearly we're seeing good questions asked by the judge. And I think the judge realizes this is not just about Ripple, this will have broader implications."

Garlinghouse said he was hopeful there would be closure next year.

Ripple, which is based in San Francisco, generated a lot of buzz during the crypto frenzy of late 2017 and 2018, which saw the prices of bitcoin, ether and other cryptocurrencies skyrocket to record highs.

XRP, a token Ripple is closely associated with, benefited from that rally, hitting an all-time high above $3. It has since declined dramatically from that price but is riding the latest crypto wave with a more than 370% gain year-to-date.

Ripple's technology is designed to let banks and other financial services firms send money across borders faster and at a lower cost. The company also markets another product that utilizes XRP for cross-border payments called On-Demand Liquidity.

The SEC is concerned about Ripple's ties to XRP, alleging the company and its executives sold $1.3 billion worth of the tokens in an unregistered securities offering. But Ripple contends that XRP should not be considered a security, a classification that would bring it under much more regulatory scrutiny.

It comes as regulators around the world are taking a closer look at crypto, a market that is still largely unregulated but has boomed in the last year.

Garlinghouse said the United Arab Emirates, Japan, Singapore and Switzerland are examples of countries showing "leadership" when it comes to regulating crypto, while China and India have cracked down on the industry.

"In general, the direction of travel is very positive," Garlinghouse said.

Brady Dougan, the former CEO of Credit Suisse, said regulation is a key area in crypto that's likely to develop over time.

"It's a market that's early in its development," Dougan, who now runs fintech firm Exos, told CNBC. "I think it's a healthy market and it's one that will continue to develop in a positive way."

Ripple, a privately-held company, was last valued at $10 billion and counts the likes of Alphabet's venture capital arm GV, Andreessen Horowitz and Japan's SBI Holdings as investors.

Read the rest here:
Ripple seeing 'good progress' in SEC case over XRP, outcome expected next year - CNBC


Design of AI may change with the open-source Apache TVM and a little help from startup OctoML – ZDNet

In recent years, artificial intelligence programs have been prompting change in the design of computer chips, and novel computers have likewise made possible new kinds of neural networks in AI. A powerful feedback loop is at work.

At the center of that sits the software technology that converts neural net programs to run on novel hardware. And at the center of that sits a recent open-source project gaining momentum.

Apache TVM is a compiler that operates differently from other compilers. Instead of turning a program into typical chip instructions for a CPU or GPU, it studies the "graph" of compute operations in a neural net, in TensorFlow or PyTorch form, such as convolutions and other transformations, and figures out how best to map those operations to hardware based on dependencies between the operations.
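As a rough sketch of that flow, and not OctoML's own pipeline, the example below imports a traced PyTorch model into TVM's Relay intermediate representation and compiles it for a concrete target; exact module paths and signatures vary between TVM releases, so treat the details as assumptions.

```python
# Sketch of the TVM flow described above: take a framework graph (here, a traced
# PyTorch model), import it into the Relay IR, and compile it for a hardware target.
# API details differ between TVM releases; treat this as illustrative.
import torch
import tvm
from tvm import relay

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)                # capture the compute graph

# Convert the traced graph into Relay, TVM's graph-level representation.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", list(example.shape))])

# Compile for a specific backend; swapping the target string ("cuda",
# "llvm -mcpu=skylake-avx512", ...) retargets the same graph to different hardware.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```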

At the heart of that operation sits a two-year-old startup, OctoML, which offers Apache TVM as a service. As explored in March by ZDNet's George Anadiotis, OctoML is in the field of MLOps, helping to operationalize AI. The company uses TVM to help companies optimize their neural nets for a wide variety of hardware.

Also: OctoML scores $28M to go to market with open source Apache TVM, a de facto standard for MLOps

In the latest development in the hardware and research feedback loop, TVM's process of optimization may already be shaping aspects of how AI is developed.

"Already in research, people are running model candidates through our platform, looking at the performance," said OctoML co-founder Luis Ceze, who serves as CEO, in an interview with ZDNet via Zoom. The detailed performance metrics mean that ML developers can "actually evaluate the models and pick the one that has the desired properties."

Today, TVM is used exclusively for inference, the part of AI where a fully-developed neural network is used to make predictions based on new data. But down the road, TVM will expand to training, the process of first developing the neural network.

"Already in research, people are running model candidates through our platform, looking at the performance," says Luis Ceze, co-founder and CEO of startup OctoML, which is commercializing the open-source Apache TVM compiler for machine learning, turning it into a cloud service. The detailed performance metrics mean that ML developers can "actually evaluate the models and pick the one that has the desired properties."

"Training and architecture search is in our roadmap," said Ceze, referring to the process of designing neural net architectures automatically, by letting neural nets search for the optimal network design. "That's a natural extension of our land-and-expand approach" to selling the commercial service of TVM, he said.

Will neural net developers then use TVM to influence how they train?

"If they aren't yet, I suspect they will start to," said Ceze. "Someone who comes to us with a training job, we can train the model for you" while taking into account how the trained model would perform on hardware.

That expanding role of TVM, and the OctoML service, is a consequence of the fact that the technology is a broader platform than what a compiler typically represents.

"You can think of TVM and OctoML by extension as a flexible, ML-based automation layer for acceleration that runs on top of all sorts of different hardware where machine learning models runGPUs, CPUs, TPUs, accelerators in the cloud," Ceze told ZDNet.

"Each of these pieces of hardware, it doesn't matter which, have their own way of writing and executing code," he said. "Writing that code and figuring out how to best utilize this hardware today is done today by hand across the ML developers and the hardware vendors."

The compiler, and the service, replace that hand tuning: today at the inference level, with the model ready for deployment; tomorrow, perhaps, in the actual development and training.

Also: AI is changing the entire nature of compute

The crux of TVM's appeal is greater performance in terms of throughput and latency, and efficiency in terms of computer power consumption. That is becoming more and more important for neural nets that keep getting larger and more challenging to run.

"Some of these models use a crazy amount of compute," observed Ceze, especially natural language processing models such as OpenAI's GPT-3 that are scaling to a trillion neural weights, or parameters, and more.

As such models scale up, they come with "extreme cost," he said, "not just in the training time, but also the serving time" for inference. "That's the case for all the modern machine learning models."

As a consequence, without optimizing the models "by an order of magnitude," said Ceze, the most complicated models aren't really viable in production, they remain merely research curiosities.

But performing optimization with TVM involves its own complexity. "It's a ton of work to get results the way they need to be," observed Ceze.

OctoML simplifies things by making TVM more of a push-button affair.

"It's an optimization platform," is how Ceze characterizes the cloud service.

"From the end user's point of view, they upload the model, they compare the models, and optimize the values on a large set of hardware targets," is how Ceze described the service.

"The key is that this is automatic no sweat and tears from low-level engineers writing code," said Ceze.

OctoML does the development work of making sure the models can be optimized for an increasing constellation of hardware.

"The key here is getting the best out of each piece of hardware." That means "specializing the machine code to the specific parameters of that specific machine learning model on a specific hardware target." Something like an individual convolution in a typical convolutional neural network may become optimized to suit a particular hardware block of a particular hardware accelerator.

The results are demonstrable. In benchmark tests published in September for the MLPerf test suite for neural net inference, OctoML had a top score for inference performance for the venerable ResNet image recognition algorithm in terms of images processed per second.

The OctoML service has been in a pre-release, early access state since December of last year.

To advance its platform strategy, OctoML earlier this month announced it had received $85 million in a Series C round of funding from hedge fund Tiger Global Management, along with existing investors Addition, Madrona Venture Group and Amplify Partners. The round of funding brings OctoML's total funding to $132 million.

The funding is part of OctoML's effort to spread the influence of Apache TVM to more and more AI hardware. Also this month, OctoML announced a partnership with ARM Ltd., the U.K. company that is in the process of being bought by AI chip powerhouse Nvidia. That follows partnerships announced previously with Advanced Micro Devices and Qualcomm. Nvidia is also working with OctoML.

The ARM partnership is expected to spread use of OctoML's service to the licensees of the ARM CPU core, which dominates mobile phones, networking and the Internet of Things.

The feedback loop will probably lead to other changes besides design of neural nets. It may affect more broadly how ML is commercial deployed, which is, after all, the whole point of MLOps.

As optimization via TVM spreads, the technology could dramatically increase portability in ML serving, Ceze predicts.

Because the cloud offers all kinds of trade-offs with all kinds of hardware offerings, being able to optimize on the fly for different hardware targets ultimately means being able to move more nimbly from one target to another.

"Essentially, being able to squeeze more performance out of any hardware target in the cloud is useful because it gives more target flexibility," is how Ceze described it. "Being able to optimize automatically gives portability, and portability gives choice."

That includes running on any available hardware in a cloud configuration, but also choosing the hardware that happens to be cheaper for the same SLAs, such as latency, throughput and cost in dollars.

With two machines that have equal latency on ResNet, for example, "you'll always take the highest throughput per dollar," the machine that's more economical. "As long as I hit the SLAs, I want to run it as cheaply as possible."
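A toy illustration of that decision rule, with made-up instance names, latencies, and prices rather than anything from OctoML: among benchmarked targets, keep only those meeting the latency SLA and then take the cheapest.

```python
# Toy illustration of "as long as I hit the SLAs, run it as cheaply as possible":
# filter benchmarked targets by the latency SLA, then pick the lowest cost.
# Instance names, latencies, and prices are invented for the example.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    latency_ms: float       # measured latency of the optimized model
    cost_per_hour: float    # cloud price for the instance type

def cheapest_within_sla(targets, sla_ms):
    eligible = [t for t in targets if t.latency_ms <= sla_ms]
    return min(eligible, key=lambda t: t.cost_per_hour) if eligible else None

candidates = [
    Target("gpu-large", latency_ms=4.0, cost_per_hour=3.06),
    Target("gpu-small", latency_ms=9.5, cost_per_hour=0.90),
    Target("cpu-avx512", latency_ms=18.0, cost_per_hour=0.38),
]
print(cheapest_within_sla(candidates, sla_ms=10.0))       # -> gpu-small
```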

See more here:
Design of AI may change with the open-source Apache TVM and a little help from startup OctoML - ZDNet


Here’s Why Harmony’s Cryptocurrency Is Skyrocketing Today – Motley Fool

What happened

Harmony's (CRYPTO:ONE) One token is surging again today. The cryptocurrency's price was up roughly 13% over the last 24 hours of trading as of 6 p.m. ET, and it was up more than 33% earlier in the day.

There's been a general rally in tokens tied to blockchains that provide a service rather than just serving as a currency, and Harmony's One token is participating in it. The token's price is also climbing thanks to recent updates announced by the network's development team.

Image source: Getty Images.

Harmony allows data to be easily moved across separate blockchains, including Ethereum, Binance, and several other chains. Applications can be built on the Harmony blockchain, and users pay the cost of carrying out functions with the One token.

Harmony published a message on Twitter on Nov. 22 announcing that its blockchain was now able to serve more than four times as much user activity thanks to a new update. The news sent the One token higher, and momentum has continued into Tuesday's trading. Harmony also published an update on its bridge project with Bitcoin in a tweet today, announcing that it was working with its contracted auditing company on fine-tuning, and that the auditing firm would be evaluating additional solutions on the project.

Harmony's token is up roughly 27% over the last seven days of trading. The One token now has a market capitalization of roughly $3.5 billion, making it the 53rd-largest cryptocurrency by market capitalization.

Harmony continues to look like a high-risk investment, but it's possible that its cryptocurrency will climb significantly above current prices if the network's blockchain winds up being the foundation for popular applications. With interest heating up in metaverses and decentralized finance applications built on blockchains, there could be increasing demand that works to drive One's token price higher.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

Originally posted here:
Here's Why Harmony's Cryptocurrency Is Skyrocketing Today - Motley Fool


Seeing the Future: How to Use Predictive Analytics in Your Business – Silicon UK

Your business has been collecting masses of data for several years, but is your enterprise using that information to drive your company forward? Data for its own sake is useless. However, when data forms the basis of a well-designed analytical process, tangible and actionable information can be revealed.

According to IBM, among survey respondents who had implemented predictive analytics, 66% say it provides very high or high business value. Predictive analytics initiatives show a median ROI of 145%, compared with a median ROI of 89% for non-predictive business intelligence initiatives.

Having a predictive analytics strategy for your company is now a commercial imperative. You have already completed the work to rationalise and connect datasets, the next critical step is to make that information work for your business.

Speaking to Silicon UK, Andy Crisp, senior vice president and Global Data Owner at Dun & Bradstreet, explained what makes predictive analytics reliable enough to base accurate predictions on.

"The reliability of any predictive analytics varies based on a number of factors: firstly, how many data points you have to build your model, given that the more corroborating data points available, the more effective the analytics," said Crisp. "The second is how accurate the data collection mechanism for the data points is and, as a consequence, the quality of the data itself. And lastly, how much history you have for these data points and how far into the future you're trying to predict. If all these factors are considered, and the data itself is of a high enough quality, predictive analytics can be extremely reliable and hugely useful for businesses."
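One concrete way to put numbers on those factors, sketched below with synthetic data rather than anything from Dun & Bradstreet, is to backtest: train on the older part of the history, then measure how error grows as the forecast horizon lengthens.

```python
# Sketch of backtesting forecast reliability on synthetic data: fit on the older
# history, then check how error grows as you predict further into the future.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
t = np.arange(120)                                   # ten years of monthly history
y = 50 + 0.4 * t + rng.normal(0, 3, size=t.size)     # trend plus noise

train = 96                                           # hold out the last two years
model = LinearRegression().fit(t[:train].reshape(-1, 1), y[:train])

for horizon in (6, 12, 24):                          # error typically grows with horizon
    idx = np.arange(train, train + horizon)
    preds = model.predict(idx.reshape(-1, 1))
    print(f"{horizon:>2}-month horizon MAE:", round(mean_absolute_error(y[idx], preds), 2))
```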

How reliable can predictive analytics be, especially when a business is basing major strategic decisions on these predictions? Predictive analytics needs data, but are a predictive analytics engine's results only as good as the questions you ask it?

Ash Finnegan, digital transformation officer at Conga, also offered this advice to any business leaders looking to improve how they use predictive analytics: "It is crucial that companies first establish their digital maturity. This is where they currently stand in their digital transformation journey and how their data is currently being processed and stored. To do this, companies must first evaluate their operational model, assess its suitability, and identify any pain points along the entire data cycle. The key is to arrive at a clear understanding of how and where change needs to occur in a phased manner, to progress and improve the organisation's overall operability and unify the data cycle."

Finnegan continued: "True business intelligence enables organisations to take themselves to the next level. By establishing their digital maturity and recognising which areas of the operational model need to be improved, predictive analytics will be empowered. Leaders will have far greater visibility of their revenue operations and will have established true business and data intelligence. They will be able to identify other areas that may need to be tweaked or fine-tuned, leaving them far more agile and adaptable for any given outcomes."

Implementing predictive data analytics is clearly how businesses can improve their bottom line, but as the Harvard Business Review Pulse Survey concludes, there are obstacles to overcome:

There are many interrelated obstacles to improving the use of analyzed data. The top three barriers cited by survey respondents were lack of employees with necessary skills or training (61%), lack of high-quality data (43%), and a company culture that tends to limit access to information (24%). While the first two challenges are significant, the third one might be the most pressing.

In fact, many of the barriers cited by respondents indicate lack of access, including lack of necessary technology (22%), an organizational structure that impedes access to or use of analysed data (20%), lack of leadership prioritization (19%), and hard-to-access analysed data (18%).

This analysis is telling as it reveals that infrastructure, leadership and even business culture can have an impact on how successful any predictive analytics program could be.

And there is an urgency to embrace these technologies. In their Technology Vision report, Accenture concluded that 92% of executives report that their organization is innovating with urgency and a call to action this year, and that over half (63%) of executives state the pace of digital transformation is accelerating.

"Leadership demands that enterprises prioritize technology innovation in response to a radically changing world," says Accenture. "Small pilots and incremental scaling are an obsolete luxury, and the friction between research, development, and large-scale deployment must diminish or disappear."

However, the Pulse Survey concluded that less than a quarter (24%) of respondents rate their organisation's effectiveness using analyzed data as less than optimum. Clearly, more work needs to be done by some enterprises to become data-driven businesses that use the insights predictive analytics can bring to them.

Dun & Bradstreets Andy Crisp also outlined clearly defined ways predictive analytics can be used: Businesses can leverage predictive analytics in many ways, but three of the most powerful areas in which to use it would be to predict risk, identify opportunities and improve efficiency.Crucially, leveraging predictive analytics on client payment behaviour can reduce the risk of bad debt thereby improving a business cash flow.

"Perhaps just as important is using predictive analytics to analyse customer behaviour and identify opportunities to improve a product's usability, which, as a consequence, can increase revenue. Finally, analysing client consumption can improve business efficiency, ensuring businesses have what the client wants when they want it while still reducing any waste."

Ana Pinczuk, the Chief Development Officer at Anaplan, also explained: "The past year has proven that predictive analytics is critical in helping companies anticipate and react to disruption. We've seen more customers leverage predictive capabilities for everything from forecasting to territory planning as they realise how imperative that external view really is to their operations."

"One area where we've seen this really take off is within the sales organisation. The pandemic threatened traditional revenue streams at a time when businesses were highly focused on cash flow and liquidity. Now, with a fluctuating economy and job market, revenue leaders are dealing with high attrition rates among their sales reps. Predictive analytics allows sales leaders to augment internal data with predictive attributes on things like profile fit and buyer intent so they can target and prioritise accounts that are more likely to want to buy from them. This makes it easier to build fair territories and set more realistic quotas so they can optimise their sales resources and ideally retain top sales talent."

Predictive analytics has a wide application across many business processes and customer touchpoints. What is clear for all enterprises is that they must have a well-defined predictive analytics strategy, and it must be high on their agendas as we move into the post-pandemic era.

The strategic importance of data can't be overstated. When data is properly harnessed, it can deliver tangible benefits right across a business. However, when data is used as the basis for prediction, new opportunities can often reveal themselves.

David Sweenor, a senior director of product marketing at Alteryx, says that placing every piece of analytical data into its proper context is critical: "Automation and Machine Learning (ML) have one primary limitation: context. Without context, making insightful, timely, and accurate predictions is a challenge. While automated analysis functions are extremely effective, they are hollow without knowing how, and where, to apply these learnings most efficiently."

"The current skills gap continues to be an issue for businesses looking to make the most of their data, with data prep still one of the biggest issues. On average, data workers leverage more than six data sources, 40 million rows of data and seven different outputs along their analytic journey. Much of the emphasis in ML has been on the technology, not the people, and that's where failed projects are rooted."

Sweenor concluded: "Another pitfall is the ethics and bias consideration. Artificial intelligence doesn't make moral judgements. It is not inherently biased, but historical data and the creators of the model could be. When using machine learning and advanced analytic methods for predictive analytics, we need to be careful that inputs don't bias the outcomes. Today it's data, and not instinct, that facilitates most business decisions."

And what does the future of predictive analytics look like? For Nelson Petracek, global CTO of TIBCO, there are several strands to the development of this technology. Based on my conversations with customers and partners, predictive analytics will continue to evolve in a number of different ways: The technology will become more immersive and embedded, where predictive analytics capabilities will be blended seamlessly into the systems and applications with which we interact.

The technology will be made available to broader audiences (not just data scientists) through the use of continuously improving tools and model-driven approaches to development. Tools will broaden in capability to include not only model development, but also additional functions such as data collection, data quality, and model operationalisation into an end-to-end data fabric or data mesh (depending on your point of view).

Open data and AI/ML model ecosystems will grow, supported by technologies such as APIs, federated learning, and blockchain. And predictive analytics will drive new, emerging use cases around the next generation of digital applications, including metaverse applications (convergence of digital and physical worlds, powered by technologies such as IoT, digital twins, AI/ML, and XR) and the next generation of composable applications.

Can businesses really see the future? It's certainly now possible to make accurate guesses over a wide range of critical business questions. The data that is needed to make these predictions is embedded within every enterprise. The key is to ask the right questions.

Ana Pinczuk, the Chief Development Officer at Anaplan, concluded: "Business has never been more unpredictable, so companies need access to more data sources and signals than ever before to model future outcomes and react quickly to disruption. We are going to see more integrations and partnerships to access data sources and make predictive analytics and intelligence capabilities, from machine learning and AI to advanced data science techniques, more accessible to the average business user. We need to democratise access to predictive analytics, and technology partnerships are a key part of that equation."

Tim El-Sheikh, CEO and co-founder of Nebuli.

Tim El-Sheikh is a biomedical scientist, an entrepreneur since 2001, and CEO and co-founder of Nebuli, the world's first Augmented Intelligence Studio. Since the age of 10, he has been a self-taught coder, and he has a real passion for designing enhanced human experiences through intelligent algorithms. After a master's degree in Computer Science and Information Technology, Tim combined his experience in design, neuroscience, and engineering to start as an entrepreneur in online multitier system architectures in the media and advertising sectors, scientific publishing, and social enterprises.

What is the difference between business intelligence and predictive analytics?

The conventional definition of business intelligence is the application of various data mining techniques, visualisation tools, and analytics to overview a company's current position as well as its performance within the marketplace. Whilst business intelligence is about asking, "Where are we now as a business?", predictive analytics involves a more detailed analysis of past and current data trends to predict what will happen in the future in an educated way. However, our view at Nebuli is that modern business intelligence must also involve predictive analytics, and one should not be used without the other.

How reliable can predictive analytics be, especially when a business is basing major strategic decisions on these predictions?

Any educated forecast depends entirely on past experiences and accumulated data to tell us about the future. What would predictive analytics tell us about coping with future pandemics after what we have experienced? In other words, if we state our assumptions that the future prediction depends on the previous data, then we can have a better understanding of what the analytics are able to predict.

The problem comes when people do not understand the underlying assumptions in the forecast and end up producing inaccurate predictions. That is why it is essential to combine as many data sources as possible, including those generated from business intelligence, to maximise the accuracy of any assumptions.

What are businesses most interested in using predictive analytics for? For example, pricing or product design?

Predicting the future is the holy grail of making money in all sectors! Predictive analytics is heavily used in the finance sector to understand and assess risk. The future of interest rates, for instance, is of key importance to commercial lenders as well as borrowers, and for any company that trades globally, predicting exchange rates is essential to holding funds in strong currencies.

Another example is investment markets, where predicting the price of an asset is critically important for investors. Other sectors also make important use of these analytics, such as sports, where bookmakers predict possible outcomes of football matches, or retail, where predicting sales of a product under a price change is critical due to market price elasticity.

Predictive analytics needs data, but are a predictive analytics engine's results only as good as the questions you ask it?

To some extent, it is; however, getting good results from a predictive analytics engine is not that simple. It is, for example, essential to remain critical of the assumptions made from the given data and always get verification from an experienced data expert about the way these data are being used. Businesses should also have a robust quality management (QM) process around the use of data and include the outcomes in the risk register for the company.

Crucially, decisions should not be made without those elements reinforcing the validity of the data, or solely based on the instincts of leaders in the business. In addition to that, including elements of behavioural analysis of the target customers and employees' productivity is something that we also encourage at Nebuli to ensure we are happy with the outcome. Overall, it is more about building a comprehensive blueprint of your forecast.

What are the pitfalls to watch out for when using predictive analytics across an enterprise?

The biggest pitfall we see is enterprises believing that their data is structured and holds the answers to all of their questions about the future. Most of the time, this is not the case, and I would go as far as saying that there is no such thing as structured data! Why? Because your company's data can be seen as structured if it complies with your data-inputting policies, even though those policies may have been around for several years.

Your predictions are then based on your data output, which may include newly acquired data from other channels that were not part of the original data input processes. Hence, your combined output might not match the key questions you need to answer in an ever-changing data-driven world. That is why we actively advocate that enterprises adopt a comprehensive data strategy as early as possible, which is the foundation for successful business intelligence and predictive analytics.

What do you think the future of predictive analytics looks like?

Avoid the hype! Machine learning and AI are the two key buzzwords that are being pushed around as the holy grail of modern business intelligence and predictive analytics. While both AI and machine learning algorithms can add significant value, the critical point goes back to your data. Without a clear data strategy, no matter how many AI or machine learning algorithms you apply, your analysis will not be any better. In fact, AI can amplify prediction errors and biases much further if the data structure is not scrutinised, optimised or analysed in detail as part of your data strategy.

Photo by Gantas Vaiiulnas from Pexels

See more here:

Seeing the Future: How to Use Predictive Analytics in Your Business - Silicon UK


Global Marketing Automation Market Report 2021: Market to Reach $6.3 Billion by 2026 – GlobeNewswire

Dublin, Nov. 24, 2021 (GLOBE NEWSWIRE) -- The "Marketing Automation - Global Market Trajectory & Analytics" report has been added to ResearchAndMarkets.com's offering.

Global Marketing Automation Market to Reach $6.3 Billion by 2026

Amid the COVID-19 crisis, the global market for Marketing Automation, estimated at US$3.9 Billion in the year 2020, is projected to reach a revised size of US$6.3 Billion by 2026, growing at a CAGR of 8.6% over the analysis period. Cloud, one of the segments analyzed in the report, is projected to grow at a 9.6% CAGR to reach US$4.6 Billion by the end of the analysis period.
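As a quick sanity check of how those headline numbers relate, compound annual growth is just repeated multiplication; the report's exact base year and analysis window aren't spelled out here, so the figures below are an approximation rather than the report's own calculation.

```python
# Back-of-the-envelope compound-growth check (base year and period are assumptions).
def project(value, cagr, years):
    return value * (1 + cagr) ** years

# $3.9B compounding at 8.6% for six years lands near the $6.3B headline figure.
print(round(project(3.9, 0.086, 6), 2))   # ~6.4
```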

Growth in the global market is set to be driven by the rise of digital advertising, growing usage of the Internet and other technologies, and the surging popularity of social media networks. Companies are increasingly relying on digital media marketing techniques such as search engine marketing, social media marketing, online advertising and mobile advertising while continuing to engage in traditional channels to gain the benefits of both worlds.

Ensuring that the brand remains available, relevant and consistent on social media is difficult for many companies. In addition, organizations are required to regularly update blogs and information while tracking trends, measuring the effectiveness of social efforts and engaging with customers. These issues have paved the way for social media automation solutions that allow companies to realize the power of marketing automation along with social media to drive gains.

After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the On-Premise segment is readjusted to a revised 6.8% CAGR for the next 7-year period. This segment currently accounts for a 37.3% share of the global Marketing Automation market. Cloud-based tools allow marketers to gain more control over their marketing and business content. These tools allow for the proper implementation of strategies independently without the need to rely on other departments.

The U.S. Market is Estimated at $1.2 Billion in 2021, While China is Forecast to Reach $898.4 Million by 2026

The Marketing Automation market in the U.S. is estimated at US$1.2 Billion in the year 2021. The country currently accounts for a 29.31% share in the global market. China, the world's second largest economy, is forecast to reach an estimated market size of US$898.4 Million in the year 2026 trailing a CAGR of 10.6% through the analysis period.

Among the other noteworthy geographic markets are Japan and Canada, each forecast to grow at 7.1% and 7.4% respectively over the analysis period. Within Europe, Germany is forecast to grow at approximately 8.3% CAGR while Rest of European market (as defined in the study) will reach US$989.3 Million by the end of the analysis period.

In the US, the COVID-19 pandemic onset led to a significant impact on digital advertising during the early part of 2020. However, in the second half of the year, the holiday season and ad spend by political parties aided in compensating for the losses registered earlier in the year. Digital ad spend therefore increased at a double-digit rate for the year.

The increase in online shopping, home deliveries, and connected TV helped maintain the market's growth. Thriving economies, growing employment opportunities, rising income levels, continuous development of cellular markets, rising 4G penetration, and increasing spending power in major countries are driving growth prospects in the Asia-Pacific region.

Key Topics Covered:

I. METHODOLOGY

II. EXECUTIVE SUMMARY

1. MARKET OVERVIEW

2. FOCUS ON SELECT PLAYERS (Total 252 Featured)

3. MARKET TRENDS & DRIVERS

4. GLOBAL MARKET PERSPECTIVE

III. REGIONAL MARKET ANALYSIS

IV. COMPETITION

For more information about this report visit https://www.researchandmarkets.com/r/snjh6

Go here to see the original:
Global Marketing Automation Market Report 2021: Market to Reach $6.3 Billion by 2026 - GlobeNewswire


Japanese firms will test a bank-backed cryptocurrency in 2022 – Yahoo Tech

Japan is about to take a significant step toward developing a digital currency. Per Reuters, a consortium made up of approximately 70 Japanese firms said this week they plan to launch a yen-based cryptocurrency in 2022. What's notable about the project, tentatively called DCJPY, is that three of the country's largest banks will back it. At a news conference on Wednesday, Mitsubishi UFJ Financial Group, Mizuho Financial Group and Sumitomo Mitsui Financial Group said they've been meeting since last year to build a shared settlement infrastructure for digital payments.

Some of the other members of the consortium include the East Japan Railway Company and Kansai Electric Power Company. They plan to start testing the currency in the coming months. The experiment is separate from the work the Bank of Japan is doing to create a digital yen. CBDCs are something China and the US are exploring as well. For Japan, there's an additional incentive behind the push: it is a country that famously loves cash. Even as recently as 2018, 80 percent of all retail transactions in the country were completed in notes and coins. It's something the government of Japan has tried to change as a way to make the country's economy more consumer-friendly and productive.

See the original post here:
Japanese firms will test a bank-backed cryptocurrency in 2022 - Yahoo Tech


Bullish: Analysts Just Made A Significant Upgrade To Their Evolution Mining Limited (ASX:EVN) Forecasts – Simply Wall St

Evolution Mining Limited (ASX:EVN) shareholders will have a reason to smile today, with the analysts making substantial upgrades to this year's statutory forecasts. Consensus estimates suggest investors could expect greatly increased statutory revenues and earnings per share, with the analysts modelling a real improvement in business performance.

Following the upgrade, the latest consensus from Evolution Mining's 16 analysts is for revenues of AU$2.1b in 2022, which would reflect a decent 13% improvement in sales compared to the last 12 months. Per-share earnings are expected to bounce 23% to AU$0.23. Before this latest update, the analysts had been forecasting revenues of AU$1.9b and earnings per share (EPS) of AU$0.20 in 2022. So we can see there's been a pretty clear increase in analyst sentiment in recent times, with both revenues and earnings per share receiving a decent lift in the latest estimates.

View our latest analysis for Evolution Mining

It will come as no surprise to learn that the analysts have increased their price target for Evolution Mining 5.9% to AU$4.37 on the back of these upgrades. That's not the only conclusion we can draw from this data however, as some investors also like to consider the spread in estimates when evaluating analyst price targets. Currently, the most bullish analyst values Evolution Mining at AU$5.60 per share, while the most bearish prices it at AU$3.50. Analysts definitely have varying views on the business, but the spread of estimates is not wide enough in our view to suggest that extreme outcomes could await Evolution Mining shareholders.

One way to get more context on these forecasts is to look at how they compare to both past performance, and how other companies in the same industry are performing. It's clear from the latest estimates that Evolution Mining's rate of growth is expected to accelerate meaningfully, with the forecast 13% annualised revenue growth to the end of 2022 noticeably faster than its historical growth of 7.5% p.a. over the past five years. Compare this with other companies in the same industry, which are forecast to see a revenue decline of 0.3% annually. So it's clear with the acceleration in growth, Evolution Mining is expected to grow meaningfully faster than the wider industry.

The most important thing to take away from this upgrade is that analysts upgraded their earnings per share estimates for this year, expecting improving business conditions. Fortunately, they also upgraded their revenue estimates, and our data indicates sales are expected to perform better than the wider market. Given that the consensus looks almost universally bullish, with a substantial increase to forecasts and a higher price target, Evolution Mining could be worth investigating further.

Still, the long-term prospects of the business are much more relevant than next year's earnings. We have estimates - from multiple Evolution Mining analysts - going out to 2024, and you can see them free on our platform here.

Another way to search for interesting companies that could be reaching an inflection point is to track whether management are buying or selling, with our free list of growing companies that insiders are buying.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.

See the article here:

Bullish: Analysts Just Made A Significant Upgrade To Their Evolution Mining Limited (ASX:EVN) Forecasts - Simply Wall St
