
Niki Narayani Named SEC Co-Runner of the Week – Vanderbilt University

NASHVILLE, Tenn. – Senior Niki Narayani has been named SEC Co-Runner of the Week, the conference office announced Tuesday morning.

"Niki has given the program its first glimpse into what we are working relentlessly to display this season," said Althea Thomas, Vanderbilt director of cross country, track and field. "Hard work coupled with faith and execution was the formula Niki used to compete in the first meet, and it is the formula that has gotten her recognition among our SEC peers. She is a great leader for our program and a catalyst for the year."

Narayani was the women's 5K winner with a time of 18:00.88 in her come-from-behind victory, bringing Vandy to a total of 24 points. She paced the efforts of the Commodores, who began the competition with a 45-second delayed start.

Narayani's finish set the tone for the rest of the team as Vanderbilt finished in the top 11 and completed the race in less than 20 minutes. The Vanderbilt women are currently ranked No. 6 in the South Region, according to a poll by the U.S. Track & Field and Cross Country Coaches Association. The 'Dores finished ahead of Lipscomb, which is ranked seventh in the same region.

The Commodores have the weekend off before heading to Bloomington, Indiana, for the Coaching Tree Invitational on Sept. 16, hosted by Indiana University.


Ten questions about the hard limits of human intelligence – Aeon

Despite his many intellectual achievements, I suspect there are some concepts my dog cannot conceive of, or even contemplate. He can sit on command and fetch a ball, but I suspect that he cannot imagine that the metal can containing his food is made from processed rocks. I suspect he cannot imagine that the slowly lengthening white lines in the sky are produced by machines also made from rocks, like his cans of dog food. I suspect he cannot imagine that these flying repurposed dog food cans in the sky look so small only because they are so high up. And I wonder: is there any way that my dog could know that these ideas even exist? It doesn't take long for this question to spread elsewhere. Soon I start to wonder about concepts that I don't know exist: concepts whose existence I can never even suspect, let alone contemplate. What can I ever know about that which lies beyond the limits of what I can even imagine?

Attempting to answer this question only leads us to more questions. In this essay, I'm going to run through a sequence of 10 queries that provide insight into how we might begin conceiving of what's at stake in such a question and how to answer it – and there is much at stake. The question of what we can know of that which lies beyond the limits of our imagination is partially about the biological function of intelligence, and partially about our greatest cognitive prostheses, particularly human language and mathematics. It's also about the possibility of a physical reality that far exceeds our own, or endless simulated realities running in the computers of advanced nonhuman lifeforms. And it's about our technological progeny, those children who will one day cognitively eclipse us. From the perspective of my 10 queries, human exceptionalism becomes very shaky. Perhaps we are more like dogs (or single-celled paramecia) than we'd care to admit. Though human history is filled with rhapsodic testimony to human ingenuity and intelligence, this sequence of questions paints a different picture: I want to emphasise how horribly, and perhaps horrifyingly, limited and limiting our achievements are – our language, science, and mathematics.

And so, the first question in the sequence is simple:

1. On some ill-defined objective scale, are we smart or are we stupid?

For vast stretches of time, the highest level of intelligence on Earth seems to have increased very slowly, at best. Even now, our brains process sensory-motor information using all kinds of algorithmic shenanigans that allow us to do as little actual thinking as possible. This suggests that the costs associated with intelligence are high. It turns out that brains are extraordinarily expensive metabolically on a per-unit-mass basis, far more than almost all other organs (the heart and liver being the exceptions). So, the smarter an organism is, the more food it needs, or it dies. Evolutionarily speaking, it is stupid to be smart.

We do not have a good understanding of exactly how our neural hardware grants us abstract intelligence. We do not understand how brain makes mind. But given that more intelligence requires more brain mass, which results in more metabolic costs, one would expect us to have the lowest possible level of abstract intelligence required for surviving in the precise ecological niche in which Homo sapiens developed: the barest minimum intelligence needed to scrape through a few million years of hunting and gathering until we got lucky and stumbled into the Neolithic Revolution.

Is this conclusion correct? To gain insight into the question of whether were smart or stupid, note that there are multiple types of intelligence. The ability to sense the external world is one such type of cognitive capability; the ability to remember past events is another; the ability to plan a future sequence of actions is another. And there are myriad cognitive capabilities that other organisms have but that we lack. This is true even if we consider only intelligences that we have created: modern digital computers vastly outperform us computationally in myriad ways. Moreover, the small set of those cognitive tasks that we can still perform better than our digital computers is substantially shrinking from year to year.

This will continue to change. The capabilities of future terrestrial organisms will likely exceed the current level of our digitally augmented intelligence. This sense of cognitive expansion is not unique to our current moment in history. Think about the collective cognitive capability of all organisms living on Earth. Imagine a graph showing this collective capability changing over billions of years. Arguably, no matter what precise time-series analysis technique we use, and no matter how we formalise cognitive capability, we will conclude that the trend line has a strictly positive slope. After all, in no period has the highest level of some specific cognitive capability held by any entity in the terrestrial biosphere shrunk; the entire biosphere has never lost the ability to engage in certain kinds of cognitive capability. Also, there is not just growth over time in the degree of each cognitive capability among all terrestrial species, but a growth in the kinds of cognitive capability. Life has only become smarter, and smarter in different ways. If we simply extrapolate this trend into the future, we're forced to conclude that some future organisms will have cognitive capabilities that no currently living Terran species has – including us.

Despite preening in front of our collective mirror about how smart we are, it seems that we have highly limited cognitive abilities compared with those that we (or other Terran organisms) will have in the future.

However, before getting too comfortable with this conclusion, we need to look a little closer at our graph of collective capability. Up until around 50,000 years ago, the collective intelligence on Earth was increasing gradually and smoothly. But then there was a major jump as modern Homo sapiens started on a trajectory that would ultimately produce modern science, art and philosophy. It may appear as though we are still part of this major jump, this vast cognitive acceleration, and that our kinds of intelligence far exceed those of our hominin ancestors.

2. Why does there appear to be a major chasm between the cognitive capabilities of our hominin ancestors and the cognitive capabilities of modern scientists, artists and philosophers?

There is no evident fitness benefit for a savannah-forged hairless ape to be able to extract from the deepest layers of physical reality cognitive palaces like the Standard Model of particle physics, Chaitin's incompleteness theorem, or the Zen parable 'Ten Verses on Oxherding'. In fact, there are likely major fitness costs to having such abilities. So why do we have them?

To grapple with this, it's helpful to focus on the most universal of humanity's achievements, the most graphic demonstrations of our cognitive abilities: our science and mathematics. Our ability to exploit science and mathematics has provided us with cognitive prostheses and extended minds, from printing presses to artificial intelligences. Furthermore, the capabilities of those extended minds have been greatly magnified over time by the cumulative collective process of culture and technological development. In turn, these extended minds have accelerated the development of culture and technology. This feedback loop has allowed us to expand our cognitive capabilities far beyond those generated solely by genotypic evolution. The loop may even be the cause of the chasm between the cognitive capabilities of our hominin ancestors and the cognitive capabilities of the modern scientists, artists and philosophers.

Though the feedback loop has inflated our original cognitive capabilities (those generated by genotypic evolution), it is not clear that it has provided us with any wholly new cognitive capabilities. In fact, it might never be able to. Perhaps future forms of science and mathematics, generated via the feedback loop, will be forever constrained by the set of cognitive capabilities we had when we first started running the loop.

This suggests a different kind of resolution to the chasm between the cognitive abilities of our hominin ancestors and those of modern humans. Maybe the gap is not really a chasm at all. Perhaps it is more accurately described as a small divot in a vast field of possible knowledge. In an article titled 'The Unreasonable Effectiveness of Mathematics in the Natural Sciences' (1960), the Hungarian-American theoretical physicist Eugene Wigner asked why our mathematical theories work so well at capturing the nature of our physical reality. Maybe the answer to Wigner's question is that our mathematics isn't very effective at all. Maybe our mathematics can capture only a tiny sliver of reality. Perhaps the reason it appears to us to be so effective is because our range of vision is restricted to that sliver, to those few aspects of reality that we can conceive of.

The interesting question is not why our augmented minds seem to have abilities greater than those necessary for the survival of our ancestors. Rather, its whether our augmented minds will ever have the minimal abilities necessary for grasping reality.

3. Even aided by our extended minds, can we ever create entirely new forms of science and mathematics that could access aspects of physical reality beyond our conception, or are we forever limited to merely developing the forms we already have?

In 1927, an earlier version of this question was suggested by the English scientist John Burdon Sanderson Haldane in his book of essays Possible Worlds. 'Now, my own suspicion,' he wrote, 'is that the universe is not only queerer than we suppose, but queerer than we can suppose.' In the years that followed, similar verbal baubles suggested that the Universe may be stranger or odder than we can imagine or conceive. But, having other fish to fry, the authors of these early texts rarely fleshed out what they meant. They often implied that the Universe may be stranger than we can currently imagine due to limitations in current scientific understanding, rather than inherent limitations of what we can ever do with future efflorescences of our minds. Haldane, for example, believed that once we embraced different points of view, reality would open itself to us: 'one day man will be able to do in reality what in this essay I have done in jest, namely, to look at existence from the point of view of non-human minds.'

In the decades since, other forms of this question have appeared in academic literature mostly in studies of the hard problem of consciousness and the closely related mind-body problem. This work on consciousness and minds has echoed Haldane by chasing the point of view of octopuses, viruses, insects, plants, and even entire ecosystems in the search for intelligence beyond the human.

Many of these investigations have been informal, reflecting the squishy, hard-to-pin-down nature of the hard problem of consciousness. Fortunately, we can approach the underlying question of whether we can think beyond our current limits in a more rigorous manner. Consider the recently (re)popularised idea that our physical universe might be a simulation produced in a computer operated by some super-sophisticated race of aliens. This idea can be extended ad infinitum: perhaps the aliens simulating our universe might themselves be a simulation in the computer of some even more sophisticated species in a sequence of ever-more sophisticated aliens. Going in the other direction, in the not-too-distant future we might produce our own simulation of a universe, complete with entities who have cognitive capabilities. Perhaps those simulated entities can produce their own simulated universe, and so on and so on. The result would be a sequence of species, each running a computer simulation that produces the one just below it, with us somewhere in the sequence.

This question of whether we are in a simulation or not is actually rather trivial: yes, in some universes we are a simulation, and no, in some other universes we are not. For argument's sake, though, let's restrict attention to universes in which we are indeed simulated. This leads us to our next question.

4. Is it possible for an entity that exists only in a computer simulation to run an accurate computer simulation of the higher entity that simulated them?

If the answer is no, then whatever we contemplate in our universe is only a small subset of what can be known by those who reside higher in the sequence of more complex simulations. And if the answer is no, it would mean that there are deep aspects of reality that we cannot even imagine.

Of course, the answer to this question depends on the precise definitions of terms such as simulation and computer. Formal systems theory and computer science provide many theorems that suggest that, whatever definitions we adopt, the answer to the question is indeed no. However, rather than expounding on these theorems that suggest our cognitive abilities are limited, Id like to take a step back. These theorems are examples of the content of our mathematics, examples of our mathematical ability and ideas. Much of this content already suggests our cognitive abilities are too limited to fully engage with reality. But what about other aspects of our mathematics?

5. Does the form, rather than the content, of our science and mathematics suggest that the cognitive abilities of humans are also severely constrained?

Open any mathematics textbook and youll see equations linked by explanatory sentences. Human mathematics is really the sum total of every equation and explanatory sentence inside every mathematics textbook ever written.

Now notice that each of those sentences and equations is a finite sequence of marks on the page, a finite sequence of visual symbols consisting of the 52 letters of the Latin alphabet, as well as special symbols such as + and =. For example, 1 + 1 + y = 2x is a sequence of eight elements from a finite set of marks. What we call mathematical proofs are strings of such finite sequences strung together.
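The point can be made concrete with a short sketch (the alphabet below is a simplified, illustrative stand-in for the full inventory of mathematical marks, not an exhaustive list):

```python
# Human mathematics, in form, is strings over a finite alphabet of marks.
# This alphabet is an illustration; real notation has more symbols, but
# still only finitely many.
ALPHABET = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789+-*/=()")

def is_finite_symbol_string(formula: str) -> bool:
    """True if every mark in the formula is drawn from the finite alphabet."""
    return all(ch in ALPHABET for ch in formula)

formula = "1+1+y=2x"
print(len(formula))                      # 8 marks, as counted in the text
print(is_finite_symbol_string(formula))  # True
```

A proof, on this view, is nothing more than a list of such strings, each checkable mark by mark.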

This feature of human mathematics has implications for an understanding of reality in the broadest sense. To paraphrase Galileo, all our current knowledge about physics our formal understanding of the foundations of physical reality is written in the language of mathematics. Even the less formal sciences are still structured in terms of human language, using finite strings of symbols, like mathematics. This is the form of our knowledge. Our understanding of reality is nothing more than a large set of finite string sequences, each containing elements from a finite set of possible symbols.

Note that any sequence of marks on a page has no more meaning in and of itself than the sequences one might find in the entrails of a sacrificed sheep, or in the pattern of cracks in a heated tortoise shell. This observation isn't new. Much work in philosophy is a reaction to this observation that our science and mathematics are just a set of finite sequences of symbols with no inherent meaning. This work tries to formalise the precise way that such finite sequences might refer to something outside of themselves – the so-called symbol-grounding problem in cognitive science and philosophy. The field of mathematics has reacted to this observation in a similar way, expanding formal logic to include modern model theory (the study of the relationships between sentences and the models they describe) and metamathematics (the study of mathematics using mathematics).

What is truly stunning about the fact that modern science and mathematics are formulated through a sequence of marks is its exclusivity: nothing other than these finite sequences of symbols is ever found in modern mathematical reasoning.

6. Are these finite strings of symbol sequences – the form of our mathematics and languages – necessary features of physical reality, or do they instead reflect the limits of our ability to formalise aspects of reality?

This question immediately gives rise to another:

7. How would our perception of reality change if human mathematics were expanded to include infinite strings of symbol sequences?

Infinite proofs with an infinite number of lines would never reach their conclusion in finite time, if evaluated at a finite speed. To reach their conclusion in finite time, our cognitive abilities would need to implement some kind of hypercomputation or super-Turing computing, which are fancy ways of referring to speculative computers more powerful than any we can currently construct. (An example of a hypercomputer is a computer on a rocket that approaches the speed of light, and so exploits relativistic time dilation to squeeze an arbitrarily large amount of computation into a finite amount of time.)
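The arithmetic behind the rocket example can be sketched with the standard time-dilation formula (assuming, for simplicity, a constant velocity $v$):

```latex
\Delta\tau = \frac{\Delta t}{\gamma},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Here $\Delta t$ is elapsed time in the rest frame and $\Delta\tau$ is the traveller's proper time. A computer left at rest performs $\gamma\,\Delta\tau$ worth of its own running time per unit of the traveller's proper time, and as $v \to c$, $\gamma \to \infty$: an arbitrarily long computation can, in principle, fit inside a fixed, finite amount of the traveller's own time.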

But even with hypercomputation, this suggested extension of our current form of mathematics would still be presented in terms of human mathematics. What would a mathematics be like whose very form could not be described using a finite sequence of symbols from a finite alphabet?

The American philosopher Daniel Dennett and others have pointed out that the form of human mathematics, and of our sciences more generally, just happens to exactly coincide with the form of human language. Indeed, starting with Ludwig Wittgenstein, it has become commonplace to identify mathematics as a special case of human language, with its own kind of grammar like that which arises in human conversation.

The design of inter-human communication matches that of formal logic and Turing-machine theory. Some philosophers have taken this as a wonderful stroke of fortune. We happen to have a cognitive prosthesis human language capable of capturing formal logic. They presume this means we are also capable of fully capturing the laws of the physical universe.

A cynic might comment, with heavy irony: 'How lucky can you get? Humans have exactly the cognitive capabilities needed to capture all aspects of physical reality, and not a drop more!' A cynic might also wonder whether an ant, which is only capable of formulating the rules of the Universe in terms of pheromone trails, would conclude that it is a great stroke of fortune that ants happen to have the cognitive capability of doing precisely that; or whether a phototropic plant would conclude that it is a stroke of fortune that plants happen to have the cognitive capability to track the Sun, since that must mean that they can formulate the rules of the Universe.

Linguists such as Noam Chomsky and others have marvelled at the fact that human language allows recursion, that we can produce arbitrary sequences of symbols from a finite alphabet. They marvel at the fact that humans can create what appears to be an amazingly large set of human languages. But I marvel at the limits of human language. I marvel at the limits of our science and mathematics. And I marvel at the fact that these limitations appear to be universal.

8. Is it a lucky coincidence that mathematical and physical reality can be formulated in terms of our current cognitive abilities, or is it just that, tautologically, we cannot conceive of any aspects of mathematical and physical reality that cannot be formulated in terms of our cognitive capabilities?

Consider a single-celled, oblong paramecium, the kind that floats in oceans or stagnant pools. It may seem obvious, but a paramecium – like my dog – cannot conceive of the concept of a question concerning issues that have no direct impact on its behaviour. A paramecium cannot understand the possible answers we have considered for our questions concerning reality, but neither would it understand the questions themselves. More fundamentally, though, no paramecium can even conceive of the possibility of posing a question concerning physical reality. Insofar as the cognitive concept of questions and answers might be a crucial tool to any understanding of physical reality, a paramecium lacks the tools needed to understand physical reality. It presumably does not even understand what understanding reality means, in the sense that we are using the term. Ultimately, this is due to limitations in the kind of cognitive capabilities paramecia possess. But are we so different? We almost surely have similar kinds of limitations in terms of our cognitive capabilities. So, the penultimate (and ironically self-referential) question in this essay is:

9. Just as the notion of a question is forever beyond a paramecium, are there cognitive constructs that are necessary for understanding physical reality, but that remain unimaginable due to the limitations of our brains?

It may help to clarify this question by emphasising what it is not. This question does not concern limitations on what we can know about what it is that we can never know. We can conceive of many things even if they can never be known. But among those things that we can never know is a strictly smaller subset of things that we cannot imagine. The issue is what we can ever perceive of that smaller set.

For example, we can conceive of other branches of the many worlds of quantum mechanics, even if we cannot know what happens in those branches. I am not here concerned with this kind of unknowable. Nor am I concerned with values of variables that are unknown to us simply because we cannot directly observe them, such as the variables of events outside our Hubble sphere, or events within the event horizon of a black hole. These events can never be known to us for the simple reason that our ancillary engineering capabilities are not up to the task, not for any reasons intrinsic to limitations of the science and maths our minds can construct. They can be known, but we cannot find a path to such knowledge.

The concern here is what kinds of unknowable cognitive constructs might exist that we can never even be aware of, never mind describe (or implement).

The paramecium cannot even conceive of the cognitive construct of a question in the first place, never mind formulate or answer a question. I wish to draw attention to the issue of whether there are cognitive constructs that we cannot conceive of but that are as crucial to understanding physical reality as the simple construct of a question. I am emphasising the possibility of things that are knowable, but not to us, because we are not capable of conceiving of that kind of knowledge in the first place.

This returns us to an issue that was briefly discussed above, of how the set of what-we-can-imagine might evolve in the future. Suppose that what-can-be-known-but-not-even-conceived-of is non-empty. Suppose we can know something about that which we truly cant imagine.

10. Is there any way that we could imagine testing whether our future science and mathematics can fully capture physical reality?

From a certain perspective, this question might appear to be a scientific version of a conspiracy theory, writ large. One might argue that it is not so different to other grand unsolvable questions. We also can't prove that ghosts don't exist, either theoretically or empirically; nor that Marduk, the patron god of ancient Babylon, doesn't really pull the strings in human affairs. However, there are at least three reasons to suspect that we actually can find the answer to (some aspects of) the question. Firstly, we could make some inroads if we ever constructed a hypercomputer and exploited it to consider the question of what knowledge is beyond us. Secondly, and more speculatively, as our cognitive abilities grow, we might be able to establish the existence of what we can never conceive of through observation, simulation, theory or some other process. In other words, it may be that the feedback loop between our extended minds and our technology does let us break free of the evolutionary accident that formed our hominin ancestors' brains. Thirdly, suppose we encounter extraterrestrial intelligence and can plug into, for example, some vast galaxy-wide web of interspecies discourse, containing a cosmic repository of questions and answers. To determine whether there are aspects of physical reality that are knowable but that humans cannot even conceive of might require nothing more than posing that question to the cosmic forum, and then learning the answers that are shared.

Consider our evolutionary progeny in the broadest sense: not just future variants of our species that evolve from us via conventional neo-Darwinian evolution, but future members of any species that we consciously design, organic or inorganic (or both). It seems quite likely that the minds of such successors will have a larger set of things they can imagine than our own.

It also seems likely that these cognitively superior children of ours will be here within the next century. Presumably we will go extinct soon after their arrival (like all good parents making way for their children). So, as one of our last acts on our way out the door, as we gaze up at our successors in open-mouthed wonder, we can simply ask our questions of them.

Parts of this essay were adapted from the article 'What Can We Know About That Which We Cannot Even Imagine?' (2022) by David Wolpert.

Published in association with the Santa Fe Institute, an Aeon Strategic Partner.


Filings buzz in the mining industry: 30% increase in big data mentions in Q2 of 2022 – Mining Technology

Mentions of big data within the filings of companies in the mining industry rose 30% between the first and second quarters of 2022.

In total, the frequency of sentences related to big data between July 2021 and June 2022 was 279% higher than in 2016 when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When companies in the mining industry publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Big data is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether big data is featuring more in the summaries and strategies of companies in the mining industry, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned big data at least once in filings during the past twelve months - this was 57% compared to 24% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to big data.

Of the 10 biggest employers in the mining industry, Caterpillar was the company which referred to big data the most between July 2021 and June 2022. GlobalData identified 25 big data-related sentences in the United States-based company's filings - 0.4% of all sentences. Sibanye-Stillwater mentioned big data the second most - the issue was referred to in 0.18% of sentences in the company's filings. Other top employers with high big data mentions included Honeywell, ThyssenKrupp and CIL.

Across all companies in the mining industry, the filing published in the second quarter of 2022 that exhibited the greatest focus on big data came from Erdemir. Of the document's 2,780 sentences, 10 (0.4%) referred to big data.
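The second measure is straightforward to reproduce; as a rough illustration (using the article's own Erdemir figures):

```python
def mention_rate(theme_sentences: int, total_sentences: int) -> float:
    """Percentage of analysed sentences that refer to the theme."""
    return 100 * theme_sentences / total_sentences

# 10 of the 2,780 sentences in Erdemir's filing referred to big data.
rate = mention_rate(10, 2780)
print(f"{rate:.1f}%")  # 0.4%
```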

This analysis provides an approximate indication of which companies are focusing on big data and how important the issue is considered within the mining industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning big data more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into big data have been successes or failures.

GlobalData also categorises big data mentions by a series of subthemes. Of these subthemes, the most commonly referred to topic in the second quarter of 2022 was 'data analytics', which made up 72% of all big data subtheme mentions by companies in the mining industry.


Pecan AI Leaps Over the Skills Gap to Enable Data Science On Demand – Datanami

As the big data analytics train keeps rolling on, there are still kinks to work out when implementing it in the business world. Building and maintaining a big data infrastructure capable of quickly turning large data sets into actionable insights requires data science expertise – a skillset in high demand but often in short supply. There is also a skills gap between data scientists, analysts, and business users, and while several low- or no-code platforms have aimed to resolve this, complexity remains for certain use cases.

One company looking to bridge the gap between business analytics and data science is Pecan AI. The company says its no-code predictive analytics platform is designed for business users across sales, marketing, and operations, as well as the data analytics teams that support them.

Pecan was built under the assumption that the demand for data science far exceeds the supply of data scientists. "We said from the get-go, we wanted to help non-data scientists, specifically BI analysts, to basically leap through the gap of data science knowledge with our platform," Pecan AI CEO Zohar Bronfman told Datanami in an interview.

The Pecan AI platform allows users to connect their various data sources through its no-code integration capabilities. A drag-and-drop, SQL-based user interface enables users to create machine learning-ready data sets. Pecan's proprietary AI algorithms can then build, optimize, and train predictive models using deep neural networks and other ML tools, depending on the needs of the specific use case. With less statistical knowledge required, along with automated data preparation and feature selection, the platform removes some of the technical barriers that BI analysts may face when leveraging data science.

"Interestingly enough, in most of the data science use cases, you would spend, as a data scientist, more time and effort on getting the data right, extracting it, cleansing it, collating it, structuring it, and many other things that basically define data science use cases. And that's what we've been able to automate, so that analysts who have never done this before will be able to do so," said Bronfman.

Additionally, the platform offers monitoring features to continually analyze data for more accurate predictions, prioritize features as their importance changes over time, and monitor model performance via a live dashboard.

"In data science, the changes that happen around us are very, very impactful and meaningful, and also potentially dangerous," said Bronfman, referencing how patterns of customer behavior can change as a reaction to factors such as inflation and supply chain disruptions, rendering current models obsolete. According to Bronfman, to continue delivering accurate predictions, the platform automatically looks for changes in patterns within data, and once it identifies a change, the models are retrained and updated by feeding new data into the algorithms to accommodate the more recent patterns.
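Pecan has not published how its drift detection works; purely as an illustration of the general idea, the sketch below compares a recent window of a numeric feature against a reference window using the Population Stability Index (PSI) and flags when retraining is warranted. The function names and the 0.2 threshold are conventional choices, not Pecan's.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    Values above ~0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values above the reference range

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

def needs_retrain(reference, recent, threshold=0.2):
    """True when the recent window has drifted away from the reference."""
    return psi(reference, recent) > threshold
```

A scheduler could run `needs_retrain` nightly on key model features and trigger a retraining job whenever it returns `True`.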

An example Pecan AI dashboard showing a predicted churn rate. Source: Pecan AI

Bronfman and co-founder and CTO Noam Brezis started Pecan AI in 2016. The two met in graduate school while working toward PhDs in computational neuroscience, and their studies led them to research recent advancements in AI, including its capacity for automating data mining and statistical processes. Brezis became a data analyst with a focus on business analytics, and he was surprised to find that data science know-how was often relegated to highly specialized teams, isolated from the business analysts who could benefit the most from data science's predictive potential. Bronfman and Brezis saw an opportunity to build a SQL-oriented platform that could leverage the power of data science for a BI audience while eliminating much of the manual data science work.

Pecan AI serves a variety of use cases including sales analytics, conversion, and demand forecasting. Bronfman is especially enthusiastic about Pecan's predictive analytics capabilities for customer behavior, an area in which he sees three main pillars. The first pillar is acquisition, a stage when companies may be asking how to acquire and engage with new customers: "For the acquisition side of things, predicted lifetime value has been one of the key success stories for us," Bronfman said of Pecan's predictive lifetime value models. "Those models eventually give you a very good estimation, way before things actually happen, of how well your campaigns are going to do from the marketing side. Once you have a predicted lifetime value model in place, you can wait just a couple of days with the campaign and say, 'Oh, the ally is going to disinvest in a month or three months' time, so I should double down my spend on this campaign,' or, in other cases, 'I should refrain from investing more.'"
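Pecan's lifetime value models are proprietary; as a toy illustration of estimating value "way before things actually happen," the sketch below fits a linear trend to a cohort's first few days of cumulative revenue and extrapolates to a longer horizon. Real LTV curves usually flatten over time, so the linear form here is a deliberate simplification, and the function names are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def projected_ltv(daily_revenue, horizon_days=90):
    """Project a cohort's cumulative revenue at `horizon_days`
    from its first few observed days, assuming a linear early trend."""
    cumulative, total = [], 0.0
    for r in daily_revenue:
        total += r
        cumulative.append(total)
    a, b = fit_line(list(range(1, len(daily_revenue) + 1)), cumulative)
    return a + b * horizon_days
```

With only this estimate in hand a marketer could, as Bronfman describes, decide after a couple of days whether to double down on a campaign or pull back.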

The second customer behavior pillar is the monetization pillar, a time when companies may be asking how they can offer the customer a better experience to encourage their continued engagement: "If you have the opportunity to offer an additional product, service, [or] brand, whatever that might be, you need to optimize both for what you are offering, and not less importantly, when you are offering [it]. So again, our predictions are able to tell you at the customer level, who should be offered what and when," said Bronfman.

Finally, the third pillar is retention, an area where Bronfman notes it is far more economically efficient to retain customers rather than acquire new ones: "For the retention side of things, the classic use case, which has been extremely valuable and gotten us excited, is churn prediction. Churn is a very interesting data science domain because predicting churn has been notoriously challenging, and it's a classic case where if you're not doing it right, you might, unfortunately, get to a place where you are accurate with your predictions but you are ineffective."

Pecan AI co-founders: CEO, Zohar Bronfman and CTO, Noam Brezis.

When predicting churn, Bronfman says that time is of the essence: "When a customer has already made a final decision to churn, even if you're able to predict it before they've communicated it, you won't be able, in most cases, to change their mind. But if you're able to predict churn way in advance, which is what we specialize in, then you still have this narrow time window of opportunity to preemptively engage with the customer to give them a better experience, a better price, a better retargeting effort, whatever that might be, and increase your retention rates."
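The key detail in that quote is lead time: a churn prediction is only actionable if it leaves a window in which to intervene. One common way to encode this is in the labeling step itself; the sketch below is purely illustrative (the day-based fields and thresholds are assumptions, not Pecan's method).

```python
def churn_label(observation_day, churn_day, min_lead=14, window=30):
    """Label a customer snapshot for *early* churn prediction.

    Returns 1 if the customer churns between `min_lead` and
    `min_lead + window` days after the observation, 0 if they survive
    the whole horizon, and None if they churn sooner than `min_lead`
    (too late to act on, so the example is dropped from training).
    """
    if churn_day is None:
        return 0  # customer never churned in the observed history
    days_until_churn = churn_day - observation_day
    if days_until_churn < min_lead:
        return None  # decision already made; no window to intervene
    if days_until_churn <= min_lead + window:
        return 1
    return 0
```

Training only on snapshots labeled this way forces the model to learn the early warning signs rather than the (easy, but useless) signals that appear once the customer has already decided to leave.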

Investors and customers alike seem keen on what Pecan has to offer, and the company is seeing significant growth. So far, the company has raised a total of $116 million, including its latest Series C funding round of $66 million occurring in February, led by Insight Partners, with participation from GV and existing investors S-Capital, GGV Capital, Dell Technologies Capital, Mindset Ventures, and Vintage Investment Partners.

Pecan recently announced it has more than doubled its revenue in the first half of this year, with its annual recurring revenue increasing by 150%. Its customer count increased by 121%, with mobile gaming companies Genesis and Beach Bum and wellness brand Hydrant joining its roster, which already includes Johnson & Johnson and CAA Club Group. The company also expanded its headcount to 125 employees, a 60% increase.

Bronfman says Pecan's growth stems from a strong tailwind of two factors: "Analysts are loving the fact that they can evolve, upskill, and start being data scientists on demand. But also, we came to realize that business stakeholders love that they can drive quick and effective data science without necessarily requiring data science resources."

Related Items:

Pecan AI Announces One-Click Model Deployment and Integration with Common CRMs

Foundry Data & Analytics Study Reveals Investment, Challenges in Business Data Initiatives

Narrowing the AI-BI Gap with Exploratory Analysis

View original post here:

Pecan AI Leaps Over the Skills Gap to Enable Data Science On Demand - Datanami


Data Scientist Training: Resources and Tips for What to Learn – Dice Insights

Data science is a complex field that requires its practitioners to think strategically. On a day-to-day basis, it requires aspects of database administration and data analysis, along with expertise in statistical modeling (and even machine learning algorithms). It also needs, as you might expect, a whole lot of training before you can plunge into a career as a data scientist.

There are a variety of training options out there for data scientists at all points in their careers, from those just starting out to those looking to master the most cutting-edge tools. Here are some platforms and training tips for all data scientists.

Kevin Young, senior data and analytics consultant at SPR, says that many data scientists treat Kaggle as a go-to learning resource. Kaggle is a Google-owned machine learning competition platform with a series of friendly courses to get beginners started on their data science journey.

Topics covered range from Python to deep learning and more. "Once a beginner gains a base knowledge of data science, they can jump into machine learning competitions in a collaborative community in which people are willing to share their work with the community," Young says.

In addition to Kaggle, there are lots of other online resources that data scientists (or aspiring data scientists) can use to boost their knowledge of the field. Here are some free resources:

And here are some that will cost (although you'll earn a certification or similar proof of completion at the end):

This is just a portion of what's out there, of course. Fortunately, the online education ecosystem for data science is large enough to accommodate all kinds of learning styles.

Seth Robinson, vice president of industry research at CompTIA, explains that individuals near the beginning of a data science career will need to build familiarity with data structures, database administration, and data analysis.

"Database administration is the most established job role within the field of data, and there are many resources teaching the basics of data management, the use of SQL for manipulating databases, and the techniques of ensuring data quality. Beyond traditional database administration, an individual could learn about newer techniques involving non-relational databases and unstructured data," he adds.

"Training for data analysis is newer, but resources such as CompTIA's Data+ certification can add skills in data mining, visualization, and data governance. From there, specific training around data science is even more rare, but resources exist for teaching or certifying advanced skills in statistical modeling or strategic data architecture," Robinson says.

Young cites two main segments of data science training: model creation and model implementation.

Model creation training is the more academic application of statistical models on an engineered dataset to create a predictive model: "This is the training that most intro to data science courses would cover."

"This training provides the bedrock foundations for creating models that will provide predictive results," he says. "Model creation training is usually taught in Python, and covers the engineering of the dataset, creation of a model and evaluation of that model."
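As a minimal illustration of that loop — engineer a dataset, fit a model, evaluate it — the following self-contained sketch trains a logistic regression with plain gradient descent on a toy dataset. An intro course would typically use a library such as scikit-learn instead; everything here (data, names, hyperparameters) is invented for illustration.

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit a one-layer logistic regression with per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Engineer a toy dataset: label is 1 when the single feature exceeds 0.5.
random.seed(0)
X = [[random.random()] for _ in range(200)]
y = [1 if xi[0] > 0.5 else 0 for xi in X]
train_X, test_X = X[:150], X[150:]
train_y, test_y = y[:150], y[150:]

# Create the model, then evaluate it on held-out data.
w, b = train_logistic(train_X, train_y)
accuracy = sum(predict(w, b, xi) == yi
               for xi, yi in zip(test_X, test_y)) / len(test_y)
```

The three phases Young names are all visible: the dataset engineering (building `X` and `y` and splitting them), the model creation (`train_logistic`), and the evaluation (`accuracy` on the test split).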

Model implementation training opportunities cover the step after the model is created, which is getting the model into production. "This training is often vendor or cloud-specific to get the model to make predictions on live incoming data. This type of training would be through cloud providers such as AWS giving in-person or virtual education on their machine learning services such as SageMaker," Young explains.

"These cloud services provide the ability to take machine learning models produced on data scientists' laptops and persist the model in the cloud, allowing for continual analysis. This type of training is vital as the time and human capital are usually much larger in the model implementation phase than in the model creation phase," Young says.

This is because when models are created, they often use a smaller, cleaned dataset from which a single data scientist can build a model. When that model is put into production, engineering teams, DevOps engineers, and/or cloud engineers are often needed to create the underlying compute resources and automation around the solution.

"The more training the data scientist has in these areas, the more likely the project will be successful," he says.

Young says one of the lessons learned during the pandemic is that professionals in technology roles can be productive remotely. "This blurs the lines a bit on the difference between boot camps compared to online courses as many boot camps have moved to a remote model," he says. "This puts an emphasis on having the ability to ask questions to a subject matter expert irrespective of whether you are in a boot camp or online course."

He adds certifications can improve organizations' standing with software and cloud vendors. "This means that candidates for hire move to the top of the resume stack if they have certifications that the business values," Young says.

For aspiring data scientists deciding between boot camps and online courses, he says the most important aspect on which to compare the two is the career resources offered. "A strong boot camp should have a resource dedicated to helping graduates find employment after the boot camp," he says.

Robinson adds it's important to note that data science is a relatively advanced field.

"All technology jobs are not created equal," he explains. "Someone considering a data science career should recognize that the learning journey is likely to be more involved than it would be for a role such as network administration or software development."

Young agrees, adding that data scientists need to work in a collaborative environment with other data scientists and subject matter experts reviewing their work. "Data science is a fast-developing field," he says. "Although fundamental techniques do not change, how those techniques are implemented does change as new libraries are written and integrated with the underlying software on which models are built."

From his perspective, a good data scientist is always learning, and any strongly positioned company should offer reimbursement for credible training resources.

Robinson notes in-house resources vary from employer to employer, but points to a macro trend of organizations recognizing that workforce training needs to be a higher priority. "With so many organizations competing for so few resources, companies are finding that direct training or indirect assistance for skill building can be a more reliable option for developing the exact skills needed, while improving the employee experience in a tight labor market," he says.


Excerpt from:

Data Scientist Training: Resources and Tips for What to Learn - Dice Insights


Asia Pacific will lead the new wave of transformation in data innovation: Nium’s CTO Ramana Satyavarapu – ETCIO South East Asia

Ramana Satyavarapu, Chief Technology Officer, Nium

In a market such as Asia Pacific, the sheer volume of data and various emerging types of data create innumerable complexities for businesses that still require the adoption of data strategies from the ground up. Even organisations that have understood the importance of data have yet to instil stronger data management practices. According to research revealed by Accenture, while only 3 of the 10 most valuable enterprises were actively taking a data-driven approach in 2008, that number has risen to 7 out of 10 today. All of it points to the fact that designing data-driven business processes is the only effective way to achieve fast-paced results and goals for organisations across sectors.

To further decode the nuances of the data landscape, with a special focus on the Asia Pacific region, we conducted an exclusive interaction with Ramana Satyavarapu, the Chief Technology Officer of Nium. Ramana is an engineering leader with a strong track record of delivering great products, organising and staffing geographically and culturally diverse teams, and mentoring and developing people. Throughout his career, he has delivered highly successful software products and infrastructure at big tech companies such as Uber, Google and Microsoft. With a proven track record of result-oriented execution by bringing synergy within engineering teams to achieve common goals, Ramana has a strong passion for quality and strives to create customer delight through technological innovation.

In this feature, he shares his outlook on the most relevant data management practices, effective data functionalities, building headstrong data protection systems, and leveraging optimal data insights for furthering business value propositions.

Ramana, what according to you are the most effective functions of data in the evolution of tech and innovation in Asia Pacific?

Data is becoming ubiquitous. Especially in Asia Pacific, because of the sheer number of people going digital. The amount of data available is huge. I will streamline its functions into three main areas:

First, understand the use case. Second, build just enough systems for storing, harnessing, and mining this data. For which, don't build everything in-house. There's a lot of infrastructure out there. Data engineering has now turned into lego building; you don't have to build the legos from the ground up. Just build the design structure using the existing pieces such as S3, Redshift and Google Storage. You can leverage all of these things to harness data. Thirdly, make sure the data is always encrypted, secure, and that there are absolutely robust, rock-solid, and time-tested protections around the data, which has to be taken very seriously. Those would be my three main principles while dealing with data.

How would you describe the importance of data discovery and intelligence to address data privacy and data security challenges?

When you have a lot of data, reiterating my point about big datasets and their big responsibility, the number of security challenges and surface area attacks will be significantly higher. In order to understand data privacy and security challenges, more than data discovery and intelligence, one has to play a role in terms of two aspects - where we are storing it is a vault, and we need to make sure the pin of the vault is super secure. It's a systems engineering problem more than a data problem. The second is, you need to understand what kind of data this is. No single vault is rock solid. Instead, how do we make sure that an intelligent piece of data is secure? Just store it in different vaults so that individually, even if hacked or exposed, it doesn't hurt entirely. The aggregation of the data will be protected. Therefore, it must be a twofold strategy. Understand the data, mine it intelligently, so that you can save it not just in a single vault, but save it in ten different vaults. In layman's terms, you don't put all your cash in a single bank or system. Therefore, the loss is mitigated and no one can aggregate and get ahold of all the data at once. Also, just make sure that we have solid security engineering practices to ensure the systems are protected from all kinds of hacks and security vulnerabilities.
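The "ten different vaults" idea can be made concrete with secret splitting. The sketch below uses simple XOR-based sharing: each share on its own is statistically random, and only the combination of all shares reconstructs the record. This is an illustration of the principle only; production systems would typically use a scheme such as Shamir's secret sharing, which also tolerates lost shares.

```python
import secrets

def split(data: bytes, shares: int = 3) -> list:
    """Split `data` into `shares` pieces; any subset smaller than the
    full set reveals nothing about the original bytes."""
    parts = [secrets.token_bytes(len(data)) for _ in range(shares - 1)]
    last = bytearray(data)
    for p in parts:  # last share = data XOR all random shares
        last = bytearray(a ^ b for a, b in zip(last, p))
    return parts + [bytes(last)]

def combine(parts: list) -> bytes:
    """XOR all shares back together to recover the original data."""
    out = bytearray(len(parts[0]))
    for p in parts:
        out = bytearray(a ^ b for a, b in zip(out, p))
    return bytes(out)
```

Storing each share in a different vault means a breach of any single store yields only random bytes, exactly the "aggregation is protected" property described above.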

The interpretative value of data provides immense scope for evaluating business processes. What role does data analytics play in the evolution of business success?

There is a natural point where the functional value proposition that can be added or given to the customer will start diminishing. There will be a natural point where data will be the differentiator. I'll give a pragmatic example which everybody knows - the difference between the Google search and Microsoft Bing search, both of which are comparably similar kinds of algorithms. But the results are significantly different! That's because one adopts fantastic data engineering practices. It's all about the insights and the difference that they can provide. At one point, the value addition from the algorithm diminishes, and the quality and insights that you can draw from the data will be the differentiator.

There are twofold advantages of data insights or analytics. One, providing value to the customer beyond functionality. In the context of, say, Nium, or payments, or anyone who's doing a global money movement, we've identified an xyz company doing a massive money movement on the first day of every month - say to the Philippines or Indonesia. Instead of doing it on the first day of every month, why don't you do it on the last day of the previous month? That has been historically proven to be a better interchange or FX rate. At the end of the day, it's all about supply and demand. Doing it one day before can save you a huge FX rate conversion, which will benefit the business in many ways by one quantifiable amount; that is very powerful. Those kinds of insights can be provided to the customers by Nium. Being a customer-centric company, it's our strong value proposition - we grow when the customer grows. Those insights, in addition to the business intelligence that can be drawn from them, offer a new value proposition to the customer, and just improving their processes is important.
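The timing insight described above boils down to a small aggregation over historical rates. Purely as a sketch (the data shape and function name are invented for illustration, not Nium's implementation):

```python
from collections import defaultdict

def best_transfer_day(history):
    """Pick the day of the month with the best average historical FX rate.

    `history` is a list of (day_of_month, rate) pairs, where a higher
    rate means more destination currency per unit sent.
    """
    by_day = defaultdict(list)
    for day, rate in history:
        by_day[day].append(rate)
    # Return the day whose observed rates average highest.
    return max(by_day, key=lambda d: sum(by_day[d]) / len(by_day[d]))
```

A recommendation engine could run this per corridor (e.g. SGD to PHP) and suggest shifting a recurring monthly payout to the historically favourable day.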

For example, we are seeing that, on average, these customers' transactions are taking x days or minutes, or this customer's acceptance rate is low; then we can improve the value, the reliability, and the availability of the system using analytics. We had a massive customer in the past, none other than McDonald's. We were looking at the data and we observed that there's a very specific pattern of transaction decline rate. However, looked at independently, you'll notice that only a few transactions are being declined. But if you look at it on a global scale, that's a significant amount of money and customer loss. When we analysed it further, we identified that this is happening with a very specific type of point of sale device on the east coast at the peak hour. We sent a detailed report of it to McDonald's saying we are identifying this kind of a pattern. McDonald's then contacted the point of sale device manufacturer and said that at this peak, with these kinds of transactions, your devices are failing. That would have saved them hundreds of thousands of dollars.
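The McDonald's story is essentially segment-level anomaly detection on decline rates: invisible per transaction, obvious once grouped by device, region, and hour. A minimal sketch of that grouping, with invented field names and thresholds:

```python
from collections import defaultdict

def decline_hotspots(transactions, min_count=50, ratio=3.0):
    """Flag (device, region, hour) segments whose decline rate is far
    above the global rate. Each transaction is a tuple
    (device, region, hour, declined) with declined in {0, 1}."""
    totals = defaultdict(int)
    declines = defaultdict(int)
    all_total = all_declined = 0
    for device, region, hour, declined in transactions:
        key = (device, region, hour)
        totals[key] += 1
        declines[key] += declined
        all_total += 1
        all_declined += declined
    global_rate = all_declined / all_total
    return [
        key for key in totals
        if totals[key] >= min_count                       # enough volume
        and declines[key] / totals[key] > ratio * global_rate  # outlier rate
    ]
```

Run over a day of payment logs, this surfaces exactly the kind of "specific device, specific region, peak hour" pattern described in the interview.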

Saachi, the whole idea is having a clear strategy of how we are going to use the data, and we need to demystify this whole data problem space. There are data lakes, warehouses, machine learning, data mining, all of which are super complex terms. At the end of the day, break it down, and it's really not that complex if you keep it simple.

In a world continually dealing with new-age data, mention some best data management practices for tech leaders.

Again, there's no one set of practices that can determine that this will solve all your data problems. Then you'd have to call me the data guru or something! To keep it simple, for the three main aspects that I talked about - collection, aggregation, and insights - there are specific management practices for each of these strategies.

First, when it comes to data collection, focus on how to deal with heterogeneity. Data is inherently heterogeneous. From CSV files to text files to satellite images, there's no standard. Find a good orchestration layer and good, reliable retry logic, with enough availability of ETLs to make sure this heterogeneous data is consistently and reliably collected. That's number one. I'm a big believer that what cannot be measured is what's not done. Measure, measure, measure. In this case, have some validators, have some quality checks on consistency, reliability, freshness, timeliness, all the different parameters of whether the data is coming to us in an accurate way. That's the first step.
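Those ingestion checks — completeness, freshness, uniqueness — can be expressed as a small validator. A sketch, with assumed record fields `id` and `ingested_at` (epoch seconds); real pipelines would use a framework such as Great Expectations for this:

```python
import time

def validate_batch(rows, required, max_age_seconds=3600, now=None):
    """Run basic quality checks on a batch of ingested records.
    Returns a dict mapping check name to pass/fail."""
    now = now if now is not None else time.time()
    complete = all(all(r.get(f) is not None for f in required) for r in rows)
    fresh = all(now - r["ingested_at"] <= max_age_seconds for r in rows)
    unique = len({r["id"] for r in rows}) == len(rows)
    return {
        "non_empty": bool(rows),   # the feed actually delivered data
        "complete": complete,      # no required field is missing
        "fresh": fresh,            # nothing older than the freshness SLA
        "unique_ids": unique,      # no duplicate records in the batch
    }
```

An orchestrator can gate each downstream ETL step on `all(checks.values())` and alert on whichever named check failed.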

Second is standardisation. Whether it's web-crawled data or Twitter information or traffic wave information or even satellite images - there was a dataset where we were measuring the number of sheep eating grass in New Zealand, so we were using image processing techniques to see the sheep. And why is that useful? Using that, you can observe the supply of merino wool sweaters in the world. If the sheep are reduced, the wool is less, and therefore the jacket will be costly. How do we store such data, though? Start with a time series and a standard identification. Every dataset, every data row, and every data cell has to be idempotent. Make sure that every piece of data, and the transformations of it, are traceable. Just have a time series with a unique identifier for each data value so that it can be consistently accessed. That's the second.
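Idempotent, traceable rows follow naturally from deriving each row's identifier from its content: re-ingesting the same record always produces the same ID, so duplicate writes are harmless and every transformation can reference a stable key. A sketch of that convention (the row shape is an assumption for illustration):

```python
import hashlib
import json

def row_id(source, timestamp, payload):
    """Deterministic identifier for one time-series observation.
    The same (source, timestamp, payload) always hashes to the same ID,
    making re-ingestion idempotent."""
    canonical = json.dumps(
        {"source": source, "ts": timestamp, "payload": payload},
        sort_keys=True, separators=(",", ":"),  # key order never matters
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

def to_time_series_row(source, timestamp, payload):
    """Standardise any observation into one time-series row shape."""
    return {"id": row_id(source, timestamp, payload),
            "ts": timestamp, "source": source, "value": payload}
```

Whether the payload is a tweet, a traffic reading, or a sheep count from a satellite image, every observation lands in the same `(id, ts, source, value)` shape and can be consistently accessed.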

Third, start small. Everyone presents people with machine learning or advanced data mining. Those are complex. Start with linear regressions and start identifying outliers. Start doing pattern matching. These are not rocket science to implement; start with them. Machine learning, in my opinion, is like a ten-pound hammer. It's very powerful. But you want to have the right surface area and the right nail to hit it. If you use a ten-pound hammer on a pushpin, the wall's going to break. You need to have the right surface area or problem space to apply it. Even with ML, start with something like supervised learning, then move on to semi-supervised learning, then unsupervised learning, and then go to clustering, in a very phased manner.
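Outlier identification really can start this small. A z-score filter takes a few lines (the 3-sigma threshold used here is the usual convention, not anything Nium-specific):

```python
import math

def zscore_outliers(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return []  # all values identical; nothing stands out
    return [v for v in values if abs(v - mean) / std > threshold]
```

Once a baseline like this is in place and understood, graduating to the "ten-pound hammer" of supervised models is a deliberate choice rather than a default.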

That would be my approach on dividing it into the collection - having good validators or quality checks on it to ensure reliability, standardisation in the form of a time series, and then pattern recognition or simple techniques, wherefrom you can progress gradually onto how we want to mine the data and provide the insights.

To summarise, keep the data problem simple. Make sure you have a clear understanding of it - what is the use case that we are aiming to solve before we attempt to build a huge data lake or data infrastructure? Being pragmatic about the usage of data is very important. Again, data is super powerful. With lots of data, come lots of responsibilities, take it very seriously. Customers and users are entrusting us with their personal data, and that comes with a lot of responsibility. I urge every leader, engineer, and technologist out there to take it very seriously. Thank you!

Continued here:

Asia Pacific will lead the new wave of transformation in data innovation: Nium's CTO Ramana Satyavarapu - ETCIO South East Asia


Hut 8 Mining Production and Operations Update for August 2022 – Yahoo Finance

375 Bitcoin mined, bringing reserves to 8,111

TORONTO, Sept. 6, 2022 /CNW/ - Hut 8 Mining Corp. (Nasdaq:HUT) (TSX:HUT), ("Hut 8" or the "Company"), one of North America's largest, innovation-focused digital asset mining pioneers and a high performance computing infrastructure provider, increased its Bitcoin holdings by 375 in the period ending August 31, bringing its total self-mined holdings to 8,111 Bitcoin.


Production highlights for August 2022:

375 Bitcoin were generated, resulting in an average production rate of approximately 12.1 Bitcoin per day.

In keeping with our longstanding HODL strategy, 100% of the self-mined Bitcoin in August was deposited into custody.

Total Bitcoin balance held in reserve is 8,111 as of August 31, 2022.

Installed ASIC hash rate capacity was 2.98 EH/s at the end of the month, which excludes certain legacy miners that the Company anticipates will be fully replaced by the end of the year.

Hut 8 produced 125.8 BTC/EH in August.
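The headline figures above are internally consistent and can be cross-checked with two divisions:

```python
btc_mined = 375            # self-mined Bitcoin in August
days_in_august = 31
installed_ehs = 2.98       # installed ASIC hash rate in EH/s at month end

avg_per_day = btc_mined / days_in_august   # reported as ~12.1 BTC per day
btc_per_eh = btc_mined / installed_ehs     # reported as ~125.8 BTC per EH/s
```

Both quotients round to the figures in the release, so the per-day and per-exahash numbers are simply the monthly total expressed against calendar days and installed capacity.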

Additional updates:

In late August, Hut 8 installed 180 NVIDIA GPUs in its flagship data centre in Kelowna, B.C. Currently mining Ethereum, the multi-workload machines will be designed to pivot on demand to provide Artificial Intelligence, Machine Learning, or VFX rendering services to customers.

Hut 8 is partnering with Zenlayer to bring their on-demand high-performance computing to Canadian Web 3.0 and blockchain customers for the first time.

"Our team delivered very strong results across our mining and high performance infrastructure businesses in August, positioning us well for continued success," said Jaime Leverton, CEO. "We continue to receive and install our monthly shipments of new MicroBT miners on time, while actively adding to the suite of services we offer our data centre customers."

About Hut 8

Hut 8 is one of North America's largest innovation-focused digital asset miners, led by a team of business-building technologists, bullish on bitcoin, blockchain, Web 3.0, and bridging the nascent and traditional high performance computing worlds. With two digital asset mining sites located in Southern Alberta and a third site in North Bay, Ontario, all located in Canada, Hut 8 has one of the highest capacity rates in the industry and one of the highest inventories of self-mined Bitcoin of any crypto miner or publicly-traded company globally. With 36,000 square feet of geo-diverse data centre space and cloud capacity connected to electrical grids powered by significant renewables and emission-free resources, Hut 8 is revolutionizing conventional assets to create the first hybrid data centre model that serves both the traditional high performance compute (Web 2.0) and nascent digital asset computing sectors, blockchain gaming, and Web 3.0. Hut 8 was the first Canadian digital asset miner to list on the Nasdaq Global Select Market. Through innovation, imagination, and passion, Hut 8 is helping to define the digital asset revolution to create value and positive impacts for its shareholders and generations to come.


Cautionary Note Regarding Forward-Looking Information

This press release includes "forward-looking information" and "forward-looking statements" within the meaning of Canadian securities laws and United States securities laws, respectively (collectively, "forward-looking information"). All information, other than statements of historical facts, included in this press release that address activities, events or developments that the Company expects or anticipates will or may occur in the future, including such things as future business strategy, competitive strengths, goals, expansion and growth of the Company's businesses, operations, plans and other such matters is forward-looking information. Forward-looking information is often identified by the words "may", "would", "could", "should", "will", "intend", "plan", "anticipate", "allow", "believe", "estimate", "expect", "predict", "can", "might", "potential", "is designed to", "likely" or similar expressions. In addition, any statements in this press release that refer to expectations, projections or other characterizations of future events or circumstances contain forward-looking information and include, among others, statements regarding: Bitcoin and Ethereum network dynamics; the Company's ability to advance its longstanding HODL strategy; the Company's ability to produce additional Bitcoin and maintain existing rates of productivity at all sites; the Company's ability to deploy additional miners; the Company's ability to continue mining digital assets efficiently; the Company's expected recurring revenue and growth rate from its high performance computing business; and the Company's ability to successfully navigate the current market.

Statements containing forward-looking information are not historical facts, but instead represent management's expectations, estimates and projections regarding future events based on certain material factors and assumptions at the time the statement was made. While considered reasonable by Hut 8 as of the date of this press release, such statements are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause the actual results, level of activity, performance or achievements to be materially different from those expressed or implied by such forward-looking information, including but not limited to, security and cybersecurity threats and hacks, malicious actors or botnet obtaining control of processing power on the Bitcoin or Ethereum network, further development and acceptance of Bitcoin and Ethereum networks, changes to Bitcoin or Ethereum mining difficulty, loss or destruction of private keys, increases in fees for recording transactions in the Blockchain, erroneous transactions, reliance on a limited number of key employees, reliance on third party mining pool service providers, regulatory changes, classification and tax changes, momentum pricing risk, fraud and failure related to cryptocurrency exchanges, difficulty in obtaining banking services and financing, difficulty in obtaining insurance, permits and licenses, internet and power disruptions, geopolitical events, uncertainty in the development of cryptographic and algorithmic protocols, uncertainty about the acceptance or widespread use of cryptocurrency, failure to anticipate technology innovations, the COVID19 pandemic, climate change, currency risk, lending risk and recovery of potential losses, litigation risk, business integration risk, changes in market demand, changes in network and infrastructure, system interruption, changes in leasing arrangements, and other risks related to the cryptocurrency and data centre business. 
For a complete list of the factors that could affect the Company, please see the "Risk Factors" section of the Company's Annual Information Form dated March 17, 2022, and Hut 8's other continuous disclosure documents, which are available on the Company's profile on the System for Electronic Document Analysis and Retrieval at http://www.sedar.com and on the EDGAR section of the U.S. Securities and Exchange Commission's website at http://www.sec.gov.

These factors are not intended to represent a complete list of the factors that could affect Hut 8; however, these factors should be considered carefully. There can be no assurance that such estimates and assumptions will prove to be correct. Should one or more of these risks or uncertainties materialize, or should assumptions underlying the forward-looking statements prove incorrect, actual results may vary materially from those described in this press release as intended, planned, anticipated, believed, sought, proposed, estimated, forecasted, expected, projected or targeted and such forward-looking statements included in this press release should not be unduly relied upon. The impact of any one assumption, risk, uncertainty, or other factor on a particular forward-looking statement cannot be determined with certainty because they are interdependent and Hut 8's future decisions and actions will depend on management's assessment of all information at the relevant time. The forward-looking statements contained in this press release are made as of the date of this press release, and Hut 8 expressly disclaims any obligation to update or alter statements containing any forward-looking information, or the factors or assumptions underlying them, whether as a result of new information, future events or otherwise, except as required by law.

View original content to download multimedia:https://www.prnewswire.com/news-releases/hut-8-mining-production-and-operations-update-for-august-2022-301617489.html

SOURCE Hut 8 Mining Corp

Read this article:

Hut 8 Mining Production and Operations Update for August 2022 - Yahoo Finance

Read More..

Ethereum Miners Eye Cloud, AI To Repurpose Equipment That The Merge Will Make Obsolete – Forbes

Crypto miners are turning to cloud computing and artificial intelligence as Ethereum begins the so-called Merge without incident.

The blockchain's switch to a proof-of-stake model from proof-of-work will make it much more energy efficient, an important catalyst for the struggling crypto industry, but one that will leave the miners who validated transactions and created new coins holding a lot of specialized computer gear that will no longer be useful in creating the No. 2 cryptocurrency.

HIVE Blockchain (Nasdaq: HIVE) said Tuesday it has a pilot project for testing a portion of its mining equipment in cloud computing at a Tier 3 data center. Such centers have multiple sources of power and cooling systems and do not require a total shutdown during maintenance or equipment replacement.

The Vancouver, Canada-based company has been using a range of Nvidia's flagship graphics processing units (GPUs) to mine ether, Ethereum's native cryptocurrency, but these GPUs, capable of supporting large data sets, can also be used for purposes including AI acceleration and virtual simulations, according to Nvidia. HIVE said it does not own Nvidia's special-purpose CMP GPUs, which are limited to cryptocurrency mining. The miner said it produced 3,010 ETH in August, worth nearly $5 million, but sold its holdings of the currency to fund the expansion of its bitcoin mining operations.

Similarly, another Canadian crypto miner, Hut 8 Mining (Nasdaq: HUT), announced that in late August it had installed 180 NVIDIA GPUs in its data center in Kelowna, British Columbia, to repurpose these machines for providing artificial intelligence, machine learning, or VFX rendering services to customers on demand.

Falling profits against the backdrop of plummeting crypto prices put publicly traded miners like HIVE and Hut 8 in a squeeze, pushing their shares down by more than 60% this year.

Stocks of publicly traded crypto miners have taken a big hit this year

The miners have also been beset by the lack of post-Merge alternatives. According to crypto intelligence firm Messari, Ethereum miners generated nearly $19 billion in revenue in 2021. To replace the lost revenue, some companies, including HIVE, said they would consider mining alternative proof-of-work coins such as Ethereum Classic; however, the market capitalization of these cryptocurrencies is less than 5% of Ethereum's $194 billion.

"The likely outcome of a successful Merge is that GPUs will flood the resale market, as alternative proof-of-work coins will only remain profitable for a small number of miners with access to cheap energy," writes Messari analyst Sami Kassab. "Miners willing to invest the time and additional capital will be able to transition into high-performance data centers or node operators/providers for Web3 compute protocols, both rapidly growing markets."

Further reading:

Ethereum Miners Will Have Few Good Options After The Merge

First Phase Of Ethereum Merge, Biggest Thing In Crypto Since Bitcoin, Goes Live

Ether Prices Rally As Bellatrix Upgrade Moves Ethereum One Step Closer To The Merge

Link:

Ethereum Miners Eye Cloud, AI To Repurpose Equipment That The Merge Will Make Obsolete - Forbes

Read More..

Tax digitalisation: Not the future, but the present – International Tax Review

Tax digitalisation should not only be linked to digital service tax, digital permanent establishments, the taxation of crypto assets, or even OECD pillars one and two discussions. In fact, the first consequence of widespread tax digitalisation is the imposition of new policy standards that adapt legislation to automation and machine-readable instruments (ensuring the creation and validation of legal metadata).

Moreover, the design of a data strategy at international, national, or company level may lead us to a scenario where we can use specific modelling tools and select the most useful technologies for each tax problem (under EU or OECD guidelines, for example).

This relationship between tax and technology is not a novelty. However, the recent increase in data availability has enabled a paradigm shift: before technology can be used, data must first be properly managed. In fact, what we need is qualitative, raw, and structured data. Data mining and quality issues will follow as part of a data awareness culture.

This data-centric rationale reflects the pre-eminent role and data monopoly that the tax authorities have these days.

Take the example of Portugal, where we have more than 60 ancillary obligations to be filed periodically, not counting tax returns themselves (on top of automatic exchange of information, fulfilment of the Common Reporting Standard requirements, and financial information exchange). Until this massive data lake is made public, the imbalanced and unfair relationship between the tax authorities and taxpayers will remain unchanged.

On the one hand, the tax community tends to talk about digitalisation, robotic process automation, artificial intelligence (AI), and machine learning, among others, as if these buzzwords were interchangeable. On the other hand, we tend to treat digitalisation as a science fiction topic that will, sooner rather than later, replace all our human work. After all, if the Deep Blue chess computer beat Garry Kasparov in 1997, what can a computer do in 2022?

In fact, machines (or bots, as they are now called) cannot do that much. Set your brilliant tax minds at rest: we will not be replaced any time soon, but we do need to adapt.

What machines are already expert at is performing binary tasks (Boolean-valued functions) countless times over. Thus, it seems advisable to use them as supplementary tools in research, HR, filing returns, compliance, and so on. Only then, after evolving through a development phase of data mining, may we arrive at a level of legal maturity that still falls far short of what science fiction conjures in our imagination.

Nevertheless, the degree of opacity still associated with some AI models is one of the biggest obstacles to their growth. For instance, requiring visibility and auditability of the algorithms ruling tax IT software may become a new international standard.

Adding to the above, purely data-driven solutions may work well too, provided we ensure explainable design (using decision trees, for instance) and transparent solutions as a modern safeguard of taxpayers' rights. This is not the future but the present: the need for translators between IT and tax is creating a new role for tax advisers.
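To make the point about explainable, decision-tree-style design concrete, here is a minimal sketch of a transparent, auditable tax rule. The function name, threshold, and trace format are invented for illustration; the idea is simply that every decision carries the step-by-step trace that produced it.

```python
# Minimal sketch of an explainable, rule-based tax check.
# The threshold and rule names are hypothetical, for illustration only.

def vat_registration_required(annual_turnover: float, threshold: float = 85_000.0) -> dict:
    """Return a decision plus the rule trace that produced it,
    so the outcome can be audited step by step."""
    trace = []
    trace.append(f"annual_turnover = {annual_turnover:.2f}")
    trace.append(f"threshold = {threshold:.2f}")
    required = annual_turnover > threshold
    trace.append(f"turnover > threshold -> {required}")
    return {"registration_required": required, "trace": trace}

decision = vat_registration_required(90_000.0)
print(decision["registration_required"])  # True
for step in decision["trace"]:
    print(step)
```

Because the trace is part of the output rather than hidden in a model, both the taxpayer and the authority can verify exactly which rule fired, which is what transparency as a safeguard of taxpayers' rights amounts to in practice.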

The future is already here

The second big boost in technology democratisation is the low-code/no-code idea: features built on platforms such as Microsoft Power Apps, with interfaces to commonly used software, avoiding overengineered systems in favour of affordable IT solutions. This allows, for instance, dashboarding from country-by-country reporting to profit and loss clustering, litigation screening, and much else.

The combination of low code/no code tools with specialised sectorial tools as well as enterprise resource planning (ERP) system integration leads tech-driven companies to a different level of real-time controlling and proactive tax strategy and vision.

There are already quite a few tax/legal start-ups that are booming, such as Blue J Legal, Do Not Pay, Jurimetra, Codex Stanford, E-clear, Taxdoo, WTS Global, Summitto, and Luminance. They are already out there pushing taxation to the boundaries of the current technical limitations.

All these examples, from judicial analytics to decision tree implementation, to machine learning and AI components governing transfer pricing matters, teach us that in each tax problem there is an opportunity to model a process, improve it, and automate it.

Although it is true that Tim Berners-Lee's initial idea of the Semantic Web, or Web 3.0, did not flourish, easy communication among different profiles and the agile use of the JSON-LD language (as an example) have allowed significant developments by the biggest players in the world (Google, for example) that will sooner or later extend to tax-related domains (while in Portugal, contact with the tax authorities is mainly conducted via XML format files).
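The Semantic Web idea can be made concrete with a small example. Below is a hedged sketch of a JSON-LD record attaching machine-readable metadata to a tax document; the `@context` mappings and field values are invented for illustration and do not reflect any established tax ontology.

```python
import json

# Hypothetical JSON-LD record for a tax document. The @context block maps
# local field names to shared vocabulary terms (here, schema.org), which is
# what makes the data machine-readable across systems.
record = {
    "@context": {
        "schema": "https://schema.org/",
        "taxpayerId": "schema:identifier",
        "period": "schema:temporalCoverage",
        "amountDue": "schema:price",
    },
    "@type": "schema:Invoice",
    "taxpayerId": "PT-123456789",
    "period": "2022-08",
    "amountDue": 1234.56,
}
print(json.dumps(record, indent=2))
```

Unlike a bare XML filing, a consumer that understands the shared vocabulary can interpret the fields without bilateral agreement on a schema, which is the core appeal for cross-border tax data exchange.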

Furthermore, there are several use cases from a public sector perspective; take VAT and customs matters as examples. The tax authorities are effectively using machine learning technology for anomaly detection through mirror analysis (cross-checking import declarations with export declarations) or real-time processing of VAT inputs to speed up refunds or pre-filled-in VAT returns. Not to mention the chatbots introduced across public authorities to respond to basic queries from taxpayers.
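As a rough illustration of how mirror analysis might work, the sketch below cross-checks import declarations against the counterpart export declarations and flags missing matches or large value gaps. The field names and tolerance are assumptions for illustration; real customs systems are far more sophisticated.

```python
# Hedged sketch of "mirror analysis": comparing both sides of a shipment.
# Field names (shipment_id, value) and the 10% tolerance are hypothetical.

def mirror_anomalies(imports, exports, tolerance=0.10):
    """Flag declarations whose declared values diverge by more than
    `tolerance` (relative) between the import and export sides."""
    exports_by_id = {e["shipment_id"]: e for e in exports}
    flagged = []
    for imp in imports:
        exp = exports_by_id.get(imp["shipment_id"])
        if exp is None:
            flagged.append((imp["shipment_id"], "no matching export declaration"))
            continue
        declared, mirrored = imp["value"], exp["value"]
        if mirrored and abs(declared - mirrored) / mirrored > tolerance:
            flagged.append((imp["shipment_id"], f"value gap: {declared} vs {mirrored}"))
    return flagged

imports = [{"shipment_id": "A1", "value": 100.0}, {"shipment_id": "A2", "value": 250.0}]
exports = [{"shipment_id": "A1", "value": 102.0}, {"shipment_id": "A2", "value": 180.0}]
print(mirror_anomalies(imports, exports))  # A2 flagged: 250 vs 180 exceeds the 10% gap
```

Anything flagged here would feed a risk-scoring queue for human review rather than an automatic assessment, consistent with the transparency concerns raised above.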

It all boils down to data governance and data awareness as an international standard. Imagine a brave new world where policy options need to be sustained by data, contributing to tax transparency as well as to measurement of the economic impact of those options. Technology can thus serve public policy and tax collection, while remaining available to tax professionals and open to being adapted and enhanced by market needs.

And so, it is the case, my fellow tax professionals: ask not what technology can do for you, ask what you can do for technology!

Continue reading here:

Tax digitalisation: Not the future, but the present - International Tax Review

Read More..

Why data and technology will be crucial for the UKs new leader – Global Banking And Finance Review

By Jason Foster, CEO and Founder, Cynozure

According to the European Commission, the EU and UK data economy could hit 1tn by 2025. For context, that's bigger than Tesla or Facebook's owner, Meta Platforms. Clearly, there is a huge opportunity for companies and governments to capitalise on the ever-increasing amount of data they hold.

As the world rebuilds from the pandemic, we have a unique opportunity to harness the power of data to drive economic growth. Given rising inflation and the Bank of England's warning that the UK will enter recession later this year, it's imperative that data becomes increasingly central to government decision-making.

Learning from crisis

The pandemic offers a clear example of the benefits of a data-led response. Without the accurate use of data, the global response to the pandemic would have been far less aligned, slower, less agile and, most importantly, less effective.

Without meaningful data insights, could governments have accurately tracked infection hotspots in real time or rapidly introduced measures to protect communities and save lives? I think not, and certainly not with the same speed and accuracy. Of course, data was also crucial for developing and administering the vaccines. Put simply, the effective use of data was central to the global pandemic response, and it will be vital to economic recovery.

The pandemic also had a wider social impact in terms of digital transformation. Suddenly, the entire UK had to move to digital channels to work, shop, and socialise that was the case for individuals, businesses, and government.

Things have changed forever, and society has now fully embraced the digital transformation. Data management is central to this: with an increased online footprint, there's an ever-growing volume of data that can be utilised in countless ways to support businesses, boost the economy, and help the UK emerge from the pandemic in the strongest possible position.

Taking lessons from business

Whilst data usage in government decision-making processes has undeniably improved, there is always scope for further positive development.

In business, almost 40% of CEOs plan to invest in data over the next three years, with 70% expecting this investment to have a large impact on their bottom line. Whilst governments may not be profit-driven in the same way as corporates, investing in data boosts efficiency and maximises effectiveness.

The data industry can be hugely valuable to the UK economy more broadly, but the government must act quickly if it is to take full advantage of the rapidly growing market and cement the UKs position as a global leader.

Laying the groundwork for success

What tools or support are therefore needed to help promote the use of data and allow the wider data economy to thrive?

Embracing the power of data in a positive way can be a force for good for government and businesses of all sizes in all sectors. However, it requires a data-literate leader to drive this shift. Many leaders are now recognising that a business strategy isn't complete without a comprehensive data strategy.

This doesn't mean reinventing the wheel: policymakers can use proven standards and best practice when defining and delivering strategy, which will help them to get a running start, as well as giving them time to upskill or hire in new talent to manage data programmes when necessary.

There is also the question of trust. Many consumers still view the issue of data, be that sharing or how their data is stored by governments and businesses, with scepticism. The future leader must face this head-on and take measures to reassure consumers that their data is safe and won't be weaponised, as we have seen in the past with election scandals.

In terms of tangible steps to be taken, this may mean establishing clear guidelines about what is and is not allowed when it comes to topics like data mining and AI deployment. Ethics are vital, and the full potential of data will only be realised when the public trust how their data is being used.

Steps in the right direction, but is it enough?

Government initiatives are key: the Department for Digital, Culture, Media and Sport recently launched a competition with a 12m Digital Growth Grant to deliver a new digital and tech sector support programme aimed at scaling tech companies. Similar programmes are needed to support the data revolution.

The Data Protection and Digital Information Bill is a welcome step that aims to remove some regulatory barriers and support data-focused innovation. However, whether the bill goes far enough remains to be seen.

There is also the issue of how aligned the UK will be with the EU once the legislation is enforced. Many are concerned that the reforms will diverge too far from the EU's GDPR standards and actually increase the regulatory burden placed upon the data economy, ultimately curtailing growth. This is a hugely important issue that government needs to resolve quickly.

As with any innovation, collaboration is vital. Again, we can learn lessons from the pandemic: with effective data-sharing across borders, the global community was able to work together to manage the risks posed by Covid and to minimise infection. This data-sharing approach is crucial if governments are going to effectively introduce data strategies.

Working in isolation is not an option when it comes to data. It requires collaboration between governments, citizens, businesses, and technology vendors to develop and road test policies, strategies, and plans about how data is captured, stored, and used.

If the UK is to have the best chance of success, it's imperative that our new leader is aware of this fact and is willing and open to leading positive change through data.

More:

Why data and technology will be crucial for the UKs new leader - Global Banking And Finance Review

Read More..