
The Robots Are Coming – Boston Review


In the overhyped age of deep learning, rumors of thinking robots are greatly exaggerated. Still, we cannot leave decisions about even this sort of AI in the hands of those who stand to profit from its use.

Editors' Note: The philosopher Kenneth A. Taylor passed away suddenly this winter. Boston Review is proud to publish this essay, which grows out of talks Ken gave throughout 2019, in collaboration with his estate. Preceding it is an introductory note by Ken's colleague, John Perry.

In memoriam Ken Taylor

On December 2, 2019, a few weeks after his sixty-fifth birthday, Ken Taylor announced to all of his Facebook friends that the book he had been working on for years, Referring to the World, finally existed in an almost complete draft. That same day, while at home in the evening, Ken died suddenly and unexpectedly. He is survived by his wife, Claire Yoshida; son, Kiyoshi Taylor; parents, Sam and Seretha Taylor; brother, Daniel; and sister, Diane.

Ken was an extraordinary individual. He truly was larger than life. Whatever the task at hand, whether it was explaining some point in the philosophy of language, coaching Kiyoshi's little league team, chairing the Stanford Philosophy department and its Symbolic Systems Program, debating at Stanford's Academic Senate, or serving as president of the Pacific Division of the American Philosophical Association (APA), Ken went at it with ferocious energy. He put incredible effort into teaching. He was one of the last Stanford professors to always wear a tie when he taught, to show his respect for the students who make it possible for philosophers to earn a living doing what we like to do. His death leaves a huge gap in the lives of his family, his friends, his colleagues, and the Stanford community.

Ken went to college at Notre Dame. He entered the School of Engineering, but it didn't quite satisfy his interests, so he shifted to the Program of Liberal Studies and became its first African American graduate. Ken came from a religious family and never lost interest in the questions with which religion deals. But by the time he graduated he had become a naturalistic philosopher; his senior essay was on Kant and Darwin.

Ken was clearly very much the same person at Notre Dame that we knew much later. Here is a memory from Katherine Tillman, a professor in the Liberal Studies Program:

This is how I remember our beloved and brilliant Ken Taylor: always with his hand up in class, always with that curious, questioning look on his face. He would shift a little in his chair and make a stab at what was on his mind to say. Then he would formulate it several more times in questions, one after the other, until he felt he got it just right. And he would listen hard, to his classmates, to his teachers, to whoever could shed some light on what it was he wanted to know. He wouldn't give up, though he might lean back in his chair, fold his arms, and continue with that perplexed look on his face. He would ask questions about everything. Requiescat in pace.

From Notre Dame Taylor went to the University of Chicago; there his interests solidified in the philosophy of language. His dissertation was on reference, the theory of how words refer to things in the world; his advisor was the philosopher of language Leonard Linsky. We managed to lure Taylor to Stanford in 1995, after stops at Middlebury, the University of North Carolina, Wesleyan, the University of Maryland, and Rutgers.

In 2004 Taylor and I launched the public radio program Philosophy Talk, billed as "the program that questions everything, except your intelligence." The theme song is "Nice Work if You Can Get It," which expresses the way Ken and I both felt about philosophy. The program dealt with all sorts of topics. We found ourselves reading up on every philosopher we discussed, from Plato to Sartre to Rawls, and on every topic with a philosophical dimension, from terrorism and misogyny to democracy and genetic engineering. I grew pretty tired of this after a few years. I had learned all I wanted to know about important philosophers and topics. I couldn't wait after each Sunday's show to get back to my world: the philosophy of language and mind. But Ken seemed to love it more and more with each passing year. He loved to think; he loved forming opinions, theories, hypotheses, and criticisms on every possible topic; and he loved talking about them with the parade of distinguished guests that joined us.

Until the turn of the century Ken's publications lay pretty solidly in the philosophy of language and mind and closely related areas. But later we begin to find things like "How to Vanquish the Still Lingering Shadow of God" and "How to Hume a Hegel-Kant: A Program for the Naturalization of Normative Consciousness." Normativity, the connection between reason, duty, and life, is a somewhat more basic issue in philosophy than proper names. By the time of his 2017 APA presidential address, "Charting the Landscape of Reason," it seemed to me that Ken had clearly gone far beyond issues of reference, and not only on Sunday morning for Philosophy Talk. He had found a broader and more natural home for his active, searching, and creative mind. He had become a philosopher who had interesting things to say not only about the most basic issues in our field but about all sorts of wider concerns. His Facebook page included a steady stream of thoughtful short essays on social, political, and economic issues. As the essay below shows, he could bring philosophy, cognitive science, and common sense to bear on such issues, and wasn't afraid to make radical suggestions.

Some of us are now finishing the references and preparing an index for Referring to the World, to be published by Oxford University Press. His next book was to be The Natural History of Normativity. He died as he was consolidating the results of thirty-five years of exciting, productive thinking on reference, and beginning what should have been many, many more productive and exciting years spent illuminating reason and normativity, interpreting the great philosophers of the past, and using his wisdom to shed light on social issues, from robots to all sorts of other things.

His loss was not just the loss of a family member, friend, mentor, and colleague to those who knew him, but the loss, for the whole world, of what would have been an illuminating and important body of philosophical and practical thinking. His powerful and humane intellect will be sorely missed.

John Perry

Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said, by machinery, by automatons in human form, it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilized parts of the world, and who assuredly are but starved specimens of what nature can and will produce. Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.

John Stuart Mill, On Liberty (1859)

Some believe that we are on the cusp of a new age. The day is coming when practically anything that a human can do, at least anything that the labor market is willing to pay a human being a decent wage to do, will be doable more efficiently and cost effectively by some AI-driven automated device. If and when that day does arrive, those who own the means of production will feel ever increasing pressure to discard human workers in favor of an artificially intelligent work force. They are likely to do so as unhesitatingly as they have always set aside outmoded technology in the past.

We are very unlikely to be inundated anytime soon with a race of thinking robots, at least not if we mean by thinking that peculiar thing that we humans do, done in precisely the way that we humans do it.

To be sure, technology has disrupted labor markets before. But until now, even the most far-reaching of those disruptions have been relatively easy to adjust to and manage. That is because new technologies have heretofore tended to displace workers from old jobs that either no longer needed to be done, or at least no longer needed to be done by humans, into either entirely new jobs created by the new technology, or into old jobs for which the new technology, directly or indirectly, caused increased demand.

This time things may be radically different. Thanks primarily to AI's presumed potential to equal or surpass every human cognitive achievement or capacity, it may be that many humans will be driven out of the labor market altogether.

Yet it is not necessarily time to panic. Skepticism about the impact of AI is surely warranted on inductive grounds alone. Way back in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, the event that launched the first AI revolution, the assembled gaggle of AI pioneers (all ten of them) breathlessly anticipated that the mystery of fully general artificial intelligence could be solved within a couple of decades at most. In 1961, Minsky, for example, was confidently proclaiming, "We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines." Well over half a century later, we are still waiting for the revolution to be fully achieved.

AI has come a long way since those early days: it is now a very big deal. It is a major focus of academic research, and not just among computer scientists. Linguists, psychologists, the legal establishment, the medical establishment, and a whole host of others have gotten into the act in a very big way. AI may soon be talking to us in flawless and idiomatic English, counseling us on fundamental life choices, deciding who gets imprisoned for how long, and diagnosing our most debilitating diseases. AI is also big business. Worldwide investment in AI technology, which stood at something like $12 billion in 2018, is projected to top $200 billion by 2025. Governments are hopping on the AI bandwagon. The Chinese envision the development of a trillion-dollar domestic AI industry in the relatively near term; they clearly believe that the nation that dominates AI will dominate the world. And yet a sober look at the current state of AI suggests that its promise and potential may still be a tad oversold.

Excessive hype is not confined to the distant past. One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled deep learning variety, rather than by AI of the more or less passé logic-based symbolic processing variety, affectionately known in some quarters, and derisively in others, as GOFAI (Good Old-Fashioned Artificial Intelligence).

It was mostly logic-based, symbolic-processing GOFAI that so fired the imaginations of the founders of AI back in 1956. Admittedly, to the extent that you measure success by where time, money, and intellectual energy are currently being invested, GOFAI looks to be something of a dead letter. I don't want to rehash the once hot theoretical and philosophical debates over which approach to AI, logic-based symbolic processing or neural nets and deep learning, is the more intellectually satisfying approach. Especially back in the '80s and '90s, those debates raged with what passes in the academic domain as white-hot intensity. They no longer do, but not because they were decisively settled in favor of deep learning and neural nets more generally. It's more that machine learning approaches, mostly in the form of deep learning, have recently achieved many impressive results. Of course, these successes may not be due entirely to the anti-GOFAI character of these approaches. Even GOFAI has gotten into the machine learning act with, for example, Bayesian networks. The more relevant divide may be between probabilistic approaches of various sorts and logic-based approaches.

It is important to distinguish AI-as-engineering from AI-as-cognitive-science. The former is where the real money turns out to be.

However exactly you divide up the AI landscape, it is important to distinguish what I call AI-as-engineering from what I call AI-as-cognitive-science. AI-as-engineering isn't particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things. The strategy of engineering machines that do things that are in some sense intelligent, even if they do what they do in their own way, is a perfectly fine way to pursue artificial intelligence. AI-as-cognitive-science, on the other hand, takes as its primary goal that of understanding and perhaps reverse engineering the human mind. AI pretty much began its life in this business, perhaps because human intelligence was the only robust model of intelligence it had to work with. But these days, AI-as-engineering is where the real money turns out to be.

Though there is certainly value in AI-as-engineering, I confess to still having a hankering for AI-as-cognitive-science. And that explains why I myself still feel the pull of the old logic-based symbolic processing approach. Whatever its failings, GOFAI had as one of its primary goals that of reverse engineering the human mind. Many decades later, though we have definitely made some progress, we still haven't gotten all that far with that particular endeavor. When it comes to that daunting task, just about all the newfangled probability- and statistics-based approaches to AI, most especially deep learning, but even approaches that have more in common with GOFAI, such as Bayesian nets, strike me as, if not exactly nonstarters, then at best only a very small part of the truth. Probably the complete answer will involve some synthesis of older approaches and newer approaches, and perhaps even approaches we haven't thought of yet. Unfortunately, although a few voices are starting to sing such an ecumenical tune, neither ecumenism nor intellectual modesty is exactly the rage these days.

Back when the competition between AI paradigms was still a matter of intense theoretical and philosophical dispute, one of the advantages often claimed on behalf of artificial neural nets over logic-based symbolic approaches was that the former but not the latter were directly neuronally inspired. By directly modeling their computational atoms and computational networks on neurons and their interconnections, the thought went, artificial neural nets were bound to be truer to how the actual human brain does its computing than their logic-based symbolic processing competitor could ever hope to be.

Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world.

This is not the occasion to debate such claims at length. My own hunch is that there is little reason to believe that deep learning actually holds the key to finally unlocking the mystery of general purpose, humanlike intelligence. Despite being neuronally inspired, many of the most notable successes of the deep learning paradigm depend crucially on the ability of deep learning architectures to do something that the human brain isn't all that good at: extracting highly predictive, though not necessarily deeply explanatory, patterns on the basis of being trained, via either supervised or unsupervised learning, on huge data sets consisting, from the machine's point of view, of a plethora of weakly correlated feature bundles, without the aid of any top-down direction or built-in worldly knowledge. That is an extraordinarily valuable and computationally powerful technique for AI-as-engineering. And it is perfectly suited to the age of massive data, since the successes of deep learning wouldn't be possible without big data.

It's not that we humans are pikers at pattern extraction. As a species, we do remarkably well at it, in fact. But I doubt that the capacity for statistical analysis of huge data sets is the core competence on which all other aspects of human cognition are ultimately built. But here's the thing. Once you've invented a really cool new hammer, which deep learning very much is, it's a very natural human tendency to start looking for nails to hammer everywhere. Once you are on the lookout for nails everywhere, you can expect to find a lot more of them than you might have at first thought, and you are apt to find some of them in some pretty surprising places.

But if it's really AI-as-cognitive-science that you are interested in, it's important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind. You can't let your obsession with your cool new hammer blind you to the fact that in some domains the human mind seems to deploy quite a different trick from the main sorts of tricks at the core not only of deep learning but also of other statistical paradigms (some of which, again, are card-carrying members of the GOFAI family). In particular, the human mind is often able to learn quite a lot from relatively little and comparatively impoverished data. This remarkable fact has led some to conjecture that the human mind must come antecedently equipped with a great deal of endogenous, special purpose, task-specific cognitive structure and content. If true, that alone would suffice to make the human mind rather unlike your typical deep learning architecture.

Indeed, deep learning takes quite the opposite approach. A deep learning network may be trained to represent words, say, as points in a micro-featural vector space of, say, three hundred dimensions, and on the basis of such representations it might learn, after many epochs of training on a really huge data set, to make the sort of pragmatic inferences, from, say, "John ate some of the cake" to "John did not eat all of the cake," that humans make quickly, easily, and naturally, without the kind of focused training that deep learning and similar approaches require. The point is that deep learning can learn to do various cool things, things that one might once have thought only human beings could do, and although such networks can do some of those things quite well, it still seems highly unlikely that they do them in precisely the way that we humans do.
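To make the vector-space picture concrete, here is a minimal sketch in Python. The 300-dimensional vectors below are random stand-ins (real systems learn them from massive corpora), so the similarity score it prints is meaningless; the point is only to show what "words as points in a micro-featural vector space" amounts to as a data structure.

```python
# Minimal illustration, not any of the systems discussed above: words as
# points in a 300-dimensional vector space, compared by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["some", "all", "cake", "ate"]
# In a trained model these vectors are learned from huge corpora; here they
# are random stand-ins just to show the data structure.
embeddings = {word: rng.normal(size=300) for word in vocab}

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["some"], embeddings["all"]))
```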

I stress again, though, that if you are not primarily interested in AI-as-cognitive-science, but solely in AI-as-engineering, you are free to care not one whit whether deep learning architectures and their cousins hold the ultimate key to understanding human cognition in all its manifestations. You are free to embrace and exploit the fact that such architectures are not just good, but extraordinarily good, at what they do, at least when they are given large enough data sets to work with. Still, in thinking about the future of AI, especially in light of both our darkest dystopian nightmares and our brightest utopian dreams, it really does matter whether we are envisioning a future shaped by AI-as-engineering or by AI-as-cognitive-science. If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots, at least not if we mean by thinking that peculiar thing that we humans do, done in precisely the way that we humans do it.

Once you've invented a new hammer, which deep learning very much is, it's a very natural human tendency to start looking for nails to hammer everywhere.

Deep learning and its cousins may do what they do better than we could possibly do what they do. But that doesn't imply that they do what we do better than we do what we do. If so, then, at the very least, we needn't fear, at least not yet, that AI will radically outpace humans in our most characteristically human modes of cognition. Nor should we expect the imminent arrival of the so-called singularity, in which human intelligence and machine intelligence somehow merge to create a super intelligence that surpasses the limits of each. Given that we still haven't managed to understand the full bag of tricks our amazing minds deploy, we haven't the slightest clue as to what such a merger would even plausibly consist in.

Nonetheless, it would still be a major mistake to lapse into a false sense of security about the potential impact of AI on the human world. Even if current AI is far from being the holy grail of a science of mind that finally allows us to reverse engineer it, it will still allow us to engineer extraordinarily powerful cognitive networks, as I will call them, in which human intelligence and artificial intelligence of some kind or other play quite distinctive roles. Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing what I will call the division of cognitive labor between human and artificial intelligence within engineered cognitive networks will be with us to stay. And it will almost certainly be a rather fraught and urgent matter, thanks in large measure to the power of AI-as-engineering rather than to the power of AI-as-cognitive-science.

Indeed, there is a distinct possibility that AI-as-engineering may eventually reduce the role of human cognitive labor within future cognitive networks to the bare minimum. It is that possibility, not the possibility of the so-called singularity or the possibility that we will soon be surrounded by a race of free, autonomous, creative, or conscious robots chafing at our undeserved dominance over them, that should now and for the foreseeable future worry us most. Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world. It will not necessarily do so by superseding human intelligence, but simply by displacing a great deal of it within various engineered cognitive networks. And if that's right, it simply won't take the arrival of anything close to full-scale super AI, as we might call it, to radically disrupt, for good or for ill, the built cognitive world.

Start with the fact that much of the cognitive work that humans are currently tasked to do within extant cognitive networks doesn't come close to requiring the full range of human cognitive capacities to begin with. A human mind is an awesome cognitive instrument, one of the most powerful instruments that nature has seen fit to evolve. (At least on our own lovely little planet! Who knows what sorts of minds evolution has managed to design on the millions upon millions of mind-infested planets that must be out there somewhere?) But stop and ask yourself: how much of the cognitive power of her amazing human mind does a coffee house barista, say, really use in her daily work?

Not much, I would wager. And precisely for that reason, it's not hard to imagine coffee houses of the future in which more and more of the cognitive labor that needs doing is done by AI finely tuned to the cognitive loads it will need to carry within such networks. More generally, it is abundantly clear that much of the cognitive labor that needs doing within our total cognitive economy, and that now happens to be performed by humans, is cognitive labor for which we humans are often vastly overqualified. It would be hard to lament the offloading of such cognitive labor onto AI technology.

Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing the division of cognitive labor between human and artificial intelligence will be with us to stay.

But there is also a flip side. The twenty-first-century economy is already a highly data-driven economy. It is likely to become a great deal more so, thanks, among other things, to the emergence of the internet of things. The built environment will soon be even more replete with so-called smart devices. And these smart devices will constantly be collecting, analyzing, and sharing reams and reams of data on every human being who interacts with them. It will not be just the usual suspects, like our computers, smart phones, or smart watches, that are so engaged. It will be our cars, our refrigerators, indeed every system or appliance in every building in the world. There will be data-collecting monitors of every sort: heart monitors, sleep monitors, baby monitors. There will be smart roads and smart train tracks. There will be smart bridges that constantly monitor their own state and automatically alert the transportation department when they need repair. Perhaps they will shut themselves down and spontaneously reroute traffic while they are waiting for the repair crews to arrive. It will require an extraordinary amount of cognitive labor to keep such a built environment running smoothly. And for much of that cognitive labor, we humans are vastly underqualified. Try, for example, running a data mining operation using nothing but human brain power. You'll see pretty quickly that human brains are not at all the right tool for the job, I would wager.

Perhaps what should really worry us, I am suggesting, is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack. AI-as-engineering may give us the power to design cognitive networks in which each node is exquisitely fine-tuned to the cognitive load it is tasked to carry. Since distinctively human intelligence will often be either too much or too little for the task at hand, future cognitive networks may assign very little cognitive labor to humans. And that is precisely how it might come about that the demand for human cognitive labor within the overall economy may be substantially diminished. How should we think about the advance of AI in light of its capacity to allow us to re-imagine and re-engineer our cognitive networks in this way? That is the question I address in the remainder of this essay.

There may be lessons to be learned from the ways that we have coped with disruptive technological innovations of the past. So perhaps we should begin by looking backward rather than forward. The first thing to say is that many innovations of the past are now widely seen as good things, at least on balance. They often spared humans work that paid dead-end wages, work that was dirty and dangerous, or work that was a source of mind-numbing drudgery.

What should really worry us is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack.

But we should be careful not to overstate the case for the liberating power of new technology, lest it lure us into a misguided complacency about what is to come. Even looking backward, we can see that new and disruptive technologies have sometimes been the culprit in increasing rather than decreasing the drudgery and oppressiveness of work. They have also served to rob work of a sense of meaning and purpose. The assembly line is perhaps the prime example. The rise of the assembly line doubtless played a vital role in making the mass production and distribution of all manner of goods possible. It made the factory worker vastly more productive than, say, the craftsman of old. In so doing, it increased the market for mass-produced goods, while simultaneously diminishing the market for the craftsman's handcrafted goods. As such, it played a major role in raising living standards for many. But it also had the downside of turning many human agents into mere appendages of a vast, impersonal, and relentless mechanism of production.

All things considered, it would be hard to deny that trading in skilled craftsmanship for unskilled or semiskilled factory labor was a good thing. I do not intend to relitigate that choice here. But it is worth asking whether all things really were considered, and considered not just by those who owned the means of production but collectively, by all the relevant stakeholders. I am no historian of political economy, but I venture the conjecture that the answer to that question is a resounding no. More likely than not, disruptive technological change was simply foisted on society as a whole, primarily by those who owned and controlled the means of production, and primarily to serve their own profit, with little, if any, intentionality or democratic deliberation and participation on the part of a broader range of stakeholders.

Given the disruptive potential even of AI-as-engineering, we cannot afford to leave decisions about the future development and deployment of even this sort of AI solely in the hands of those who stand to make vast profits from its use. This time around, we have to find a way to ensure that all relevant stakeholders are involved and that we are more intentional and deliberative in our decision making than we were about the disruptive technologies of the past.

I am not necessarily advocating the sort of socialism that would require the means of production to be collectively owned or regulated. But even if we aren't willing to go so far as collectively seizing the machines, as it were, we must get past the point of treating not just AI but all technology as a thing unto itself, with a life of its own, whose development and deployment is entirely independent of our collective will. Technology is never self-developing or self-deploying. Technology is always and only developed and deployed by humans, in various political, social, and economic contexts. Ultimately, it is and must be entirely up to us, and up to us collectively, whether, how, and to what end it is developed and deployed. As soon as we lose sight of the fact that it is up to us collectively to determine whether AI is to be developed and deployed in a way that enhances the human world rather than diminishes it, it becomes all too easy to give in to either utopian cheerleading or dystopian fear mongering. We need to discipline ourselves not to give in to either prematurely. Only such discipline will afford us the space to consider various tradeoffs deliberatively, reflectively, and intentionally.

We should be careful not to overstate the case for the liberating power of new technology, lest it lure us into a misguided complacency about what is to come.

Utopian cheerleaders for AI often blithely insist that it is more likely to decrease than to increase the amount of dirt, danger, or drudgery to which human workers are subject. As long as AI is not turned against us (and why should we think that it would be?), it will not eliminate the work for which we humans are best suited, but only the work that would be better left to machines in the first place.

I do not mean to dismiss this as an entirely unreasonable thought. Think of coal mining. Time was when coal mining was extraordinarily dangerous and dirty work. Over 100,000 coal miners died in mining accidents in the U.S. alone during the twentieth century, not to mention the amount of black lung disease they suffered. Thanks largely to automation and computer technology, including robotics and AI technology, your average twenty-first-century coal industry worker relies a lot more on his or her brains than on mere brawn and is subject to a lot less danger and dirt than earlier generations of coal miners were. Moreover, it takes a lot fewer coal miners to extract more coal than the coal miners of old could possibly have hoped to extract.

To be sure, thanks to certain other forces having nothing to do with the AI revolution, the number of people dedicated to extracting coal from the earth will likely diminish even further in the relatively near term. But that just goes to show that even if we could manage to tame AI's effect on the future of human work, we've still got plenty of other disruptive challenges to face as we begin to re-imagine and re-engineer the made human world. And that gives us even more reason to be intentional, reflective, and deliberative in thinking about the development and deployment of new technologies. Whatever one technology can do on its own to disrupt the human world, the interactive effects of multiple, apparently independent technologies can greatly amplify the total level of disruption to which we may be subject.

I suppose that, if we had to choose, utopian cheerleading would at least feel more satisfying and uplifting than dystopian fear mongering. But we shouldn't let any utopian buzz we fall into while contemplating the future blind us to the fact that AI is very likely to transform, perhaps radically, our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall in the first place. The point is that that boundary is likely to be drawn, erased, and redrawn by the progress of AI. And as our conception of the proper boundary evolves, our conception of what we humans are here for is likely to evolve right along with it.

The upshot is clear. If it is only relative to our sense of where the boundary is properly drawn that we could possibly know whether to embrace or recoil from the future, then we are currently in no position to judge on behalf of our future selves which outcomes are to be embraced and which are to be feared. Nor, perhaps, are we entitled to insist that our current sense of where the boundary should be drawn should remain fixed for all time and circumstances.

To drive this last point home, it will help to consider three different cognitive networks in which AI already plays, or soon can be expected to play, a significant role: the air traffic control system, the medical diagnostic and treatment system, and what I'll call the ground traffic control system. My goal in doing so is to examine some subtle ways in which our sense of proper boundaries may shift.

We cannot afford to leave decisions about the future development and deployment even of AI-as-engineering solely in the hands of those who stand to make vast profits from its use.

Begin with the air traffic control system, one of the more developed systems in which brain power and computer power have been jointly engineered to cooperate in systematically discharging a variety of complex cognitive burdens. The system has steadily evolved over many decades into one in which a surprising amount of cognitive work is done by software rather than humans. To be sure, there are still many humans involved. Human pilots sit in every cockpit and human brains monitor every air traffic control panel. But it is fair to say that humans, especially human pilots, no longer really fly airplanes on their own within this vast cognitive network. It's really the system as a whole that does the flying. Indeed, it's only on certain occasions, and on an as-needed basis, that the human beings within the system are called upon to do anything at all. Otherwise, they are mostly along for the ride.

This particular human-computer cognitive network works extremely well for the most part. It is extraordinarily safe in comparison with travel by automobile. And it is getting safer all the time. Its ever-increasing safety would seem to be in large measure due to the fact that more and more of the cognitive labor done within the system is being offloaded onto machine intelligence and taken away from human intelligence. Indeed, I would hazard the guess that almost no increases in safety have resulted from taking burdens away from algorithms and machines and giving them to humans instead.

To be sure, this trend started long before AI had reached anything like its current level of sophistication. But with the coming of age of AI-as-engineering, you can expect the trend only to accelerate. For example, starting in the 1970s, decades of effort went into building human-designed rules meant to guide pilots as to which maneuvers, executed in which order, would enable them to avoid any possible or pending mid-air collision. In more recent years, engineers have been using AI techniques to help design a new collision avoidance system that will make possible a significant increase in air safety. The secret of the new system is that instead of leaving the discovery of optimal rules of the airways to human ingenuity, the problem has been turned over to the machines. The new system uses computational techniques to derive an optimized decision logic that better deals with various sources of uncertainty and better balances competing system objectives than anything we humans would be likely to think up on our own. The new system, called Airborne Collision Avoidance System X (ACAS X), promises to pay considerable dividends by reducing both the risk of mid-air collision and the need for alerts that call for corrective maneuvers in the first place.
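The models behind ACAS X are proprietary and far more elaborate than anything shown here, but the general idea of deriving decision logic by optimization under uncertainty, rather than hand-writing rules, can be sketched with a toy value iteration. Everything below (the state space, the costs, the drift model) is invented for illustration and bears no relation to the real system.

```python
# Toy sketch of "optimized decision logic": value iteration over a made-up
# one-dimensional encounter, not the ACAS X model.
import numpy as np

separations = np.arange(-5, 6)            # toy vertical separation (arbitrary units)
actions = {"descend": -1, "hold": 0, "climb": +1}
gamma = 0.95                              # discount on future cost

def cost(sep, action):
    collision = 100.0 if sep == 0 else 0.0        # zero separation is very bad
    alert = 1.0 if action != "hold" else 0.0      # maneuvers carry a small alert cost
    return collision + alert

def q_value(V, sep, action, delta):
    # Expected discounted cost when the intruder drifts by -1, 0, or +1 with equal probability.
    total = 0.0
    for drift in (-1, 0, 1):
        nxt = int(np.clip(sep + delta + drift, -5, 5)) + 5
        total += (cost(sep, action) + gamma * V[nxt]) / 3.0
    return total

# Value iteration: repeatedly back up the best expected cost from each state.
V = np.zeros(len(separations))
for _ in range(200):
    V = np.array([min(q_value(V, s, a, d) for a, d in actions.items())
                  for s in separations])

# The derived "decision logic" is the cost-minimizing action in each state.
policy = {int(s): min(actions, key=lambda a: q_value(V, s, a, actions[a]))
          for s in separations}
print(policy)
```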

In all likelihood, the system will not be foolproof; probably no system ever will be. But in comparison with automobile travel, air travel is already extraordinarily safe. That's not because the physics makes flying inherently safer than driving; indeed, there was a time when flying was much riskier than it currently is. What makes air travel so much safer is primarily the difference between the cognitive networks within which each operates. In the ground traffic control system, almost none of the cognitive labor has been offloaded onto intelligent machines. Within the air traffic control system, a great deal of it has.

To be sure, every now and then the flight system will call on a human pilot to execute a certain maneuver. When it does, the system typically isn't asking for anything like expert opinion from the human. Though it may sometimes need to do that, in the course of its routine, day-to-day operations the system relies hardly at all on the ingenuity or intuition of human beings, including human pilots. When the system does need a human pilot to do something, it usually just needs the human to expertly execute a particular sequence of maneuvers. Mostly things go right. Mostly the humans do what they are asked to do, when they are asked to do it. But it should come as no surprise that when things do go wrong, it is quite often the humans and not the machines that are at fault. Humans too often fail to respond, or they respond with the wrong maneuver, or they execute the needed maneuver in an untimely fashion.

Utopian buzz may serve to blind us to the fact that AI is very likely to transform, perhaps radically, our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall.

I have focused on the air traffic control system because it is a relatively mature and stable cognitive network in which a robust balance between human and machine cognitive labor has been achieved over time. Given its robustness and stability and the degree of safety it provides, it's pretty hard to imagine anyone feeling much nostalgia for the days when the task of navigating the airways fell more squarely on the shoulders of human beings and less squarely on machines. On the other hand, it is not at all hard to imagine a future in which the cognitive role of humans is reduced even further, if not entirely eliminated. No one would now dream of traveling on an airplane that wasn't furnished with the latest radar system or the latest collision avoidance software. Perhaps the day will soon come when no one would dream of traveling on an airplane piloted by, of all things, a human being rather than by a robotic AI pilot.

I suspect that what is true of the air traffic control system may eventually be true of many of the cognitive networks in which human and machine intelligence systematically interact. We may find that the cognitive labor once assigned to the human nodes has been given over to intelligent machines for narrow economic reasons alone, especially if we fail to engage in collective decision making that is intentional, deliberative, and reflective, and thereby leave ourselves at the mercy of the short-term economic interests of those who currently own and control the means of production.

We may comfort ourselves that even in such an eventuality, what is left to us humans will be cognitive work of very high value, finely suited to the distinctive capacities of human beings. But I do not know what would now assure us of the inevitability of such an outcome. Indeed, it may turn out that there isn't really all that much that needs doing within such networks that is best done by human brains at all. It may be, for example, that within most engineered cognitive networks, the human brains that still have a place will mostly be along for the ride. Both possibilities are, I think, genuinely live options. And if I had to place a bet, I would bet that for the foreseeable future the total landscape of engineered cognitive networks will increasingly contain networks of both kinds.

In fact, the two systems I mentioned earlier, the medical diagnostic and treatment system and the ground transportation system, already provide evidence for my conjecture. Start with the medical diagnostic and treatment system. Note that a great deal of medical diagnosis involves expertise at interpreting the results of various forms of medical imaging. As things currently stand, it is mostly human beings who do the interpreting. But an impressive variety of machine learning algorithms that can do at least as well as humans are being developed at a rapid pace. For example, CheXNet, developed at Stanford, promises to equal or exceed the performance of human radiologists in diagnosing a wide variety of different diseases from X-ray scans. Partly because of the success of CheXNet and other machine learning algorithms, Geoffrey Hinton, the founding father of deep learning, has come to regard radiologists as an endangered species. On his view, medical schools ought to stop training radiologists beginning right now.
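As a rough illustration of what a CheXNet-style pipeline looks like in code, here is a hedged sketch built on an off-the-shelf DenseNet-121 backbone, the architecture the CheXNet work reports using. The file path and the untrained weights are placeholders; this shows the shape of such a multi-label classifier, not a reproduction of the published model.

```python
# Hedged sketch only: CheXNet itself is a DenseNet-121 fine-tuned on the
# ChestX-ray14 dataset; this code shows the general shape of such a pipeline
# with torchvision and does not reproduce the published weights.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_FINDINGS = 14  # ChestX-ray14 defines 14 thoracic pathology labels

model = models.densenet121()                                   # backbone, randomly initialized here
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "xray.png" is a placeholder path; with trained weights loaded, the sigmoid
# outputs would be per-finding probabilities for the scan.
image = preprocess(Image.open("xray.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probabilities = torch.sigmoid(model(image)).squeeze(0)
print(probabilities)
```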

Even if Hinton is right, that doesn't mean that all the cognitive work done by the medical diagnostic and treatment system will soon be done by intelligent machines. Though human-centered radiology may soon come to seem quaint and outmoded, there is, I think, no plausible short- to medium-term future in which human doctors are completely written out of the medical diagnostic and treatment system. For one thing, though the machines may beat humans at diagnosis, we still outperform the machines when it comes to treatment, perhaps because humans are much better at things like empathy than any AI system is now or is likely to be anytime soon. Still, even if human doctors are never fully eliminated from the diagnostic and treatment cognitive network, it is likely that their enduring roles within such networks will evolve so much that the human doctors of tomorrow will bear little resemblance to the human doctors of today.

We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst.

By contrast, there is a quite plausible near- to medium-term future in which human beings within the ground traffic control system are gradually reduced to the status of passengers. Someday in the not terribly distant future, our automobiles, buses, trucks, and trains will likely be part of a highly interconnected ground transportation system in which much of the cognitive labor is done by intelligent machines rather than human brains. The system will involve smart vehicles in many different configurations, each loaded with advanced sensors that allow them to collect, analyze, and act on huge stores of data, in coordination with each other, the smart roadways on which they travel, and perhaps some centralized information hub that constantly monitors the whole. Within this system, our vehicles will navigate the roadways and railways safely and smoothly with very little guidance from humans. Humans will be able to direct the system to get this or that cargo or passenger from here to there. But the details will be left to the system to work out without much, if any, human intervention.

Such a development, if and when it comes to full fruition, will no doubt be accompanied by quantum leaps in safety and efficiency. But it will no doubt also be a major source of a possibly permanent and steep decrease in the net demand for human labor of the sort referred to at the outset. All around the world, many millions of human beings make their living by driving things from one place to another. Labor of this sort has traditionally been rather secure. It cannot possibly be outsourced to foreign competitors: you cannot transport beer, for example, from Colorado to Ohio by hiring a low-wage driver operating a truck in Beijing. But it may soon be the case that we can outsource such work after all, not to foreign laborers but to intelligent machines, right here in our midst.

I end where I began. The robots are coming. Eventually, they may come for every one of us. Walls will not contain them. We cannot outrun them. Nor will running faster than the next human being suffice to save us from them. Not in the long run. They are relentless, never breaking pace, never stopping to savor their latest prey before moving on to the next.

If we cannot stop or reverse the robot invasion of the built human world, we must turn and face them. We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst. Should we seek to regulate their development and deployment? Should we accept the inevitability that we will lose much work to them? If so, perhaps we should rethink the very basis of our economy. Nor is it merely questions of money that we must face. There are also questions of meaning. What exactly will we do with ourselves if there is no longer any economic demand for human cognitive labor? How shall we find meaning and purpose in a world without work?

These are the sorts of questions that the robot invasion will force us to confront. It should be striking that these are also the questions presaged in the prescient epigraph from Mill. Over a century before the rise of AI, Mill realized that the most urgent question raised by the rise of automation would not be whether automata could perform certain tasks faster or cheaper or more reliably than human beings might. Instead, the most urgent question is what we humans would become in the process of substituting machine labor for human labor. Would such a substitution enhance us or diminish us? That, in fact, has always been the most urgent question raised by disruptive technologies, though we have seldom recognized it.

This time around, may we face the urgent question head on. And may we do so collectively, deliberatively, reflectively, and intentionally.


Google is building COVID-19 screening website as Trump declares national emergency – VentureBeat

Alphabet's Verily is creating a website for people to screen for coronavirus and to help them find testing sites at Target, Walgreens, CVS, and Walmart locations, according to Google. The initiative will begin in the San Francisco Bay Area, with hopes of expanding coverage to more areas in the future. The move is part of a public-private partnership to dispense COVID-19 testing to millions of Americans in the weeks ahead, according to Vice President Mike Pence. Testing to confirm COVID-19 cases has been a critical part of response plans in other countries around the world.

The news was announced as President Trump declared a national emergency today in a White House press conference. Trump, Pence, and administration officials implied the Google screening website would serve people nationwide. Positive coronavirus cases have now been found in all 50 states, and on Wednesday COVID-19 was declared a global pandemic by the World Health Organization.

The federal government will point people to the Google website to fill out a screening questionnaire and state their symptoms and risk factors; if necessary, they will be told the location of a drive-through testing option. Automated machines will then be used to return results in 24 to 36 hours.

About 1,700 engineers are working on the website, President Trump said today. VentureBeat reached out to Google for more information about the website.

Dr. Deborah Birx said that nasal swab samples can be delivered to doctors' offices and hospitals, then picked up by companies like Quest Diagnostics.

"The important piece in this all is, they've gone from a machine that may have a lower throughput to the potential to have automated extraction," she said. "It's really key for the laboratory people; it's an automated extraction of the RNA that then runs in an automated way on the machine with no one touching it. And the result comes out of the other end." She said that going from sample to machine to results removes the manual procedures that were slowing down testing and thus delaying results.

Among emergency executive actions announced by Trump today, the Department of Education will waive interest on student loans held by federal government agencies, and the secretary of energy has been instructed to buy crude oil for the strategic reserves.

At midnight tonight, the United States will suspend travel from Europe, and U.S. citizens traveling into the country will be asked to take part in a voluntary 14-day quarantine.

Tech giants like Apple, Amazon, Facebook, Google, and Microsoft took part in a teleconference with White House CTO Michael Kratsios to discuss how artificial intelligence and tech can help combat coronavirus. A White House statement said the discussion touched on issues like the creation of tools. Public health officials and authorities from China to Singapore and beyond have used AI as part of solutions to detect and fight coronavirus since the novel disease emerged in December 2019.

Earlier this year, Google's DeepMind also released structure predictions of proteins associated with the virus that causes COVID-19, generated with the latest version of the AlphaFold system.

"These structure predictions have not yet been experimentally verified, but the hope is that by accelerating their release they may contribute to the scientific community's understanding of how the virus functions and to experimental work in developing future treatments," CEO Sundar Pichai said in a blog post last week.

Upon questioning by reporters at the press conference, Trump refused to take responsibility for the heretofore slow U.S. response to the pandemic. He also evaded questions about whether or not he needs to be tested for COVID-19, despite the fact that days ago he was in close proximity to a person who has since tested positive: Fabio Wajngarten, press secretary to Brazilian president Jair Bolsonaro. Eventually, after multiple reporters pressed him on the issue, Trump said he would get tested, but not, he said, because of his contact with the two men. Miami Mayor Francis Suarez was also in contact with the two men at Trump's Mar-a-Lago resort and today tested positive for coronavirus.

Throughout the press conference, President Trump, Vice President Pence, and a roster of scientists and executives shook hands and touched the same mic.


Top AI Announcements Of The Week: TensorFlow Quantum And More – Analytics India Magazine

AI is one of the most happening domains in the world right now. It would take a lifetime to skim through all the machine learning research papers released to date. As AI keeps itself in the news through new releases of frameworks, regulations, and breakthroughs, we can only hope to get the best of the lot.

So, here we have compiled a list of the most exciting AI announcements from the past week:

Late last year, Google locked horns with IBM in the race for quantum supremacy. Though the news has centered on how good their quantum computers are, not much has been said about implementation. Today, Google brings two of its most powerful frameworks, TensorFlow and Cirq, together and releases TensorFlow Quantum, an open-source library for the rapid prototyping of quantum ML models.

The Google AI team, joining hands with the University of Waterloo, X, and Volkswagen, announced the release of TensorFlow Quantum (TFQ).

TFQ is designed to provide developers with the tools necessary for helping the quantum computing and machine learning research communities control and model quantum systems.

The team at Google has also released a TFQ white paper with a review of quantum applications, and each example can be run in-browser via Colab from the accompanying research repository.

A key feature of TensorFlow Quantum is the ability to simultaneously train and execute many quantum circuits. This is achieved by TensorFlow's ability to parallelise computation across a cluster of computers, and the ability to simulate relatively large quantum circuits on multi-core computers.
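For readers who want to see what a TFQ model looks like in practice, here is a minimal sketch in the style of the library's introductory examples: a single parameterized Cirq circuit wrapped in a Keras-compatible layer. It assumes tensorflow-quantum and cirq are installed, and the circuit itself is deliberately trivial.

```python
# Minimal sketch, assuming TensorFlow Quantum and Cirq are installed; it
# mirrors the library's "hello world"-style examples rather than any benchmark.
import cirq
import sympy
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")

# A one-qubit parameterized circuit; TFQ batches and differentiates through
# many such circuits at once.
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))
readout = cirq.Z(qubit)

inputs = tfq.convert_to_tensor([cirq.Circuit()])   # an empty circuit stands in for the "data"
pqc = tfq.layers.PQC(model_circuit, readout)       # Keras layer with theta as a trainable weight
expectation = pqc(inputs)                          # expectation value of Z after the circuit
print(expectation)
```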

As the devastating news of COVID-19 keeps rising at an alarming rate, AI researchers have given us something to smile about. DeepMind, one of the premier AI research labs in the world, announced last week that it is releasing structure predictions of several proteins that can support ongoing research around COVID-19. It used the latest version of its AlphaFold system to find these structures. AlphaFold is one of the biggest innovations to have come from the labs of DeepMind, and after a couple of years, it is exhilarating to see it applied to something so critical.

As the pursuit of human-level intelligence in machines intensifies, language modeling will keep surfacing till the very end. For one, human language is innately sophisticated; for another, training language models from scratch is resource-intensive.

The last couple of years have witnessed a flurry of mega releases from the likes of NVIDIA, Microsoft, and especially Google. As BERT topped the charts through many of its variants, Google now announces ELECTRA.

ELECTRA has the benefits of BERT but learns more efficiently. Google also claims that this novel pre-training method outperforms existing techniques given the same compute budget.

The gains are particularly strong for small models; for example, a model trained on one GPU for four days outperformed GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark.
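A quick way to poke at ELECTRA is through the Hugging Face transformers library, assuming the pre-trained checkpoint name below is available on the model hub. The sketch loads the small discriminator and asks it, token by token, whether each token looks original or replaced, which is the pre-training task ELECTRA is built around.

```python
# Hedged sketch: loading a small pre-trained ELECTRA discriminator with the
# Hugging Face transformers library (checkpoint name assumed to be available).
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "google/electra-small-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

# ELECTRA is trained as a discriminator: for each token it predicts whether
# the token is original or was substituted by a small generator network.
sentence = "the chef cooked the meal"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.sigmoid(logits))   # per-token "replaced" probabilities
```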

China has been the nation worst hit by COVID-19. However, two of the biggest AI breakthroughs have also come from Chinese soil. Last month, Baidu announced how its toolkit brings down prediction time. Last week, another Chinese giant, Alibaba, announced that its new AI system has an accuracy of 96% in detecting the coronavirus from patients' CT scans. Alibaba's founder Jack Ma has fueled his team's vaccine development efforts with a $2.15M donation.

Facebook AI has released its in-house feature for converting a two-dimensional photo into a video byte that gives the feel of a more realistic view of the object in the picture. This system infers the 3D structure of any image, whether it is a new shot just taken on an Android or iOS device with a standard single camera, or a decades-old image recently uploaded to a phone or laptop.

The feature had previously been available only on high-end phones with a dual-lens portrait mode, but now it will be available on every mobile device, even those with a single, rear-facing camera. To bring this new visual format to more people, the researchers at Facebook used state-of-the-art ML techniques to produce 3D photos from virtually any standard 2D picture.

One significant implication of this feature can be an improved understanding of 3D scenes that can help robots navigate and interact with the physical world.
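Facebook has not released the 3D Photos model itself, so the following sketch is a stand-in: it uses the openly available MiDaS network, fetched via torch.hub (repository name assumed reachable, and its dependencies installed), to estimate per-pixel depth from a single 2D image, which is the kind of inference such a feature has to start from. The image path is a placeholder.

```python
# Not Facebook's system: a hedged stand-in showing single-image depth
# estimation with the open MiDaS model via torch.hub.
import numpy as np
import torch
from PIL import Image

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")      # downloads weights on first run
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform                  # preprocessing matched to MiDaS_small

image = np.array(Image.open("photo.jpg").convert("RGB"))      # placeholder path
with torch.no_grad():
    depth = midas(transform(image))                           # one relative-depth value per pixel
print(depth.shape)
```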

While the whole world focused on the race to quantum supremacy between Google and IBM, Honeywell has quietly been building what it claims is the most powerful quantum computer yet, and it plans to release it by the middle of 2020.

"Thanks to a breakthrough in technology, we're on track to release a quantum computer with a quantum volume of at least 64, twice that of the next alternative in the industry. There are a number of industries that will be profoundly impacted by the advancement and ultimate application of at-scale quantum computing," said Tony Uttley, President of Honeywell Quantum Solutions, in the official press release.
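
For context, quantum volume is a logarithmic benchmark. Under IBM's commonly cited definition,

\mathrm{QV} = 2^{n}, \qquad n = \max\{\, m : \text{"square" circuits of width } m \text{ and depth } m \text{ run with acceptable fidelity} \,\},

so a quantum volume of 64 corresponds to reliably running six-qubit, six-layer circuits, and being "twice that of the next alternative" (64 versus 32) amounts to one extra step of effective capability rather than double the raw qubit count.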

The outbreak of COVID-19 has created a panic globally, and rightfully so. Many flagship conferences have either been cancelled or moved to a virtual environment.

Nvidia's flagship GPU Technology Conference (GTC), which was supposed to take place in San Francisco in the last week of March, was cancelled due to fears of the COVID-19 coronavirus.

Google Cloud has also cancelled its upcoming event, Google Cloud Next '20, which was slated to take place on April 6-8 at the Moscone Center in San Francisco. "Due to the growing concern around the coronavirus (COVID-19), and in alignment with the best practices laid out by the CDC, WHO and other relevant entities, Google Cloud has decided to reimagine Google Cloud Next '20," the company stated on its website.

ICLR 2020, one of the most popular conferences for ML researchers, has also announced that it is cancelling this year's physical conference due to growing concerns about COVID-19 and shifting the event to a fully virtual format.

ICLR organisers also issued a statement saying that all accepted papers at the virtual conference will be presented using pre-recorded videos.


View post:
Top AI Announcements Of The Week: TensorFlow Quantum And More - Analytics India Magazine

Read More..

QualiSpace Launched Tally on the Cloud – IT News Online

PR.com, 2020-03-13

Mumbai, India, March 13, 2020 --(PR.com)-- QualiSpace's "Tally On Cloud" will help you streamline your accounting processes.

Tally On Cloud is now available with QualiSpace Cloud Hosting and Dedicated Server.

QualiSpace launched yet another new service, Tally On Cloud, in the first quarter of 2020. After introducing the GPU Dedicated Server in January 2020, QualiSpace is now live with Tally on Cloud.

QualiSpace Web Services Pvt. Ltd. is an ICANN-accredited domain name registrar and a managed web and cloud hosting service provider. With the launch of Tally On Cloud, QualiSpace has stepped into the field of industry-specific cloud services.

Tally is an Indian enterprise resource planning (ERP) software from Tally Solutions Pvt. Ltd. Apart from being the most popular accounting software in the Indian market, Tally is available in 100+ countries.

QualiSpace customers can choose their fully managed Tally On Cloud plan based on how many user accounts they need, and can upgrade their plan as their data grows. The data hosted on Tally On Cloud can be accessed from anywhere, at any time, using an RDP (Remote Desktop Protocol) client.

"The accounting department is the cornerstone of any organization. By launching Tally On Cloud as a service, QualiSpace aims to simplify the complexities and smooth the accounting processes. This will enable companies to use their IT infrastructure for their core business practices and add value to their brand," said Hiren Shah, Founder and CEO, QualiSpace.

To learn more about the QualiSpace Tally On Cloud, please visit http://www.qualispace.com/server/tally-on-cloud/.

About QualiSpace:

QualiSpace Web Services Pvt. Ltd. is an ICANN-accredited domain name registrar and a managed web and cloud hosting service provider. Its service portfolio includes domain registration, managed cloud hosting, managed web hosting, advanced email services, web security solutions, backup and disaster recovery solutions, and more.

Since its inception in 2001, QualiSpace has been empowering its customers to set up, smoothly run, and grow their businesses online. With an aim to be a customer-centric firm delivering supreme-quality web infrastructure, QualiSpace enables SMEs, start-ups, individuals, and established businesses to reach their online audiences with ease.

To know more, please visit http://www.qualispace.com.

Name: QualiSpace Web Services Pvt. Ltd.

Phone: 9819342186

Email: mktg@qualispace.com

Contact Information:

QualiSpace

Shraddha Vedak

09819342186


http://www.qualispace.com/

shraddha.vedak@qualispace.com

Read the full story here: https://www.pr.com/press-release/807849

Press Release Distributed by PR.com

See more here:
QualiSpace Launched Tally on the Cloud - IT News Online

Read More..

BeBop, Sony Ci to Hold Post-Production in the Cloud Webinar March 18 – Media & Entertainment Services Alliance M&E Daily Newsletter

Sports brands everywhere are looking to make the most out of all their assets, and keep fans more engaged with all the content available.

The National Hot Rod Association (NHRA), the world's largest motorsports sanctioning body, turned to a pair of Media & Entertainment Services Alliance (MESA) members to do just that and offer fans more engagement beyond simple race event coverage.

On March 18, in the webinar "Why We Moved Our Post-Production to the Cloud," the NHRA will share why it turned to Sony's Ci cloud platform and BeBop Technology's virtual editing workstations to turn its small team into a remote powerhouse capable of delivering major volumes of content to its passionate fan base.

Mike Rokosa, NHRA technology executive, and Rob Hedrick, director of post-production and supervising producer for NHRA, will share how and why NHRA moved to a cloud-based production workflow; how a centralized cloud content lake speeds up NHRA's production and improves the quality of its deliverables; and both the benefits and challenges of teams embracing a cloud production workflow.

They'll be joined by David Benson, CTO and co-founder of BeBop Technology, and David Rosen, VP of cloud applications and services for Sony Imaging Products & Solutions Americas, with MESA's Guy Finley moderating.

The free webinar will be held on March 18 at noon PST.

See the article here:
BeBop, Sony Ci to Hold Post-Production in the Cloud Webinar March 18 - Media & Entertainment Services Alliance M&E Daily Newsletter

Read More..

Minimize business impact of coronavirus – IT World Canada

The coronavirus is spreading rapidly. The media is full of related articles. More and more of your employees and customers are becoming anxious about personal impacts and disruption. Panic buying is emptying store shelves of various products. Fewer people are attending events or leaving their homes for discretionary reasons.

How can you minimize the impact of the coronavirus on your business? Here are some relatively quick and easy actions that will reduce risk.

Because the coronavirus situation is unfolding rapidly, communicate with your employees frequently about what you are doing as a business to reduce risk and what you expect your employees to do to reduce risk. Be factual. Avoid exaggerated or inflammatory statements.

Review this Interim Guidance for Businesses and Employers for a list of actions to take for your business and employees. Read this Checklist for Community and Faith Leaders for ideas to reduce risk and avoid heightening anxieties. Read this COVID-19 coronavirus info for Albertans for actions to minimize the spread of infections among your customers and employees. Every province will have a similar web page.

You can practice working from home by sending one team or department home right now for a few days. Through this practice, you are likely to discover the following issues:

You can ask your IT department to partially test your business continuity plan. Such a plan is supposed to be executable on extremely short notice. Through this business continuity test, you are likely to discover the following issues:

You can ask your IT department to partially test your disaster recovery plan. Through this disaster recovery test, you are likely to discover the following issues:

As more employees work remotely from home or other locations, the need for access to virtual collaboration tools will increase. As you use virtual collaboration tools more, you are likely to discover the following issues:

Moving more data and applications to the cloud makes them easier to access from anywhere without taxing your business network infrastructure. The cloud often also simplifies business continuity and disaster recovery. Example cloud providers that offer office software and file storage include:

Leading cloud providers that offer remote computing capacity for applications include:

As you access office software and store files in the cloud, you are likely to discover the following issues:

Your business partners are encountering the same issues you are. To minimize disruption of your supply chain, strengthen coordination with those business partners. As you coordinate more with your business partners, you are likely to discover the following issues:

What strategies would you recommend that reduce the risk of coronavirus impacts? Let us know in the comments below.

View post:
Minimize business impact of coronavirus - IT World Canada

Read More..

Disruptive tech trends: Fintech leads Twitter mentions in February 2020 – Verdict

Fintech heads the Verdict list of the top five terms tweeted about in disruptive tech in February 2020. The rankings are based on data from GlobalData's Influencer Platform. The top tweeted terms reflect the trending industry discussions happening on Twitter among key individuals (influencers) as tracked by the platform.

The need for fintech companies to articulate their benefits to banks, and fintech's role as a global disruptor, were leading topics. An article shared by Antonio Grasso, a digital transformation advisor, describes how the financial services industry is undergoing a complete makeover because of fintech start-ups. From digital-only banks to payment innovations, fintech solutions are becoming the preference, even for traditional financial companies.

Experts opine that the adoption of blockchain and blockchain wallets is also looming over the global finance industry, with a majority of financial services companies increasing their investments in blockchain innovations. Michael Fisher, a tech evangelist, tweeted about the different ways the technology can help the environment.

In another tweet, Ron Shevlin, an industry analyst, shared an article listing the five hottest technologies in banking: digital account opening, P2P payments, video collaboration, cloud computing, and application programming interfaces.

The applications of machine learning across industries, especially in healthcare, were popularly discussed in February 2020. According to Marcus Borba, a global thought leader and influencer, machine learning is being used to tag, classify, and filter vast amounts of real-time data. This helps public health officials assess the risk factors of the coronavirus and thereby predict the course of the disease and its impact.
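
As a rough sketch of the kind of tag-and-filter pipeline described, and not the actual systems public health teams use, the following scikit-learn snippet (library assumed installed, data invented) classifies short free-text reports into "flag for review" versus "routine":

```python
# Minimal sketch: classifying short health-related reports with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "patient reports fever and dry cough after travel",
    "routine check-up, no symptoms reported",
    "cluster of respiratory cases at local clinic",
    "follow-up visit for a healed fracture",
]
labels = [1, 0, 1, 0]  # 1 = flag for public-health review, 0 = routine

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

# New incoming text is tagged the same way.
print(model.predict(["several staff with fever and cough"]))  # expected: [1]
```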

In other news, Ron Shevlin, a financial services marketing expert, shared an article on how chatbots and machine learning will transform the banking sector. However, the article noted that few community-based financial institutions had deployed these technologies, and that most institutions ended up investing far less than what was professed.

Described as the next frontier, and expected to transform almost everything, the internet of things (IoT) and its application in key industries was popularly discussed. For instance, Ronald van Loon, a top technology influencer, shared a video on how almost every imaginable device will be connected to the internet, from smart heating systems that can detect whether you're home to refrigerators that can tell you if you're running short of products.

On the automotive front, statistics show that by 2034, autonomous vehicles will make up only 10% of all the vehicles bought and sold. This was tweeted by Antonio Grasso, a digital transformation advisor. The survey further revealed that approximately 42% of the US public would refuse to ride in an automated vehicle altogether. While the computer systems in cars are expected to become highly sophisticated, safety and cost might hinder widespread adoption. In other news, Apple is looking to integrate a car key feature into its iPhones and watches.

Meanwhile, logistics companies are connecting IoT technologies with legacy systems to improve visibility and agility through in-fleet sensors that can aggregate all kinds of data. North America, Western Europe, and developed Asia are quickly catching on to this trend of integrating telematics hardware and software in trucks.

Innovations in, and worldwide spending on, artificial intelligence (AI) were among the popular topics discussed during the month. For instance, Antonio Grasso, a digital transformation advisor, shared an article on worldwide AI spending, which is expected to reach approximately $45 billion in 2021. The article revealed that banking, financial services, and insurance would spend the most ($12bn) on AI tools in 2021.

AI investments have seen a surge because the technology is helping to tackle some of the world's biggest problems. For instance, an AI tool can help detect breast cancer five years in advance, according to an article shared by Ronald van Loon, a top technology influencer. Likewise, it can help detect skin cancer, predict the recovery of coma victims, analyse eye scans, and also trial new drugs much faster, according to Michael Fisher, a tech evangelist.

Harold Sinnott, a social media and digital marketing consultant, also highlighted AI's applications beyond healthcare, such as highly personalised education and energy conservation.

Cloud neutrality, digital transformation, and the combining of technologies to achieve business scalability were also discussed. Molly Wood, a tech correspondent, shared an article about the need to talk about cloud neutrality. The article stressed that it is unfair that the cloud economy lies in the hands of just a few companies, who are effectively hosting their own competition.

Andy Jassy, the CEO of AWS, Amazon's cloud computing business, shared an article on partnering with Carrier, a leading air conditioning brand, to drive the latter's digital transformation strategy. Carrier is planning to streamline its operations and improve the pace of innovation, with more sustainability and intelligent buildings. It will do this by building a data lake on the Amazon Simple Storage Service (Amazon S3) and by using AWS machine learning services to derive insights from real-time data across its supply chain and manufacturing.
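
A minimal sketch of the data-lake pattern described, using boto3 (assumed installed and configured with AWS credentials); the bucket name, key layout, and record fields are hypothetical and not Carrier's actual schema:

```python
# Minimal sketch: landing supply-chain sensor records in an S3 bucket used as a data lake.
import json
import boto3

s3 = boto3.client("s3")

record = {
    "plant": "example-plant-01",        # hypothetical identifiers
    "line": 3,
    "temperature_c": 21.4,
    "ts": "2020-03-13T12:00:00Z",
}

# Raw records land under a date-partitioned prefix, ready for later analytics or ML jobs.
s3.put_object(
    Bucket="example-data-lake",          # hypothetical bucket name
    Key="raw/supply-chain/2020/03/13/reading.json",
    Body=json.dumps(record).encode("utf-8"),
)
```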

Yves Mulkers, a data strategist, highlighted companies such as Veritone and Quantifi that have grown by using cloud platforms. He shared an article stating that the cloud is fundamental to the AI model because it provides companies with large data sets as well as the scope to scale.

GlobalData is this websites parent business intelligence company.

Excerpt from:
Disruptive tech trends: Fintech leads Twitter mentions in February 2020 - Verdict

Read More..

Software for scheduling appointments in beauty salons: Software as a Service vs WordPress plugins vs local applications – Mighty Gadget

Tech blogging was originally my side hustle, with web development and marketing my main job; nowadays, it is the other way around.

Quite often, potential clients come to me with ancient systems; it is not uncommon for some to be using bespoke software that won't run on a modern OS anymore. Beauty salons are often bad for this: they buy something when they set up shop, and a decade later they are still using it.

A lot of web developers will try to sell bespoke software or website plugins to handle a salon's appointment needs, but more often than not this is not the ideal solution, and paying a monthly fee for web-based software as a service will deliver superior results.

Traditionally, most of the applications we used were installed on a local PC; this used to be the case for accounting systems, customer relationship management systems, and appointment systems. Even nowadays it is not uncommon for me to talk to a salon and find out they are using an old PC with ancient software.

Moving everything into the cloud using something like Versum salon scheduling software and paying a monthly fee eliminates the need for a dedicated PC running your appointment software. This reduces hardware costs; quite often, people will use a tablet to handle all the appointments. There is also no software maintenance, as this is all handled by the company providing the service. You also have the flexibility of accessing the system at any time, from anywhere, on any device with a web browser.

As a web developer, it is tempting to upsell additional functionality to clients; integrating a plugin can add hundreds (or, with some agencies, thousands) to a quote. While self-hosting your appointment functionality can work perfectly well, and in theory has lower long-term costs, the end result is another piece of software that needs updating regularly. It inevitably reduces the load speed of your website (or increases hosting costs). Running multiple plugins on a website will often also lead to compatibility clashes and gives hackers another way to gain access to your site.

I find the Versum system to be quite affordable, and the price scales with the number of staff you have on board.

If you are a solo practitioner, it will be just £25 per month. For 2-6 people it is £39, then £69 for up to 12, and finally £109 for unlimited staff.

Because there are no hardware costs or software maintenance on your side, these monthly costs are somewhat mitigated by savings elsewhere.

With Versum the feature list is extensive; this isn't just some online diary but a full suite of online software for managing a salon.

This includes all the diary and appointment features you would expect, but also comprehensive client communication functions with the ability to send emails and SMS text messages. You get 50 free messages per month then a fee of 5p per message.

Clients can book online, and Versum will provide you with a free website if needed. Allowing customers to book online can significantly reduce the number of calls you need to take, which can easily justify the monthly fee.

Beyond that, it offers an accounting and management system allowing you to produce financial reports, work out commissions for staff, and break down revenue per employee.

Cloud-based systems are the way forward, and the small monthly cost of running such a system is likely to be negligible compared to the other running costs of your business. A cloud system will help eliminate much of the stress associated with managing a salon and help free up time to take on more paid work.

In my opinion, Versum is extremely cost-effective and well worth considering if you are looking for a new and improved appointment system.

Last Updated on 13th March 2020

Excerpt from:
Software for scheduling appointment in beauty salons Software as a service vs WordPress Plugins vs Local Applications - Mighty Gadget

Read More..

Bill to protect children online ensnared in encryption fight | TheHill – The Hill

Senate legislation to protect children from sexual exploitation online is being dragged into a larger fight over privacy and encryption.

The bill in question, the EARN IT Act, which has bipartisan support, would create a government-backed commission to develop "best practices" for dealing with rampant child sexual abuse material online.

If tech companies do not meet the best practices adopted by Congress, they would be stripped of their legal liability shield, laid out in Section 230 of the Communications Decency Act, in such cases.

But critics worry that the bill is simply a vehicle to block the tech industry's efforts to implement end-to-end encryption, a feature that makes it impossible for companies or governments to access private communications between devices.
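
A minimal sketch of what end-to-end encryption means in practice, using the PyNaCl library (assumed installed): only the two endpoints hold the private keys, so a relaying service, or anyone demanding its logs, sees only ciphertext.

```python
# Minimal sketch of end-to-end encryption between two parties with PyNaCl.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The server in between only ever relays ciphertext; it cannot decrypt it.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```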

They worry the legislation could give the government a backdoor to encrypted devices. That concern has been amplified by Attorney General William Barr, a vocal opponent of encryption, who would head the best practices commission under the legislation.

Sen. Ron Wyden (D-Ore.) has slammed the bill as a "Trojan horse to give Attorney General Barr and Donald Trump the power to control online speech and require government access to every aspect of Americans' lives."

But supporters of the bill are pushing back.

Senate Judiciary Committee Chairman Lindsey Graham (R-S.C.), a bill co-sponsor, said during a hearing before his committee on Wednesday that the legislation is not about the encryption debate, but about best business practices.

"This bill is not about ending encryption," Sen. Richard Blumenthal (D-Conn.), another co-sponsor, added Wednesday. "And it is also, I'm going to be very blunt here, not about the current attorney general, William Barr."

Blumenthal pointed out that the commission would have 19 members and that 14 votes would be needed to approve a best practice. Those 19 members would include the attorney general, the secretary of Homeland Security, and the chair of the Federal Trade Commission. The other 16 members would be appointed by the Senate majority leader, the Senate minority leader, the Speaker of the House, and the House minority leader.

That has failed to calm the worries of privacy advocates.

Kathleen Ruane, senior legislative counsel at the American Civil Liberties Union, told The Hill that although there are other members on the commission, Barr will have an outsize role. She pointed to language in the bill giving the attorney general final approval power before best practices are sent to Congress.

Blumenthal has said the attorney general can only reject proposed best practices on the commission, as opposed to pushing any through unilaterally.

Regardless of Barr's role and powers, experts say encryption will come up as the commission debates best practices.

You have law enforcement representatives [on the committee] and this is a huge issue among the law enforcement community ... so it's very likely they'll bring it up, said Alan Rozenshtein, associate professor of law at the University of Minnesota and former attorney adviser for the Department of Justice. And then you have victim advocates and to the extent that they believe that encryption is part of the problem or needs to be addressed as part of the problem, they're going to bring it up as well."

"I don't really see a realistic situation in which this does not implicate encryption," he added.

Encryption is not explicitly mentioned in the bill, but that also means nothing stops them from making best practices related to it, said Elizabeth Banker, the deputy general counsel of the Internet Association, a trade association that represents many online companies.

Critics also have broader privacy concerns over the legislation outside of the encryption debate. Ruane said other best practices could pose threats to communication privacy.

One compromise that has been floated is to make it explicit in the legislation that the commission will not make any recommendations about encryption. But Graham has rebuffed that idea.

I'm not going to pre-determine what the right answers are, Graham told reporters Wednesday. Let the commission work.

Other lawmakers have also downplayed any threat to weakening encryption.

I can tell you right now I will not support something that compromises the integrity of encryption for users, because I think that that's hugely significant, Sen. Josh Hawley (R-Mo.), one of the 11 bill co-sponsors, told reporters Wednesday.

Hawley accused tech companies of bringing up the issue of encryption to derail the legislation, which will place more responsibility on them to prevent exploitation of children online.

What the tech companies will do is seize at any straw to try to argue that we just can't possibly revise Section 230, he told reporters. Let's not underestimate how rich they've gotten on Section 230.

Blumenthal said at the hearing that some big tech companies are using encryption as a subterfuge to oppose this bill.

There have been changes to the bill from an earlier version leaked in February. That version had only 15 members on the commission and required a lower threshold to approve best practices.

Asked about those changes, Blumenthal told reporters that legislators were listening for constructive suggestions as they drafted the bill.

As lawmakers move to finalize the bill, both sides are digging in.

I am not going to stand on the sidelines any longer, Graham said Wednesday, vowing to push the legislation forward.

Go here to see the original:
Bill to protect children online ensnared in encryption fight | TheHill - The Hill

Read More..

Child exploitation bill earns strong opposition from encryption advocates – Washington Examiner

A bipartisan group of 10 senators introduced legislation designed to combat online child pornography, but many privacy and cybersecurity advocates are vehemently opposed to the bill.

Many groups focused on privacy and cybersecurity fear the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT) will lead to new restrictions on the use of encryption on websites and web-based messaging services.

The bill gives the attorney general broad authority to craft new standards for websites and online services to protect against child pornography. Attorney General William Barr has often called encryption a valuable tool for child pornographers and other criminals, and privacy and security groups fear he will quickly move to require encryption back doors in online services.

The bill undermines the privacy of every single American, stifles our ability to communicate freely online, and may jeopardize the very prosecutions it seeks to enable, the American Civil Liberties Union said in a March 9 letter to the Senate Judiciary Committee. Encrypted communications are vital to everyone's privacy.

Sens. Lindsey Graham and Richard Blumenthal released a discussion draft of the bill earlier this year, and on March 5, they introduced the bipartisan EARN IT Act. At the same time, Google, Facebook, and four other online companies announced they were adopting new voluntary guidelines to fight child pornography.

The bill would create a new commission that develops best practices for preventing online child pornography, and it would enforce these standards by removing lawsuit protections from websites and online services that fail to implement them.

The EARN IT Act would require online services to certify that they comply with the best practices developed by the commission. If they do not, they risk expanded legal liability under Section 230 of the Communications Decency Act of 1996, which protects sites from lawsuits over user-generated content accused of defamation, breach of contract, and other violations.

Section 230 protects video-hosting sites like YouTube and social media providers such as Facebook and Twitter, but also any website that allows users to post comments, including many news sites.

Sponsors of the bill argue that it's needed to crack down on the tens of millions of photos and videos posted online depicting child abuse.

The EARN IT Act will ensure tech companies are using best business practices to prevent child exploitation online, Graham said in a statement. For the first time, [websites] will have to earn blanket liability protection when it comes to protecting minors. Our goal is to do this in a balanced way that doesn't overly inhibit innovation, but forcibly deals with child exploitation.

After senators introduced the EARN IT Act, however, a trickle of criticism turned into a flood. The bill could turn voluntary reporting of child pornography by websites into a legal procedure that requires newly deputized websites to get court-ordered warrants before turning in users, the ACLU said in its March 9 letter.

Any evidence of [child abuse] obtained through investigations conducted to comply with the EARN IT Act, therefore, could be inadmissible in court if obtained without a warrant or in any other manner that does not comply with the Fourth Amendment, the ACLU wrote.

Critics also noted the value of encryption to domestic violence victims, to dissidents and journalists, to members of Congress, and to members of the U.S. military.

The 82nd Airborne Army division, deployed in the Middle East, uses encrypted applications Signal and Wickr to avoid surveillance by the Iranian government, the ACLU said. Encrypted services protect all of us from the prying eyes of hostile foreign governments and numerous other bad actors.

Another 25 groups, including FreedomWorks, the Electronic Frontier Foundation, and the Wikimedia Foundation, also wrote a letter to Graham and Blumenthal voicing strong opposition to the bill. The legislation raises First Amendment and Fourth Amendment concerns, they argued, and could push criminals to underground communications services.

Eliminating or undermining encryption on some online platforms will make law enforcements job harder by simply pushing criminals to other communications options, the groups wrote. In other words, EARN IT would harm ordinary users who rely on encrypted messaging, but would not stop bad actors.

See the original post:
Child exploitation bill earns strong opposition from encryption advocates - Washington Examiner

Read More..