
The Best Of Our Knowledge #1614: The Rise Of The Machines – WAMC

Today we think nothing of seeing laptops and iPads in the classroom. But there have been attempts at creating so-called teaching machines since the early 20th century. And it's the history of those early teaching machines that Audrey Watters explores in her new book, "Teaching Machines: The History of Personalized Learning."

Audrey Watters is an education technology writer and creator of the blog Hack Education.

So after a discussion about the history of teaching machines, we thought it would be a good idea to take another look at machine learning. It's a very different thing. Machine learning and artificial intelligence are two terms that were coined in the 1950s but are only now being applied to practical problems. In the past few years, machine learning algorithms have been used to automate the interpretation and analysis of clinical chemistry data in a variety of situations in the lab. The September 2020 issue of the journal Clinical Chemistry includes a paper on a machine learning approach for the automated interpretation of amino acid profiles in human plasma. The same issue contains an accompanying editorial titled "Machine Learning for the Biochemical Genetics Laboratory." One of the authors of the editorial is Dr. Stephen Master, Chief of the Division of Laboratory Medicine at the Children's Hospital of Philadelphia and an Associate Professor of Pathology and Laboratory Medicine at the Perelman School of Medicine of the University of Pennsylvania. I asked Dr. Master, first of all, what exactly is machine learning, and why would it be significant for the clinical laboratory?
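
The paper's actual model is not reproduced here, but as a rough sketch of the general idea, a supervised classifier can be trained on vectors of amino acid concentrations labeled with known diagnostic categories. Everything in the snippet below (the synthetic data, the category names, the choice of a random forest) is purely illustrative:

    # Generic illustration only: not the model from the Clinical Chemistry paper.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Each row is one patient's panel of plasma amino acid concentrations (synthetic);
    # labels are illustrative diagnostic categories.
    X = np.random.rand(200, 30)                      # 200 samples x 30 amino acids
    y = np.random.choice(["normal", "PKU", "MSUD"], size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))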

Okay, so we've done some deep dives into teaching machines and machine learning; let's go for the hat trick and take on virtual reality. That's the topic of today's Academic Minute.

See the original post here:
The Best Of Our Knowledge #1614: The Rise Of The Machines - WAMC

Read More..

Some of the emerging AI And machine Learning trends of 2021 – Floridanewstimes.com

From consumer electronics and smart personal assistants to advanced quantum computing systems and leading-edge medical diagnostic systems, artificial intelligence and machine learning technologies have been hot topics in 2020 and are increasingly finding their way into everything. According to market researcher IDC, revenue generated by AI hardware, software and services is expected to reach $156.5 billion worldwide this year, up 12.3 percent from 2019. But when it comes to trends in the development and use of AI and ML technologies, it can be easy to lose sight of the forest for the trees. As we approach the end of a turbulent 2020, it is worth looking at how AI and machine learning are being developed and the ways they are being used, not just the types of applications they are finding their way into.

The growth of AI And Machine Learning in Hyperautomation

Hyperautomation, a concept identified by market research firm Gartner, is the idea that almost anything within an organization that can be automated, such as legacy business processes, should be automated. Also known as digital process automation and intelligent process automation, the concept has seen adoption accelerate during the pandemic. AI and machine learning are its key components and major drivers: hyperautomation initiatives cannot rely on static packaged software to be successful. The automated business processes must be able to adapt to changing circumstances and respond to unexpected situations. This is where AI, machine learning models and deep learning technology come in, using learning algorithms and models, together with data generated by the automated system, to allow the system to automatically improve over time and respond to changing business processes and requirements. You can check out the embedded hardware design services of Integra Sources to find out more about this topic, and you can also apply to work there.
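
To make the idea of models that keep improving concrete, here is a minimal sketch of the incremental-learning pattern such systems rely on, assuming a stream of labeled workflow events; the feature layout and the scikit-learn estimator are illustrative choices, not anything specific to Gartner's definition or to any vendor:

    # Minimal sketch of incremental (online) learning on a stream of workflow events.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()                      # linear model trained with SGD
    classes = np.array([0, 1])                   # e.g., handle automatically vs. escalate

    def on_new_batch(features, outcomes):
        # Update the model as the automated process generates new labeled data.
        model.partial_fit(features, outcomes, classes=classes)

    # Simulated stream: 10 batches of 50 events, each described by 8 features.
    for _ in range(10):
        X = np.random.rand(50, 8)
        y = np.random.randint(0, 2, size=50)
        on_new_batch(X, y)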

Bringing Discipline to AI Development Through AI Engineering

According to Gartner's research, only about 53 percent of AI projects successfully make it from prototype to full production. AI initiatives often fail to generate the hoped-for returns because businesses and organizations struggle with system maintainability, scalability and governance when trying to deploy newly developed AI systems and machine learning models. According to Gartner's list of Top Strategic Technology Trends for 2021, businesses and organizations are coming to understand that a robust AI engineering strategy will improve the performance, scalability, interpretability and reliability of AI models and deliver the full value of AI investments.

Link:
Some of the emerging AI And machine Learning trends of 2021 - Floridanewstimes.com

Read More..

Bodo.ai Raises $14 million Series A to Revolutionize Simplicity, Performance and Scale for Data Analytics and Machine Learning – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Bodo.ai, the extreme-performance parallel compute platform for data workloads, today announced it has raised $14 million in Series A funding led by Dell Technologies Capital, with participation from Uncorrelated Ventures, Fusion Fund and Candou Ventures.

Founded in 2019 to revolutionize complex data analytics and machine learning applications, Bodo's goal is to make Python a first-class, high-performance and production-ready platform. The company's innovative compiler technology enables customers to solve challenging, large-scale data and machine learning problems at extreme performance and low cost with the simplicity and flexibility of native Python. Validated at 10,000+ cores and petabytes of data, Bodo delivers previously unattainable, supercomputing-like performance with linear parallel scalability. By eliminating the need to use new libraries or APIs or to rewrite Python into Scala, C++, Java, or GPU code to achieve scalability, Bodo lets users achieve a new level of performance and economic efficiency for large-scale ETL, data prep, feature engineering, and AI/ML model training.
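
Bodo's public documentation describes a decorator-based workflow in which ordinary pandas code is compiled and parallelized; the sketch below is a plausible example of that pattern, with the file path and column names invented for illustration:

    # Sketch of Bodo's decorator-style usage: plain pandas code, parallelized by the
    # compiler. The data path and column names here are hypothetical.
    import pandas as pd
    import bodo

    @bodo.jit
    def daily_revenue(path):
        df = pd.read_parquet(path)                    # read is distributed across cores
        return df.groupby("store_id")["sales"].sum()  # distributed groupby-aggregate

    result = daily_revenue("s3://my-bucket/transactions.parquet")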

"Big data is getting bigger, and in today's data-driven economy, enterprise customers need speed and scale for their data analytics needs," said Behzad Nasre, co-founder and CEO of Bodo.ai. "Existing workarounds for large-scale data processing, like extra libraries and frameworks, fail to address the underlying scale and performance issues. Bodo not only addresses this, but does so with an approach that requires no rewriting of the original application code."

Python is the second most popular programming language in existence, largely due to its popularity among AI and ML developers and data scientists. However, most developers and data engineers who rely on Python for AI and ML algorithms are hampered by its sub-optimal performance when handling large-scale data. And those who use extensions and frameworks still find their performance falls orders of magnitude short of Bodo's. For example, a large retailer recently achieved a more than 100x real-time performance improvement for its mission-critical program metric analysis workloads and saved over 90% on cloud infrastructure costs by using Bodo as opposed to a leading cloud data platform.

"Customers know that parallel computing is the only way to keep up with computational demands for artificial intelligence and machine learning and extend Moore's Law. But such high-performance computing has only been accessible to select experts at large tech companies and government laboratories," added Ehsan Totoni, co-founder and CTO of Bodo.ai. "Our inferential compiler technology automates the parallelization formerly done by performance experts, democratizing compute power for all developers and enterprises. This will have a profound impact on large-scale AI, ML and analytics communities."

Bodo bridges the simplicity-vs-performance gap by delivering compute performance and runtime efficiency with no application rewriting. This will enable hundreds of thousands of Python developers and data scientists to perform near-real-time analytics and unlock new revenue opportunities for customers.

"We see enterprises using more ML and data analytics to drive business insight and growth. There is a nearly constant need for more and better insights at near-real-time," said Daniel Docter, Managing Director, Dell Technologies Capital. "But the exploding growth in data and analytics comes with huge hidden costs - massive infrastructure spend, code rewriting, complexity, and time. We see Bodo attacking these problems head-on, with an elegant approach that works for native Python for scale-out parallel processing. It will change the face of analytics."

For more information visit http://www.bodo.ai.

About Bodo.ai

Founded in 2019, Bodo.ai is an extreme-performance parallel compute platform for data analytics, scaling past 10,000 cores and petabytes of data with unprecedented efficiency and linear scaling. Leveraging unique automatic parallelization and the first inferential compiler, Bodo is helping F500 customers solve some of the world's most massive data analysis problems, and doing so in a fraction of the traditional time, complexity, and cost, all while leveraging the simplicity and flexibility of native Python. Developers can deploy Bodo on any infrastructure, from a laptop to a public cloud. Headquartered in San Francisco with offices in Pittsburgh, PA, the team of passionate technologists aims to radically accelerate the world of data analytics. http://bodo.ai #LetsBodo

About Dell Technologies Capital

Dell Technologies Capital is the global venture capital investment arm of Dell Technologies. The investment team backs passionate early stage founders who push the envelope on technology innovation for enterprises. Since inception in 2012, the team has sustained an investment pace of $150 million a year and has invested in more than 125 startups, 52 of which have been acquired and 7 have gone public. Portfolio companies also gain unique access to the go-to-market capabilities of Dell Technologies (Dell, Dell EMC, VMWare, Pivotal, Secureworks). Notable investments include Arista Networks, Cylance, Docusign, Graphcore, JFrog, MongoDB, Netskope, Nutanix, Nuvia, RedisLabs, RiskRecon, and Zscaler. Headquartered in Palo Alto, California, Dell Technologies Capital has offices in Boston, Austin, and Israel. For more information, visit http://www.delltechnologiescapital.com.

Go here to read the rest:
Bodo.ai Raises $14 million Series A to Revolutionize Simplicity, Performance and Scale for Data Analytics and Machine Learning - Business Wire

Read More..

How AI and Machine Learning are changing the tech industry – refreshmiami.com

AI and Machine Learning have been gaining momentum over the past few years, but recently, with the pandemic, they have accelerated in ways we couldn't imagine. Last year was an extremely difficult year for every imaginable sector of the economy, and it has forced the acceleration of AI.

In this event, we will talk about how AI is changing the tech industry, and how the talent pool is not growing fast enough to meet the demand.

Companies across all industries have been scrambling to secure top AI talent from a pool that's not growing fast enough. Even during the economic disruptions and layoffs caused by the COVID-19 pandemic, the demand for AI talent has been strong. Leaders are looking to reduce costs through automation and efficiency, and AI has a real role to play in that effort.

Our panel will be made up of amazing people in the industry:

Koyuki Nakamori, Head of Machine Learning at HeadSpace

Nehar Poddar, Machine Learning Engineer at DEKA Research and Development

Excerpt from:
How AI and Machine Learning are changing the tech industry - refreshmiami.com

Read More..

Edge AI: The Future of Artificial Intelligence and Edge Computing | ITBE – IT Business Edge

Edge computing is witnessing significant interest, with new use cases emerging especially after the introduction of 5G. The 2021 State of the Edge report by the Linux Foundation predicts that the global market capitalization of edge computing infrastructure will be worth more than $800 billion by 2028. At the same time, enterprises are also heavily investing in artificial intelligence (AI). McKinsey's survey from last year shows that 50% of the respondents have implemented AI in at least one business function.

While most companies are making these tech investments as part of their digital transformation journey, forward-looking organizations and cloud companies see new opportunities in fusing edge computing and AI, or Edge AI. Let's take a closer look at the developments around Edge AI and the impact this technology is having on modern digital enterprises.

AI relies heavily on data transmission and the computation of complex machine learning algorithms. Edge computing sets up a new computing paradigm that moves AI and machine learning to where the data generation and computation actually take place: the network's edge. The amalgamation of edge computing and AI gave birth to a new frontier: Edge AI.

Edge AI allows faster computing and insights, better data security, and efficient control over continuous operation. As a result, it can enhance the performance of AI-enabled applications and keep operating costs down. Edge AI can also help overcome some of the technological challenges associated with AI.

Edge AI facilitates machine learning, the autonomous application of deep learning models, and advanced algorithms on Internet of Things (IoT) devices themselves, away from cloud services.
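
As a concrete, generic example of what moving a model onto the device involves, a trained network is typically exported to a compact on-device format before deployment. The sketch below uses TensorFlow Lite, one common option, with a toy model standing in for a real one:

    # Minimal sketch: export a trained Keras model to TensorFlow Lite so inference
    # can run on an IoT/edge device instead of in the cloud. The model architecture
    # and file names are illustrative only.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    # (training on sensor data would happen here)

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("edge_model.tflite", "wb") as f:
        f.write(tflite_model)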

Also read: Data Management with AI: Making Big Data Manageable

An efficient Edge AI model has an optimized infrastructure for edge computing that can handle bulkier AI workloads on the edge and near the edge. Edge AI paired with storage solutions can provide industry-leading performance and limitless scalability that enables businesses to use their data efficiently.

Many global businesses are already reaping the benefits of Edge AI. From improving production monitoring of an assembly line to driving autonomous vehicles, Edge AI can benefit various industries. Moreover, the recent rollout of 5G technology in many countries gives Edge AI an extra boost as more industrial applications for the technology continue to emerge.

Edge computing powered by AI brings enterprises a number of benefits, a few of which are outlined below.

Implementation of Edge AI is a wise business decision, as Insight estimates an average 5.7% return on investment (ROI) from industrial Edge AI deployments over the next three years.

Machine learning is the artificial simulation of the human learning process with the use of data and algorithms. Machine learning with the aid of Edge AI can lend a helping hand, particularly to businesses that rely heavily on IoT devices.

Some of the advantages of Machine Learning on edge are mentioned below.

Privacy: Today, with information and data being the most valuable assets, consumers are cautious about where their data is located. Companies that can deliver AI-enabled personalized features in their applications, while helping users understand how their data is being collected and stored, enhance their customers' loyalty to the brand.

Reduced Latency: Most data processes are carried out at both the network and device levels. Edge AI eliminates the requirement to send huge amounts of data across networks and devices, thus improving the user experience.

Minimal Bandwidth: Every single day, an enterprise with thousands of IoT devices has to transmit huge amounts of data to the cloud, carry out the analytics in the cloud, and retransmit the results back to the devices. Without wide network bandwidth and ample cloud storage, this complex process would be an impossible task, to say nothing of the possibility of exposing sensitive information along the way.

However, Edge AI implements cloudlet technology, which is small-scale cloud storage located at the network's edge. Cloudlet technology enhances mobility and reduces the load of data transmission. Consequently, it can bring down the cost of data services and enhance data flow speed and reliability.

Low-Cost Digital Infrastructure: According to Amazon, 90% of digital infrastructure costs come from inference, a vital data-generation process in machine learning. Sixty percent of organizations surveyed in a recent study conducted by RightScale agree that the holy grail of cost saving hides in cloud computing initiatives. Edge AI, in contrast, eliminates the exorbitant expenses incurred by AI or machine learning processes carried out in cloud-based data centers.

Also read: Best Machine Learning Software in 2021

Developments in fields such as data science, machine learning, and IoT development have a significant role to play in the sphere of Edge AI. However, the real challenge lies in keeping up with the trajectory of developments in computer science, in particular next-generation AI-enabled applications and devices that can fit perfectly within the AI and machine learning ecosystem.

Fortunately, the arena of edge computing is witnessing promising hardware development that will alleviate the present constraints of Edge AI. Start-ups like Sima.ai, Esperanto Technologies, and AIStorm are among the few organizations developing microchips that can handle heavy AI workloads.

In August 2017, Intel acquired Mobileye, a Tel Aviv-based vision-safety technology company, for $15.3 billion. Recently, Baidu, a Chinese multinational technology behemoth, initiated the mass-production of second-generation Kunlun AI chips, an ultrafast microchip for edge computing.

In addition to microchips, Google's Edge TPU and Nvidia's Jetson Nano, along with offerings from Amazon, Microsoft, Intel, and Asus, have joined the motherboard-development bandwagon to enhance edge computing's prowess. Amazon's AWS DeepLens, the world's first deep-learning-enabled video camera, is a major development in this direction.

Also read: Edge Computing Set to Explode Alongside Rise of 5G

Poor Data Quality: The poor quality of data from major internet service providers worldwide stands as a major hindrance to research and development in Edge AI. A recent Alation report reveals that 87% of the respondents, mostly employees of Information Technology (IT) firms, confirm poor data quality as the reason their organizations fail to implement Edge AI infrastructure.

Vulnerable Security Features: Some digital experts claim that the decentralized nature of edge computing increases its security. In reality, however, locally pooled data demands security at more locations. These additional physical data points make an Edge AI infrastructure vulnerable to various cyberattacks.

Limited Machine Learning Power: Machine learning requires greater computational power on edge computing hardware platforms. In Edge AI infrastructure, computation performance is limited by the performance of the edge or IoT device. In most cases, large, complex Edge AI models have to be simplified prior to deployment on the Edge AI hardware to increase their accuracy and efficiency.
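
One widely used simplification step is post-training quantization, which shrinks model weights so the model fits within an edge device's compute and memory budget. This generic TensorFlow Lite sketch (with a hypothetical saved-model path) shows the pattern; note that quantization usually trades a small amount of accuracy for the reduction in size:

    # Generic sketch of post-training quantization before edge deployment.
    # The saved-model path and output file name are hypothetical.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable weight quantization
    quantized_model = converter.convert()
    with open("edge_model_quantized.tflite", "wb") as f:
        f.write(quantized_model)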

Virtual assistants like Amazon's Alexa and Apple's Siri are great beneficiaries of developments in Edge AI, which enables their machine learning algorithms to learn at rapid speed from the data stored on the device rather than depending on data stored in the cloud.

Automated optical inspection plays a major role in manufacturing lines. It enables the detection of faulty parts of assembled components of a production line with the help of an automated Edge AI visual analysis. Automated optical inspection allows highly accurate ultrafast data analysis without relying on huge amounts of cloud-based data transmission.

The quick and accurate decision-making capability of Edge AI-enabled autonomous vehicles results in better identification of road traffic elements and easier navigation of travel routes than human drivers can manage. The result is faster and safer transportation without manual interference.

Apart from all of the use cases discussed above, Edge AI can also play a crucial role in facial recognition technologies, the enhancement of industrial IoT security, and emergency medical care. The list of use cases for Edge AI keeps growing with every passing day. In the near future, by catering to everyone's personal and business needs, Edge AI will turn out to be an everyday technology.

Read next: Detecting Vulnerabilities in Cloud-Native Architectures

See the original post here:
Edge AI: The Future of Artificial Intelligence and Edge Computing | ITBE - IT Business Edge

Read More..

AI Can Write in English. Now It’s Learning Other Languages – WIRED

"What's surprising about these large language models is how much they know about how the world works simply from reading all the stuff that they can find," says Chris Manning, a professor at Stanford who specializes in AI and language.

But GPT and its ilk are essentially very talented statistical parrots. They learn how to re-create the patterns of words and grammar that are found in language. That means they can blurt out nonsense, wildly inaccurate facts, and hateful language scraped from the darker corners of the web.

Amnon Shashua, a professor of computer science at the Hebrew University of Jerusalem, is the cofounder of another startup building an AI model based on this approach. He knows a thing or two about commercializing AI, having sold his last company, Mobileye, which pioneered using AI to help cars spot things on the road, to Intel in 2017 for $15.3 billion.

Shashua's new company, AI21 Labs, which came out of stealth last week, has developed an AI algorithm, called Jurassic-1, that demonstrates striking language skills in both English and Hebrew.

In demos, Jurassic-1 can generate paragraphs of text on a given subject, dream up catchy headlines for blog posts, write simple bits of computer code, and more. Shashua says the model is more sophisticated than GPT-3, and he believes that future versions of Jurassic may be able to build a kind of common-sense understanding of the world from the information it gathers.
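
Jurassic-1 itself is served through AI21's own interface, which is not shown here. As a generic illustration of the prompt-in, text-out workflow these demos describe, here is the same pattern with the openly available GPT-2 model via the Hugging Face transformers library:

    # Generic text-generation example with an open model (not Jurassic-1 or GPT-3).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Five catchy headlines for a blog post about edge computing:"
    outputs = generator(prompt, max_length=60, num_return_sequences=1)
    print(outputs[0]["generated_text"])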

Other efforts to re-create GPT-3 reflect the world's, and the internet's, diversity of languages. In April, researchers at Huawei, the Chinese tech giant, published details of a GPT-like Chinese language model called PanGu-alpha (written as PanGu-α). In May, Naver, a South Korean search giant, said it had developed its own language model, called HyperCLOVA, that speaks Korean.

Jie Tang, a professor at Tsinghua University, leads a team at the Beijing Academy of Artificial Intelligence that developed another Chinese language model called Wudao (meaning "enlightenment") with help from government and industry.

The Wudao model is considerably larger than any other, meaning that its simulated neural network is spread across more cloud computers. Increasing the size of the neural network was key to making GPT-2 and -3 more capable. Wudao can also work with both images and text, and Tang has founded a company to commercialize it. "We believe that this can be a cornerstone of all AI," Tang says.

Such enthusiasm seems warranted by the capabilities of these new AI programs, but the race to commercialize such language models may also move more quickly than efforts to add guardrails or limit misuses.

Perhaps the most pressing worry about AI language models is how they might be misused. Because the models can churn out convincing text on a subject, some people worry that they could easily be used to generate bogus reviews, spam, or fake news.

"I would be surprised if disinformation operators don't at least invest serious energy experimenting with these models," says Micah Musser, a research analyst at Georgetown University who has studied the potential for language models to spread misinformation.

Musser says research suggests that it won't be possible to use AI to catch disinformation generated by AI. There's unlikely to be enough information in a tweet for a machine to judge whether it was written by a machine.

More problematic kinds of bias may be lurking inside these gigantic language models, too. Research has shown that language models trained on Chinese internet content will reflect the censorship that shaped that content. The programs also inevitably capture and reproduce subtle and overt biases around race, gender, and age in the language they consume, including hateful statements and ideas.

"Similarly, these big language models may fail in surprising or unexpected ways," adds Percy Liang, another computer science professor at Stanford and the lead researcher at a new center dedicated to studying the potential of powerful, general-purpose AI models like GPT-3.

Originally posted here:
AI Can Write in English. Now It's Learning Other Languages - WIRED

Read More..

In the Face of Rising Security Issues and Hacks, GBC.AI Makes the Case for Machine Learning and AI Integration – TechBullion

Trustless. The word gets thrown around a lot in the blockchain space. It is one of the principles that decentralized finance is based upon. DeFi, as it was initially envisioned, is supposed to provide users with the ability to interact directly with each other thanks to decentralized technology that eliminates the need for third-party control. Faith can be placed in the blockchain systems that are secure and enable users to engage in financial transactions as they wish without having to trust all-too-corruptible humans.

That is at least how it is supposed to go. In reality, things are quite different. This is not to disparage many of the remarkable developments that have occurred thanks to blockchain technology. Individuals willing to take the plunge into the industry have at their fingertips more possibilities than are dreamt of in traditional banking philosophies, which by and large profit off the individual and throw them back bread crumbs as a reward.

There is no disputing the great promise of the industry. What is uncertain, and what needs to be addressed, is whether blockchain technology is making good on that promise, specifically when it comes to security and trustlessness.

Chances are you have heard about the Poly Network hack that happened a couple of weeks ago. Poly Network is a decentralized platform that facilitates peer-to-peer cryptocurrency transactions between users across different blockchains. The hacker took the equivalent of over $600 million worth of different cryptocurrencies. That makes it the biggest hack in cryptocurrency history. And yet, in what some have taken as a sign of the industry's strength, the hack hasn't been treated with the same significance that earlier hacks, for lesser sums, have.

This is partially due to the particulars of the case. The hacker responsible has reportedly returned all of the assets in question. Embedded in one of the final transactions is a note in which the hacker claims that their intentions were not to make a profit but rather to expose vulnerabilities and thereby make the network stronger:

MONEY MEANS LITTLE TO ME, SOME PEOPLE ARE PAID TO HACK, I WOULD RATHER PAY FOR THE FUN. I AM CONSIDERING TAKING THE BOUNTY AS A BOUNUS FOR PUBLIC HACKERS IF THEY CAN HACK THE POLY NETWORK IF THE POLY DONT GIVE THE IMAGINARY BOUNTY, AS EVERYBODY EXPECTS, I HAVE WELL ENOUGH BUDGET TO LET THE SHOW GO ON.

I TRUST SOME OF THEIR CODE, I WOULD PRAISE THE OVERALL DESIGN OF THE PROJECT, BUT I NEVER TRUST THE WHOLE POLY TEAM.

Bizarre, to say the least. If the hacker didn't intend to take the money for themself, then why take so much? Perhaps they wanted to draw as much attention to the situation as possible, knowing that if it was the biggest hack in crypto history, it would surely make headlines. But the repercussions of hacks of that magnitude in the past have led to downturns in the crypto market that have caused assets across the board to depreciate in value. It would be a very risky way to try to strengthen DeFi.

One also has to consider that the hacker's hands were tied. Many of the stolen funds had been identified on their respective blockchains, and exchanges like Binance had promised to freeze any of them that came within their purview. In addition to that, a significant portion of the stolen funds were in USDT. When Tether got wind of what had happened, they announced that they would be freezing the approximately $33 million of USDT that had been stolen, preventing the hacker from making any transfers with them.

Regardless of the hacker's intentions and the fact that the funds ended up getting returned, there are two issues here that bring us back to where we started. The first is the issue of security. About $1 billion has been stolen from DeFi projects this year alone. That is a staggering figure, and it indicates that there are serious issues with the security of blockchain platforms. Hand in hand with that is the issue of trust.

In this case Tether froze the stolen funds, preventing the hacker from transacting with them. While the intentions and outcome here were both good, this kind of power completely flies in the face of the decentralized ethos. There should not be a third party that can act like a traditional bank with complete control over assets that are being exchanged among users. The whole idea behind decentralized finance was to do away with institutions like that.

The problem is that the systems in place are not trustable enough yet. The industry is still young, so from a certain perspective, it is to be expected that there would be growing pains and vulnerabilities. But $1 billion in stolen assets is much more than can be explained away by an industry that is still coming into its own. The unpleasant truth here is that by and large the DeFi industry is falling short of what it promises.

This is where projects like GBC.AI come into the picture. GBC.AI is a company that has been working to apply the benefits of AI and machine learning to the blockchain sector. The project has developed what they call blockchain guardians, AI technology that optimizes blockchain operations while also taking a preemptive approach to chain security.

Once a blockchain is launched, it is very difficult to go back and alter it or improve it. While there are benefits to immutability, there are also downsides. Take the Poly Network case. Once a flaw in the network has been detected and exploited, given that the blockchain is already in operation, there is a substantial risk that the entire chain could collapse under the pressure of further attacks.

What GBC.AI is striving to do is to make blockchains adaptable and dynamic. With a blockchain guardian connected to a network, the AI can assess potential risks before they appear and greatly reduce any threat of dropped transactions. Rather than dealing with a static network, attackers will have to deal with blockchains that are constantly reacting and adapting to internal and external circumstances. As GBC.AI has proven with their work on the Solana blockchain, this kind of arrangement not only bolsters security, but it significantly improves chain functionality.
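
The article does not describe how the blockchain guardians work internally. Purely as a sketch of how an AI monitor could flag risky activity before it cascades, one simple approach is unsupervised anomaly detection over per-block transaction metrics; the features and values below are invented for illustration and are not GBC.AI's actual method:

    # Illustrative sketch only: unsupervised anomaly detection over per-block metrics.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Features per block: tx count, total value moved, unique senders, avg gas used.
    rng = np.random.default_rng(0)
    normal_blocks = rng.normal(loc=[200, 5e3, 150, 21e3],
                               scale=[20, 500, 15, 2e3], size=(1000, 4))
    detector = IsolationForest(contamination=0.01).fit(normal_blocks)

    suspicious_block = np.array([[950, 6e8, 3, 90e3]])   # huge value moved by few senders
    print(detector.predict(suspicious_block))            # -1 means flagged as anomalous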

What is key about this is that it complies with the decentralized, trustless philosophy. By introducing AI and machine learning into the equation, users will not have to place their trust in third parties, like the teams that create and operate exchanges and pseudo-banks like Tether, when they want to participate in decentralized finance.

While projects like GBC.AI and others working to bring AI and machine learning to the blockchain space are still relatively new, given the gravity of the security issues, it should be only a matter of time before this becomes a major feature of blockchain development. For a long time, people have wondered how the two most significant sectors of technology, AI and blockchain, could operate in conjunction. Circumstances have come together in such a way as to make DeFi the space in which that combination is necessary. The future of the industry could very well depend on it.

View post:
In the Face of Rising Security Issues and Hacks, GBC.AI Makes the Case for Machine Learning and AI Integration - TechBullion

Read More..

What would it be like to be a conscious AI? We might never know. – MIT Technology Review

Humans are active listeners; we create meaning where there is none, or none intended. "It is not that the octopus's utterances make sense, but rather that the islander can make sense of them," Bender says.

For all their sophistication, today's AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans, who have minds, choose to interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse's brain.

And yet, we know that brains can produce what we understand to be consciousness. If we can eventually figure out how brains do it, and reproduce that mechanism in an artificial device, then surely a conscious machine might be possible?

When I was trying to imagine Robert's world in the opening to this essay, I found myself drawn to the question of what consciousness means to me. My conception of a conscious machine was undeniably, perhaps unavoidably, human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI?

It's probably hubristic to think so. The project of building intelligent machines is biased toward human intelligence. But the animal world is filled with a vast range of possible alternatives, from birds to bees to cephalopods.

A few hundred years ago the accepted view, pushed by René Descartes, was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Few think that today: if we are conscious, then there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line around mammals? Birds appear to reflect when they solve puzzles. Most animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective consciousness.

But how can we truly picture what that must feel like? As the philosopher Thomas Nagel noted, it must be like something to be a bat, but what that is we cannot even imagine, because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that's still not what it must be like for a bat, with its bat mind.

Originally posted here:
What would it be like to be a conscious AI? We might never know. - MIT Technology Review

Read More..

U.S. households and small businesses have stockpiled a mind-blowing record cash pile of almost $17 trillion – MarketWatch

U.S. households and small businesses have stockpiled a record cash pile of almost $17 trillion a mind-boggling estimate that exceeds the $16 trillion in fiscal action undertaken by governments around the world to keep the global economy afloat during the pandemic.

That domestic cash hoard has grown exponentially since February 2020 due to three factors: direct government stimulus payments to individuals, shutdown-induced savings from Americans working from home, and small-business decisions to hold onto grants or loans, according to Jim Vogel, a Memphis-based manager at fixed-income dealer FHN Financial, which tracks cash flows.

The magnitude of the cash positions being held is surprising considering the tendency of households and businesses to tap their savings during each of the two or three recessions prior to the pandemic era. After the coronavirus pandemic triggered a deep two-month U.S. recession starting in February 2020, what is different this time around is that savings have soared despite the economy reopening. Two reasons have been offered for this: small businesses look to be focused on rebuilding inventories to brace for pent-up demand, while individuals are opting not to spend money on even the more restricted services and experiences that have now become the norm.

"It's a sign of an unusual economy in which an awful lot of people are making money or have money, but are not spending it," Vogel said in a phone interview on Wednesday. "There are two sides of this coin: A lot of people are doing well, while some people who depend on that spending are not. And the longer the imbalances last, the longer they take to work back down."

FHN Financial's roughly $17 trillion estimate surpasses the $16 trillion figure that the International Monetary Fund estimated in July as the amount of fiscal action taken by governments worldwide to prevent economic collapse during the pandemic.

Vogel said his firm reached its almost $17 trillion estimate by taking the Federal Reserve's most recent money-supply data, released on Tuesday, and stripping out the estimated level of demand deposits from corporations and institutional money-market accounts. The nearly $17 trillion figure has grown by about $250 billion over the past three months, he says, in a trend that's upended his expectations for declines in the current quarter. In February 2020, it stood at less than $12 trillion.

Most remarkably, the almost $17 trillion represents money that hasn't been deployed into the U.S. stock market just yet, even as the benchmark indexes SPX and DJIA move further into record territory. In addition to representing spare cash that could still come into equities, the money is acting as a barbell, allowing investors already in that market to avoid selling off by much, according to Vogel.

"The money is acting as a zero-risk anchor, and there's a reduced need to sell. It's also why the pattern of buying on the dips has worked so well," he says. "When an outside shock knocks the economy on its heels, the length of time people hold onto cash is surprisingly long."

Here is the original post:
U.S. households and small businesses have stockpiled a mind-blowing record cash pile of almost $17 trillion - MarketWatch

Read More..

Google Health Disbanded; Staff Sent To Other Divisions – Silicon UK

The dedicated health unit at Google has been disbanded, with its 570 staff sent to different divisions within the Alphabet empire

Alphabet is reportedly disbanding its unified Google Health division, and instead will adopt a more distributed approach to developing health-related products.

This is according to Business Insider, which claimed to have seen a leaked memo on the matter. The 570 staff at Google Health are apparently being transferred to other teams.

The head of Google Health, David Feinberg, has also left the division and has joined US IT health services provider Cerner as CEO and President.

The fate of the remaining Google Health personnel has been revealed in a tweet by Jeff Dean, Google's AI head, with an undisclosed number moving to other product areas.

"As we've broadened our work in health across Google (Search, Cloud, YouTube, Fitbit, ...), we have decided to move some @GoogleHealth teams closer to product areas to help with execution while nurturing some earlier stage products and research efforts," he tweeted.

Google Health had reportedly been founded in 2018 as a way to consolidate Google's fractured efforts in multiple healthcare areas under a single division.

However, the unit has reportedly undergone some restructuring since that time.

Google will apparently remain invested in its existing health focused projects, but there will no longer be a single entity at the tech giant focused on health projects.

Google, it should be remembered, has had its fingers in a number of healthcare-related projects over the years, including Android fitness apps, medical study apps, and sleep-tracking features for its Nest Hub.

In 2019 Google officially swallowed DeepMind Health and its team into its new health division.

Google had announced in November 2018 that it would transfer control of DeepMind to a new Google Health division in California, as part of its efforts to commercialise its medical research efforts.

DeepMind had been acquired by Google for £400 million in 2014.

The firm has had its moments in the spotlight, most notably in 2017 when a war of words erupted between DeepMind and the authors of an academic paper, which fiercely criticised an NHS patient data-sharing deal.

Besides Deepmind and Android apps, Google has also been involved in other health related projects.

Perhaps the most notable was in 2014, when Google and Swiss pharmaceutical giant Novartis agreed to develop smart contact lenses, designed to help people with diabetes track their blood glucose levels.

And in 2019 Google's then London-based DeepMind artificial intelligence unit created a working prototype of what would be its first commercial medical device, the result of the unit's three-year collaboration with Moorfields Eye Hospital.

DeepMind performed a retinal scan and real-time diagnosis on a patient who had agreed to be examined publicly.

The scan was analysed by DeepMind's algorithms in Google's cloud, which provided an urgency score and a detailed analysis in about 30 seconds.

Read the rest here:
Google Health Disbanded; Staff Sent To Other Divisions - Silicon UK

Read More..