The problem with artificial intelligence? It's neither artificial nor intelligent – The Guardian

Opinion

Let's retire this hackneyed term: while ChatGPT is good at pattern-matching, the human mind does so much more

Thu 30 Mar 2023 10.55 EDT

Elon Musk and Apple's co-founder Steve Wozniak have recently signed a letter calling for a six-month moratorium on the development of AI systems. The goal is to give society time to adapt to what the signatories describe as an "AI summer", which they believe will ultimately benefit humanity, as long as the right guardrails are put in place. These guardrails include rigorously audited safety protocols.

It is a laudable goal, but there is an even better way to spend these six months: retiring the hackneyed label of "artificial intelligence" from public debate. The term belongs to the same scrapheap of history that includes "iron curtain", "domino theory" and "Sputnik moment". It survived the end of the cold war because of its allure for science fiction enthusiasts and investors. We can afford to hurt their feelings.

In reality, what we call artificial intelligence today is neither artificial nor intelligent. The early AI systems were heavily dominated by rules and programs, so some talk of artificiality was at least justified. But those of today, including everyone's favourite, ChatGPT, draw their strength from the work of real humans: artists, musicians, programmers and writers whose creative and professional output is now appropriated in the name of saving civilisation. At best, this is "non-artificial intelligence".

As for the intelligence part, the cold war imperatives that funded much of the early work in AI left a heavy imprint on how we understand it. We are talking about the kind of intelligence that would come in handy in a battle. For example, modern AI's strength lies in pattern-matching. It's hardly surprising, given that one of the first military uses of neural networks (the technology behind ChatGPT) was to spot ships in aerial photographs.

However, many critics have pointed out that intelligence is not just about pattern-matching. Equally important is the ability to draw generalisations. Marcel Duchamp's 1917 work of art, Fountain, is a prime example of this. Before Duchamp's piece, a urinal was just a urinal. But, with a change of perspective, Duchamp turned it into a work of art. At that moment, he was generalising about art.

When we generalise, emotion overrides the entrenched and seemingly rational classifications of ideas and everyday objects. It suspends the usual, nearly machinic operations of pattern-matching. Not the kind of thing you want to do in the middle of a war.

Human intelligence is not one-dimensional. It rests on what the 20th-century Chilean psychoanalyst Ignacio Matte Blanco called bi-logic: a fusion of the static and timeless logic of formal reasoning and the contextual and highly dynamic logic of emotion. The former searches for differences; the latter is quick to erase them. Marcel Duchamp's mind knew that the urinal belonged in a bathroom; his heart didn't. Bi-logic explains how we regroup mundane things in novel and insightful ways. We all do this, not just Duchamp.

AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present and the future; of history, injury or nostalgia. Without that, there's no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in the singular formal logic. So there goes the intelligence part.

ChatGPT has its uses. It is a prediction engine that can also moonlight as an encyclopedia. When asked what the bottle rack, the snow shovel and the urinal have in common, it correctly answered that they are all everyday objects that Duchamp turned into art.

But when asked which of today's objects Duchamp would turn into art, it suggested: smartphones, electronic scooters and face masks. There is no hint of any genuine intelligence here. It's a well-run but predictable statistical machine.

The danger of continuing to use the term artificial intelligence is that it risks convincing us that the world runs on a singular logic: that of highly cognitive, cold-blooded rationalism. Many in Silicon Valley already believe that and they are busy rebuilding the world informed by that belief.

But the reason why tools like ChatGPT can do anything even remotely creative is because their training sets were produced by actually existing humans, with their complex emotions, anxieties and all. If we want such creativity to persist, we should also be funding the production of art, fiction and history, not just data centres and machine learning.

That's not at all where things point now. The ultimate risk of not retiring terms such as artificial intelligence is that they will render the creative work of intelligence invisible, while making the world more predictable and dumb.

So, instead of spending six months auditing the algorithms while we wait for the AI summer, we might as well go and reread Shakespeare's A Midsummer Night's Dream. That will do so much more to increase the intelligence in our world.

Evgeny Morozov is the author of several books on technology and politics. His podcast The Santiago Boys, about the tech vision of former Chilean president Salvador Allende, is out this summer



Here's how the crypto industry is using artificial intelligence – Cointelegraph

The use of artificial intelligence (AI) in crypto, though still in its infant stages, is showing prospects for growth. According to a report from Fortune Business Insights, the blockchain AI market is projected to grow from $220.5 million in 2020 to $973.6 million in 2027 at a compound annual growth rate of 23.6%.
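The report's projection can be sanity-checked with the standard compound-annual-growth-rate formula. A quick illustrative calculation (the dollar figures are taken from the Fortune Business Insights report quoted above; the helper function is just a sketch):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Blockchain AI market: $220.5M in 2020, projected to reach $973.6M in 2027.
rate = cagr(220.5, 973.6, 2027 - 2020)
print(f"Implied CAGR: {rate:.1%}")  # roughly 23.6%, matching the report
```

The implied rate over the seven-year span comes out at about 23.6%, consistent with the figure the report states.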

Despite Tesla CEO Elon Musk and other prominent tech moguls penning an open letter calling on AI companies to suspend large-scale AI development temporarily, the crypto industry is rife with AI projects. Here are some examples of crypto AI projects that have recently emerged in the community:

Blocktrace is a service provider specializing in blockchain forensics and analysis bolstered by AI technology. Its AI chatbot is designed to simplify the process of tracking blockchain transactions. Based in Austin, Blocktrace aims to leverage artificial intelligence to expedite the blockchain analysis process and facilitate the identification of trends and anomalies.

The company developed an AI chatbot called Robby the Robot, named after the iconic character from the science-fiction film Forbidden Planet, to interact with data on the Bitcoin blockchain.

SingularityNET is a decentralized AI marketplace that uses blockchain technology to provide a platform for AI developers to share and monetize their algorithms. It enables the creation of AI-powered decentralized applications (DApps) that can be used in various industries, including finance, healthcare and transportation.

Launched via an initial exchange offering (IEO) on Binance, Fetch.ai is a decentralized platform that uses AI and machine learning algorithms to create autonomous economic agents. Using Fetch.ai (FET) tokens, users can build and deploy their own digital twins on the network.

By paying with tokens, developers can access machine-learning-based utilities to train autonomous digital twins and deploy collective intelligence on the network. It helps users perform various tasks, such as data analysis, prediction markets and supply chain management. It aims to create an efficient and autonomous digital economy.

Artificial Liquid Intelligence is a decentralized platform employing AI and blockchain technology to establish a data marketplace. It allows data owners to monetize their data while retaining authority over privacy and security. The AI protocol functions with the aid of the Artificial Liquid Intelligence (ALI) utility token.

The decentralized cloud computing platform, iExec, uses AI and blockchain technology to provide a secure platform for running DApps that require high computing power. It enables developers to monetize their computing resources and provides an alternative to traditional cloud computing services.

Related: The government should fear AI, not crypto: Galaxy Digital CEO

The projects mentioned above exhibit a variety of crypto-based applications that utilize AI, although they are still in their early phases. These applications range from decentralized marketplaces and data exchanges to self-governing economic agents and cloud computing platforms.

Magazine: All rise for the robot judge: AI and blockchain could transform the courtroom


This gung-ho government says we have nothing to fear from AI. Are you scared yet? | Gaby Hinsliff – The Guardian

It's almost 20 years now since a socially awkward young computer science student set up a website for rating hot women.

Facemash, as Mark Zuckerberg called his creation, was shut down within days. But this crass teenage experiment was still, in retrospect, the first faltering step down a road to something even he couldn't possibly have foreseen at the time: a social media phenomenon now accused of unwittingly helping to polarise society, destabilise the democratic process, fuel hate speech and disseminate dangerous conspiracy theories around the globe, despite what providers insist have been their best attempts to stamp out the fire.

We couldn't have predicted then, and arguably still don't properly understand now, what impact Facebook or Twitter or Instagram or TikTok have had on teenage mental health. We couldn't have anticipated how life online would change our sense of self, blurring the line between private life and public content; didn't grasp until too late how algorithms developed to drive social media consumption would shape what we read or hear, and consequently how we think or feel. But if we couldn't have accurately predicted that from the start, with hindsight, there were surely moments along the road when the penny should have dropped.

Had governments not allowed the tech giants to race so far ahead of regulation, they might have saved themselves years of clearing up the resulting mess. But blinded by the riches the industry generated, and diverted by the pleasure its products have undoubtedly given along the way, we all missed the moment. The fear is that we're about to do the same with something infinitely more powerful and unpredictable.

Artificial intelligence is arguably both the most exciting thing that has happened to humankind in generations (key to magical, transformative breakthroughs in everything from medicine to productivity) and the most frightening, given its potential to upend the existing social and economic order at breakneck speed.

This week some of the world's leading AI experts called for a six-month pause on training the next wave of systems more powerful than the now famous ChatGPT-4 chatbot (which has demonstrated an uncanny ability to communicate like a human) in order to better understand the implications for humanity. They warn of an "out-of-control race" to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control.

Shortly afterwards the British government published a white paper arguing that, on the contrary, Britain has only a brief window of around a year to get ahead in that race, and should adopt only the lightest of regulatory touches for fear of strangling the golden goose.

The UK won't have a new expert regulator governing what some think could become an extinction-level threat to humanity; instead, ministers will empower a bunch of overworked existing regulators to do what you might have hoped they were already doing, and scrutinise AI's impact on their sectors using a set of guiding principles that may be backed up at some unspecified point by legislation.

The whole thing smacks of a government desperate for economic growth at all costs and perhaps also for something resembling a Brexit bonus; if the EU treads its usual cautious regulatory path, Britain will position itself as the comparatively unfettered, gung-ho home of the AI pioneer.

The white paper mentions the jobs AI will undoubtedly create but skates over the ones it will eliminate and the social unrest that could follow. (Think of what the decline of coal, steel and manufacturing did to rust belt towns across Europe and the US, and how that fuelled the rise of populism; now imagine AI replacing a quarter of all work tasks worldwide, as predicted in a report by Rishi Sunak's old employer Goldman Sachs this week.)

Ministers stress the extraordinary breakthroughs possible in healthcare. But they have less to say about new forms of fraud or mass disinformation that could be perpetrated using AI tools capable of communicating convincingly like a human, or about how autonomous weaponry could be exploited by terrorists or rogue states. They don't talk nearly enough about what new rights humans might need to live alongside AI, including perhaps the legal right to know when an algorithm rather than a person was employed to sift our job application, refuse us a mortgage, fake what looks like an entirely authentic image or craft a flirty response on a dating app (yes, there's an AI application for that).

The risk of AI becoming sentient, or developing human feelings, remains relatively distant. But anyone who has ever got enraged by Twitter knows we're already way past the point of algorithmic systems affecting humans' feelings towards each other. Michelle Donelan, the new cabinet secretary responsible for tech, breezily assured the Sun this week that, nonetheless, AI wasn't something we should fear; the government had it all in hand. Feeling reassured? Me neither.

A global moratorium on AI development sadly seems unlikely, given we haven't managed that kind of worldwide cooperation even against the existential threat from the climate crisis. But there has to be some way of avoiding what happened with social media: an initial free-for-all that made billions, followed eventually by an angry backlash and a doomed attempt to stuff genies back into bottles. Artificial intelligence develops, in part, by learning from its mistakes. Is it too much to ask that humans do the same?


Some human thoughts on artificial intelligence – GIM International

As AI increasingly becomes a vital element in today's mapping solutions, Wim van Wegen ponders its transformative power for our industry, and the prospect of it evolving into artificial consciousness in the future.

Artificial intelligence (AI) has attracted significant attention in our profession in recent years. And it's easy to understand why the role of AI in the geospatial industry is discussed so frequently at conferences, trade shows and in publications like GIM International; it has the potential to transform the way we gather, process and analyse spatial data. In fact, it has already become a vital element in many mapping solutions. But as I sit down to write this editorial column, I can't help but ponder the deeper philosophical questions that arise in this context.

Let's start with the basics: What is intelligence? It is a complex concept, but it can be broadly defined as the capacity to learn, understand, solve problems and adapt to new situations. In humans, intelligence includes abilities such as language comprehension, memory, spatial reasoning and decision-making. It would be very arrogant to think that humans have a monopoly on intelligence, of course! Indeed, many species of animals, birds, fish and insects are often more intelligent than we give them credit for. Swarm intelligence is a well-known natural phenomenon, for instance. In another example, some crows have learned to place nuts in front of the tyres of stationary cars at traffic lights, so that they are cracked open for them as the cars move off. And creatures such as raccoons are known for having a good memory and strong problem-solving abilities, as demonstrated in various scientific experiments.

So how does AI differ from natural intelligence? AI actually aims to imitate human intelligence, gathering and processing information, identifying patterns and learning from experience to improve its own performance over time. The idea behind AI technology is to enable machines to think like humans so that they can function independently and perform tasks that would normally require human cognitive abilities. There seems little harm in this, providing that all this intelligence is used for good things and contributes to progress in positive ways. However, as we have seen in humans, while intelligence is a remarkable gift, it does not necessarily equate to good moral character. As conscious beings, we have the power to use our intelligence to engage in negative behaviours such as bullying and aggression, and to harm the planet. Meanwhile, animals (despite sometimes having advanced levels of intelligence) are often guided by instinct, meaning they have less conscious control over their actions.

This leads me to wonder how much control AI-driven solutions have over their own behaviour. And what if, at some point in the future, AI evolves into artificial consciousness (AC), so that machines possess self-awareness and sentience? While there's currently no clear view of when AC could become a reality, it's important to consider the implications. Could AI surpass human intelligence? Could machines become sentient beings? And if so, could we as humans lose control?

The future of AI holds many unknowns. But refocusing on our own profession for now, it is clear that AI is not just hype, but rather a transformative force that is fundamentally altering the nature of geospatial work in numerous ways, such as by enabling automatic object recognition in point clouds and facilitating advanced data analysis. The convergence of AI, big data and computing power has created the right circumstances for a technological revolution in the geospatial industry. I will leave it to the philosophers to debate the broader impact of AI on society and the future of the world. In the meantime, I will take a moment to truly appreciate the intelligent behaviour of the birds in my garden.



Artificial Intelligence: UNESCO calls on all Governments to … – UNESCO

UNESCO's Recommendation on the Ethics of Artificial Intelligence is the first global framework for the ethical use of artificial intelligence. It guides countries on how to maximize the benefits of AI and reduce the risks it entails. To this end, it contains values and principles, but also detailed policy recommendations in all relevant areas.

UNESCO is concerned by many of the ethical issues raised by these innovations, in particular discrimination and stereotyping, including the issue of gender inequality, but also the fight against disinformation, the right to privacy, the protection of personal data, and human and environmental rights.

Industry self-regulation is clearly not sufficient to avoid these ethical harms, which is why the Recommendation provides the tools to ensure that AI developments abide by the rule of law, avoiding harm, and ensuring that when harm is done, accountability and redressal mechanisms are at hand for those affected.

UNESCO's Recommendation places a Readiness Assessment tool at the core of its guidance to Member States. This tool enables countries to ascertain the competencies and skills required in the workforce to ensure robust regulation of the artificial intelligence sector. It also provides that states report regularly on their progress and their practices in the field of artificial intelligence, in particular by submitting a periodic report every four years.

To date, more than 40 countries in all regions of the world are already working with UNESCO to develop AI checks and balances at the national level, building on the Recommendation. UNESCO calls on all countries to join the movement it is leading to build an ethical AI. A progress report will be presented at the UNESCO Global Forum on the Ethics of Artificial Intelligence in Slovenia in December 2023.


There’s no such thing as artificial intelligence – The Japan Times

No one sells the future more masterfully than the tech industry.

According to its proponents, we will all live in the metaverse, build our financial infrastructure on web3 and power our lives with artificial intelligence. All three of these terms are mirages that have raked in billions of dollars, despite reality biting back.

Artificial intelligence in particular conjures the notion of thinking machines. But no machine can think and no software is truly intelligent. The phrase alone may be one of the most successful marketing terms of all time.



Is rapidly evolving artificial intelligence cause for concern? – Texas A&M The Battalion

With the rapid boom of artificial intelligence, or AI, in the last few months, universities and educational institutions have scrambled to figure out how to tackle academic dishonesty concerns, responsibly and ethically use and teach about AI, and learn more about the quickly evolving technology.

The reliability of the software, the potential pros and cons of artificial intelligence and the academic integrity concerns it raises have all quickly dominated academic discourse.

Nancy Vazquez, director of the Texas A&M University Writing Center, says the quick evolution of ChatGPT has been an important topic of both formal and informal conversation among faculty and staff. Faculty are concerned AI may enable students to engage in dishonest academic behavior.

"One of the concerns is that when you get a university degree, you assume that person is capable of doing certain things, they have certain skills and knowledge, but, potentially, AI clouds that," Vazquez said.

Even with tools such as Grammarly and Microsoft auto-predicting text, Vazquez said she always recommends that consultants at the writing center make sure students check that their professors are okay with it.

"Students thinking about using any type of AI for classes or an application need to check with the authorities, whether that is an instructor or a program they're applying for ... What are the policies?" Vazquez said.

Vazquez says there is a range of worries with ChatGPT, including the legality of who owns the text that AI generates. Additionally, Vazquez said librarians have spent hours looking for sources generated by ChatGPT that don't actually exist. Early versions of ChatGPT offered premium versions after a limited free trial, which also raises questions of access, as some students might have the use of tools that other students can't afford, according to Vazquez.

"I also think about AI being useful for generating routine writing that people do, or even [planning] out your writing, or a writing schedule ... but also creating a renaissance for the personality, human characteristics and creativity that we bring," Vazquez said.

Currently, the Writing Center makes reference to AI under the plagiarism statement in the course policies section of syllabi tailored toward professors who want to mention AI in their class.

"According to the [A&M] definitions of academic misconduct, plagiarism is the appropriation of another person's ideas, processes, results or words without giving appropriate credit," Vazquez said. "You should credit your use of anyone else's words, graphic images, or ideas using standard citation styles. AI text generators such as ChatGPT should not be used for any work for this class without explicit permission of the instructor and appropriate attribution."

Philosophy professor Kenny Easwaran, Ph.D., said AI mimics the neural network of the brain and is able to deal with language by associating and connecting information. However, it has only a limited memory: GPT-4 can look back a few pages, compared with only paragraphs for GPT-2.

"This is an associationist and connectionist neural network-based system, and this is one of the main criticisms: that this is not going to get us real artificial intelligence," Easwaran said. "It looks really good, but that is just because it can babble convincingly."

The associationist and connectionist thinking that artificial intelligence is able to do rather quickly is also something that humans do, Easwaran said. However, symbolic reasoning, which is often slower and used for things like math, is harder to replicate, according to Easwaran.

"One of the things that we try to develop in higher education is to get people to use this slow and effortful symbolic reasoning," Easwaran said.

Easwaran said because ChatGPT pulls from information that people have given it, it writes based on recognized patterns and memorized information. However, it is not as good at understanding arguments or why premises either support or don't support a conclusion, Easwaran said.

"It can do certain things, like write a recommendation letter or a memo, but if you ask for something like original intellectual writing, it can't do that," Easwaran said.

Easwaran said he suspects that, in the same way people use spreadsheets electronically now and nobody writes by hand anymore, in five to 10 years AI is going to be a more normalized part of writing papers. Though he said he doesn't know exactly how it will evolve in the future, it is important for students to understand that, in the same way that Wikipedia can be a good source of information, ChatGPT is not perfect and does have errors.


DoD Chief Digital and Artificial Intelligence Office Launches Hack … – Department of Defense

The Chief Digital and Artificial Intelligence Office (CDAO) Directorate for Digital Services (DDS) has launched a website (www.hackthepentagon.mil) to accompany their long-running program: Hack the Pentagon (HtP).

DDS launched HtP in 2016, using bug bounties as an innovative way to secure critical Department of Defense (DoD) systems and assets. HtP invites vetted, independent security researchers, known as "ethical hackers", to discover, investigate, and report vulnerabilities, which DoD can then remediate. DDS built the HtP website as a resource for Department of Defense organizations, vendors, and security researchers to learn how to conduct a bug bounty, partner with the CDAO DDS team to support bug bounties, and participate in DoD-wide bug bounties.

"With the HtP website launch, CDAO is scaling a long running program, which historically offered services on a project-by-project basis, by offering the Department better access to lessons learned and best practices for hosting bug bounties," said Dr. Craig Martell, Chief Digital and Artificial Intelligence Officer. "The website helps equip DoD to run continuous bug bounties as part of a larger comprehensive cybersecurity strategy."

While the website is primarily an educational tool for DoD organizations to use as a foundational step before launching a bug bounty, it also is a platform to engage and recruit technical talent.

"Through Hack the Pentagon, we're building a global talent pipeline for cybersecurity experts to contribute to our national defense outside of traditional government career paths," said Jinyoung Englund, Acting Director, CDAO DDS.

Since HtP's initial launch in 2016, DDS has run 40+ bug bounties with over 1,400 ethical hackers who have collectively flagged 2,100+ vulnerabilities for remediation. DDS became part of the CDAO organization in June 2022.

DDS is a highly experienced team of software and data engineers, data scientists, product managers and user research designers with a track record of delivering immediately usable products in record time within the Chief Digital and Artificial Intelligence Office. For more information on DDS, visit dds.mil.

For security researchers who have a vulnerability to report, please visit DoD's Vulnerability Disclosure Program (VDP): https://www.dc3.mil/Missions/Vulnerability-Disclosure/Vulnerability-Disclosure-Program-VDP/


Computer Vision Vs Artificial Intelligence. What Is the Difference? – Analytics Insight

Are AI and computer vision two different domains, or just two sides of the same coin?

Computer vision is a branch of artificial intelligence (AI) that enables computers and systems to extract useful information from digital photos, videos, and other visual inputs and to execute actions or make recommendations based on that information. If AI gives computers the ability to think, computer vision gives them the ability to see, observe, and comprehend.

Human vision has an advantage over computer vision in that it has been around longer. With a lifetime of context, human sight has the advantage of learning how to distinguish between things, determine their distance from the viewer, determine whether they are moving, and determine whether an image is correct. Using cameras, data, and algorithms instead of retinas, optic nerves, and the visual cortex, computer vision teaches computers to execute similar tasks in much less time. A system trained to inspect items or monitor a production asset can swiftly outperform humans, since it can examine thousands of products or processes per minute while spotting imperceptible flaws or problems.

Energy, utilities, manufacturing, and the automobile industries all use computer vision, and the market is still expanding.

A lot of data is required for computer vision. The system repeatedly analyzes the data until it can distinguish between things and recognize images. For instance, a computer needs to be fed a huge number of tire photos and tire-related images, especially of tires without any flaws, to be trained to detect automotive tires. This is done using two key technologies: deep learning, a type of machine learning, and convolutional neural networks (CNNs). With the use of algorithmic models, a computer can learn how to understand the context of visual input. If enough data is fed through the model, the computer will look at the data and teach itself to distinguish between different images. Instead of needing to be programmed to recognize an image, algorithms allow the machine to learn on its own.
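The learn-from-examples idea in the paragraph above can be illustrated with a deliberately tiny sketch (pure NumPy, synthetic toy data; a real vision system would use a deep network trained on vast image sets, not a nearest-centroid rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 8x8 arrays. Class 0 samples are dark, class 1 samples bright.
dark = rng.uniform(0.0, 0.4, size=(50, 8, 8))
bright = rng.uniform(0.6, 1.0, size=(50, 8, 8))

# "Training" here is just summarising the examples: the mean image per class.
# No explicit rule such as "bright means class 1" is ever programmed.
centroids = {0: dark.mean(axis=0), 1: bright.mean(axis=0)}

def classify(image):
    """Assign the class whose mean image is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))

new_image = rng.uniform(0.6, 1.0, size=(8, 8))  # an unseen bright image
print(classify(new_image))  # 1
```

The point is the workflow, not the model: the classifier's behavior comes entirely from the example data it was shown.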

By dissecting images into pixels with labels or tags, a CNN aids a machine learning or deep learning model's ability to see. It makes predictions about what it is seeing by performing convolutions on the labelled pixels (a convolution is a mathematical operation on two functions that produces a third function). The neural network runs convolutions and checks the accuracy of its predictions repeatedly until the predictions start to come true; at that point, it is recognizing images in a way similar to how people do. Just as a human would perceive a picture from a distance, a CNN first recognizes sharp contours and basic forms, then adds details as it iteratively tests its predictions. A CNN is used to comprehend individual images. Similarly, recurrent neural networks (RNNs) are employed in video applications to help computers understand the relationships between the images in a sequence of frames. Here are some applications of computer vision:

Image classification assigns an image, such as a dog, an apple, or a person's face, to a class. More precisely, it predicts which class a given image most likely belongs to. A social media company might use it, for example, to automatically identify and filter out offensive images uploaded by users.

Object detection can use image classification to identify a particular class of object and then detect and count its appearances in an image or video. Examples include detecting damage on an assembly line or identifying machinery that requires maintenance.
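Object detectors are typically scored by how well a predicted bounding box overlaps the true one, using intersection-over-union (IoU). A minimal sketch with `(x1, y1, x2, y2)` boxes and illustrative coordinates:

```python
# Intersection-over-union (IoU): the standard overlap score between a
# predicted bounding box and the ground-truth box. Boxes are (x1, y1, x2, y2).

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # overlap 25, union 175 -> ~0.143
```

A detection is commonly counted as correct when its IoU with the ground truth exceeds a threshold such as 0.5, which is how "detecting damage" or "locating equipment" is evaluated in practice.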

Object tracking follows an object once it has been detected. It is usually performed on a live video stream or a series of sequentially captured frames. Autonomous vehicles, for instance, must not only classify and detect objects such as pedestrians, other vehicles, and road infrastructure, but also track them as they move, in order to avoid collisions and obey traffic regulations.
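The detect-then-follow loop can be sketched as tracking-by-detection: match each detection in a new frame to the nearest previously tracked centroid, or open a new track if nothing is close enough. Real trackers add motion models and appearance features; the frames and threshold below are made up:

```python
# Minimal tracking-by-detection sketch: each detection in the new frame
# claims the nearest unclaimed existing track, or starts a new one.

def update_tracks(tracks, detections, max_dist=50.0):
    """tracks: {track_id: (x, y)}; detections: list of (x, y) centroids."""
    next_id = max(tracks, default=-1) + 1
    updated = {}
    for det in detections:
        # Nearest existing track not yet claimed this frame (Manhattan dist).
        candidates = [(abs(det[0] - x) + abs(det[1] - y), tid)
                      for tid, (x, y) in tracks.items() if tid not in updated]
        if candidates and min(candidates)[0] <= max_dist:
            tid = min(candidates)[1]
        else:
            tid, next_id = next_id, next_id + 1
        updated[tid] = det
    return updated

# One object drifts right across three frames; a second appears in frame 3.
tracks = {}
for frame in [[(100, 100)], [(104, 102)], [(108, 104), (300, 50)]]:
    tracks = update_tracks(tracks, frame)
print(tracks)  # -> {0: (108, 104), 1: (300, 50)}
```

The point of the sketch is the data association step: classification and detection name what is in each frame, while tracking links those detections into persistent identities over time.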

Content-based image retrieval uses computer vision to browse, search for, and retrieve images from large data repositories based on the images' content rather than the metadata tags attached to them. This task can incorporate automatic image annotation in place of manual image tagging. It can be used in digital asset management systems to improve the accuracy of search and retrieval.
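Content-based retrieval can be sketched as nearest-neighbor search over feature vectors: represent each stored image as a vector (in practice, an embedding produced by a CNN) and rank stored images by cosine similarity to the query. The file names and vectors below are made up:

```python
# Content-based image retrieval, sketched: rank stored images by the
# cosine similarity of their feature vectors to the query vector.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(index, query, top_k=2):
    """index: {image_name: feature_vector} -> top_k most similar names."""
    ranked = sorted(index, key=lambda name: cosine(index[name], query),
                    reverse=True)
    return ranked[:top_k]

index = {
    "sunset.jpg": [0.9, 0.1, 0.0],
    "beach.jpg":  [0.8, 0.2, 0.1],
    "cat.jpg":    [0.0, 0.1, 0.9],
}
print(retrieve(index, [1.0, 0.0, 0.0]))  # -> ['sunset.jpg', 'beach.jpg']
```

Because the ranking depends only on the vectors, no metadata tags are consulted, which is exactly the contrast the paragraph above draws with tag-based search.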

Visit link:
Computer Vision Vs Artificial Intelligence. What Is the Difference? - Analytics Insight


Privacy Tip #359 Privacy Concerns with Artificial Intelligence … – JD Supra

As artificial intelligence, or AI, becomes more of a household word, it is worth pointing out not only how cool it can be, but also how some of its uses raise privacy concerns.

The rapid growth of technological capability often outpaces our ability to understand its long-term implications for society. Decades later, we find ourselves looking back and wishing that certain technologies had been developed in a more measured and controlled way to mitigate risk. The massive explosion of smartphones and social media is a case in point: studies today show clear negative consequences from the widespread use of certain technologies.

AI is still at an early stage of development, even though it has been in the works for years. It is not yet widely used by individuals, though it is clear we are on the cusp.

The privacy risks of AI are outlined in an article published in The Digital Speaker, "Privacy in the Age of AI: Risks, Challenges and Solutions." The author succinctly summarizes the concerns about privacy in the use of AI:

Privacy is crucial for a variety of reasons. For one, it protects individuals from harm, such as identity theft or fraud. It also helps to maintain individual autonomy and control over personal information, which is essential for personal dignity and respect. Furthermore, privacy allows individuals to maintain their personal and professional relationships without fear of surveillance or interference. Last, but not least, it protects our free will; if all our data is publicly available, toxic recommendation engines will be able to analyse our data and use it to manipulate individuals into making certain (buying) decisions.

In the context of AI, privacy is essential to ensure that AI systems are not used to manipulate individuals or discriminate against them based on their personal data. AI systems that rely on personal data to make decisions must be transparent and accountable to ensure that they are not making unfair or biased decisions.

The article lists the privacy concerns raised by the use of AI, including violations of one's privacy; bias and discrimination; job displacement; data abuse; the power of big tech over data; the collection and use of data by AI companies; and the use of AI in surveillance by private companies and law enforcement. The author's examples are eye-opening and worth a read. The article sets forth a cogent, broad, and thoughtful path forward for the development and use of AI.

The World Economic Forum published a paper last year (before ChatGPT was in most people's vocabulary) that also outlines some of the privacy concerns raised by the use of AI and explains why privacy must be built into the design of AI products. The article posits:

Massive databases might encompass a wide range of data, and one of the most pressing problems is that this data could be personally identifiable and sensitive. In reality, teaching algorithms to make decisions does not rely on knowing who the data relates to. Therefore, companies behind such products should focus on making their datasets private, with few, if any, ways to identify users in the source data, as well as creating measures to remove edge cases from their algorithms to avoid reverse-engineering and identification.

We have talked about the issue of reverse engineering, where bad actors discover vulnerabilities in AI models and discern potentially critical information from the model's outputs. Reverse engineering is why changing and improving databases and learning data is vital for AI use in cases facing this challenge.

As for the overall design of AI products and algorithms, de-coupling data from users via anonymization and aggregation is key for any business using user data to train their AI models.

AI systems need lots of data, and some top-rated online services and products could not work without personal data used to train their AI algorithms. Nevertheless, there are many ways to improve the acquisition, management, and use of data, including the algorithms themselves and the overall data management. Privacy-respecting AI depends on privacy-respecting companies.
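The de-coupling of data from users via anonymization and aggregation that the excerpt recommends can be sketched in two steps: replace direct identifiers with one-way pseudonyms before data enters a training pipeline, and release only aggregate statistics. The field names, records, and salt below are illustrative only; real deployments would add safeguards such as k-anonymity checks or differential privacy:

```python
# Sketch of de-coupling training data from users: salted-hash pseudonyms
# plus aggregate-only outputs. All records and names are made up.

import hashlib

SALT = b"rotate-me-and-keep-me-secret"  # illustrative; manage secrets properly

def pseudonymize(user_id):
    """One-way pseudonym: same input -> same token, but not reversible."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:12]

records = [
    {"user": "alice@example.com", "age": 34, "clicks": 12},
    {"user": "bob@example.com",   "age": 41, "clicks": 7},
]

# 1. Strip direct identifiers before the data reaches the training pipeline.
training_rows = [{"user": pseudonymize(r["user"]),
                  "age": r["age"], "clicks": r["clicks"]} for r in records]

# 2. Publish only aggregates, never row-level data.
avg_clicks = sum(r["clicks"] for r in records) / len(records)
print(avg_clicks)  # -> 9.5
```

Pseudonymization alone does not defeat the reverse-engineering attacks mentioned above (quasi-identifiers such as age can still single people out), which is why the excerpt also calls for removing edge cases and aggregating outputs.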

Both articles provide good background on the privacy concerns posed by the use of AI, along with solutions for its development and use that are worth considering as part of a more comprehensive approach to the future collection, use, and disclosure of big data. Hopefully we will learn from past mistakes, steering AI toward good purposes and minimizing its use for nefarious ones. Now is the time to develop a comprehensive strategy and work together to implement it. One way we can help is to stay abreast of the issues and concerns and use our voices to advocate for a comprehensive approach to the problem.


Visit link:
Privacy Tip #359 Privacy Concerns with Artificial Intelligence ... - JD Supra
