Library talk guides seniors through wonders and dangers of AI … – countylive.ca

AI image generated for this story by Craiyon.com (https://www.craiyon.com/). The request was for a photograph of a library tech teaching AI to a seniors group on Zoom.

By Sharon Harrison

A significant potential risk of artificial intelligence (AI) is that it outsmarts us, said Adam Cavanaugh, with the Picton branch of the Prince Edward County Public Library.

Adam Cavanaugh, PEC Public Library

He provided an introduction for County seniors curious to learn more about AI and ChatGPT in what are still the very early days of an advancing technology many have yet to explore.

While it's not as new as most like to think, having been around in some basic form for decades, AI is a subject that has grabbed headlines recently as the technology evolves. People are divided on whether AI is a good thing for us, for society, and for the world, with some experts and entrepreneurs advising caution, and even a pause in AI development, as it rapidly becomes a stronger presence in everyday lives, all with little or no regulation or controls in place.

Hosted by the Prince Edward County Community Care Association for Seniors, Wednesday's webinar presentation formed part of the organization's monthly active living programming for Prince Edward County seniors, aged 60-plus.

Cavanaugh noted the presentation only scratches the surface of a vast and somewhat controversial technological area. He focused on the history of AI and how intelligence is defined, and summarized some of the technological advancements (specifically ChatGPT).

In what was a fascinating delve into a relatively new and rapidly advancing technology, Cavanaugh attempted to simplify a complex and complicated area, also speaking about the future of AI, regulation, and speculative risk factors.

He began by defining what AI is, noting how variable the definitions can be by providing textbook examples.

What do we mean by AI: is it thinking humanly, or thinking rationally, or acting humanly, or acting rationally, and how would we define intelligence in the first place?

One example he gave is that AI could be defined by contrast with certain things.

You might say a snail is not as intelligent as an ape or a human, so, as long as it's not that, it's intelligent.

Cavanaugh also spoke to defining intelligence directly, noting a standard cognitive philosophy definition: X is intelligent if, and only if, X is capable of acquiring and holding true beliefs.

He further touched on common definitions, where he noted the standard definition of intelligence is just the ability to acquire and apply knowledge and skills.

So, by that common definition, if anything can develop its sense of smell to identify food sources, and form long-term memories to make sure it could access food, surely it would be somewhat intelligent, but that brings us back to the snail.

Even defining intelligence in the first place, he said, is no simple matter.

Experts disagree on what should constitute the intelligence in artificial intelligence. One example of a model is that it should be able to pass as a human, via the Turing test (named for Alan Turing, famous for his work on computation and AI), he explained. The test is whether a machine could simulate human communication, by chat or in some other manner; if it could trick the person on the other side into believing they were talking to a human, that would surely be sufficient to say it is intelligent.

Should it correspond to how humans empirically think? That would require a full theory of how our brains work.

He said cognitive scientists and neuroscientists would be the first to acknowledge that we don't quite understand all of the inner workings of the human brain.

Should it correspond to our ideas of intelligence, and which idea: philosophical, mathematical, economic? Then there are engineering problems: can we even build it based on the appropriate model of intelligence, or will we have to make compromises, and what kinds of compromises?

Noting some problems with modelling AI, he asked how intelligence will show itself.

Do we make AI perform tasks, communicate to us, predict the future?

Cavanaugh explained that there are different types of AI. Many people already have experience with a version of what is known as weak AI, such as Siri (the digital assistant on phones, tablets and computers), which is a weak AI because it lacks understanding and problem-solving ability.

Then there is this ideal form of AI, and this is what media is often talking about when they are expressing concerns about the future of AI, Cavanaugh said.

Known as artificial general intelligence, or strong AI, it is currently reserved for the domain of science fiction.

It has not yet been created and is not yet on the horizon; such an AI could independently learn to replicate any cognitive task possible for humans, and do so without human supervision.

Cavanaugh noted the first conception of AI was by Warren McCulloch and Walter Pitts in 1943, and explained that early AI capabilities began with general problem solving, i.e., solving puzzles in a human-like fashion.

He said some people might be familiar with early examples from IBM, which created some of the first AI programs.

Some of these general problem-solving programs were things like the geometry theorem prover, which was able to prove mathematical theorems that were very difficult for most students, he explained. Along the way, this disproved the theory that computers can only do exactly what they are told to do.

Moving on to ChatGPT, he explained that it is an AI natural language processing interface and acts like a chatbot where far-ranging requests can be made, and real-time responses received.

ChatGPT leverages large language models, as well as neural networks, to field requests from a large array of subject areas.

Cavanaugh used ChatGPT during the presentation to demonstrate the types of questions that can be asked (asking it why it had been in the news so much recently), and shared the real-time response, received in mere seconds.
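For readers curious what that kind of request looks like behind the scenes, here is a minimal sketch using OpenAI's Python library, one programmatic route to the same chatbot; the model name and the exact wording of the question are assumptions for illustration, not details from the presentation.

```python
# A minimal sketch of asking ChatGPT a question programmatically,
# assuming the official "openai" Python package (pip install openai)
# and an API key created at https://openai.com.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed chat-capable model
    messages=[
        {"role": "user",
         "content": "Why has ChatGPT been in the news so much recently?"},
    ],
)

# The reply comes back in seconds, much like the real-time response
# shared during the webinar.
print(response.choices[0].message.content)
```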

He then addressed the issue of whether regular folks should be using ChatGPT.

It seems likely that ChatGPT, or something like it, will become an important tool for accessing information, and refining online queries, he said. All we have to do is think about the impact of something like Google search on the early internet and see how reliant we are on it today.

He suggested it may be useful to become familiar with how ChatGPT works, something that can be done relatively easily by creating an account with an email address (https://openai.com); once an account is created, you can simply chat with the AI.

However, Cavanaugh did urge caution, especially given that it doesn't provide sources for its research, and the information has limitations and isn't always accurate.

ChatGPT is like a plain-language Google. Moreover, it refines its answers through usage and correction, so the more people use it, the better it is at doing its job, and ours, he said. The more intelligent ChatGPT becomes, the more helpful it will be, and the less we have to work for our information and knowledge.

He noted that ChatGPT's sources are unknown, meaning we don't know how reliable the information received is.

Even though it sounds plausible, ChatGPT does not cite its sources, he emphasised. They took a wide set of data from the internet and trained computers to interpret them, and act on them, and then learn from the mistakes they make in acting on that information.

He said that doesn't mean all of the information is true.

It basically scooped all of this information from the internet, but as we all know, not all of the information on the internet is itself accurate.

The more powerful ChatGPT, and other AI, become, the more urgent it is that we create models of controls and norms around use, he said.

He noted that university students are already using ChatGPT to write essays, and professors can't accuse them of copying because the text is auto-generated, rather than found in an original or secondary source.

For the unscrupulous, this will remove the necessity to do the work, allowing them to earn a degree without study or knowledge acquisition, he explained.

Extend this knowledge on a wide scale and we can understand that the trend would point toward outsourcing our learning and knowledge to computers: why would I need to know this if I have ChatGPT?

He said this points to a more critical issue: control.

ChatGPT makes stuff up all the time; it's a pretty prolific liar as well, so we have to vet all the information we get from it, so it's not really an independent research tool.

He said AI would not necessarily demonstrate intelligence in a way that would tempt us to anthropomorphize it, or make it human, but it could become so super-intelligent as to warrant a comparison like that between us and a worm, that is, completely incomparable.

Some capabilities that could create such risks include intelligence amplification, or AI becoming smarter and smarter without us needing to put research and development into its software, and strategization, if AI were able to start making strategic decisions without human supervision.

Another is social manipulation, if it were able to start leveraging the fact that it knows how to use chat software to communicate with humans and manipulate them into doing tasks for it, he explained.

Then there are technological research, if it could do its own research and development, and economic productivity, if it could generate its own funds to pay for that research. If AI could use these capabilities to make strategic interventions on, say, governments or industries, we are starting to see the picture unfold.

While the world figures out how AI and ChatGPT will play a role in everyday lives, Cavanaugh suggests having fun with it for the time being, with low-risk uses.

One example he demonstrated was giving it a list of ingredients found in the pantry and asking it to come up with a recipe (which it did quite successfully).
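As a rough illustration of that kind of low-risk experiment, a pantry-to-recipe request could be phrased like the sketch below; the ingredient list and prompt wording are invented for the example, and the same question can simply be typed into the chat window at https://openai.com instead.

```python
# A sketch of the pantry-to-recipe demonstration, again assuming the
# "openai" Python package and an API key; the ingredients are made up.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

pantry = ["canned chickpeas", "rice", "an onion", "garlic", "a tin of diced tomatoes"]

prompt = (
    "These ingredients are in my pantry: " + ", ".join(pantry)
    + ". Please suggest a simple recipe that uses them."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, as above
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```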

The Community Care for Seniors Active Living programs for those 60-plus are available five days a week, with more than 50 online events each month. Online Zoom fitness and arts classes along with socials are held Monday to Friday.

In November, Zoom webinar topics include: Sleep: Are you Getting Enough? with Tammy Orr and Janice Hall from the Prince Edward Family Health Team; and Nearby and Natural, and Nature West in Quinte West and Beyond, both with naturalist Terry Sprague. Community Care now offers a phone-only option for these Zoom webinars (no computer is needed). Several in-person events this month include the 55-Alive Safe Driving Course, and the Stronger U Fitness Course with Tracy Young-Reid.

Community Care for Seniors offers an extensive array of programming, services, resources and help for seniors living in Prince Edward County. To learn more, they can be reached by phone at 613-476-7493, by email, info@communitycareforseniors.org, or visit the website at communitycareforseniors.org
