February 20th, 2024
In his two decades at Carolina, philosophy professor Thomas Hofweber has seen a lot of change. But within the past few years, that change has accelerated across academia and beyond as artificial intelligence (AI) has taken his world by storm.
Originally from Germany, Hofweber got into philosophy by accident. He intended to study chemistry and math but took some philosophy classes on a whim when he began at the University of Munich.
"Once I started, I loved it and I got totally stuck," he says.
Hofweber loved asking big questions and was even more captivated by trying to answer them. Studying philosophy enabled him to engage with different fields of study. He continues this collaboration with all types of academics by specializing in metaphysics and the philosophy of language and mathematics. Recently, much of his research and teaching has been focused on AI.
Artificial intelligence is leading humanity to more efficient, productive industries. It's creating opportunities for health care advancement, personalized education, accelerated research, and optimal cities and infrastructure. And according to Hofweber, we are just at the tip of the iceberg with the technology's advancement.
"But it is crucial to address potential challenges such as ethical considerations, job displacement, and bias," Hofweber says. His fascination with AI stems from its moral implications and the pressing concerns raised by decision-making algorithms.
These computational processes use predefined rules and patterns to analyze data and make choices or predictions. But they can also make errors, discriminate, and reflect existing biases in the data used to train them.
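To make that concrete, consider a deliberately simple, hypothetical sketch (not drawn from Hofweber's work): a toy decision rule that does nothing more than mirror the most common outcome in skewed historical records will reproduce that skew for equally qualified applicants.

# Hypothetical sketch: a decision rule built from skewed historical records
# simply reproduces the skew it was given. The data below is made up.
from collections import Counter

training_data = (
    [("A", "qualified", "approve")] * 90
    + [("A", "qualified", "reject")] * 10
    + [("B", "qualified", "approve")] * 40
    + [("B", "qualified", "reject")] * 60
)

def majority_outcome(records, group):
    """Predict whatever outcome the records show most often for a group."""
    outcomes = Counter(label for g, _, label in records if g == group)
    return outcomes.most_common(1)[0][0]

# Two equally qualified applicants get different predictions,
# only because that is the pattern in the historical data.
print(majority_outcome(training_data, "A"))  # approve
print(majority_outcome(training_data, "B"))  # reject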
"There's so much going on. It's amazing," Hofweber says. "But it's also very hard to keep up because there's always some new development and so many people working in this area that it's hard to stay on top of what's really happening."
In Spring 2019, Hofweber began teaching a class called "AI and the Future of Humanity" to discuss big challenges in the artificial intelligence field and how it will affect the future of human beings in both positive and negative ways.
The class examines topics like the extinction threat AI poses to civilization. Will these machines take over the world or are they a pathway for a new form of survival? How will they impact and change our workforce?
Ethical dilemmas are also a big discussion in the class. As advanced forms of AI emerge, Hofweber wonders what we owe them, how they will relate to us, and how we can positively influence the technology.
The course also examines metaphysical questions related to virtual reality and AI. Is a virtual reality an illusion or just a different kind of reality? Can minds and consciousness be realized by machines or computers?
Like the AI landscape itself, the course constantly shifts as Hofweber races to keep up with the field's rapid evolution.
AI is not a new phenomenon in philosophy. Since the Industrial Revolution, philosophers have debated whether machines can have emotions or other human characteristics and whether thinking is a type of computation. And ever since early computing pioneers like Alan Turing asked whether machines can exhibit intelligent behavior like humans, AI has been a topic of discussion in the field.
Hofweber first started considering these issues in graduate school at Stanford University. He worked with philosophers, linguists, and computer scientists to explore how symbols, like words or mathematical notations, use specific rules to convey meaning and represent information.
"There was a great culture of cross-disciplinary collaboration," he says.
When he started working at Carolina, his research naturally connected to certain parts of the AI space, especially his focus on the philosophy of language. Understanding how communication works and how language relates to the world were important considerations when Hofweber began studying how language models, like chatbots, compare to humans.
ChatGPT is a recent example of how AI has become much more accessible to the average person, and it has started cultural conversations about how humans interact with the technology.
Hofweber noticed these conversations happening in different departments on Carolina's campus and recognized an interest in inter-departmental collaboration to better understand the philosophical foundations of these resources. Linguistics, computer science, and philosophy had a unique interconnectedness around the topic, and so the AI Project was born.
"It's supposed to be a research-focused project," he says. "I thought it would be really good for people to learn from each other, collaborate a little bit more, and talk to each other across these disciplines."
Informally, the AI Project is a group that meets twice a month to discuss recent research in artificial intelligence. More formally, it is a series of discussion and reading groups, research presentations, and lectures that enable cross-departmental understanding.
As the director of the project, Hofweber orchestrates a series of virtual events each semester that focus on a theme, like language models or explainability, enabling participants to approach AI from multiple angles and share insights.
"If you get people together, you can look at the same thing from different perspectives and then exchange ideas and talk about it, and that can be very helpful," Hofweber says.
Peter Hase, a Carolina PhD student studying computer science, works closely with Hofweber and appreciates the different viewpoints he's exposed to through the AI Project.
"Computer scientists often live in a little bit of a bubble," he says. "So, this jumped out to me as a big opportunity."
Under the guidance of professor Mohit Bansal, Hase researches how machine learning models make decisions and develops models that enable computers to understand and generate human language that is clear and relevant.
For Hase, regular discussions through the AI Project have changed how he thinks about his research and reframed the problems he faces.
"It's a good learning experience," he says. "Hearing other people's perspectives has provided new angles for thinking about problems."
Hofweber's own research is informed and inspired by the AI Project. In collaboration with colleagues in the computer science department, he investigates how language models represent the world, the nature of their modeling, and whether they adhere to the norms of rationality.
"Whether or not the kind of intelligence you get from a language model is very similar to or very different from the kind of intelligence that we human beings have," he clarifies.
Language models don't learn by moving through physical and cultural worlds like humans do. They learn from text pulled from the internet. So we might expect them to form contradictory beliefs, reflecting the contradictions in the text they learn from. But the inner workings of language models remain opaque, leaving us unaware of the extent to which their intelligence aligns with human norms of rationality.
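One hedged way to picture what testing a model against norms of rationality could look like: pose logically linked questions and flag answers that contradict each other. The sketch below is purely illustrative, and ask_model is a hypothetical stand-in for however a real language model would actually be queried.

# Hypothetical sketch: probing a model for contradictory "beliefs" by asking
# logically linked questions. ask_model is a made-up stand-in, not a real API.

def ask_model(question: str) -> str:
    """Placeholder for a call to a language model; returns 'yes' or 'no'."""
    canned_answers = {
        "Is Paris north of Madrid?": "yes",
        "Is Madrid south of Paris?": "no",  # contradicts the answer above
    }
    return canned_answers.get(question, "unknown")

# Each pair is logically equivalent, so a rational answerer gives matching replies.
pairs = [("Is Paris north of Madrid?", "Is Madrid south of Paris?")]

for q1, q2 in pairs:
    a1, a2 = ask_model(q1), ask_model(q2)
    print(f"{q1} -> {a1} | {q2} -> {a2} | consistent: {a1 == a2}")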
Hofweber isn't scared by how much is unknown about the inner workings of AI, but he believes it will be a more urgent issue in the future. Understanding, controlling, and implementing safety measures within AI systems is crucial.
"We might have a truly alien form of intelligence that can be studied and understood and that can help us understand what is significant about being human," he says.
While Hofweber's current studies won't directly affect the general public now, the progress he makes will eventually provide more transparency and insight into AI rationality and explainability.
The field of AI research remains vast, with countless unanswered questions, Hofweber says. Nevertheless, he is optimistic that his research, combined with the collaborative efforts of the AI Project, will contribute to a deeper understanding of these machines, benefiting society as a whole.