
Can the bias in algorithms help us see our own? – EurekAlert


Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society, says Carey Morewedge, a Questrom professor of marketing.

Credit: Photo courtesy of Carey Morewedge.

Algorithms were supposed to make our lives easier and fairer: help us find the best job applicants, help judges impartially assess the risks of bail and bond decisions, and ensure that healthcare is delivered to the patients with the greatest need. By now, though, we know that algorithms can be just as biased as the human decision-makers they inform and replace.

What if that weren't a bad thing?

New research by Carey Morewedge, a Boston University Questrom School of Business professor of marketing and Everett W. Lord Distinguished Faculty Scholar, found that people recognize more of their biases in algorithms' decisions than they do in their own, even when those decisions are the same. The research, published in the Proceedings of the National Academy of Sciences, suggests ways that awareness might help human decision-makers recognize and correct for their biases.

"A social problem is that algorithms learn and, at scale, roll out biases in the human decisions on which they were trained," says Morewedge, who also chairs Questrom's marketing department. For example: In 2015, Amazon tested (and soon scrapped) an algorithm to help its hiring managers filter through job applicants. They found that the program boosted résumés it perceived to come from male applicants and downgraded those from female applicants, a clear case of gender bias.

But that same year, just 39 percent of Amazon's workforce were women. If the algorithm had been trained on Amazon's existing hiring data, it's no wonder it prioritized male applicants; Amazon already was. "If its algorithm had a gender bias, it's because Amazon's managers were biased in their hiring decisions," Morewedge says.
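To make that mechanism concrete, here is a minimal, hypothetical sketch of how a classifier trained on skewed historical hiring labels reproduces the skew. The data, features, and model below are invented for illustration; nothing about Amazon's actual system is assumed.

```python
# Toy illustration with invented data: a model trained on biased historical
# hiring decisions learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 1 = male-coded resume, 0 = female-coded
skill = rng.normal(0, 1, n)      # qualification, same distribution for both groups
X = np.column_stack([gender, skill])

# Historical labels: managers favored male-coded applicants at equal skill.
hired = (skill + 1.0 * gender + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(X, hired)

# Two candidates identical in skill, differing only in the gender proxy:
candidates = np.array([[1, 0.5], [0, 0.5]])
print(model.predict_proba(candidates)[:, 1])  # male-coded candidate scores higher
```

Both candidates are equally qualified; the model ranks the male-coded one higher simply because the training labels did, which is the pattern the article describes.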

"Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society," he says. "Many biases cannot be observed at an individual level. It's hard to prove bias, for instance, in a single hiring decision. But when we add up decisions within and across persons, as we do when building algorithms, it can reveal structural biases in our systems and organizations."

Morewedge and his collaborators, Begüm Çeliktutan and Romain Cadario, both at Erasmus University in the Netherlands, devised a series of experiments designed to tease out people's social biases (including racism, sexism, and ageism). The team then compared research participants' recognition of how those biases colored their own decisions versus decisions made by an algorithm. In the experiments, participants sometimes saw the decisions of real algorithms. But there was a catch: other times, the decisions attributed to algorithms were actually the participants' choices, in disguise.

Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions. Participants also saw as much bias in the decisions of algorithms as they did in the decisions of other people. (People generally better recognize bias in others than in themselves, a phenomenon called the bias blind spot.) Participants were also more likely to correct for bias in those decisions after the fact, a crucial step for minimizing bias in the future.

The researchers ran sets of participants, more than 6,000 in total, through nine experiments. In the first, participants rated a set of Airbnb listings, which included a few pieces of information about each listing: its average star rating (on a scale of 1 to 5) and the host's name. The researchers assigned these fictional listings to hosts with names that were "distinctively African American or white, based on previous research identifying racial bias," according to the paper. The participants rated how likely they were to rent each listing.

In the second half of the experiment, participants were told about a research finding that explained how the host's race might bias the ratings. Then, the researchers showed participants a set of ratings and asked them to assess (on a scale of 1 to 7) how likely it was that bias had influenced the ratings.

Participants saw either their own rating reflected back to them, their own rating under the guise of an algorithm's, their own rating under the guise of someone else's, or an actual algorithm rating based on their preferences.
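To make the four-condition design concrete, here is a small, hypothetical sketch of how participants might be randomized across those attribution conditions. The labels paraphrase the description above; they are not taken from the paper's materials.

```python
# Hypothetical randomization over the four attribution conditions described above.
import random

CONDITIONS = [
    "own rating, shown as your own",
    "own rating, shown as an algorithm's",
    "own rating, shown as another participant's",
    "actual algorithm rating trained on your preferences",
]

def assign_condition(participant_id: int) -> str:
    """Deterministically randomize one participant into a condition."""
    return random.Random(participant_id).choice(CONDITIONS)

for pid in range(4):
    print(pid, "->", assign_condition(pid))
```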

The researchers repeated this setup several times, testing for race, gender, age, and attractiveness bias in the profiles of Lyft drivers and Airbnb hosts. Each time, the results were consistent. Participants who thought they saw an algorithm's ratings or someone else's ratings (whether or not they actually were) were more likely to perceive bias in the results.

Morewedge attributes this to the different evidence we use to assess bias in others and bias in ourselves. Since we have insight into our own thought process, he says, we're more likely to trace back through our thinking and decide that it wasn't biased, but perhaps driven by some other factor that went into our decisions. When analyzing the decisions of other people, however, all we have to judge is the outcome.

"Let's say you're organizing a panel of speakers for an event," Morewedge says. "If all those speakers are men, you might say that the outcome wasn't the result of gender bias because you weren't even thinking about gender when you invited these speakers. But if you were attending this event and saw a panel of all-male speakers, you're more likely to conclude that there was gender bias in the selection."

Indeed, in one of their experiments, the researchers found that participants who were more prone to this bias blind spot were also more likely to see bias in decisions attributed to algorithms or others than in their own decisions. In another experiment, they discovered that people more easily saw their own decisions influenced by factors that were fairly neutral or reasonable, such as an Airbnb host's star rating, than by a prejudicial bias, such as race, perhaps because admitting to preferring a five-star rental isn't as threatening to one's sense of self or how others might view us, Morewedge suggests.

In the researchers' final experiment, they gave participants a chance to correct bias in either their ratings or the ratings of an algorithm (real or not). People were more likely to correct the algorithm's decisions, which reduced the actual bias in its ratings.

This is the crucial step for Morewedge and his colleagues, he says. For anyone motivated to reduce bias, being able to see it is the first step. Their research presents evidence that algorithms can be used as mirrors: a way to identify bias even when people can't see it in themselves.

"Right now, I think the literature on algorithmic bias is bleak," Morewedge says. "A lot of it says that we need to develop statistical methods to reduce prejudice in algorithms. But part of the problem is that prejudice comes from people. We should work to make algorithms better, but we should also work to make ourselves less biased."

"What's exciting about this work is that it shows that algorithms can codify or amplify human bias, but algorithms can also be tools to help people better see their own biases and correct them," he says. "Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help better ourselves."


Journal: Proceedings of the National Academy of Sciences

Method of Research: Observational study

Subject of Research: People

Article Title: People see more of their biases in algorithms

Article Publication Date: 12-Apr-2024

Original post:

Can the bias in algorithms help us see our own? - EurekAlert


Computer Science Professor Kenneth Kousen Gives Insights on AI and Emerging Technologies – Trinity Tripod

Lily Mellitz '26

Features Editor

In the rapidly evolving landscape of Artificial Intelligence (AI) and emerging technologies, understanding the implications and possibilities of these technologies is more crucial than ever. Kenneth Kousen, adjunct professor of Computer Science at Trinity College, recently shared his insights with the Tripod on the current state and future of AI and emerging technologies.

Kousen stands as a prominent figure in both the academic and professional world, blending entrepreneurship with a passion for teaching and a love for sharing knowledge. With an impressive academic journey that includes earning his Bachelor's degree from the Massachusetts Institute of Technology (MIT), Master's degrees from Princeton University and Rensselaer at Work, and a Ph.D. from Princeton, Kousen brings a wealth of experience and expertise to his various roles.

Outside of academia, Kousen runs his own business, Kousen IT Incorporated. He frequently presents at the No Fluff Just Stuff U.S. conference series and travels globally to speak at other conferences. He also runs a YouTube channel called Tales From the Jar Side, where he shares insights, discussions, and tutorials related to programming, technology, and software development. Beyond podiums and webinars, Kousen has six books to his name, including his most recent, Help Your Boss Help You, in which he advises employees on how to develop healthy and productive relationships with their managers.

At Trinity, Kousen teaches courses in large-scale application development, primarily in areas using Java (a coding language) and open-source development. In one of his most popular classes, Special Topics: Large Scale Development, students explore the complexities of handling big software projects: they learn how open-source projects function, study common design patterns, and get hands-on experience with practical skills.

Transitioning from his personal journey, Kousen delved into the dynamic evolution of technologies. "I've been in this field for a long time and I've seen many major changes," Kousen said. "The big one in my generation was the rise of the web." He went on to explain that while the internet already existed, it was the introduction of user-friendly web browsers that revolutionized connectivity. Suddenly, complex networks became accessible to everyone, sparking a wave of innovation and the birth of entire industries.

"Something similar to that is happening now," Kousen said. "AI is doing things we've never thought of before." One example Kousen gave was the significant time and effort savings that AI can offer, especially for common, repetitive tasks where details might be overlooked.

"When I'm coding in Java, I know how to do that, so I don't need a lot of AI assistance," Kousen said. However, in instances where he has worked with Python, a coding language he's less experienced in, he has often relied on AI to handle routine tasks, such as generating scripts to organize files. Kousen further elaborated on how AI could aid in software development practices, such as when committing code changes to a GitHub repository (a place where code for a project is stored and managed, allowing multiple developers to work together and track changes). AI tools can help craft detailed commit messages, which not only document the changes made but also explain the reasoning behind them.
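As a concrete illustration of the kind of routine script Kousen describes delegating to AI, here is a short Python sketch that sorts the files in a directory into subfolders by extension. The target directory name is hypothetical.

```python
# Minimal file-organizing script of the sort an AI assistant might generate:
# move each file in a folder into a subfolder named after its extension.
from pathlib import Path
import shutil

def organize_by_extension(folder: str) -> None:
    """Move every file in `folder` into a subfolder named for its extension."""
    root = Path(folder)
    for item in list(root.iterdir()):  # materialize first; we add subfolders below
        if item.is_file():
            ext = item.suffix.lstrip(".").lower() or "no_extension"
            dest = root / ext
            dest.mkdir(exist_ok=True)
            shutil.move(str(item), str(dest / item.name))

if __name__ == "__main__":
    organize_by_extension("downloads")  # hypothetical target directory
```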

However, Kousen also stressed the importance of understanding the limitations of AI. He noted that some people are comparing these emerging AI tools to interns with access to vast resources but lacking in comprehensive understanding.

"I wouldn't say that's quite right because interns still understand things, they just might need some explanations," Kousen said. "These things [AI] don't understand anything."

Despite their impressive capabilities and apparent creativity, AI algorithms primarily operate through pattern matching and lack true comprehension or depth. As an example, he described asking ChatGPT a simple math question: "What's nine plus seven?" Its response of zero prompted his correction to 16, to which ChatGPT agreed. Yet, when asked about 10 plus 20, it again returned 16.

"ChatGPT doesn't understand," Kousen summarized. "There's no depth there. It learns by pattern matching with everything it's been trained on."

Having spent the past year delving into the realms of AI and emerging technologies, Kousen offered several insights to current students preparing to enter the workforce.

"If you are able to work with AI tools, you will have an advantage over students who are not able to work with the tools," he said. "We can argue the ethics and whether it's reasonable to try to replace people with AI, but the truth is that [AI is] not going away. They are here and they are going to get more pervasive, so my attitude is you might as well learn how to work with [them]."

Kousen emphasized that the true concern isn't AI replacing human workers entirely, but rather the misconception among high-level managers that it could.

"The danger time is now, when the managers, especially non-technical managers, don't understand the limitations," Kousen explained. "All they see is the pretty pictures and dazzling productivity, and they don't realize the danger of AI."

Kousen illustrated this point with a scenario in which a manager might believe AI could write Hollywood scripts better than professional scriptwriters, or expect it to code whole systems more proficiently than professional coders. While AI excels at imitation, it struggles with creativity and adapting to unique situations, which could lead to serious problems down the line. Given this, Kousen recommended building a strong relationship with one's manager for personal well-being and self-preservation within a company, especially in light of the increasing prominence of AI.

"What becomes important is what you can offer to the manager, which is to be the technical expert that helps them assess what these technologies can and cannot do," Kousen said. "Essentially, what is worth their money and what isn't."

"The focus of education then is to make sure that the students get enough experience using these tools and trying them out to understand their limitations," Kousen said. "And then they can offer to be the manager's tech expert."

By gaining hands-on experience, students can confidently navigate discussions with their managers about the potential benefits and drawbacks of AI, proving themselves to be valuable assets.

When asked how to remain valuable to one's managers, Kousen replied, "The people who are successful in business are the ones who keep learning."

Kousen stressed the importance of embracing learning in order to maintain engagement and professionalism in any chosen field. He emphasized the value of viewing one's work not just as a job but as a profession: a career where one cares deeply about what they do and takes pride in making a difference. He acknowledged that careers often take unexpected turns, but reassured that by approaching learning with a positive mindset, individuals will be able to adapt and thrive.

Kousen's passion for software development shines through in his teaching, research, and personal endeavors. His expertise in AI and emerging technologies not only equips his students with valuable skills and knowledge but also instills in them the importance of critical thinking and awareness of the risks in the technological field. Serving as a guiding light for students and professionals alike, Kousen prompts all to contemplate the importance of growth, resilience, and what it means to be human in the ever-changing realm of technology.

Read more:

Computer Science Professor Kenneth Kousen Gives Insights on AI and Emerging Technologies Trinity Tripod - Trinity Tripod


UTC College Of Engineering And Computer Science Holds 7th Annual Technology Symposium – The Chattanoogan

Research projects by students from area high schools and Cleveland State Community College will compete alongside those of University of Tennessee at Chattanooga students at the seventh annual UTC Technology Symposium on April 15, and the public is invited to check them out.

Sponsored by the UTC College of Engineering and Computer Science, the daylong symposium will begin with a keynote address from UTC alumnus Greg Heinrich, TVA vice president of transmission operations and power supply.

Judges from more than 40 high-profile companies (TVA, Stantec, EPB, Amazon, American Express, and Netflix among them) will review and assess the submissions, in addition to visiting with participating students. Project topics include identifying aging-related genes from muscle gene expression data, the performance of congressional stock portfolios, the impact of socioeconomic status on education, and fraud detection in financial transactions using machine learning.

More information about the symposium and schedule of events is on the CECS website at utc.edu/tech-reg.

Excerpt from:

UTC College Of Engineering And Computer Science Holds 7th Annual Technology Symposium - The Chattanoogan


EECS Students Place Third and Fourth at ICPC Competition – University of Arkansas Newswire

Image: All of the EECS students that participated in the competition. Credit: Dr. John Gauch.

The International Collegiate Programming Contest (ICPC) regional was held on Feb. 24, 2024, in Fort Smith. The U of A's electrical engineering and computer science students placed third and fourth at the regional level of the competition.

Alex Prosser, an EECS student participant, explained that the International Collegiate Programming Contest is a programming competition for college students. There are about eight to 12 programming problems of different difficulty levels, and the team that solves the most problems wins.

Prosser said, "So, the entire world's competing in some way. The ICPC goes from a regional competition to a national one, and then to the world. We did not get that far this time. It is a programming competition that tests your ability to create algorithms, apply problem-solving and sharpen optimization."

Prosser explained that some of the problems required less skill than others. He said, "The difficulty is relative, depending on your experience. I found half of them easier to complete than the other ones."

Despite strong competition, the U of A teams placed third and fourth by solving five out of the 12 problems. "So, we solved the four easy ones, and we solved one of the medium ones, which seemed to be the general theme of the whole contest for our site," Prosser revealed.

For Prosser, competitive programming drives him to excel. "I love programming, so adding that competitive nature on top of it makes me want to learn more," he shared.

Reflecting on competitive programming, Prosser encouraged all U of A students in the Electrical Engineering and Computer Science program to try to compete: "I think all programming students should try competing. Honestly, just trying those smaller, easier problems is a place to start, just to see if this is something that interests you."
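For a taste of what those smaller, easier problems look like, here is a classic warm-up of the kind found in practice sets (illustrative only; not a problem from this regional): find the maximum sum of any contiguous subarray. Kadane's algorithm solves it in a single O(n) pass.

```python
# Kadane's algorithm: maximum sum of a contiguous subarray in one pass.
def max_subarray_sum(nums: list[int]) -> int:
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)  # extend the current run, or start a new one here
        best = max(best, cur)  # remember the best run seen so far
    return best

# The best subarray is [4, -1, 2, 1], which sums to 6.
assert max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6
```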

Congratulations to Rithyka Heng, Christopher Bayless and Ganner Whitmire for placing third, and to Gabriel Garcia, Jack Norris and Alex Prosser for placing fourth in the competition.

See the original post:

EECS Students Place Third and Fourth at ICPC Competition - University of Arkansas Newswire


U.Md. launches new AI institute to think about AI going forward – WTOP

The University of Maryland has announced its new Artificial Intelligence Interdisciplinary Institute at Maryland, which will offer a major, minor and course options for all students.

Artificial intelligence is the future of technology, but its capabilities, and a lack of trust when it comes to Big Tech companies and their goals, mean lots of people are wary. Will it take away our jobs? Will it be designed in ways that benefit people? Will it even consider the human impact?

But with a recognition that the technology is moving ahead, and has the capacity for good, the University of Maryland has announced its new Artificial Intelligence Interdisciplinary Institute at Maryland (AIM).

The program is housed in the Department of Computer Science for now, but a long list of new classes will be open to underclassmen in all academic disciplines. In fact, a new artificial intelligence major will be offered on two tracks: one as a Bachelor of Science degree and one as a Bachelor of Arts degree.

"One of the things we really want to do is make sure there's a sort of path to AI for any students," said Hal Daumé, the program's inaugural director. "Regardless of what your major is, we want to make sure that within your first year, or maybe two years of being a student here, you are sufficiently up to speed on modern AI technology. That you can use it for doing whatever career path you have, and whatever educational path you have."

Whether students go on the B.A. track or the B.S. track, a lot of the skills they learn will be the same, since it's important to have that common base of knowledge about the subject.

"But the B.A. will go much, much deeper on the humanistic side and the social science side of things, whereas the Bachelor of Science will go much, much deeper on the mathematical, algorithmic side of things," Daumé told WTOP.

The program will launch with an understanding that a significant portion of the public has concerns about the future of AI and doesn't trust technology companies to do what's best for society.

"I think part of what we need to do is both help people see the positives that come out, and structure future AI development and research, and so on, toward those positives, but also give people a realistic sense of what can go wrong," he said.

The goal is making sure the implications of their work are well understood before it goes too far in the wrong direction. Daumé said he believes having students and faculty from the arts and humanities side of things can help shape that thinking.

"I think the lack of having other voices in the room who understand people and understand people's values and understand society, and how the world works, and so on, has led us to technologies that people are sort of rightfully wary of, because they're not designed from the perspective of what's good for society or what's good for people," Daumé said.

"Universities, broadly speaking, are incredibly well positioned to do this type of work because we have humanists, we have social scientists. We have all of the people we need to talk to in order to really develop AI that's kind of good for everyone," he added.

Students who major in something that isn't related to STEM (or technology at all) will also be able to minor in artificial intelligence. He said the program has buy-in from all 12 deans across the university, since they understand how much of an impact this technology will have.

"It really touches more or less every major on campus, if for no other reason than the nature of jobs might change," Daumé said. "What it means to be a journalist today might be different from what it means to be a journalist in five years. What it means to be an artist today might be different from what it means to be an artist in five years. Even what it means to be a computer scientist or a professor today is going to be different from what it means in five or 10 years."

It's expected that students will be able to start majoring in artificial intelligence soon: if not this fall, then the following academic year. The new AIM program will have more than 100 faculty on its staff.



Excerpt from:

U.Md. launches new AI institute to think about AI going forward - WTOP


Employee tuition benefit brings ND Summer Online Courses to staff and faculty – University of Notre Dame

You may have noticed signs around campus encouraging passersby to "Take home the Dome" for the summer by enrolling in an online summer course. A natural conclusion might be that these signs are directed at Notre Dame students looking to complete coursework in between the spring and fall semesters.

That conclusion isn't necessarily wrong. But it's not quite right, either.

While Notre Dame students do account for the majority of enrollments in the University's Summer Online program, the courses are open to anyone with a high school degree.

For Notre Dame employees, that means it's an opportunity to use the tuition remission benefit, which can pay tuition for one undergraduate or graduate-level course (up to three credits) per summer/semester.

Aviva Wulfsohn, administrative coordinator at the Harper Cancer Research Institute, used the benefit to take a two-credit graduate course on the computer programming language Python last summer.

"I found the course very challenging in more ways than one," says Wulfsohn, who enrolled purely out of interest in the topic, not as part of a degree program or to fulfill a requirement for her job. "However, the instructor, Victoria Woodard, was excellent. She was very thorough and enthusiastic, and was more than willing to meet with me numerous times outside of class."

An associate teaching professor in the Department of Applied and Computational Mathematics and Statistics, Woodard has herself had the experience of returning to the classroom as a student while already a faculty member. She says some of the students she was teaching in her data science classes were so well-versed in computer programming that she decided to pursue a master's in computer science to expand the ways she could work with them.

Woodard, who holds a Ph.D. in mathematics and statistics education from North Carolina State University, identifies with anyone who might feel intimidated about enrolling in a course where they won't be a traditional student. But she also found that it came with two distinct advantages.

"One, I had a job where I could practice and apply the things I was learning," says Woodard, who earned her M.S. in computer science from Indiana University South Bend in 2023. "Two, since I was older and a professor, I was able to better relate to the instructor and build rapport."

As such, she has simple advice for anyone who feels like she did.

"Take a class and learn something new, and if you're worried about being behind the rest, focus on your advantage and use it."

This year's Summer Online lineup features courses on everything from foundations of business analytics, calculus, and computer programming (including Woodard's Python class) to the Vietnam War and American Catholics, Shakespeare and film, and a variety of language classes in Arabic, French, German, Italian, and Spanish.

Summer Online courses meet once or twice a week in live online sessions, typically held in the evenings. In between, students work through online content, either independently or in groups. Small class sizes help ensure online courses deliver the same rigor and excellence that define Notre Dame's in-person offerings.

It is worth noting that in the case of graduate courses, the tuition remission benefit may be taxable income for employees who use it. Undergraduate classes, though, are not taxed under the program, regardless of whether the employee is degree- or non-degree-seeking.

"At a higher education institution, we understand the importance of continued education for our employees and are proud to support our employees in their education endeavors," says Denise Murphy, assistant vice president of Total Rewards.

Registration for Summer Online is open now, and there are three sessions of classes to choose from:

June 3-July 26 (eight weeks)

June 3-July 14 (six-week option 1)

June 17-July 26 (six-week option 2)

The summer 2024 course list and FAQ are available at summeronline.nd.edu. Questions can be directed to summeronline@nd.edu.

For more information on the tuition remission benefit, visit hr.nd.edu/benefits-compensation/educational-benefits. Note that this page also includes information about educational benefits for the children of eligible faculty and staff, which can be applied to Summer Online courses, as well.

Go here to read the rest:

Employee tuition benefit brings ND Summer Online Courses to staff and faculty - University of Notre Dame


Engineering and Computer Science Awards Presented at Convocation of Scholars – Arkansas State University

04/04/2024

JONESBORO – The College of Engineering and Computer Science at Arkansas State University presented graduating student awards during a Convocation of Scholars awards ceremony Tuesday, according to Dr. Abhijit Bhattacharyya, dean of the college.

Jackson Chrestman of Jonesboro received the Chancellor's Scholar Award and the 4.0 Scholar Award as the college's graduating senior with the highest overall grade point average. He will graduate with a Bachelor of Science degree in computer science.

Departmental awards were presented to the top graduating seniors within each of the academic degree programs. These awards include the Citizenship Award and the Outstanding Student Award.

The Citizenship Award is presented to a student within each degree program who demonstrates great leadership, character and departmental and community involvement.

The recipients are Shota Kato of Japan, Bachelor of Science (BS) in computer science; Nathan Raath of South Africa, BS in engineering technology; Zackary Overton of Bryant, BS in engineering management systems; Madison Walker of Tuckerman, Bachelor of Science in Civil Engineering (BSCE); Nicolas Palacios of Cabot, Bachelor of Science in Electrical Engineering (BSEE); Jeannette Strano of Cherokee Village, Bachelor of Science in Mechanical Engineering (BSME); and Seth Moffett of Brinkley, BS in land surveying and geomatics.

The Outstanding Student Award is given to the individual with the highest GPA within each of the seven undergraduate degree plans.

The recipients are Jackson Chrestman of Jonesboro, BS in computer science; Cody Painter of Rowlett, Bachelor of Arts (BA) in computer science; Samuel Morris of El Paso, BS in engineering technology; Ryan Ahmad of Anthony, BS in engineering management systems; Luke Carden of Maumelle, BSCE; Elijah Mullins of Brookland, BSEE; Tuan Kiet Vuong of Vietnam, BSME; Morgan Diamond of Jonesboro, BSME; and Dylan Stewart of Monette, BS in land surveying and geomatics.

Convocation of Scholars events continue throughout April at A-State.

See the original post here:

Engineering and Computer Science Awards Presented at Convocation of Scholars - Arkansas State University


The 4 Stages of Artificial Intelligence – Visual Capitalist

The Evolution of Intelligence

The expert consensus is that human-like machine intelligence is still a distant prospect, with only a 50-50 chance that it could emerge by 2059. But what if there was a way to do it in less than half the time?

We've partnered with VERSES for the final entry in our AI Revolution Series to explore a potential roadmap to a shared or super intelligence that reduces the time required to as little as 16 years.

The secret sauce behind this acceleration is something called active inference, a highly efficient model for cognition where beliefs are continuously updated to reduce uncertainty and increase the accuracy of predictions about how the world works.
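The toy sketch below (our illustration, not VERSES' model) shows only the Bayesian heart of that loop: an agent's belief over two hypotheses about a coin is reweighted by each observation, so its uncertainty shrinks as evidence accumulates. Full active-inference models, which minimize variational free energy and also choose actions to resolve uncertainty, are far richer.

```python
# Bayesian belief updating: the core loop behind "beliefs are continuously
# updated to reduce uncertainty." Hypotheses and data are invented.
import numpy as np

hypotheses = np.array([0.5, 0.8])   # P(heads) under "fair" vs. "biased" coin
belief = np.array([0.5, 0.5])       # prior: maximally uncertain

def update(belief, heads):
    """Bayes' rule: reweight each hypothesis by how well it predicted the flip."""
    likelihood = hypotheses if heads else 1 - hypotheses
    posterior = belief * likelihood
    return posterior / posterior.sum()

for flip in [True, True, False, True, True]:  # observed flips (True = heads)
    belief = update(belief, flip)
    print(f"P(fair) = {belief[0]:.3f}, P(biased) = {belief[1]:.3f}")
```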

An AI built with this as its foundation would have beliefs about the world and would want to learn more about it; in other words, it would be curious. This is a quantum leap ahead of current state-of-the-art AI, like OpenAI's ChatGPT or Google's Gemini, which, once they've completed their training, are in essence frozen in time; they cannot learn.

At the same time, because active inference models cognitive processes, we would be able to see the thought processes and rationale behind any given AI decision or belief. This is in stark contrast to existing AI, where the journey from prompt to response is a black box, with all the ethical and legal ramifications that entails. As a result, an AI built on active inference would engender accountability and trust.

Here are the steps through which an active-inference-based intelligence could develop:

Stage four represents a hypothetical planetary super-intelligence that could emerge from the Spatial Web, the next evolution of the internet that unites people, places, and things.

With AI already upending the way we live and work, and former tech evangelists raising red flags, it may be worth asking what kind of AI future we want: one where AI decisions are a black box, or one where AI is accountable and transparent, by design.

VERSES is developing an explainable AI based on active inference that can not only think, but also introspect and explain its thought processes.

Join VERSES in building a smarter world.

See the original post:

The 4 Stages of Artificial Intelligence - Visual Capitalist


Does the Rise of AI Explain the Great Silence in the Universe? – Universe Today

Artificial Intelligence is making its presence felt in thousands of different ways. It helps scientists make sense of vast troves of data; it helps detect financial fraud; it drives our cars; it feeds us music suggestions; its chatbots drive us crazy. And it's only getting started.

Are we capable of understanding how quickly AI will continue to develop? And if the answer is no, does that constitute the Great Filter?

The Fermi Paradox is the discrepancy between the apparent high likelihood of advanced civilizations existing and the total lack of evidence that they do exist. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the Great Filter.

The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise. Think climate change, nuclear war, asteroid strikes, supernova explosions, plagues, or any number of other things from the rogues' gallery of cataclysmic events.

Or how about the rapid development of AI?

A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper's title is "Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?" The author is Michael Garrett from the Department of Physics and Astronomy at the University of Manchester.

"Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations."

Some think the Great Filter prevents technological species like ours from becoming multi-planetary. That's bad because a species is at greater risk of extinction or stagnation with only one home. According to Garrett, a species is in a race against time without a backup planet. "It is proposed that such a filter emerges before these civilizations can develop a stable, multi-planetary existence, suggesting the typical longevity (L) of a technical civilization is less than 200 years," Garrett writes.

If true, that can explain why we detect no technosignatures or other evidence of ETIs (Extraterrestrial Intelligences). What does that tell us about our own technological trajectory? If we face a 200-year constraint, and if it's because of ASI, where does that leave us? Garrett underscores the "critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multi-planetary society to mitigate against such existential threats."

Many scientists and other thinkers say we're on the cusp of enormous transformation. AI is just beginning to transform how we do things; much of the transformation is behind the scenes. AI seems poised to eliminate jobs for millions, and when paired with robotics, the transformation seems almost unlimited. That's a fairly obvious concern.

But there are deeper, more systematic concerns. Who writes the algorithms? Will AI discriminate somehow? Almost certainly. Will competing algorithms undermine powerful democratic societies? Will open societies remain open? Will ASI start making decisions for us, and who will be accountable if it does?

This is an expanding tree of branching questions with no clear terminus.

Stephen Hawking (RIP) famously warned that AI could end humanity if it begins to evolve independently. "I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans," he told Wired magazine in 2017. Once AI can outperform humans, it becomes ASI.

Hawking may be one of the most recognizable voices to issue warnings about AI, but he's far from the only one. The media is full of discussions and warnings, alongside articles about the work AI does for us. The most alarming warnings say that ASI could go rogue. Some people dismiss that as science fiction, but not Garrett.

"Concerns about Artificial Superintelligence (ASI) eventually going rogue is considered a major issue; combatting this possibility over the next few years is a growing research pursuit for leaders in the field," Garrett writes.

If AI provided no benefits, the issue would be much easier. But it provides all kinds of benefits, from improved medical imaging and diagnosis to safer transportation systems. The trick for governments is to allow benefits to flourish while limiting damage. "This is especially the case in areas such as national security and defence, where responsible and ethical development should be paramount," writes Garrett.

The problem is that we and our governments are unprepared. There's never been anything like AI, and no matter how we try to conceptualize it and understand its trajectory, we're left wanting. And if we're in this position, so would any other biological species that develops AI. The advent of AI and then ASI could be universal, making it a candidate for the Great Filter.

This is the risk ASI poses in concrete terms: it could no longer need the biological life that created it. "Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," Garrett explains.

How could ASI relieve itself of the pesky biological life that corrals it? It could engineer a deadly virus, it could inhibit agricultural food production and distribution, it could force a nuclear power plant to melt down, and it could start wars. We don't really know, because it's all uncharted territory. Hundreds of years ago, cartographers would draw monsters on the unexplored regions of the world, and that's kind of what we're doing now.

If this all sounds forlorn and unavoidable, Garrett says it's not.

His analysis so far is based on ASI and humans occupying the same space. But if we can attain multi-planetary status, the outlook changes. "For example, a multi-planetary biological species could take advantage of independent experiences on different planets, diversifying their survival strategies and possibly avoiding the single-point failure that a planetary-bound civilization faces," Garrett writes.

If we can distribute the risk across multiple planets around multiple stars, we can buffer ourselves against the worst possible outcomes of ASI. "This distributed model of existence increases the resilience of a biological civilization to AI-induced catastrophes by creating redundancy," he writes.

If one of the planets or outposts that future humans occupy fails to survive the ASI technological singularity, others may survive. And they would learn from it.

Multi-planetary status might even do more than just survive ASI. It could help us master it. Garrett imagines situations where we can experiment more thoroughly with AI while keeping it contained: imagine AI on an isolated asteroid or dwarf planet, doing our bidding without access to the resources required to escape its prison. "It allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation," Garrett writes.

But here's the conundrum. AI development is proceeding at an accelerating pace, while our attempts to become multi-planetary aren't. "The disparity between the rapid advancement of AI and the slower progress in space technology is stark," Garrett writes.

The difference is that AI is computational and informational, while space travel contains multiple physical obstacles that we don't yet know how to overcome. Our own biological nature restrains space travel, but no such obstacle restrains AI. "While AI can theoretically improve its own capabilities almost without physical constraints," Garrett writes, "space travel must contend with energy limitations, material science boundaries, and the harsh realities of the space environment."

For now, AI operates within the constraints we set. But that may not always be the case. We don't know when AI might become ASI, or even if it can. But we can't ignore the possibility. That leads to two intertwined conclusions.

If Garrett is correct, humanity must work more diligently on space travel. It can seem far-fetched, but knowledgeable people know it's true: Earth will not be inhabitable forever. Humanity will perish here by our own hand or nature's if we don't expand into space. Garrett's 200-year estimate just puts an exclamation point on it. A renewed emphasis on reaching the Moon and Mars offers some hope.

The second conclusion concerns legislating and governing AI, a difficult task in a world where psychopaths can gain control of entire nations and are bent on waging war. "While industry stakeholders, policymakers, individual experts, and their governments already warn that regulation is necessary, establishing a regulatory framework that can be globally acceptable is going to be challenging," Garrett writes. "Challenging" barely describes it. Humanity's internecine squabbling makes it all even more unmanageable. Also, no matter how quickly we develop guidelines, ASI might change even more quickly.

"Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations," Garrett writes.

Many of humanity's hopes and dreams crystallize around the Fermi Paradox and the Great Filter. Are there other civilizations? Are we in the same situation as other ETIs? Will our species leave Earth? Will we navigate the many difficulties that face us? Will we survive?

If we do, it might come down to what can seem boring and workaday: wrangling over legislation.

"The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and technological endeavours," Garrett writes.


See more here:

Does the Rise of AI Explain the Great Silence in the Universe? - Universe Today


The Potential Threat of Artificial Super Intelligence: Is it the Great Filter? – elblog.pl

Artificial Intelligence (AI) has become an integral part of our lives, revolutionizing various industries and sectors. It assists in data analysis, fraud detection, autonomous driving, and even provides us with personalized music recommendations. However, as AI continues to develop rapidly, there is growing concern regarding its potential implications.

A recent study published in Acta Astronautica by Michael Garrett from the University of Manchester explores the idea that AI, specifically Artificial Super Intelligence (ASI), could be the Great Filter. The Great Filter refers to an event or situation that prevents intelligent life from evolving to an interplanetary and interstellar level, eventually leading to its downfall. Examples of potential Great Filters include climate change, nuclear war, asteroid strikes, and plagues.

Garrett suggests that the development of ASI could act as a Great Filter for advanced civilizations. If a species fails to establish a stable, multi-planetary existence before the emergence of ASI, its longevity may be limited to less than 200 years. This constraint could explain the lack of evidence for Extraterrestrial Intelligences (ETIs) that we observe.

The implications for our own technological trajectory are profound. If ASI poses such a threat, it highlights the urgent need for regulatory frameworks to govern AI development on Earth. Additionally, it emphasizes the importance of advancing towards a multi-planetary society to mitigate existential risks.

Image: Beautiful Earth, Credit: NASA/JPL

While the benefits of AI are evident, there are also concerns surrounding its potential consequences. Questions arise regarding who writes the algorithms and whether AI can discriminate. The impact on democratic societies and the accountability for AI's decisions are also vital considerations.

The late Stephen Hawking, a renowned physicist, expressed concerns about the potential dangers of AI. He warned that if AI evolves independently and surpasses human capabilities, it could pose an existential threat to humanity. This transition from AI to ASI could result in a new form of life that outperforms humans, thereby potentially replacing them.

Garrett emphasizes the growing research pursuit into combatting the possibility of ASI going rogue. Leaders in the field are actively working to address this concern before it becomes a reality.

It is essential to strike a balance between harnessing the benefits of AI and mitigating its potential risks. From improved medical imaging to enhanced transportation systems, AI has the potential to revolutionize various aspects of society. However, responsible and ethical development is vital, particularly in areas like national security and defense.

What is the Great Filter?

The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar, ultimately leading to its demise. It includes various cataclysmic events such as climate change, nuclear war, asteroid strikes, and plagues.

How could ASI limit the longevity of civilizations?

According to the study, if a civilization fails to establish a stable, multi-planetary existence before the emergence of Artificial Super Intelligence (ASI), its longevity may be limited to less than 200 years. This potential constraint could explain the absence of evidence for Extraterrestrial Intelligences (ETIs) in our observations.

What concerns surround AI development?

Concerns regarding AI development include algorithmic bias, discrimination, and potential threats to democratic societies. The accountability of AI decision-making also poses significant challenges.

What did Stephen Hawking warn about AI?

Stephen Hawking expressed concerns that AI could eventually outperform humans and pose a significant threat to humanity. He warned that if AI evolves independently and surpasses human capabilities, it may replace humans altogether.

What does the study recommend?

The study emphasizes the critical need for regulatory frameworks to govern AI development on Earth. Additionally, it highlights the importance of advancing towards a multi-planetary society to mitigate against potential existential threats.

As we navigate the uncharted territory of AI development, it is crucial to tread carefully. By understanding the potential risks and taking proactive measures, we can ensure that AI continues to contribute positively to society while minimizing its potential negative consequences.

Artificial Intelligence (AI) continues to revolutionize various industries and sectors, making it an integral part of our lives. The widespread adoption of AI has led to advancements in data analysis, fraud detection, autonomous driving, and personalized recommendations, among other applications.

The AI industry is expected to experience substantial growth in the coming years. According to a report by Grand View Research, the global AI market size is projected to reach $733.7 billion by 2027, growing at a CAGR of 42.2% during the forecast period. The increasing demand for AI-powered solutions, the rise in data generation, and advancements in cloud computing and deep learning technologies are driving this growth.
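For readers unfamiliar with the metric, CAGR (compound annual growth rate) compounds like interest: value_end = value_start * (1 + r)^n. The quick Python check below assumes a roughly seven-year forecast window, which is our assumption rather than a figure stated in this article.

```python
# Back-of-the-envelope CAGR check. The seven-year window is an assumption.
r, n = 0.422, 7
end_value = 733.7  # projected market size in 2027, in billions of USD

implied_start = end_value / (1 + r) ** n
print(f"Implied starting market size: ${implied_start:.1f}B")  # roughly $62B
```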

However, along with its benefits, AI also raises concerns and challenges. One of the key issues is algorithmic bias, where AI-driven systems exhibit discriminatory behavior due to biases present in the training data. This has implications for various sectors, including hiring processes, criminal justice systems, and access to financial services. Addressing algorithmic bias and ensuring fairness and accountability in AI decision-making processes are critical challenges that need to be addressed moving forward.

Furthermore, AI has the potential to disrupt labor markets and result in job displacement. According to a report by McKinsey Global Institute, around 800 million jobs worldwide could be automated by 2030. While AI has the potential to create new job opportunities, the transition and reskilling of workers need to be managed to mitigate the negative impacts on the workforce.

Ethical considerations are also significant concerns in the AI industry. The development of autonomous systems, such as self-driving cars and autonomous weapons, raises questions about accountability and decision-making. It is crucial to establish clear guidelines and regulations to ensure responsible AI development and deployment.

In terms of challenges related to AI research and development, ensuring transparency and interpretability of AI models is a key issue. AI systems often work as black boxes, making it difficult to understand how they arrive at their decisions. Researchers are actively exploring methods to increase the explainability of AI algorithms, allowing stakeholders to understand and trust the decisions made by AI systems.
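One simple, model-agnostic probe of this kind is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, since a large drop means the model leaned heavily on that feature. The sketch below uses invented data and illustrates the general technique, not any specific research system.

```python
# Permutation importance: a model-agnostic peek inside a "black box" model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 2 is pure noise

model = RandomForestClassifier(random_state=0).fit(X[:800], y[:800])
X_test, y_test = X[800:], y[800:]
base_acc = model.score(X_test, y_test)

for j in range(X.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's signal
    drop = base_acc - model.score(X_perm, y_test)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

A large drop for feature 0, a small one for feature 1, and roughly zero for feature 2 would match how the labels were generated.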

When it comes to the implications of AI, the potential emergence of Artificial Super Intelligence (ASI) raises concerns about its impact on human society. The study mentioned in the article suggests that ASI could act as a Great Filter, limiting the longevity of advanced civilizations that fail to establish a stable, multi-planetary existence before its emergence. This highlights the importance of advancing towards a multi-planetary society and implementing regulatory frameworks to govern AI development to mitigate existential risks.

To stay updated with the latest developments and discussions in the AI industry, it is useful to explore reliable sources such as industry publications, research institutions, and conferences. Regularly visiting websites like Association for the Advancement of Artificial Intelligence (AAAI), National Artificial Intelligence Initiative (NAII), and International Journal of Artificial Intelligence can provide valuable insights and knowledge about the industry, market forecasts, and issues related to AI.

Here is the original post:

The Potential Threat of Artificial Super Intelligence: Is it the Great Filter? - elblog.pl
