Taking the Time to Implement Trust in AI – Illinois Computer Science News

For years, and especially recently, the accelerating pace of development in machine learning has caught the eye of researchers who also value security and privacy.

Vulnerabilities in these advancements and their AI applications do, of course, leave users open to attacks.

In response, Illinois Computer Science professor Bo Li has centered her research career on trustworthy machine learning, with an emphasis on robustness, privacy, generalization, and the underlying interconnections among them.

"As we have become increasingly aware, machine learning has become ubiquitous in the technology world, across domains ranging from autonomous driving to large language models such as ChatGPT," Li said. "It also benefits many different applications, like facial recognition technology."

"The troubling aspect is that we have also learned that these advancements are vulnerable to attack."

Earlier this month, Bo Li logged on to her computer and noticed several emails from colleagues and students congratulating her.

However, what exactly for, she wasn't sure.

"I found out, eventually, the way we find out so much information these days: on Twitter," Li said with a laugh.

There she saw several notifications stemming from IEEE's announcement of its "AI's 10 to Watch" list for 2022, which included her name.

"I was so pleased to hear from current and past students and collaborators, who said such nice things to me. I'm also delighted to be a part of this meaningful list from IEEE," Li said.

The Illinois CS professor's early academic career is already quite decorated, with awards including the IJCAI Computers and Thought Award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the AI's 10 to Watch list from IEEE, and the MIT Technology Review TR-35 Award, among others.

Li's work has also earned research awards from tech companies such as Amazon, Meta, Google, Intel, MSR, eBay, and IBM, as well as best paper awards at several top machine learning and security conferences.

"Each recognition and award signifies a tremendous amount of support for my research, and each has given me confidence in the direction I'm working on," Li said. "I'm very glad and grateful to all the nominators and communities. Every recognition, including the IEEE AI's 10 to Watch list, provides a very interesting and important signal to me that my work is valuable in different ways to different people."

Calling it a "golden age in AI," San Murugesan, editor in chief of IEEE Intelligent Systems, stressed the importance of this year's recipients, who are rising stars in a field that offers an incredible amount of opportunity.

Li thanked her mentor here at Illinois CS, professor David Forsyth, as well as influences from her time at the University of California, Berkeley, like Dawn Song and Stuart Russell.

Through their steady guidance, she has prepared her early academic career for success. And Li is ready to return the favor for the next generation of influential AI academicians.

"The first piece of advice I would give is to read a lot of good literature and talk with senior people you admire. From there, develop your own deep and unique research taste," Li said. "Great researchers provide insights that are both unique and profound. It's rare, and it takes hard work. But the work is worth it."

In an already successful start to a career focused on this problem, Li also earned $1 million to align her Secure Learning Lab with DARPA's Guaranteeing AI Robustness Against Deception (GARD) program.

The project, she said, is purely research-motivated. It separates participants into different teams: the red team presents a vulnerability or attack, while the blue team attempts to defend against it.
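
The article does not show any of the GARD code, but the general shape of such an exercise on an image classifier can be sketched. The example below is a hypothetical illustration, not the project's actual methodology: the red-team step crafts adversarial inputs with the well-known FGSM attack, the blue-team step responds with adversarial training, and the model, data, and parameters are all stand-ins.

```python
# Hypothetical sketch of a red-team / blue-team loop (not the GARD codebase).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def red_team_attack(x, y, eps=0.1):
    """FGSM: nudge each pixel in the direction that most increases the loss."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach().clamp(0, 1)

def blue_team_step(x, y):
    """Adversarial training: fit the model on the attacked inputs as well."""
    x_adv = red_team_attack(x, y)
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch standing in for real images and labels.
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(blue_team_step(x, y))
```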

Organizers believe the vulnerability is too complex to solve within the project's duration, but the value of the work goes well beyond simply solving the vulnerability.

"For the students participating from my lab, this presents an opportunity to work on an ambitious project without the pressure of a leaderboard or competitive end result," Li said. "It's ultimately an evaluation that can help us understand the algorithm involved. It's open source and structured with consistent meetings, so we can all work together to uncover advancements and understand them best."

The ultimate goal, for both her and her students, is to define this threat model in a more precise way.

"We cannot say our machine learning system is trustworthy against any arbitrary attack; that's almost impossible. So we have to characterize our threat model in a precise way," Li said. "And we must define trustworthiness requirements. For instance, given a task and a data set used to build a model, we have different specific requirements.

"And then we can optimize an end-to-end system, which can give you guarantees for the metrics you care about. At the same time, hopefully, we can provide tighter guarantees by optimizing the algorithm, optimizing the data, and optimizing other components in this process."

This continues work Li has conducted with her students for years on the concept of Trustworthy AI.

For example, a previous breakthrough considered the consistent give-and-take between components that create Trustworthy AI.

Researchers felt that there were certain tradeoffs that had to occur between accuracy and robustness in their systems combating machine learning vulnerabilities.

But Li said that she and her group proposed a framework called Learning-Reasoning, which integrated human reasoning into the equation to help mitigate such tradeoffs.

"What we're striving for is a scenario in which the people responsible for developments in AI understand that both robustness and accuracy, or safety, are important to prioritize at the same time," Li said. "Oftentimes, processes simply prioritize performance first. Then organizers worry about safeguarding it later. I think both concepts can go together, and that will help the proper development of new AI- and ML-based technologies."

Additional work from her students has led to progress in related areas.

For example, Ph.D. student Linyi Li has built a unified toolbox to provide certified robustness for Deep Neural Networks.
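
The article does not describe how the toolbox works internally. For readers unfamiliar with the idea, one widely used certification technique is randomized smoothing (Cohen et al., 2019), sketched below with a toy classifier; this is only an illustration of the general approach, not the toolbox itself, and the classifier, noise level, and confidence bound are arbitrary choices.

```python
# Illustrative randomized-smoothing certification (not the toolbox described above).
import numpy as np
from scipy.stats import norm

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
    """Return (predicted_class, certified_L2_radius) for input x."""
    rng = np.random.default_rng(0)
    # Classify n noisy copies of x and count the votes per class.
    votes = {}
    for _ in range(n):
        label = base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    top_class, top_count = max(votes.items(), key=lambda kv: kv[1])
    # Hoeffding lower confidence bound on the probability of the top class.
    p_lower = top_count / n - np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return None, 0.0                   # abstain: nothing can be certified
    radius = sigma * norm.ppf(p_lower)     # certified L2 radius (Cohen et al. 2019)
    return top_class, radius

# Toy base classifier: sign of the sum of the input's coordinates.
clf = lambda x: int(x.sum() > 0)
print(certify(clf, np.array([0.3, 0.4])))
```

The returned radius is an L2 distance within which the smoothed prediction provably cannot change, which is what "certified robustness" refers to.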

Also, Ph.D. student Chejian Xu and master's student Jiawei Zhang have generated different safety-critical scenarios for autonomous vehicles. They will host a CVPR workshop on the topic in June.

Finally, Zhang and Ph.D. student Mintong Kang built the scalable learning-reasoning framework together.

These sorts of developments have also led to Li's involvement in the newly formed NSF AI Institute for Agent-based Cyber Threat Intelligence and OperatioN (ACTION).

Led by the University of California, Santa Barbara, the NSF ACTION Institute also aims to revolutionize protection for mission-critical systems against sophisticated cyber threats.

"The most impactful potential outcomes from the ACTION Institute include a range of fundamental algorithms for AI and cybersecurity, and large-scale AI-enabled systems for cybersecurity tasks with formal security guarantees, which are realized by not only purely data-driven models, but also logic reasoning based on domain knowledge, weak human supervision, and instructions. Such security protection and guarantees will hold against unforeseen attacks, as well," Li said.

Its clear that, despite the speed with which AI and machine learning developments occur, Li and her students are providing a presence dedicated to stabilizing and securing this process moving forward.

Read more here:

Taking the Time to Implement Trust in AI - Illinois Computer Science News

Read More..

Cutting-Edge Algorithm Identifies Forest Fire Risk in Canada – Northeastern University

Where there's smoke, there's fire, and with tools ranging from fire tower lookouts to satellites, a forest fire can be readily detected.

But what if we could easily predict where a forest fire will occur before the smoke appears? That's the goal of Michal Aibin, a visiting associate teaching professor in the Khoury College of Computer Sciences at Northeastern University's campus in Vancouver, British Columbia.

"Currently, when we think about forest fires, and all the research that is happening around the world, the majority of the work, like 90 percent of the work, focuses on the detection of the fire," Aibin says. "But obviously when we're detecting the fire, it means the fire is already there. We want to predict the fires in the area."

Aibin and his team in Vancouver have developed a computer vision algorithm that assesses and classifies forests according to their fire risk. This enables foresters to see the most at-risk areas and preemptively direct appropriate fire-prevention efforts.

"Our goal is to provide as much information as possible, so they can implement prevention strategies: maintain the forest, do cleanup of debris, maybe a controlled cutting. And in the event of a fire, this fire won't spread that far or won't be as devastating as if that fuel was there," Aibin says.

"It's a timely project," Aibin says. Roughly 1,800 forest fires burned 135,000 hectares (a hectare is roughly 2.5 acres) in Canada last year, costing $650 million, he says. Worldwide, fires cost about $50 billion a year, according to the World Economic Forum.

That acreage and cost are likely to increase as the climate changes, cycles of drought become more extreme, and carbon is released by wildfires. The WEF said wildfires released an estimated 645 million tons of carbon dioxide into the atmosphere in 2021.

"It's become more and more challenging every year," Aibin says. "Because of (fires) we release more carbon in the air, and then the climate gets warmer, so it's a kind of loop that we need to break."

It's also not just the forests or grasslands that are affected. Aibin says smoke from forest fires can travel long distances and degrade air quality, exacerbating human respiratory problems such as asthma.

But Aibin's work occurs before the smoke starts to rise.

He uses a drone to scan woodland areas, collecting data such as the amount of flammable debris in the area, the species of trees and their respective flammability, the proximity of water sources, areas with diseases that weaken the trees and more.

The information is fed into a mapping program, which then displays the forest and assigns color-coded classifications based on flammability risk, from low to extreme.

That information is then made available to foresters who can see the most fire-prone areas, learn why that area is particularly risky, and direct mitigation and prevention strategies.
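
The article does not publish Aibin's model, but the pipeline it describes (drone-derived plot attributes in, a color-coded risk class out) can be illustrated with a deliberately simplified sketch; the feature names, weights, and thresholds below are hypothetical, not the team's actual values.

```python
# Hypothetical sketch of mapping drone-derived plot features to a fire-risk class.
from dataclasses import dataclass

@dataclass
class PlotFeatures:
    debris_load: float            # flammable debris on the ground, tonnes per hectare
    species_flammability: float   # 0 (fire-resistant species) to 1 (highly flammable)
    distance_to_water_km: float
    diseased_fraction: float      # share of trees weakened by disease

def risk_class(p: PlotFeatures) -> str:
    """Combine the features into one score and map it to a risk color."""
    score = (0.4 * min(p.debris_load / 30.0, 1.0)
             + 0.3 * p.species_flammability
             + 0.2 * min(p.distance_to_water_km / 5.0, 1.0)
             + 0.1 * p.diseased_fraction)
    if score < 0.25:
        return "low (green)"
    if score < 0.5:
        return "moderate (yellow)"
    if score < 0.75:
        return "high (orange)"
    return "extreme (red)"

print(risk_class(PlotFeatures(25.0, 0.8, 3.0, 0.4)))  # prints "high (orange)"
```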

So far, Aibin has focused on forests near the Thompson River in British Columbia, working with Natural Resources Canada and Transport Canada (government departments responsible for natural resources including forests, and transportation in Canada, respectively) to collect and analyze data.

But he sees a worldwide application for the program, and is working on expanding the amount and type of risk factors that it analyzes to make it more geographically specific.

"Expand, expand: get more data, more tree species, more risk factors involved, and get a system that can be applied in British Columbia, all of Canada, the States, and finally globally," Aibin says.

He also hopes his computer science students can be inspired to apply their skills beyond traditional programming.

"We can make a difference, we can make an impact," Aibin continues. "Not only learn these computing skills to be a programmer or tester or designer, we teach those skills, but also with that role comes the responsibility to make an impact in the world."

Cyrus Moulton is a Northeastern Global News reporter. Email him at c.moulton@northeastern.edu. Follow him on Twitter @MoultonCyrus.

The rest is here:

Cutting-Edge Algorithm Identifies Forest Fire Risk in Canada - Northeastern University

Read More..

The Story of Quadruple (Maybe Quintuple) Major Michael Opheim … – Bethel University News

His advice: Definitely consider multiple majors.

For those wondering about declaring more than one major, Opheim says: "Definitely consider it!" It comes with challenges, but Opheim offers his advice:

You need to be on top of your schedule. Professors are a huge help, and he meets with all four of his advisors every advising day. But overall planning requires some independence. "You have to make sure, on your own, that you are scheduling things in ways that they work with all your other majors," he says.

Time management is also crucial. "If you're doing multiple majors that are in very different fields, your brain is going to be pulled in two different directions," he says. "You need to know when you work best. And you need to know how you can, I guess, work to make things fit in your personal schedule."

Opheims course load has come with some sacrifices. He was the president of the Neuroscience Club but stepped down. Though it can be hard to get involved with organizations on campus, he still has free time and time for extracurriculars he enjoys.

Though Opheim admits his workload can be a little hectic, he has no regrets. He's talked with many people who changed careers, or wanted to, at least once. He loves each subject, and each could also serve him well in years to come. "The thing that keeps me going is just I really am just interested in all these areas," he says. "I like to have a vast wealth of knowledge that I can, I guess, take from in any different case. Because the future is so uncertain. You never know what's going to be useful in 15-20 years." While Opheim knows he could get a job with any one, or two, of his degrees, he has no regrets. "I'm enjoying it. It's definitely worth it," he says.

Visit link:

The Story of Quadruple (Maybe Quintuple) Major Michael Opheim ... - Bethel University News

Read More..

Texas Southern University joins HBCU Engineering Deans forum as … – BlackEngineer.com

Texas Southern University has been welcomed as the latest member of the HBCU Council of Engineering Deans. During a recent council meeting, TSU presented its engineering programs, which were accredited by ABET in September 2022, including civil engineering and electrical engineering. (Photo: Courtesy of Texas Southern University)

The HBCU Council of Engineering Deans, chaired by Joyce T. Shirazi, dean of the School of Engineering, Architecture, and Aviation at Hampton University, unanimously voted to accept TSU as the 16th member of the council.

TSU currently has over 300 students enrolled in its engineering programs, which fall under the College of Science, Engineering, and Technology. Wei Wayne Li, a professor and the interim chair of the computer science department at Texas Southern University, serves as the college's acting dean.

The Council of Engineering Deans of the Historically Black Colleges and Universities (HBCUs)

ALABAMA A&M UNIVERSITY: Dr. Zhengtao (Z.T.) Deng, Dean, College of Engineering, Technology and Physical Sciences; Professor, Mechanical Engineering Department

FLORIDA A&M - FLORIDA STATE UNIVERSITY: Suvranu De, Sc.D., Dean, College of Engineering

HAMPTON UNIVERSITY: Dr. Joyce T. Shirazi, Dean, School of Engineering, Architecture, and Aviation

HOWARD UNIVERSITY: John M. M. Anderson, Dean, College of Engineering and Architecture

JACKSON STATE UNIVERSITY: Dr. Wilbur Waters, Dean, College of Science, Engineering, and Technology

MORGAN STATE UNIVERSITY: Oscar Barton, Jr., Ph.D., Dean, Clarence M. Mitchell, Jr. School of Engineering

NORFOLK STATE UNIVERSITY: Dr. Michael Keeve, Professor of Mathematics; Dean, College of Science, Engineering, and Technology

NORTH CAROLINA A&T UNIVERSITY: Stephanie Luster-Teasley, Ph.D., Interim Dean, College of Engineering; Professor, Department of Civil, Architectural, and Environmental Engineering

PRAIRIE VIEW A&M UNIVERSITY: Dr. Pamela Holland Obiomon, Dean and Professor, Roy G. Perry College of Engineering

TENNESSEE STATE UNIVERSITY: Lin Li, Ph.D., Professor of Civil Engineering and Interim Dean

TEXAS SOUTHERN UNIVERSITY: Wei Wayne Li, Professor and Acting Dean, College of Science, Engineering, and Technology; Interim Chair of the Computer Science Department

TUSKEGEE UNIVERSITY: Heshmat Aglan, Ph.D., Dean and Professor, College of Engineering

UNIVERSITY of the DISTRICT of COLUMBIA: Dr. Devdas Shetty, Dean, School of Engineering & Applied Sciences

UNIVERSITY OF MARYLAND EASTERN SHORE: Dr. Derrek B. Dunn, Dean, School of Business and Technology; Chairperson of the Department of Technology

VIRGINIA STATE UNIVERSITY: Dr. Dawit Haile, Dean, College of Engineering and Technology

Original post:

Texas Southern University joins HBCU Engineering Deans forum as ... - BlackEngineer.com

Read More..

More Byte – Ohio Wesleyan University

Ohio Wesleyan Joins Statewide Higher Education Initiative to Increase Numbers of Computing Graduates

By Cole Hatcher

DELAWARE, Ohio - Ohio Wesleyan University is part of a new National Science Foundation-funded consortium of 15 Ohio colleges and universities created to support students seeking to study computer science, especially women and minorities historically underrepresented in the field.

The $2 million National Science Foundation (NSF) grant is being used to create the five-year Ohio Pathways to Undergraduate Computing Success project, which includes six public community colleges and nine of Ohio's independent colleges and universities, including Ohio Wesleyan.

"Ohio Wesleyan is excited to help educate more computer scientists," said Karlyn Crowley, Ph.D., provost. "Ohio employers are seeking out this expertise, and we are committed to helping them fill this vital need. Over the past few years, OWU computer science graduates have begun careers immediately after graduation at local and national companies, including Facebook, Instagram, and Nike. We look forward to the possibilities ahead."

This new consortium, led by Baldwin Wallace University, will collaborate to recruit and graduate more computer science students through work that includes establishing a shared set of requirements and coursework that makes it easy for students to move from earning two-year associate degrees to four-year bachelor's degrees without losing transfer credits or adding classroom time.

Each school also is recruiting industrial partners to take part in an advisory board that will give input on the skills and abilities needed most and provide access to job shadowing and internship experiences. Ohio Wesleyans first industrial partner is necoTECH, which develops eco-friendly building materials to create environmentally sustainable infrastructure. necoTECH is headquartered in the Delaware Entrepreneurial Center at OWU.

Ohio Wesleyan and the other consortium members also will work to develop and offer support for women and minority students pursuing degrees in computing fields and to provide faculty development, advisor workshops, and training and recruitment materials for admission counselors.

In addition to Ohio Wesleyan and Baldwin Wallace, the four-year Ohio institutions taking part in the initial consortium are Ashland University, Capital University, Defiance College, Hiram College, Lourdes University, Tiffin University, and Ursuline College.

The two-year institutions involved are Columbus State Community College, Cuyahoga Community College, Lakeland Community College, Lorain County Community College, Sinclair Community College, and Terra State Community College.

Learn more about transferring to Ohio Wesleyan at owu.edu/transfer and more about studying computer science at OWU at owu.edu/ComputerScience.

Founded in 1842, Ohio Wesleyan University is one of the nation's premier liberal arts universities. Located in Delaware, Ohio, the private university offers more than 70 undergraduate majors and competes in 24 NCAA Division III varsity sports. Through its signature experience, the OWU Connection, Ohio Wesleyan teaches students to understand issues from multiple academic perspectives, volunteer in service to others, build a diverse and global perspective, and translate classroom knowledge into real-world experience through internships, research, and other hands-on learning. Ohio Wesleyan is featured in the book "Colleges That Change Lives" and included on the U.S. News & World Report and Princeton Review Best Colleges lists. Connect with OWU expert interview sources at owu.edu/experts or learn more at owu.edu.

Read more:

More Byte - Ohio Wesleyan University

Read More..

Local students honored at Indian Valley Vocational Center Awards … – Shaw Local News Network

Indian Valley Vocational Center (IVVC) hosted its annual Awards Night on May 9 and May 11 at the Montcler Hotel in Sandwich. Local and memorial scholarships were presented. Recognition of graduating seniors with certificates of completion was the highlight of the evening. Several hundred family members and friends were in attendance to congratulate all of these students on their accomplishments.

Scholarships were awarded to the following students: Sandwich Rotary Club - Myranda Banister, Somonauk; George Murphy Memorial - Brianna Gibson, Somonauk; Doran Greif Memorial - Domenick DiVito, Yorkville; Rose Greif Memorial - Melody Goldstein, Sandwich; Roy Wright Memorial - Autumn Massier, Serena, and Andrew Harrelson, Plano; Festival on Wheels/Tom Ciolek Memorial - Brianna Gibson, Somonauk, and Aidan Byrne, Hinckley-Big Rock; John Kedzierski Memorial - Domenick DiVito, Yorkville; Richard Wasson Memorial - Melanie Metzger, Somonauk; Pruski Memorial - Brooke Potrawski, Yorkville; Sandwich Park Foundation - Hailey Erickson, Sandwich; Dr. Lucile Gustafson - Alyssa Broce, Sandwich; Student Service Award - Myranda Banister, Somonauk, and Brianna Gibson, Somonauk; Karam Family Memorial - Zac Gatenby, Sandwich; Yorkville Junior Women's Club - Brooke Patrowski, Yorkville, Domenick DiVito, Yorkville, and Johnathan Doucette, Yorkville; Nanzer Family Memorial - Noreily Hernandez, Earlville; and the Ian Oldenburg Memorial - Aidan Byrne, Hinckley-Big Rock.

Seniors from IVVC's 17 career and technical programs were presented with certificates of completion. The following students were recognized as Most Outstanding in their programs: Auto Body Repair - Aidan Byrne, Hinckley-Big Rock, and Brianna Gibson, Somonauk; Auto Technology - Nicholas Carlson, Sandwich, Zachariah Gatenby, Sandwich, and Ryan Larson, home-schooled; Certified Nurse Assistant - Hailey Erickson, Sandwich, and Noreily Hernandez, Earlville; Computer Programming - Hannah Fish, Sandwich; Computer Technology - Oliver Moore, Somonauk, and Ewan Krisch, Sandwich; Construction Trades - Adam Edwards, Sandwich, and Justin Lee, Somonauk; Culinary Arts - Melody Goldstein, Sandwich, and Shayla Casas, Plano; Emergency Medical Services - Allison Olson, Sandwich; Fire Science - Bryan Gorsky, Hinckley-Big Rock; Graphic Design - Alia Villa, Plano, and Joseph Shafer, Sandwich; Health Occupations - Amanda Skinner, Sandwich, and Savannah Marks, Plano; Law Enforcement - Justiss Silas, Yorkville; Sports Medicine - MacKenzie Svatek, Somonauk, and Lily Geltz, Sandwich; Teaching Methods - Alyssa Broce, Sandwich, and Amber Elder, Sandwich; Welding and Fabrication - Justin Wiesbrook, Yorkville, Jackson Houtz, Yorkville, and Joseph Kummer, Yorkville.

The following students were recognized as Most Improved in their program areas: Auto Technology - Peyton Mongiovi, Yorkville; Computer Programming - Christopher Tineo, Yorkville; Emergency Medical Services - Athena Westphal, Yorkville; Fire Science - Brooke Potrawski, Yorkville; Law Enforcement - Luke Lanehart, Parkview Christian; and Welding & Fabrication - Hunter Ruman, Sandwich.

Indian Valley Vocational Center is a career center for high school juniors and seniors located in Sandwich. Ten school districts (Earlville, Hinckley-Big Rock, Indian Creek, Leland, Newark, Plano, Sandwich, Serena, Somonauk, Yorkville) participate in the Indian Valley cooperative, which currently offers 17 career and technical programs. Learn more about Indian Valley Vocational Center and its career programs by visiting ivvc.net.

View original post here:

Local students honored at Indian Valley Vocational Center Awards ... - Shaw Local News Network

Read More..

Post-doctoral Research Fellow, School of Computer Science job … – Times Higher Education

Applications are invited for a temporary post of Post-doctoral Research Fellow Level 2 within the UCD School of Computer Science.

Mild traumatic brain injury (mTBI), or concussion, is the most common type of traumatic brain injury. With mTBI comes symptoms that include headaches, fatigue, depression, anxiety, and irritability, as well as impaired cognitive function. Symptom resolution is thought to occur within 3 months post-injury, except for a small percentage of individuals who are said to experience persistent post-concussion syndrome. The number of individuals who experience persistent symptoms appears to be low despite clear evidence of longer-term pathophysiological changes resulting from mTBI.

UCD, together with Contego Sports Ltd, has established a new research programme, NProSend, focused on the design and development of a computational head model (inclusive of N-Pro headgear) that can accurately predict the behaviour of the brain (specifically brain resonance patterns and the transient development of stress and strain) when impacted according to typical Rugby-specific impact scenarios. We will deal with the identification and selection of off-the-shelf sensor technologies for the trial, contribute to the multi-modal specification and data analysis plans, and identify ML models for the multi-modal datasets.

This is an advanced research focused role, building on your prior experience as a post-doctoral fellow, where you will conduct a specified programme of research supported by research training under the supervision and direction of a Principal Investigator.

The primary purpose of the role is to develop new or advanced research skills and competences; to gain experience in the processes of publication in peer-reviewed academic outlets and scholarly dissemination, the development of funding proposals, and the supervision and mentorship of graduate students; and to develop your skills in research-led teaching.

In addition to the Principal Duties and Responsibilities listed below, the successful candidate will also carry out the following duties specific to this project:

Post-doctoral Research Fellow Level 2 salary range: €49,790 - €51,193 per annum.

Appointment on the above range will be dependent upon qualifications and experience.

Closing date: 17:00hrs (local Irish time) on 6th of June 2023.

Applications must be submitted by the closing date and time specified. Any applications which are still in progress at the closing time of 17:00hrs (Local Irish Time) on the specified closing date will be cancelled automatically by the system.

UCD are unable to accept late applications.

UCD do not require assistance from Recruitment Agencies. Any CVs submitted by Recruitment Agencies will be returned.

Prior to application, further information (including application procedure) should be obtained from the Work at UCD website: https://www.ucd.ie/workatucd/jobs/.

View post:

Post-doctoral Research Fellow, School of Computer Science job ... - Times Higher Education

Read More..

AI doom, AI boom and the possible destruction of humanity – VentureBeat

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

This statement, released this week by the Center for AI Safety (CAIS), reflects an overarching, and some might say overreaching, worry about doomsday scenarios due to a runaway superintelligence. The CAIS statement mirrors the dominant concerns expressed in AI industry conversations over the last two months: namely, that existential threats may manifest over the next decade or two unless AI technology is strictly regulated on a global scale.

The statement has been signed by a who's who of academic experts and technology luminaries, ranging from Geoffrey Hinton (formerly at Google and a longtime proponent of deep learning) to Stuart Russell (a professor of computer science at Berkeley) and Lex Fridman (a research scientist and podcast host from MIT). In addition to extinction, the Center for AI Safety warns of other significant concerns, ranging from the enfeeblement of human thinking to threats from AI-generated misinformation undermining societal decision-making.

In a New York Times article, CAIS executive director Dan Hendrycks said: "There's a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things."

"Doomers" is the keyword in this statement. Clearly, there is a lot of doom talk going on now. For example, Hinton recently departed from Google so that he could embark on an AI-threatens-us-all doom tour.

Throughout the AI community, the term P(doom) has become fashionable to describe the probability of such doom. P(doom) is an attempt to quantify the risk of a doomsday scenario in which AI, especially superintelligent AI, causes severe harm to humanity or even leads to human extinction.

On a recent Hard Fork podcast, Kevin Roose of The New York Times set his P(doom) at 5%. Ajeya Cotra, an AI safety expert with Open Philanthropy and a guest on the show, set her P(doom) at 20 to 30%. However, it needs to be said that P(doom) is purely speculative and subjective, a reflection of individual beliefs and attitudes toward AI risk rather than a definitive measure of that risk.

Not everyone buys into the AI doom narrative. In fact, some AI experts argue the opposite. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of "The Master Algorithm"). They argue, instead, that AI is part of the solution. As Ng puts it, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and hopefully mitigated.

Melanie Mitchell, a prominent AI researcher, is also skeptical of doomsday thinking. Mitchell is the Davis Professor of Complexity at the Santa Fe Institute and author of "Artificial Intelligence: A Guide for Thinking Humans." Among her arguments is that intelligence cannot be separated from socialization.

In Towards Data Science, Jeremie Harris, co-founder of AI safety company Gladstone AI, interprets Mitchell as arguing that a genuinely intelligent AI system is likely to become socialized by picking up common sense and ethics as a byproduct of its development, and would therefore likely be safe.

While the concept of P(doom) serves to highlight the potential risks of AI, it can inadvertently overshadow a crucial aspect of the debate: The positive impact AI could have on mitigating existential threats.

Hence, to balance the conversation, we should also consider another possibility that I call P(solution), or P(sol): the probability that AI can play a role in addressing these threats. To give you a sense of my perspective, I estimate my P(doom) to be around 5%, but my P(sol) stands closer to 80%. This reflects my belief that, while we shouldn't discount the risks, the potential benefits of AI could be substantial enough to outweigh them.

This is not to say that there are no risks or that we should not pursue best practices and regulations to avoid the worst imaginable possibilities. It is to say, however, that we should not focus solely on potential bad outcomes or on claims, as does a post in the Effective Altruism Forum, that doom is the default probability.

The primary worry, according to many doomers, is the problem of alignment, where the objectives of a superintelligent AI are not aligned with human values or societal objectives. Although the subject seems new with the emergence of ChatGPT, this concern emerged nearly 65 years ago. As reported by The Economist, Norbert Wiener, an AI pioneer and the father of cybernetics, published an essay in 1960 describing his worries about a world in which machines learn and develop unforeseen strategies at rates that baffle their programmers.

The alignment problem was first dramatized in the 1968 film "2001: A Space Odyssey." Marvin Minsky, another AI pioneer, served as a technical consultant for the film. In the movie, the HAL 9000 computer that provides the onboard AI for the spaceship Discovery One begins to behave in ways that are at odds with the interests of the crew members. The AI alignment problem surfaces when HAL's objectives diverge from those of the human crew.

When HAL learns of the crew's plans to disconnect it due to concerns about its behavior, HAL perceives this as a threat to the mission's success and responds by trying to eliminate the crew members. The message is that if an AI's objectives are not perfectly aligned with human values and goals, the AI might take actions that are harmful or even deadly to humans, even if it is not explicitly programmed to do so.

Fast forward 55 years, and it is this same alignment concern that animates much of the current doomsday conversation. The worry is that an AI system may take harmful actions even without anybody intending it to. Many leading AI organizations are diligently working on this problem. Google DeepMind recently published a paper on how best to assess new, general-purpose AI systems for dangerous capabilities and alignment, and to develop an early warning system as a critical aspect of a responsible AI strategy.

Given these two sides of the debate, P(doom) or P(sol), there is no consensus on the future of AI. The question remains: Are we heading toward a doom scenario or a promising future enhanced by AI? This is a classic paradox. On one side is the hope that AI is the best of us and will solve complex problems and save humanity. On the other side, AI will bring out the worst of us by obfuscating the truth, destroying trust and, ultimately, humanity.

Like all paradoxes, the answer is not clear. What is certain is the need for ongoing vigilance and responsible development in AI. Thus, even if you do not buy into the doomsday scenario, it still makes sense to pursue common-sense regulations to hopefully prevent an unlikely but dangerous situation. The stakes, as the Center for AI Safety has reminded us, are nothing less than the future of humanity itself.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

Read more:

AI doom, AI boom and the possible destruction of humanity - VentureBeat

Read More..

Why K-12 Schools Must Invest in Teaching Quantum Computing … – EdTech Magazine: Focus on K-12

Some States Add Quantum Computing to K-12 Curriculum

While the field is still developing, the White House is working with the National Science Foundation to get quantum learning materials to K-12 schools. Educators in Ohio and Texas have made this subject a high priority for K-12.

Last year, Ohio updated its K-12 computer science curriculum to include quantum computing. Around the same time, Texas educators advocated in front of the state board of education for foundational quantum computing subjects such as physics to become mandatory, particularly to prepare students for careers in cybersecurity and IT.

While quantum information science may not be part of every state's curriculum, there are organizations working to fill the gap. Last summer, the University of Texas at Arlington offered its Quantum For All workshops for teachers and camps for students at several locations in Texas, Ohio and New York. The organization is planning to offer the camps and workshops again this year.

In 2020, The Coding School partnered with IBM, MIT and Oxford University to kick off Qubit by Qubit, the first global quantum computing course for students in high school and beyond. The yearlong program drew 7,500 students from around the world and has since offered additional camps, workshops and courses, and even a camp for middle school students.

"Quantum computing is a really fantastic way to introduce students to interdisciplinary science, technology, engineering and math subjects," Warshay says.

The lessons start at a conceptual level aligned with skills students need to know for coding and relevant physics concepts, she adds. During a typical introductory class, instructors explain to students how quantum computers are different from conventional computers and other machines.
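
As an illustration of the kind of first example such an introductory lesson might use (not taken from the Qubit by Qubit curriculum), the snippet below simulates a single qubit: unlike a classical bit, which is either 0 or 1, a qubit put through a Hadamard gate ends up in a superposition that measures as 0 or 1 with equal probability.

```python
# Illustrative single-qubit simulation (hypothetical teaching example).
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                              # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2            # Born rule: measurement probabilities
print(probabilities)                          # [0.5 0.5], a 50/50 measurement outcome
```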

For students interested in quantum computing work, the program's instructors and leaders discuss opportunities in higher education and the workplace, says Gabbie Meis, program manager for Qubit by Qubit. "Our goal is creating and supporting and empowering more transdisciplinary educated young learners, regardless of whether they choose to go into quantum as well," she says.

Continued here:

Why K-12 Schools Must Invest in Teaching Quantum Computing ... - EdTech Magazine: Focus on K-12

Read More..

Would Large Language Models Be Better If They Weren't So Large? – The New York Times

When it comes to artificial intelligence chatbots, bigger is typically better.

Large language models like ChatGPT and Bard, which generate conversational, original text, improve as they are fed more data. Every day, bloggers take to the internet to explain how the latest advances (an app that summarizes articles, A.I.-generated podcasts, a fine-tuned model that can answer any question related to professional basketball) will change everything.

But making bigger and more capable A.I. requires processing power that few companies possess, and there is growing concern that a small group, including Google, Meta, OpenAI and Microsoft, will exercise near-total control over the technology.

Also, bigger language models are harder to understand. They are often described as black boxes, even by the people who design them, and leading figures in the field have expressed unease that A.I.'s goals may ultimately not align with our own. If bigger is better, it is also more opaque and more exclusive.

In January, a group of young academics working in natural language processing, the branch of A.I. focused on linguistic understanding, issued a challenge to try to turn this paradigm on its head. The group called for teams to create functional language models using data sets that are less than one-ten-thousandth the size of those used by the most advanced large language models. A successful mini-model would be nearly as capable as the high-end models but much smaller, more accessible and more compatible with humans. The project is called the BabyLM Challenge.

"We're challenging people to think small and focus more on building efficient systems that way more people can use," said Aaron Mueller, a computer scientist at Johns Hopkins University and an organizer of BabyLM.

Alex Warstadt, a computer scientist at ETH Zurich and another organizer of the project, added, "The challenge puts questions about human language learning, rather than 'How big can we make our models?' at the center of the conversation."

Large language models are neural networks designed to predict the next word in a given sentence or phrase. They are trained for this task using a corpus of words collected from transcripts, websites, novels and newspapers. A typical model makes guesses based on example phrases and then adjusts itself depending on how close it gets to the right answer.

By repeating this process over and over, a model forms maps of how words relate to one another. In general, the more words a model is trained on, the better it will become; every phrase provides the model with context, and more context translates to a more detailed impression of what each word means. OpenAI's GPT-3, released in 2020, was trained on 200 billion words; DeepMind's Chinchilla, released in 2022, was trained on a trillion.
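
A minimal sketch of that training loop, using a deliberately tiny toy corpus and model rather than anything at GPT-3 or Chinchilla scale, looks roughly like this (all names and sizes here are illustrative):

```python
# Toy next-word predictor: guess the following word, compare with the right
# answer, and adjust the weights, repeated over and over.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

# Training pairs: each word is used to predict the word that follows it.
xs = torch.tensor([stoi[w] for w in corpus[:-1]])
ys = torch.tensor([stoi[w] for w in corpus[1:]])

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(xs)           # scores for every word in the vocabulary
    loss = loss_fn(logits, ys)   # how far the guesses are from the right answers
    opt.zero_grad()
    loss.backward()              # adjust the model toward better guesses
    opt.step()

# After training, "sat" should be the model's top guess to follow "cat".
probs = torch.softmax(model(torch.tensor([stoi["cat"]])), dim=-1)
print(vocab[int(probs.argmax())])
```

Real models differ mainly in scale: far larger networks, vastly more text, and attention over long contexts rather than a single preceding word.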

To Ethan Wilcox, a linguist at ETH Zurich, the fact that something nonhuman can generate language presents an exciting opportunity: Could A.I. language models be used to study how humans learn language?

For instance, nativism, an influential theory tracing back to Noam Chomsky's early work, claims that humans learn language quickly and efficiently because they have an innate understanding of how language works. But language models learn language quickly, too, and seemingly without an innate understanding of how language works, so maybe nativism doesn't hold water.

The challenge is that language models learn very differently from humans. Humans have bodies, social lives and rich sensations. We can smell mulch, feel the vanes of feathers, bump into doors and taste peppermints. Early on, we are exposed to simple spoken words and syntaxes that are often not represented in writing. So, Dr. Wilcox concluded, a computer that produces language after being trained on gazillions of written words can tell us only so much about our own linguistic process.

But if a language model were exposed only to words that a young human encounters, it might interact with language in ways that could address certain questions we have about our own abilities.

So, together with a half-dozen colleagues, Dr. Wilcox, Dr. Mueller and Dr. Warstadt conceived of the BabyLM Challenge, to try to nudge language models slightly closer to human understanding. In January, they sent out a call for teams to train language models on the same number of words that a 13-year-old human encounters: roughly 100 million. Candidate models would be tested on how well they generated and picked up the nuances of language, and a winner would be declared.

Eva Portelance, a linguist at McGill University, came across the challenge the day it was announced. Her research straddles the often blurry line between computer science and linguistics. The first forays into A.I., in the 1950s, were driven by the desire to model human cognitive capacities in computers; the basic unit of information processing in A.I. is the neuron, and early language models in the 1980s and 90s were directly inspired by the human brain.

But as processors grew more powerful, and companies started working toward marketable products, computer scientists realized that it was often easier to train language models on enormous amounts of data than to force them into psychologically informed structures. As a result, Dr. Portelance said, "they give us text that's humanlike, but there's no connection between us and how they function."

For scientists interested in understanding how the human mind works, these large models offer limited insight. And because they require tremendous processing power, few researchers can access them. "Only a small number of industry labs with huge resources can afford to train models with billions of parameters on trillions of words," Dr. Wilcox said.

"Or even to load them," Dr. Mueller added. "This has made research in the field feel slightly less democratic lately."

The BabyLM Challenge, Dr. Portelance said, could be seen as a step away from the arms race for bigger language models, and a step toward more accessible, more intuitive A.I.

The potential of such a research program has not been ignored by bigger industry labs. Sam Altman, the chief executive of OpenAI, recently said that increasing the size of language models would not lead to the same kind of improvements seen over the past few years. And companies like Google and Meta have also been investing in research into more efficient language models, informed by human cognitive structures. After all, a model that can generate language when trained on less data could potentially be scaled up, too.

Whatever profits a successful BabyLM might hold, for those behind the challenge, the goals are more academic and abstract. Even the prize subverts the practical. "Just pride," Dr. Wilcox said.

Go here to see the original:

Would Large Language Models Be Better If They Werent So Large? - The New York Times

Read More..