
Sam Altman In Talks to Return to OpenAI | by Derick David | Utopian … – Medium

Sam Altman, cofounder of OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all US senators hosted by Senate Majority Leader Chuck Schumer at the US Capitol in Washington, on Sep 13, 2023.

OpenAI's board is in discussions with Sam Altman about returning as CEO, just a day after he was ousted. Altman was fired by the board on Friday with no notice, and major investors, including Microsoft, were blindsided.

Sam Altman co-founded OpenAI with Elon Musk and a team of AI scientists in 2015 with the goal of developing safe and beneficial AI. He has since been the face of the company and a leading figure in the field, and he is credited with the creation of ChatGPT.

Microsoft released a statement saying it is still committed to its partnership with OpenAI but was caught off guard, like other investors, by the board's abrupt decision to oust CEO Sam Altman, leaving the company's future in doubt amid fierce competition in the AI landscape and the rise of LLMs such as Google Bard, ChatGPT, and now xAI.

According to The Verge, citing multiple people familiar with the matter, OpenAI's board is in discussions with Altman about returning to the company as CEO. Altman, who was unexpectedly let go by the board, is undecided about a comeback and is demanding substantial changes to the company's governance.

The 4 board members who voted out Sam Altman:

Helen Toner: Director of strategy and foundational research grants at Georgetown's CSET and an expert on China's AI landscape. She joined OpenAI's board in September 2021.

Adam D'Angelo: CEO of Quora and an advocate for OpenAI's capped-profit structure and nonprofit control. He joined OpenAI's board in April 2018. He also created Poe, an AI chatbot app that allows users to interact with many different chatbots (including ChatGPT, Claude, Llama, PaLM 2, etc.).

Tasha McCauley: Adjunct senior management scientist at the RAND Corporation and co-founder of Fellow Robots and GeoSim Systems. She is also a signatory of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Sam Altman, Ilya Sutskever, and Elon Musk also signed.)

Ilya Sutskever: OpenAI co-founder and Russian-born chief scientist, co-author of a key paper in neural networks, who helped lead the AlphaGo project.

Unlike the board of a traditional company, OpenAI's board is not focused on making money for shareholders. In fact, none of the board members even own shares in the company. Instead, their goal is to make sure that artificial general intelligence (AGI) is developed in a way that benefits everyone, not just a select few.

This is a very different approach than the one taken by most companies. Typically, companies are run by a board of directors who are responsible for making decisions that will increase shareholder value.

This often means maximizing profits, even if it comes at the expense of other stakeholders, such as employees, customers, or the environment.

This is a challenging task, but it's one that OpenAI's board is taking very seriously. They are working with some of the world's leading experts on AI to develop guidelines and safeguards that will help ensure that AGI is used for the benefit of all.

Follow this link:
Sam Altman In Talks to Return to OpenAI | by Derick David | Utopian ... - Medium


Absolutely, here’s an article on the impact of upcoming technology – Medium

Photo by Possessed Photography on Unsplash

In the ever-evolving world of technology, one can hardly keep track of the pace at which advancements occur. In every industry, from healthcare to entertainment, technology is causing sweeping changes, redefining traditional norms, and enhancing efficiency on an unprecedented scale. This is an exploration of just a few of the innovative technological advancements that are defining the future.

Artificial Intelligence (AI), already disruptive in its impact, continues to push barriers. With the introduction of advanced systems such as OpenAI's GPT-3 or DeepMind's AlphaGo, the world is witnessing AI's potential for generating human-like text, making accurate predictions, solving problems, and developing strategy. Companies are reaping the benefits of AI, including improved customer service and streamlined operational processes.

Blockchain technology, while often associated solely with cryptocurrencies, has capabilities far beyond the world of finance. Its transparent and secure nature promises to reform industries like supply chain management, healthcare and even elections, reducing fraud and increasing efficiency.

In the realm of communication, 5G technology is set to revolutionize not only how we connect with each other but also how machines interconnect. Its ultra-fast, stable connection and low latency promise to drive the Internet of Things (IoT) to new heights, fostering an era of smart cities and autonomous vehicles.

Virtual and Augmented Reality (VR/AR) technologies have moved beyond the gaming industry to more practical applications. Industries such as real estate, tourism, and education are starting to realize the immense potential of these technologies for enhancing customer experience and learning outcomes.

Quantum computing, though still in its infancy, holds extraordinary promise with its potential to solve complex computational problems at unprecedented speeds. This technology could bring profound impacts to sectors such as pharmacology, weather forecasting, and cryptography.

These breakthroughs represent the astounding future that lies ahead, but they also hint at new challenges to be navigated. As we move forward, questions surrounding ethical implications, data privacy, and security need to be addressed. However, what's undeniable is the critical role technology will play in shaping our collective future. This evolution inspires awe and eager anticipation of what is yet to come.

Continue reading here:
Absolutely, here's an article on the impact of upcoming technology - Medium


UW computer science research event offers a glimpse of the future … – GeekWire

UW computer science students Shirley Xue (left) and Dilini Nissanka wearing low-powered wireless earrings they helped develop that could be an alternative to smartwatches and other wearable health devices. (GeekWire Photo / Taylor Soper)

Inside one of the University of Washington's computer science buildings on Tuesday evening, students showed off smart earrings that monitor health metrics, and earbuds that measure blood pressure.

On the floor below, an assistive dexterous arm picked up pieces of fruit as part of a robot-assisted feeding system.

Others demoed their research on the implications of facial recognition technology and security of government websites.

The annual Research Showcase and Open House at the UW's Paul G. Allen School of Computer Science & Engineering offered a glimpse at the current state and potential direction of computing, demonstrating the growing impact of artificial intelligence as both a focus and a tool for computer science breakthroughs.

At the outset of the research process, generative AI tools such as ChatGPT and Google Bard are dramatically accelerating the process of synthesizing and summarizing existing computer science literature, while also helping to brainstorm potential questions to study, said UW computer science professor Shwetak Patel.

"Getting to a candidate research hypothesis is so much faster now," Patel said. Before, he explained, it would take months. But now, "You can do this in an hour."

Many of the UW researchers are pursuing and achieving AI breakthroughs.

Seattle venture capital firm Madrona Venture Group each year recognizes teams that demonstrate top research with strong commercial potential. This year both the winner, a project called QLoRA, and runner-up, dubbed Punica, are working on different ways to more efficiently fine-tune large language models.

The picks reflect the recent boom and attention on generative AI and LLMs.

"While a lot of exciting news comes from industry right now, the research presented today shows some of the importance and impact of academic research in this space," said Magdalena Balazinska, the Allen School director.

The event also highlighted the emerging disparities in the field, and efforts to overcome them.

The luncheon keynote speaker, Hanna Hajishirzi, a UW associate professor and senior research manager at the Allen Institute for AI (AI2), gave attendees the latest details on OLMo, an AI2 initiative to develop a transparent, open large language model.

"The challenge that we're facing is that all these state-of-the-art models nowadays are being developed by private companies. And all of these models are proprietary," she said. "So it's very hard for AI researchers to actually understand and analyze what is going on behind the doors of these large language models."

While researchers can use existing large language models as part of their work, funding and access to the immense processing power needed to train their own LLMs is an ongoing challenge, said the UW's Patel.

"We just literally don't have the compute," Patel said. "We have to think about research problems that can inform foundational models, or think about application areas. But academia and industry have to co-evolve. And it's hard, honestly, to train these models in an academic context."

Many of the UW projects showed how tech can be used for good.

Madrona's second runner-up award went to a team working on wireless earbuds that can perform hearing screenings.

The people's choice award went to the group building the robot-assisted feeding system, a project aimed at helping those who are unable to perform essential tasks live more independently.

"Robots can really represent an extension of one's independence and an extension of one's ability to act in the world," said Amal Nanavati, a UW Ph.D. computer science student on the team. "I think we need more people focusing on projects like this, to take cutting-edge technology that we are actively developing and apply it to the needs of people who have been underserved by technological progress so far."

Several projects demonstrated how technology is becoming smaller, faster, cheaper, and more embedded.

"It's passively there and helps you get better health," said Shirley Xue, a Ph.D. student who helped develop smart earrings for health monitoring.

UW students aren't just focused on developing better software or hardware. They're also thinking about the implications of the technology on society.

Ph.D. student Rachel Hong is part of a team researching racial equity in facial recognition software. The work focuses on data collection methods that power such models, which have sparked controversy.

"With all the push for machine learning and LLMs, they work well a majority of the time but when they don't, it can be incredibly consequential," Hong said.

View a list of the presenting teams for the poster sessions here, and those who gave presentations during the day here.

Read more from the original source:

UW computer science research event offers a glimpse of the future ... - GeekWire


Learnbay Collaborates with Woolf to Launch Master’s Degree in … – Press Trust of India

Source Name : Learnbay by Learnvista Pvt. Ltd.

Category Name : General, High Technology

Updated: 20/11/2023

Bengaluru, Karnataka, India (Business Wire India)

Learnbay, a leading professional upskilling ed-tech startup, is delighted to unveil its cutting-edge Master's Degree following a high-profile collaboration with Woolf, the world's pioneering global collegiate institution. This innovative program offers two specialized tracks in Computer Science: Data Science and Data Analytics, and Artificial Intelligence and Machine Learning. Additionally, a unique focus on Software Engineering will be available, setting a new standard for academic excellence in the tech domain.

Highlighted below are the core specializations offered in the Master's Degree:

1. Master in CS - Specializations

Data Science and Data Analytics

Artificial Intelligence and Machine Learning

2. Master in CS - Software Engineering

This alliance is set to offer a Master's qualification that stands shoulder-to-shoulder with those from premier institutions in the US & Europe. An emblem of academic rigor and industry relevance, the 18-month 'Excelvarsity' program promises a suite of distinctive advantages:

Universal Appeal: Crafted to embrace candidates from diverse technical backgrounds, ensuring broader talent development.

Global Recognition: Degrees certified by Woolf, designed to unlock international tech-career opportunities for graduates.

Hands-on Learning: Beyond theory, participants gain practical insights through a unique project experience certification.

Professional Flexibility: Live, interactive classes designed with the working individual in mind.

Career Headstart: Unparalleled 100% career assistance, offering graduates an edge in the competitive job arena.

On this momentous occasion, Mr. Krishna Kumar, CEO of Learnbay, shared, "Our collaboration with Woolf is a testament to Learnbay's dedication to refining the future of professional upskilling. The 'Excelvarsity' program embodies our vision to bridge the data science and AI expertise gap, ensuring our students remain at the forefront of tech innovation."

Founder and Head of Institution at Woolf, Dr. Joshua Broggi, added, "We are honored to partner with Learnbay on the 'Excelvarsity' Master's program. This collaboration underscores our joint commitment to fostering lifelong learning opportunities. In today's digital landscape, Data Science and AI stand as pivotal catalysts, transforming both industries and societies at large. Recognizing the imperative of these competencies, Learnbay Excelvarsity has meticulously crafted this course. Through this master's program, we aim not merely to equip students and professionals for the future, but to empower them to actively sculpt it."

Learnbay provides a distinctive opportunity for professionals in tech and non-tech sectors to future-proof their careers. The degree is tailored for those aiming to broaden their skills, leaders keen on refining their decision-making in an AI and data-driven era, and visionary business owners focused on ensuring their enterprises thrive in the current market landscape.

About Learnbay

Learnbay, founded by Mr. Krishna Kumar, Mrs. Nisha Kumari, and Mr. Abhishek Gupta in 2015, is a Bengaluru-based EdTech firm. Its primary focus is to offer data science, AI, ML, full-stack web development, and advanced software development courses ranging from 180 to 400 hours to working professionals. Besides providing course completion certificates, the company also offers certifications for project work and micro-skills. Students may access this hybrid learning style through authorized centers in cities across India, including Bangalore, Pune, Mumbai, Hyderabad, and Delhi. Learnbay's most distinguishing qualities are its personalized learning strategies, modules, and support. The organization offers focused career counseling to assist aspirants in choosing the best career path based on their educational and professional skills.

(Disclaimer: The above press release comes to you under an arrangement with Business Wire India. PTI takes no editorial responsibility for the same.)

Read more here:

Learnbay Collaborates with Woolf to Launch Master's Degree in ... - Press Trust of India


Willamette students and faculty shine at regional computer science … – willamette.edu

For two days in October, Willamette University's computer science faculty and a student participated in the Consortium for Computing Sciences Northwest (CCSC-NW) Conference. The event featured computer science technology research and education from several colleges in the Pacific Northwest. Professor of Computer Science Haiyan Cheng served as this year's conference program chair and as a session chair.

Participants presented papers, submitted work for the conference's yearly publication, and took part in a panel discussion and tutorials. Students were also invited to participate in a poster competition and share their research.

This year, Spencer Veatch BA/BS'24, MS'24 won first place in the poster competition. Veatch, a student in Willamette's new accelerated 3+1 data science program, presented his research using computer vision to more accurately differentiate between eight kinds of skin cancer. His research was the culmination of his final project in Professor Cheng's Introduction to Data Science course, where he became increasingly interested in machine learning, a burgeoning field that uses sophisticated artificial intelligence and deep-learning algorithms.

Veatch hopes to expand his work on the project to include more diverse datasets, and says that the CCSC-NW gave him an opportunity to network with professionals working at the intersection of data science and health.

Also attending this year's conference were Assistant Professor Calvin Deutschbein, who served as the student poster chair, and Assistant Professor Lucas Cordova, who will serve as next year's conference chair.

Willamette will host next year's conference for the first time since 2004. The honor of hosting the conference is a testament to the success of Willamette's new School of Computing & Information Sciences and its faculty's leadership in the field. Without the typical travel barriers, Willamette students can also register for the event at a discount to listen to the newest developments from Computer Science faculty, connect with industry experts, or enter their research in the poster competition.

Photo caption: pictured left to right: Willamette student Spencer Veatch, Professor Haiyan Cheng, Assistant Professor Lucas Cordova, and Assistant Professor Calvin Deutschbein

More here:

Willamette students and faculty shine at regional computer science ... - willamette.edu


UGA School of Computing hosts Research Day at Georgia Center – Red and Black

Keynote speaker Irfan Essa, the senior associate dean of the College of Computing at Georgia Tech, explained recent developments in artificial intelligence and how these advancements may impact society at the University of Georgia School of Computing's Research Day on Nov. 17.

"Ask not when, or if, AI will replace people," Essa said. "Ask when people using AI will replace people not using AI."

There was also computer science trivia and a panel discussion, in addition to the keynote address. Anna Stenport, Dean of the Franklin College of Arts and Sciences, opened with a speech emphasizing the importance of the new School of Computing and all the new possibilities that come with the establishment of this interdisciplinary school.

In July 2022, UGA's department of computer science was elevated to the School of Computing, which is jointly administered by the College of Engineering and the Franklin College of Arts and Sciences.

"This is a jointly appointed school with broad horizons and a bright future," Stenport said. "I have come to know, over just the past number of months that I've been on campus, the ways in which we embrace the distinctiveness of this School of Computing, because we embrace the old, the new and the ambitious."

However, Stenport also said that new opportunities come with a responsibility to ethically consider how the power of computing is applied. Shelley Hooks, the Associate Vice-President of Research at UGA, seconded that, saying UGA was a particularly exciting place for computing and artificial intelligence fields.

Essa's speech, titled "Generative AI and Responsive AI," began with an explanation of what artificial intelligence is and the research and information that goes into its development. He said the interdisciplinary system requires an understanding of humans, as well as the map keys and ethics on how to deploy said systems.

Essa went on to discuss the controversial topics surrounding the development of artificial intelligence, which include subjects ranging from ChatGPT and doctored videos to self-driving cars and robots.

"You have to remember that there's a lot of misinformation and disinformation that doesn't use any of this technology," Essa said. "People take a picture from 10 years ago and post it saying this happened today and it creates riots. There is no minute image manipulation."

Throughout his keynote speech, Essa went into technical detail about the inner workings of, and ongoing research involving, artificial and augmented intelligence. The full speeches can be viewed here.

Following the keynote speech, attendees played computer science trivia. After prizes were awarded for the trivia winners, a panel of computer science and artificial intelligence professors convened to discuss the impacts that AI development has had on education.

"Different people learn in different ways," Essa said. "So we cannot fully adapt to every possible learner, but an AI system can."

Following the panel discussion, undergraduate and graduate students presented posters that they created to inform about artificial intelligence, and then professors and students shared a light dinner and discussed computer science in further detail.

Visit link:

UGA School of Computing hosts Research Day at Georgia Center - Red and Black


Using AI to track island fish earns national award | University of … – University of Hawaii

Five fish are marked using FISHTRAC. Detecting them in underwater video is typically difficult due to the complex coral reef background.

In groundbreaking research at the University of Hawaii at Hilo, computer scientist Travis Mandel and his interdisciplinary team have garnered national acclaim for their innovative use of artificial intelligence in revolutionizing reef fish tracking. The team's FISHTRAC software, a collaboration among students, alumni and faculty, employs AI to meticulously follow individual fish through video footage and photographs, providing a transformative solution to the time-consuming task of image review in marine research.

Mandel and his team published their research paper in Pattern Recognition this past March and recently learned they won the editors choice award.

"It certainly works better than all the other algorithms we compared it to in the paper, which was a very large number," said Mandel, who is an associate professor of computer science and director of UH Hilo's interdisciplinary data science program.

It's still not perfect, he explained, but the reality is the AI allows for much less human effort and time in terms of someone sitting and watching a hundred videos and then drawing boxes around every frame, because that takes forever.

The AI-based video identification would serve as an alternative to catch-and-release tagging research methods, a process that can be invasive to the fish. The central question the research team hoped to address: How can AI and machine learning systems work with humans to solve real problems?

Initially, Mandel was called on by environmental scientists at UH Hilo to help with computer vision issues, or the ability of software to recognize objects consistently in photographs or videos. The process of teaching an AI engine to learn is complex, and projects such as these are on the cutting edge of computer science and environmental science today.

Although Mandel's training was not in computer vision, the pressing need for research within this field quickly presented itself.

"A lot of people started reaching out to me, faculty members and grad students in different disciplines, saying, 'Hey, can you help us with our computer vision problems?'" said Mandel.

Mandel co-authored the paper with UH Hilo alumni Mark Jimenez (computer science), Emily Risley (computer science), Taishi Nammoto (physics) and Rebekka Williams (mathematics). UH Hilo students Meynard Ballesteros (computer science) and Bobbie Suarez (tropical conservation biology and environmental science) also served as co-authors alongside Max Panoff, a doctoral student in electrical and computer engineering at the University of Florida.

For more go to UH Hilo Stories.

By Evangeline Lemieux, who is double majoring in English and medical anthropology at UH Hilo.

Continued here:

Using AI to track island fish earns national award | University of ... - University of Hawaii


Recognizing fake news now a required subject in California schools – Pleasanton Weekly

Pushing back against the surge of misinformation online, California will now require all K-12 students to learn media literacy skills such as recognizing fake news and thinking critically about what they encounter on the internet.

Gov. Gavin Newsom last month signed Assembly Bill 873, which requires the state to add media literacy to curriculum frameworks for English language arts, science, math and history-social studies, rolling out gradually beginning next year. Instead of a stand-alone class, the topic will be woven into existing classes and lessons throughout the school year.

"I've seen the impact that misinformation has had in the real world how it affects the way people vote, whether they accept the outcomes of elections, try to overthrow our democracy," said the bill's sponsor, Assemblymember Marc Berman, a Democrat from Menlo Park. "This is about making sure our young people have the skills they need to navigate this landscape."

The new law comes amid rising public distrust in the media, especially among young people. A 2022 Pew Research Center survey found that adults under age 30 are nearly as likely to believe information on social media as they are from national news outlets. Overall, only 7% of adults have "a great deal" of trust in the media, according to a Gallup poll conducted last year.

Media literacy can help change that, advocates believe, by teaching students how to recognize reliable news sources and the crucial role that media plays in a democracy.

"The increase in Holocaust denial, climate change denial, conspiracy theories getting a foothold, and now AI ... all this shows how important media literacy is for our democracy right now," said Jennifer Ormsby, library services manager for the Los Angeles County Office of Education. "The 2016 election was a real eye-opener for everyone on the potential harms and dangers of fake news."

AB 873 passed nearly unanimously in the Legislature, underscoring the nonpartisan nature of the topic. Nationwide, Texas, New Jersey and Delaware have also passed strong media literacy laws, and more than a dozen other states are moving in that direction, according to Media Literacy Now, a nonprofit research organization that advocates for media literacy in K-12 schools.

Still, California's law falls short of Media Literacy Now's recommendations. California's approach doesn't include funding to train teachers, an advisory committee, input from librarians, surveys or a way to monitor the law's effectiveness.

Keeping the bill simple, though, was a way to help ensure its passage, Berman said. Those features can be implemented later, and he felt it was urgent to pass the law quickly so students can start receiving media literacy education as soon as possible. The law goes into effect Jan. 1, 2024, as the state begins updating its curriculum frameworks, although teachers are encouraged to teach media literacy now.

Berman's law builds on a previous effort in California to bring media literacy to K-12 classrooms. In 2018, Senate Bill 830 required the California Department of Education to provide media literacy resources (lesson plans, project ideas, background) to the state's K-12 teachers. But it didn't make media literacy mandatory.

The new law also overlaps somewhat with California's effort to bring computer science education to all students. The state hopes to expand computer science, which can include aspects of media literacy, to all students, possibly even requiring it to graduate from high school. Newsom recently signed Assembly Bill 1251, which creates a commission to look at ways to recruit more computer science teachers to California classrooms. Berman is also sponsoring Assembly Bill 1054, which would require high schools to offer computer science classes. That bill is currently stalled in the Senate.

Understanding media, and creating it

Teachers don't need a state law to show students how to be smart media consumers, and some have been doing it for years. Merek Chang, a high school science teacher at Hacienda La Puente Unified in the City of Industry east of Los Angeles, said the pandemic was a wake-up call for him.

During remote learning, he gave students two articles on the origins of the coronavirus. One was an opinion piece from the New York Post, a tabloid, and the other was from a scientific journal. He asked students which they thought was accurate. More than 90% chose the Post piece.

"It made me realize that we need to focus on the skills to understand content, as much as we focus on the content itself," Chang said.

He now incorporates media literacy in all aspects of his lesson plans. He relies on the Stanford History Education Group, which offers free media literacy resources for teachers, and took part in a KQED media literacy program for teachers.

In addition to teaching students how to evaluate online information, he shows them how to create their own media. Homework assignments include making TikTok-style videos on protein synthesis for mRNA vaccines, for example. Students then present their projects at home or at lunchtime events for families and the community.

"The biggest impact, I've noticed, is that students feel like their voice matters," Chang said. "The work isn't just for a grade. They feel like they're making a difference."

Ormsby, the Los Angeles County librarian, has also been promoting media literacy for years. Librarians generally have been on the forefront of media literacy education, and California's new law refers to the Modern School Library Standards for media literacy guidelines.

Ormsby teaches concepts like "lateral reading" (comparing an online article with other sources to check for accuracy) and reverse imaging (searching online to trace a photo to its original source or checking if it's been altered). She also provides lesson plans, resources and book recommendations such as "True or False: A CIA analyst's guide to spotting fake news" and, for elementary students, "Killer Underwear Invasion! How to spot fake news, disinformation & conspiracy theories."

She's happy that the law passed, but would like to see librarians included in the rollout and the curriculum implemented immediately, not waiting until the frameworks are updated.

The gradual implementation of the law was deliberate, since schools are already grappling with so many other state mandates, said Alvin Lee, executive director of Generation Up, a student-led advocacy group that was among the bill's sponsors. He's hoping that local school boards decide to prioritize the issue on their own by funding training for teachers and moving immediately to get media literacy into classrooms.

"Disinformation contributes to polarization, which we're seeing happen all over the world," said Lee, a junior at Stanford who said it's a top issue among his classmates. "Media literacy can address that."

In San Francisco Unified, Ricardo Elizalde is a teacher on special assignment who trains elementary teachers in media literacy. His staff gave out 50 copies of "Killer Underwear!" for teachers to build activities around, and encourages students to make their own media, as well.

Elementary school is the perfect time to introduce the topic, he said.

"We get all these media thrown at us from a young age, we have to learn to defend ourselves," Elizalde said. "Media literacy is a basic part of being literate. If we're just teaching kids how to read, and not think critically about what they're reading, we're doing them a disservice."

Excerpt from:

Recognizing fake news now a required subject in California schools - Pleasanton Weekly


Argonne receives funding to advance diversity in STEM – Argonne National Laboratory

The U.S. Department of Energy (DOE) has awarded DOE's Argonne National Laboratory funding as part of the Reaching a New Energy Sciences Workforce (RENEW) initiative, aimed at fostering diversity in STEM and advancing innovative research opportunities.

DOE announced $70 million to support internships, training programs and mentorship opportunities at 65 different institutions, including 40 higher-learning institutions that serve minority populations. By supporting these partnerships, DOE aims to create a more diverse STEM talent pool capable of addressing critical energy, environmental and nuclear challenges.

"To compete on the global stage, America will need to draw scientists and engineers from every pocket of the nation, and especially from communities that have been historically underrepresented in STEM," said U.S. Secretary of Energy Jennifer M. Granholm. "The RENEW initiative will support talented, motivated students to follow their passions for science, energy and innovation, and help us overcome challenges like climate change and threats to our national security."

RENEW will offer hands-on experiences and open new career avenues for young scientists and engineers from minority-serving institutions.

Argonne is partnering with six minority-serving institutions to mentor 24 undergraduate and eight doctoral students on research projects related to artificial intelligence (AI) and autonomous discovery (AD), an initiative that is harnessing the power of robotics, machine learning and AI to accelerate the pace of science. Computational biologist Arvind Ramanathan is co-PI on the project, which is called "Mobilizing the Emerging Diverse AI Talent through Design and Automated Control of Autonomous Scientific Laboratories." Argonne will leverage its AD facilities, such as the Rapid Prototyping Lab, where researchers identify common issues that can arise during AD and then quickly create and test solutions.

The lead PI is Sumit Kumar Jha, professor of computer science at Florida International University. Other university partners include Bowie State University, Cleveland State University, Oakland University and University of Central Florida.

The RENEW initiative leverages the unique capabilities of DOEs national laboratories, user facilities and research infrastructure to provide valuable training opportunities for students, faculty and researchers from underrepresented backgrounds. This project is funded by the DOE Office of Science, Advanced Scientific Computing Research program.

See the original post here:

Argonne receives funding to advance diversity in STEM - Argonne National Laboratory


The mind’s eye of a neural network system – Purdue University

WEST LAFAYETTE, Ind. In the background of image recognition software that can ID our friends on social media and wildflowers in our yard are neural networks, a type of artificial intelligence inspired by how our own brains process data. While neural networks sprint through data, their architecture makes it difficult to trace the origin of errors that are obvious to humans, like confusing a Converse high-top with an ankle boot, limiting their use in more vital work like health care image analysis or research. A new tool developed at Purdue University makes finding those errors as simple as spotting mountaintops from an airplane.

"In a sense, if a neural network were able to speak, we're showing you what it would be trying to say," said David Gleich, a Purdue professor of computer science in the College of Science who developed the tool, which is featured in a paper published in Nature Machine Intelligence. "The tool we've developed helps you find places where the network is saying, 'Hey, I need more information to do what you've asked.' I would advise people to use this tool on any high-stakes neural network decision scenarios or image prediction task."

Code for the tool is available on GitHub, as are use case demonstrations. Gleich collaborated on the research with Tamal K. Dey, also a Purdue professor of computer science, and Meng Liu, a former Purdue graduate student who earned a doctorate in computer science.

In testing their approach, Gleich's team caught neural networks mistaking the identity of images in databases of everything from chest X-rays and gene sequences to apparel. In one example, a neural network repeatedly mislabeled images of cars from the Imagenette database as cassette players. The reason? The pictures were drawn from online sales listings and included tags for the cars' stereo equipment.

Neural network image recognition systems are essentially algorithms that process data in a way that mimics the weighted firing pattern of neurons as an image is analyzed and identified. A system is trained to its task, such as identifying an animal, a garment or a tumor, with a training set of images that includes data on each pixel, tagging and other information, and the identity of the image as classified within a particular category. Using the training set, the network learns, or extracts, the information it needs in order to match the input values with the category. This information, a string of numbers called an embedded vector, is used to calculate the probability that the image belongs to each of the possible categories. Generally speaking, the correct identity of the image is within the category with the highest probability.

But the embedded vectors and probabilities don't correlate to a decision-making process that humans would recognize. Feed in 100,000 numbers representing the known data, and the network produces an embedded vector of 128 numbers that don't correspond to physical features, although they do make it possible for the network to classify the image. In other words, you can't open the hood on the algorithms of a trained system and follow along. Between the input values and the predicted identity of the image is a proverbial black box of unrecognizable numbers across multiple layers.
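To make that pipeline more concrete, here is a minimal illustrative sketch in Python, not the Purdue tool itself: a stand-in "trained" network turns an image's input values into an embedded vector of 128 numbers, converts that vector into a probability for each category, and reports the category with the highest probability. The random weights, the 10 hypothetical categories, and the sizes (100,000 inputs, 128-number embedding, taken from the example above) are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for a trained network's final layers; a real system learns these weights.
    W_embed = rng.normal(size=(100_000, 128)) * 0.01   # input values -> 128-number embedded vector
    W_class = rng.normal(size=(128, 10)) * 0.01        # embedded vector -> one score per category
    categories = [f"category_{i}" for i in range(10)]  # hypothetical category names

    def classify(pixels):
        """Return the embedded vector and per-category probabilities for one image."""
        embedding = pixels @ W_embed          # 128 opaque numbers with no physical meaning
        scores = embedding @ W_class
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                  # softmax: scores become probabilities
        return embedding, probs

    pixels = rng.normal(size=100_000)         # stand-in for one image's known data
    embedding, probs = classify(pixels)
    print(categories[int(probs.argmax())], float(probs.max()))  # label = highest-probability category

The point of the sketch is the article's: the embedded vector is just numbers, and only the final probabilities tell you which category the network picked.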

"The problem with neural networks is that we can't see inside the machine to understand how it's making decisions, so how can we know if a neural network is making a characteristic mistake?" Gleich said.

Rather than trying to trace the decision-making path of any single image through the network, Gleich's approach makes it possible to visualize the relationship that the computer sees among all the images in an entire database. Think of it like a bird's-eye view of all the images as the neural network has organized them.

The relationship among the images (such as the network's predicted classification of each image in the database) is based on the embedded vectors and probabilities the network generates. To boost the resolution of the view and find places where the network can't distinguish between two different classifications, Gleich's team first developed a method of splitting and overlapping the classifications to identify where images have a high probability of belonging to more than one classification.

The team then maps the relationships onto a Reeb graph, a tool taken from the field of topological data analysis. On the graph, each group of images the network thinks are related is represented by a single dot. Dots are color-coded by classification. The closer the dots, the more similar the network considers the groups to be, and most areas of the graph show clusters of dots in a single color. But groups of images with a high probability of belonging to more than one classification will be represented by two differently colored, overlapping dots. With a single glance, areas where the network cannot distinguish between two classifications appear as a cluster of dots in one color, accompanied by a smattering of overlapping dots in a second color. Zooming in on the overlapping dots will show an area of confusion, like the picture of the car that's been labeled both "car" and "cassette player."
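As a rough illustration of the "splitting and overlapping" idea, and not the published method, which goes on to build the Reeb graph itself, the sketch below simply flags images whose probabilities are high for more than one category. The probability matrix and the 0.30 threshold are assumptions chosen only to make the example run.

    import numpy as np

    def find_overlaps(probs, threshold=0.30):
        """probs: (n_images, n_categories) array of per-category probabilities.
        Returns (image index, list of straddled categories) for each ambiguous image."""
        overlaps = []
        for i, p in enumerate(probs):
            confident = np.flatnonzero(p >= threshold)   # categories with high probability
            if confident.size > 1:                       # image plausibly belongs to 2+ classes
                overlaps.append((i, confident.tolist()))
        return overlaps

    # Example: image 1 straddles categories 0 and 2 (say, "car" vs. "cassette player").
    probs = np.array([
        [0.90, 0.05, 0.05],
        [0.45, 0.10, 0.45],
        [0.02, 0.95, 0.03],
    ])
    print(find_overlaps(probs))   # -> [(1, [0, 2])]

In the actual tool, it is these ambiguous groups that show up as overlapping, differently colored dots on the Reeb graph.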

"What we're doing is taking these complicated sets of information coming out of the network and giving people an 'in' into how the network sees the data at a macroscopic level," Gleich said. "The Reeb map represents the important things, the big groups and how they relate to each other, and that makes it possible to see the errors."

"Topological Structure of Complex Predictions" was produced with the support of the National Science Foundation and the U.S. Department of Energy.

Writer/Media contact: Mary Martialay; mmartial@purdue.edu

Source: David Gleich; dgleich@purdue.edu

Read more:

The mind's eye of a neural network system - Purdue University
