
Why the Godfather of A.I. Fears What He’s Built – The New Yorker

"I love this house, but sometimes it's a sad place," he said, while we looked at the pictures. "Because she loved being here and isn't here."

The sun had almost set, and Hinton turned on a little light over his desk. He closed the computer and pushed his glasses up on his nose. He squared up his shoulders, returning to the present.

"I wanted you to know about Roz and Jackie because they're an important part of my life," he said. "But, actually, it's also quite relevant to artificial intelligence. There are two approaches to A.I. There's denial, and there's stoicism. Everybody's first reaction to A.I. is 'We've got to stop this.' Just like everybody's first reaction to cancer is 'How are we going to cut it out?' But it was important to recognize when cutting it out was just a fantasy."

He sighed. "We can't be in denial," he said. "We have to be real. We need to think, 'How do we make it not as awful for humanity as it might be?'"

How useful, or dangerous, will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI's GPT models are brainlike in that they involve billions of artificial neurons, they're actually profoundly different from biological brains. Today's A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive. They have probably passed the Turing test: the long-heralded standard, established by the computing pioneer Alan Turing, which held that any computer that could persuasively imitate a human in conversation could be said, reasonably, to think. And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.

During his last few years at Google, Hinton focussed his efforts on creating more traditionally mindlike artificial intelligence using hardware that more closely emulated the brain. In today's A.I.s, the weights of the connections among the artificial neurons are stored numerically; it's as though the brain keeps records about itself. In your actual, analog brain, however, the weights are built into the physical connections between neurons. Hinton worked to create an artificial version of this system using specialized computer chips.

"If you could do it, it would be amazing," he told me. The chips would be able to learn by varying their conductances. Because the weights would be integrated into the hardware, it would be impossible to copy them from one machine to another; each artificial intelligence would have to learn on its own. "They would have to go to school," he said. "But you would go from using a megawatt to thirty watts." As he spoke, he leaned forward, his eyes boring into mine; I got a glimpse of Hinton the evangelist. Because the knowledge gained by each A.I. would be lost when it was disassembled, he called the approach "mortal computing." "We'd give up on immortality," he said. "In literature, you give up being a god for the woman you love, right? In this case, we'd get something far more important, which is energy efficiency." Among other things, energy efficiency encourages individuality: because a human brain can run on oatmeal, the world can support billions of brains, all different. And each brain can learn continuously, rather than being trained once, then pushed out into the world.
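To make the idea of weights living in the hardware concrete, here is a purely illustrative simulation of an analog crossbar, where each weight is a device conductance and a layer's multiply-accumulate is just currents summing on wires. This is a sketch of the general concept, not Hinton's actual design; every name and value in it is an assumption.

```python
import numpy as np

# Simulated analog crossbar: each weight is a conductance (in siemens), so a
# layer's matrix-vector product is just currents summing on the output wires.
rng = np.random.default_rng(0)
conductances = rng.uniform(0.0, 1e-3, size=(4, 3))   # 4 input rows x 3 output columns

def crossbar_forward(voltages):
    # I = V . G  (Ohm's law per device, Kirchhoff's current law per column)
    return voltages @ conductances

# "Learning by varying conductances": nudge each device toward a target output.
def train_step(voltages, target_currents, lr=0.1):
    error = crossbar_forward(voltages) - target_currents
    conductances[...] -= lr * np.outer(voltages, error)
    np.clip(conductances, 0.0, None, out=conductances)  # a conductance can't be negative

v = np.array([0.2, 0.0, 0.5, 0.1])
train_step(v, target_currents=np.array([1e-4, 2e-4, 0.0]))
```

The key consequence the article describes falls out of this picture: the learned state is the physical devices themselves, so it cannot simply be copied off one chip onto another.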

As a scientific enterprise, mortal A.I. might bring us closer to replicating our own brains. But Hinton has come to think, regretfully, that digital intelligence might be more powerful. "In analog intelligence, if the brain dies, the knowledge dies," he said. By contrast, in digital intelligence, "if a particular computer dies, those same connection strengths can be used on another computer. And, even if all the digital computers died, if you'd stored the connection strengths somewhere you could then just make another digital computer and run the same weights on that other digital computer." Ten thousand neural nets can learn ten thousand different things at the same time, then share what they've learned. This combination of immortality and replicability, he says, suggests that we should be concerned about digital intelligence taking over from biological intelligence.
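The replicability Hinton describes is mundane in today's software: trained weights are just numbers that can be saved, loaded onto any machine, and even pooled across copies. Below is a minimal PyTorch sketch of that idea; the tiny network and the naive weight-averaging scheme are illustrative assumptions, not anything from the article (in practice, nets more often share gradients than finished weights).

```python
import torch
import torch.nn as nn

# A tiny stand-in network; the architecture is arbitrary and purely illustrative.
def make_net():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

net_a = make_net()

# "Immortality": the connection strengths are just numbers, so they can be
# serialized and resurrected on any other machine running the same architecture.
torch.save(net_a.state_dict(), "weights.pt")
net_b = make_net()
net_b.load_state_dict(torch.load("weights.pt"))

# "Sharing what they've learned": many copies can pool knowledge, here by the
# simplest possible scheme of averaging their weights parameter by parameter.
nets = [make_net() for _ in range(4)]
avg_state = {
    name: torch.stack([n.state_dict()[name] for n in nets]).mean(dim=0)
    for name in nets[0].state_dict()
}
merged = make_net()
merged.load_state_dict(avg_state)
```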

How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a "reasoning engine": a way, perhaps, of sliding out from under the weight of the word "thinking," which we struggle to define. "People blame us for using those words: thinking, knowing, understanding, deciding, and so on," Bengio told me. "But even though we don't have a complete understanding of the meaning of those words, they've been very powerful ways of creating analogies that help us understand what we're doing. It's helped us a lot to talk about imagination, attention, planning, intuition as a tool to clarify and explore." In Bengio's view, a lot of what we've been doing is solving the intuition aspect of the mind. Intuitions might be understood as thoughts that we can't explain: our minds generate them for us, unconsciously, by making connections between what we're encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. "For years, symbolic-A.I. people said our true nature is, we're reasoning machines," he told me. "I think that's just nonsense. Our true nature is, we're analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them."

On the whole, current A.I. technology is talky and cerebral: it stumbles at the borders of the physical. "Any teen-ager can learn to drive a car in twenty hours of practice, with hardly any supervision," LeCun told me. "Any cat can jump on a series of pieces of furniture and get to the top of some shelf. We don't have any A.I. systems coming anywhere close to doing these things today, except self-driving cars, and they are over-engineered, requiring mapping the whole city, hundreds of engineers, hundreds of thousands of hours of training." Solving the wriggly problems of physical intuition will be the big challenge of the next decade, LeCun said. Still, the basic idea is simple: if neurons can do it, then so can neural nets.


SCS Researchers To Receive $1.2M for Continued DOE Nuclear … – Carnegie Mellon University

The U.S. Department of Energy will continue funding research on nuclear fusion at Carnegie Mellon University's School of Computer Science.

The DOE recently announced $16 million in funding for nine projects spread across 13 institutions, including CMU, that aim to establish the scientific foundation needed to develop a fusion energy source. The projects focus on advancing innovative fusion technology and collaborative research on both small-scale experiments and the DIII-D National Fusion Facility in San Diego, the largest tokamak operating in the United States. CMU will receive about $1.2 million over three years.

"While establishing the scientific basis for fusion energy, we must also improve the maturity of existing fusion technologies and explore entirely new innovations that have the potential to revolutionize the fusion landscape," said Jean Paul Allain, DOE associate director of the Office of Science for Fusion Energy Sciences. "The extensive capabilities at DIII-D make it the ideal facility to pursue areas of great potential that are not sufficiently mature for adoption by the private sector."

Nuclear fusion happens when hydrogen nuclei smash, or fuse, together. This process releases a tremendous amount of energy but remains challenging to maintain at levels necessary for putting electricity on the grid. One method to produce nuclear fusion uses magnetic fields to contain a plasma of hydrogen at the required temperature and pressure to fuse the nuclei. This process happens inside a tokamak, a massive machine that uses magnetic fields to confine the hydrogen plasma in a donut shape called a torus. Containing the plasma and maintaining its shape require hundreds of micromanipulations to the magnetic fields and blasts of additional hydrogen particles.

The DOE funding will allow Jeff Schneider, a research professor in the Robotics Institute, and his team to continue their research on using machine learning to control fusion reactions.

Last year, Ian Char, a doctoral candidate in the Machine Learning Department advised by Schneider, used reinforcement learning to control the hydrogen plasma of the tokamak at DIII-D. Char was the first CMU researcher to run an experiment on the sought-after machines, the first to use reinforcement learning to affect the rotation of a tokamak plasma, and the first person to try reinforcement learning on the largest operating tokamak machine in the United States.
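For readers unfamiliar with the technique, reinforcement-learning control boils down to a loop of observe, act, receive reward, update. The toy below sketches that loop for a made-up plasma-control problem; the environment, setpoints, and crude policy-update rule are all hypothetical stand-ins, not the team's actual DIII-D setup.

```python
import numpy as np

TARGETS = np.array([1.0, 0.8, 0.5, 0.2])  # hypothetical setpoints (shape, current, density, rotation)

class ToyTokamakEnv:
    """Hypothetical stand-in for a plasma simulator; the real work controls
    against DIII-D itself and learned dynamics models, not placeholder noise."""
    def reset(self):
        self.state = np.zeros(4)
        return self.state

    def step(self, action):
        # Placeholder dynamics: actuators nudge the plasma state, plus noise.
        self.state = self.state + 0.1 * action + 0.01 * np.random.randn(4)
        reward = -np.abs(self.state - TARGETS).sum()   # closer to setpoints = better
        return self.state, reward

env, W = ToyTokamakEnv(), np.zeros((4, 4))   # W is a linear policy: action = W @ state

def episode_return(W):
    s, total = env.reset(), 0.0
    for _ in range(50):
        s, r = env.step(W @ s + 0.05)        # small constant drive so actions are nonzero
        total += r
    return total

# Crude finite-difference policy search: keep whichever perturbation of the
# policy earns more reward. The point is the loop structure, not the algorithm.
for _ in range(200):
    noise = 0.1 * np.random.randn(4, 4)
    if episode_return(W + noise) > episode_return(W - noise):
        W += 0.05 * noise
    else:
        W -= 0.05 * noise
```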

Schneider and his team will now attempt to develop a machine-learning-based system that simultaneously controls the injection of hydrogen particles, the shape of the plasma, and its current and density. Developing such a system is critical to the development of ITER, formerly known as the International Thermonuclear Experimental Reactor, an international nuclear fusion research project that will be the world's largest tokamak when it is completed in 2025.

"The proposed work will bring the power of machine learning techniques to plasma control at DIII-D. This will set the stage for the successful operation of ITER, which requires plasma control at a level beyond current capabilities, and will also expand the scientific understanding of plasma evolution and instabilities," Schneider said. "Carnegie Mellon University is leveraging its expertise in machine learning to help the global scientific community harness a new source of clean, abundant energy."

As it has in the past, the CMU team will collaborate with the Princeton Plasma Physics Laboratory and the SLAC National Accelerator Laboratory at Stanford on the work.


ASU center brings faculty together to research human-robot solutions – ASU News Now

November 15, 2023

To help mitigate the world biodiversity crisis, Arizona State University's Julie Ann Wrigley Global Futures Laboratory has recruited Harris Lewin, a prominent genome scientist currently spearheading one of biology's most ambitious moonshot goals: a complete DNA catalog of life's genetic code by the end of this decade.

Lewin leads the Earth BioGenome Project, a massive coalition of worldwide scientists and 50-plus ongoing projects that has a primary goal of completing high-quality DNA reference genomes, the gold standard of an organism's complete DNA genetic code and sequence, for all higher organisms on Earth, an estimated 1.8 million species.

Harris Lewin, scientists and leaders of the Earth BioGenome Project.

The global secretariat of the project, which was at the University of California, Davis (UC Davis), will also move to ASU in December.

"You really have to know who's there before you can really understand biology," Lewin said. "And right now, with only 10% of the species that exist having been named for most of life, or 80% to 90% of all life, we don't even know what's there."

Lewin's appointment as professor in ASU's Global Futures Laboratory boosts its comprehensive strategy to develop solutions for our world's planetary systems challenges, including the current biodiversity crisis. An estimated two-thirds of higher organisms may face the urgent threat of a new mass extinction, primarily due to the activities of humans that impact natural ecosystems and drive climate change.

"Today, with trying to build scalable models on understanding how ecosystems function and how they might be restored and remediated, we have to have detailed understanding of the organisms in those ecosystems," Lewin said. "We need to move as quickly as we can, because if species that comprise critical ecosystems are lost, they may never be recovered again."

Once a species goes extinct, scientists forever lose the ability to better understand what sustained its life, or if that species might be used to improve food or medicine production.

"As our world's life-supporting systems continue to be stressed to levels that have never before been recorded, the significance of the Earth BioGenome Project cannot be understated," said Peter Schlosser, vice president and vice provost of Global Futures at ASU.

"To have a pioneering scientist like Harris Lewin and his colleagues identify ASU and the Julie Ann Wrigley Global Futures Laboratory as not simply a logical home for this endeavor, but a preferred home because of our facilities and global network of partners, speaks volumes," Schlosser said. "As with all work designed to help shape options for a thriving future for our world and its inhabitants, this project is of the highest urgency and requires a deep cohort of experts from around the world."

The 19th century naturalist Charles Darwin wrote about the complexity of life on Earth, describing it as "endless forms most beautiful" 164 years ago in his profound book on evolution, On the Origin of Species.

In the 20th century, the structure of DNA was discovered. The combination and exact order of DNA chemical letters, abbreviated as A, C, T and G, are responsible for the blueprints of life. To better decipher this blueprint, DNA sequencing was invented in the 1970s.

With advances in sequencing technology in the 1990s, academic, private and government labs raced to complete the genomes for the first bacterium, yeast, nematode and fruit fly. The first draft of the Human Genome Project, a Herculean effort at the time, was completed in 2003, taking an international consortium of scientists 13 years to do so at an estimated cost of $3 billion.

Fifteen years later, Lewin co-founded the Earth BioGenome Project, or EBP, and today chairs its executive council. The project was announced at the World Economic Forum in Davos, Switzerland, at the beginning of 2018 and officially launched at the Wellcome Trust in London later that year.

He describes the EBP as a critical biology infrastructure project that will allow scientists to stand on the shoulders of giants to see further and better understand the world's biodiversity, akin to how astronomers have used tools such as the Webb Space Telescope to understand the nature of the universe.

"Genomes are the infrastructure for the future of biology and the bioeconomy," Lewin said. "Much like how the Webb Telescope allows you to peer into the cosmos to understand the origins and evolution of the universe, having all the sequence of eukaryotic life, those with a nucleus, will facilitate understanding of the origin and evolution of life on Earth."

Key facets of a bio-driven economy from genome science include renewable biofuels from algae, food crops like corn and soybeans, threats like agricultural pests, model scientific organisms for drug and medicine development, and biodefense and biosecurity issues, such as the recent worldwide COVID-19 pandemic. Other products of the bioeconomy will involve new industrial catalysts, biomaterials and drugs.

Working together, to date, the EBP has completed a pilot phase of about 2,000 genomes. Within the EBP, 55 genome projects are underway, the largest led by the U.K.'s Wellcome Sanger Institute, Rockefeller University, the European Union, Genome Canada, China, a pan-African consortium and Australia. In the U.S., Rockefeller University leads the Vertebrate Genomes Project, which has now completed over 300 genomes, and the California Conservation Genomics Project has finished over 150 genomes.

With rapid advances in DNA sequencing technology and computing power, Lewin thinks the EBP can sequence the rest of all 1.8 million named eukaryotic species for around the same cost as the human genome draft within the next 10 years.

Funding for the EBP will come from a variety of worldwide endeavors.

"There's no central funding," Lewin said. "It's a distributed model. Each of these projects raises their own money, but they're all agreeing to coordinate and work together with common standards towards the goal of sequencing all eukaryotes in 10 years. The limitation these days is really not the sequencing technology; the limitation is acquiring taxonomically well-identified, vouchered and ethically sourced samples from all over the world."

The next goal is to complete 10,000 genomes by the end of 2025. When fully up to speed, the affiliated projects of the EBP will need to sequence an estimated 1,500 genomes per day to meet its ambitious goal.

"This also includes a very aggressive set of standards for a collection of samples, all the metadata that gets collected with them, and how the sequencing is to be done and to what specifications in terms of quality," Lewin said.

With the move to ASU, there will now be abundant opportunities to develop an EBP at ASU program to sequence and better understand iconic life found in desert climates, from the mighty arms of the saguaro cactus to Gila monsters to Gambel's quail to the diamondback rattlesnake.

The EBP at ASU will be greatly strengthened by the National Science Foundation's NEON (National Ecological Observatory Network) Biorepository, directed by Nico Franz, Virginia M. Ullman Professor of Ecology and Biocollections director.

"Our team is thrilled to have the opportunity to work with Harris Lewin," Franz said. "We have a shared, inclusive vision to advance EBP at ASU and beyond. This model is based on sound biodiversity sampling design, ethical data governance and broadly impacting education in the computational life sciences."

From the world's coral reefs to rainforests, which together account for an estimated 75% of worldwide biodiversity, to temperate land climates, ASU has been at the forefront of developing innovative solutions for understanding and conserving biodiversity.

"ASU will be one of the global centers for Earth BioGenome Project, not just on the sample provision side, but all the way through sequencing, assembly and analysis," Lewin said. "We certainly have early plans to try and understand desert ecosystems and to reveal the impacts of climate change on those critical ecosystems, including aquatic ecosystems."

The Earth BioGenome Project now joins the new School of Ocean Futures, NeoBio, Bermuda Institute of Ocean Science, Center for Global Discovery and Conservation Science, and Center for Biodiversity Outcomes as ASU's academic lead initiatives to help solve the world biodiversity crisis.

"We are excited to see how this work integrates with programs like the Bermuda Institute of Ocean Sciences and Nico Franz's research with the Biodiversity Knowledge Integration Center and NEON," Schlosser said. "These collaborations can help repair, preserve and protect our world's ecosystems."

Lewin's official ASU appointment began Nov. 1. For the past 12 years, Lewin served as distinguished professor of evolution and ecology and former vice chancellor for research at UC Davis. He is a member of the National Academy of Sciences and won the Wolf Prize in Agriculture for his research into cattle genomics. He has been a leader in the field of mammalian comparative genomics and has made major contributions to our understanding of chromosome evolution and its relationship to adaptation, speciation and the origins of cancers. Previously, Lewin worked at the University of Illinois for 27 years and, in 2003, served as the founding director of the Carl R. Woese Institute for Genomic Biology.


The Reach of Online Learning to Ensure Continuing Access to … – USC Viterbi School of Engineering

Website created by female students from Afghanistan as part of a DEN at USC Viterbi course.

With many students in the world today living under challenging circumstances, continuing access to educational opportunities can be nearly impossible. Recognizing these unforeseen challenges, USC Viterbi faculty turned to DEN@Viterbi, the Distance Education Network at USC Viterbi, which has more than 50 years of experience in hybrid and remote learning, to help students whose education has been suddenly interrupted or curtailed. As a result, over the last year, free access to USC Viterbi engineering classes and workshops was offered to students living in two different regions of the world, war-torn Ukraine and Afghanistan, in order to ensure that students in such unique and volatile circumstances had the opportunity to continue their education.

Leveraging the DEN platform, established five decades ago (ahead of online learning common today), Astronautics Professor Mike Gruntman hosted a free online course on fundamentals of space systems for students and faculty in Ukraine. Gruntman emphasized that this humanitarian initiative by the Viterbi School offered important opportunities for specialists in Ukraine to maintain academic excellence in a rapidly developing area of technology that would play an important role in the rebuilding of the country in the future.

Simultaneously, a number of Afghan female students participated in two free educational opportunities:

Seventy-five women last year participated in the first such opportunity, a global course in innovation (Principles and Practices of Innovation) taught by Professor Stephen Lu through USC's iPodia program. The more than decade-old iPodia program allows students from different parts of the world to simultaneously attend the same class using the DEN platform. The Afghan students joined classmates from universities in Brazil, China, Germany, Greece, Israel, Mexico, Taiwan, Uganda and the United States.

The second such educational opportunity was the creation of a series of skills-based short courses at USC that has become known as the Afghan Pathways Program (APP). Through USC Viterbi's Information Technology Program (ITP), which focuses on applied technology coursework, professors Trina Gregory and Nayeon Kim taught women (now permitted to study at home) how to create websites and to code in Python. For twelve weeks, these Afghan students met three times a week with their instructors. Forty certificates in web development and/or Python programming have been earned thus far by the female Afghan students who completed the courses. Both programs were coordinated in collaboration with the non-profit Afghanistan-US Democratic Peace and Prosperity Council (DPPC).

A snapshot of the DEN at Viterbi dashboard.

Said USC Viterbi Dean Yannis C. Yortsos, "We are fortunate to have the ability to reach and provide engineering education to many students in many parts of the world, where such access is curtailed."

USC Vice Dean and Interim ITP Director Erik A. Johnson said about the courses for women and girls in Afghanistan, "We so easily take for granted the educational opportunities that we have here; these short courses are providing instruction and skills to women who have no other direct opportunity to continue their education."

Trina Gregory, who is an Associate Professor of Information Technology Practice and who taught the courses for Afghan women, remarked, "The students inspired me to continue to fight for education for girls and women."

One individual affiliated with the DPPC believes that this can be a model for other universities to follow. "The virtual classroom is a window of hope for Afghan girls and women." Further, she believes "engineering, computer science, and coding: these disciplines and skills are key to women's independence." She believes it is imperative so that the generation does not get lost.

USC Viterbi continues to offer coursework to students, including introduction to web development to a second cohort of Afghan women this term. In addition, Afghan women are currently participating in a course, Astronomy 101: The Universe through A Cultural Lens, being taught by USC Dornsife Professor Vahe Peroomian, an iPodia fellow.

Lu added, "iPodia enables USC Viterbi to extend our classrooms to hard-to-reach places across the globe so our students can learn together with peers of various backgrounds. The Afghan participants in iPodia classes are not just students but also teachers to our students."

Published on November 17th, 2023

Last updated on November 17th, 2023


Realistic talking faces created from only an audio clip and a person’s … – EurekAlert

(L-R) NTU School of Computer Science and Engineering (SCSE) PhD student Mr Zhang Jiahui, NTU SCSE Associate Professor Lu Shijian, NTU SCSE PhD graduate Dr Wu Rongliang, and NTU SCSE PhD student Mr Yu Yingchen, presenting a video produced by DIRFA based on Assoc Prof Lu's photo.

Credit: NTU Singapore

A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) has developed a computer program that creates realistic videos that reflect the facial expressions and head movements of the person speaking, only requiring an audio clip and a face photo.

DIverse yet Realistic Facial Animations, or DIRFA, is an artificial intelligence-based program that takes audio and a photo and produces a 3D video showing the person demonstrating realistic and consistent facial animations synchronised with the spoken audio (see videos).

The NTU-developed program improves on existing approaches (see Figure 1), which struggle with pose variations and emotional control.

To accomplish this, the team trained DIRFA on over one million audiovisual clips from over 6,000 people derived from an open-source database called The VoxCeleb2 Dataset to predict cues from speech and associate them with facial expressions and head movements.

The researchers said DIRFA could lead to new applications across various industries and domains, including healthcare, as it could enable more sophisticated and realistic virtual assistants and chatbots, improving user experiences. It could also serve as a powerful tool for individuals with speech or facial disabilities, helping them to convey their thoughts and emotions through expressive avatars or digital representations, enhancing their ability to communicate.

Corresponding author Associate Professor Lu Shijian, from the School of Computer Science and Engineering (SCSE) at NTU Singapore, who led the study, said: "The impact of our study could be profound and far-reaching, as it revolutionises the realm of multimedia communication by enabling the creation of highly realistic videos of individuals speaking, combining techniques such as AI and machine learning. Our program also builds on previous studies and represents an advancement in the technology, as videos created with our program are complete with accurate lip movements, vivid facial expressions and natural head poses, using only their audio recordings and static images."

First author Dr Wu Rongliang, a PhD graduate from NTU's SCSE, said: "Speech exhibits a multitude of variations. Individuals pronounce the same words differently in diverse contexts, encompassing variations in duration, amplitude, tone, and more. Furthermore, beyond its linguistic content, speech conveys rich information about the speaker's emotional state and identity factors such as gender, age, ethnicity, and even personality traits. Our approach represents a pioneering effort in enhancing performance from the perspective of audio representation learning in AI and machine learning." Dr Wu is a Research Scientist at the Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore.

The findings were published in the scientific journal Pattern Recognition in August.

Speaking volumes: Turning audio into action with animated accuracy

The researchers say that creating lifelike facial expressions driven by audio poses a complex challenge. For a given audio signal, there can be numerous possible facial expressions that would make sense, and these possibilities can multiply when dealing with a sequence of audio signals over time.

Since audio typically has strong associations with lip movements but weaker connections with facial expressions and head positions, the team aimed to create talking faces that exhibit precise lip synchronisation, rich facial expressions, and natural head movements corresponding to the provided audio.

To address this, the team first designed their AI model, DIRFA, to capture the intricate relationships between audio signals and facial animations. The team trained their model on more than one million audio and video clips of over 6,000 people, derived from a publicly available database.

Assoc Prof Lu added: "Specifically, DIRFA modelled the likelihood of a facial animation, such as a raised eyebrow or wrinkled nose, based on the input audio. This modelling enabled the program to transform the audio input into diverse yet highly lifelike sequences of facial animations to guide the generation of talking faces."
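Modelling "the likelihood of a facial animation" given audio suggests a probabilistic sequence model: predict a distribution over animation parameters at each audio frame, then sample from it to get diverse yet plausible motion. The PyTorch sketch below illustrates only that general idea; the architecture and dimensions are assumptions, not DIRFA's published design.

```python
import torch
import torch.nn as nn

class AudioToFaceSketch(nn.Module):
    """Illustrative only: maps a sequence of audio features to a distribution
    over facial-animation parameters, so sampling yields diverse yet plausible
    motion for the same speech input."""
    def __init__(self, audio_dim=80, anim_dim=64, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.mean = nn.Linear(hidden, anim_dim)
        self.log_var = nn.Linear(hidden, anim_dim)

    def forward(self, audio_feats):                   # (batch, time, audio_dim)
        h, _ = self.rnn(audio_feats)
        mu, log_var = self.mean(h), self.log_var(h)
        # Sampling from the predicted distribution gives different, equally
        # plausible animation sequences for the same audio input.
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

model = AudioToFaceSketch()
animation = model(torch.randn(1, 200, 80))            # 200 audio frames -> 200 pose frames
```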

Dr Wu added: "Extensive experiments show that DIRFA can generate talking faces with accurate lip movements, vivid facial expressions and natural head poses. However, we are working to improve the program's interface, allowing certain outputs to be controlled. For example, DIRFA does not allow users to adjust a certain expression, such as changing a frown to a smile."

Besides adding more options and improvements to DIRFA's interface, the NTU researchers will be fine-tuning its facial expressions with a wider range of datasets that include more varied facial expressions and voice audio clips.

Explainer video: How DIRFA uses artificial intelligence to generate talking heads

Video 2: A DIRFA-generated talking head with just an audio of former US president Barack Obama speaking, and a photo of Associate Professor Lu Shijian.

Video 3: A DIRFA-generated talking head with just an audio of former US president Barack Obama speaking, and a photo of the study's first author, Dr Wu Rongliang.

Journal: Pattern Recognition. Article: "Audio-driven talking face generation with diverse yet realistic facial animations," published 31-Aug-2023.



Computer Science most preferred course by Indian students in US – DTNEXT

CHENNAI: Computer Science is the most preferred course for Indian students studying in the United States, followed by engineering programmes, new data from the US Consulate in Chennai revealed on Monday.

United States-India Educational Foundation (USIEF) regional officer Maya Sundararajan said that, of the total, 41.2% of Indian students choose Computer Science courses at undergraduate (UG) and postgraduate (PG) levels in the US. She said engineering programmes in the US are the second preferred choice, with a total of 26.9% of students studying those courses. Though the number of students was not disclosed, Maya said a total of 11.6% of Indian students wanted to study management courses in the US.

Life science and health courses account for 5.6% and 2.5% respectively, she added. She said that even at the international level, Computer Science and Engineering are the most preferred courses in the US. When asked which places in the US were most preferred by Indian students, the official said Texas, California, and New York were the priority states for Indian students to pursue both UG and PG courses there.

Stating that USIEF engages institutions of higher education in the US and in India to help foster and enhance linkages between them, she advised the students to choose only accredited universities.

US Consul General in Chennai, Christopher W Hodges said that graduate students in India, who come for higher studies in the US, were focusing more on research.

"It is great to see the trajectory in the increase of Indian students studying in the US," he added.

Stressing the need for the linkage between the industry and education, he said this would help the students in higher education get practical experience provided by the companies.

Pointing out that cooperation between the US and India has reached the next level, he said many institutions including the Indian Institute of Technology have ties with universities in the US, which would help student exchange programmes.


University Ham Radio Station Equipment Installed | Royal News … – Scranton

Antenna systems installed on the roof of the Loyola Science Center (LSC) include a 40-foot tower with a Skyhawk High Frequency antenna for 14, 21, and 28 MHz, as well as VHF/UHF satellite and microwave antennas, some with rotating mounts. New equipment includes heavy-duty controllers, all-mode transceivers, speakers, desktop microphones and other components that allow for students to operate ham radio units.

"The main amateur radio station will be located in a room on the fifth floor with floor-to-ceiling windows that have panoramic views of the city and will feature state-of-the-art operating positions, so it will be a very attractive place for students to learn about amateur radio, radio science and radio engineering," said Dr. Frissell. An additional room on the same floor with equipment and antenna connections will be used as a lab for controlled HamSCI space research projects.

The new station capabilities will also allow additional student learning and community service opportunities.

"The system is tied into the LSC emergency power system, so that the system could be part of the local emergency communications network," said Dr. Frissell, who has already involved student members of the University's Amateur Radio Club in emergency response training with the Luzerne County Emergency Communication Agency.

In addition, during the installation, students helped assemble antennas under the tutelage of Jeff DePolo of Broadcast Sciences, L.L.C., who is leading the installation. DePolo has worked on similar projects at numerous colleges, including the University of Pennsylvania, Temple University and Drexel University.

"The installation has given us great hands-on experience of what it will be like when we enter the workforce," said Tom Pisano, an electrical engineering major from Staten Island, New York, as he and fellow students helped assemble antennas.


Materials Discovery to Enable Computers that Think More Like … – USC Viterbi School of Engineering

A USC Viterbi research team has discovered a new semiconductor with a unique property that will allow for energy efficient computers that function more like the human brain. Image/Unreal

Artificial intelligence is already transforming how we work and live, automating time-consuming tasks and streamlining our decision-making.

However, AI algorithms are mostly run on conventional complementary metal oxide semiconductor (CMOS)-based hardware. This requires them to be trained with large datasets to accomplish even the simplest tasks, such as image analysis or facial recognition. Processing these data-intensive requests requires vast computing resources, like data centers. The process consumes significant amounts of energy.

A USC Viterbi School of Engineering research team has discovered a new semiconductor with a unique material property that can enable more energy-efficient computing hardware that functions like the human brain. Two related research papers were published recently in the journals Advanced Materials and Advanced Electronic Materials. The research is led by Huandong Chen, a 2023 Materials Science Ph.D. graduate in the Mork Family Department of Chemical Engineering and Materials Science, from the group of Jayakanth Ravichandran, an associate professor in chemical engineering and materials science and electrical and computer engineering.

The human brain is excellent at associative learning: we have an innate ability to call up memories, make connections, and understand objects and stimuli in relation to each other. Human brains utilize interconnected neurons and synapses to store information locally, where it is processed. Our brains are capable of handling highly sophisticated tasks and operating at remarkably low energy consumption. Developing neuromorphic computing hardware, hardware that mimics the architecture and operation of the human brain, is highly desired in the quest to achieve energy-efficient advanced computing.

Hardware materials that mimic the brain

Philip and Cayley MacDonald Endowed Early Career Chair Jayakanth Ravichandran.

If a material can move abruptly between two states (also known as phase transitions), this provides the foundation for hardware that mimics the brain. For example, a slight difference in temperature dramatically changes a material's electrical conductivity, the ease of passing an electrical current, from a high to a low value, or vice versa. Such neuron-inspired phase change devices have been achieved only in a handful of materials.

The USC Viterbi researchers discovered novel electronic phase transitions in a semiconductor and leveraged those intriguing physical properties to demonstrate an abrupt electrical conductivity change with varying temperature and applied voltage, which can enable the development of energy-efficient neuromorphic computing.

Ravichandran holds the Philip and Cayley MacDonald Endowed Early Career Chair. His group has been working on a semiconductor material known as barium titanium sulfide (BaTiS3) since 2017. The group's work resulted in the BaTiS3 material showing a world-record-high birefringence property, a phenomenon in which a ray of light is split into two rays. In a recent unrelated work, they discovered an even higher value in a related material.

"However, as a semiconductor, we do not expect any abrupt phase transition in BaTiS3," said Ravichandran.

"Naively thinking, this material should behave like a boring semiconductor without any expectation of a phase transition," said Chen.

A surprising discovery

Ravichandran and his group were surprised to observe the signatures of phase transitions in the BaTiS3 material when measuring its electrical properties under different temperatures. Upon cooling the material, the electrical resistivity of BaTiS3 increases, and it undergoes a transition at around 240 Kelvin (about -33 Celsius), featuring an abrupt change in electrical conductivity. With further cooling, it continues to increase until 150 Kelvin (about -123 Celsius), after which the material goes through another transition with increased electrical conductivity.

"It is always exciting to observe abnormal behavior in our experiments, but we have to check carefully to make sure that those phenomena are real and reproducible," said Ravichandran.

In this work, Chen performed careful experiments to rule out contributions from many extrinsic factors, such as contact resistance and strain status, which could complicate this effect. It was demonstrated that the unique property originated within the material itself.

Postdoctoral researcher in the Ravichandran Group Huandong Chen

"This is particularly important when characterizing such a new material system. One good example of not ruling out other factors was the recent drama surrounding the so-called room temperature superconductor LK-99, where it seems the sharp drop in resistivity at around 105 Celsius is likely from an impurity, known as Cu2S," said Chen.

The team also investigated how the crystal structure of the BaTiS3 material changes during these electronic phase transitions, corresponding to the changes in electrical conductivity.

Boyang Zhao, a Materials Science Ph.D. candidate from Ravichandran's group, traveled to the synchrotron at Lawrence Berkeley National Lab to map out the structure evolution. By combining the information from the electrical and structural measurements, which are key experimental signatures of the phenomenon called a charge density wave phase transition, the team could claim the existence of charge density wave order in BaTiS3.

"We've discovered one very special charge density wave phase change material. Most charge density wave materials only go from a metal state, which is high conductivity, to an insulator state, which is low conductivity. What we have found is that you can go from a low conductivity state to a low conductivity state. Such insulating-to-insulating transition is very, very rare, with only a handful of examples out there. So, scientifically, it's very interesting," said Ravichandran.

How the phase transitions in BaTiS3 work is not fully understood yet. The team collaborated with Rohan Mishra's group from Washington University in St. Louis, performing materials modeling to obtain a deeper understanding of the material system. Current experimental and theoretical findings suggest that the observed phase change phenomena have an unexpected origin compared to most charge density wave materials. The team is conducting further studies to understand this phenomenon better.

The latest Advanced Materials research on novel phase change material discovery was conducted with collaborators from the University of Washington in Seattle, Washington University in St. Louis, Columbia University, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory.

A prototype showing the material in action

In a follow-up work that was recently published in Advanced Electronic Materials, Chen and his collaborators fabricated the first prototype neuronal device using the BaTiS3 material. They were able to show abrupt switching by varying current and voltage. They also showed oscillations in voltage that signified fast switching between two states in the phase transition. Similar voltage oscillations are observed in the brain.

"This is an important step towards actual electronic device applications of BaTiS3. It is also quite exciting to see such a short period of time between this prototype device demonstration and the fundamental material property discovery," said Chen.

The frequency of voltage oscillations was altered by the operation temperature and channel sizes. A lower operation temperature and a shorter device channel size give rise to higher oscillation frequencies.

"We expect that much more sophisticated neuronal functionalities can be achieved by connecting multiple BaTiS3 neurons to each other or integrating with other passive synaptic devices, as has been successfully demonstrated in another phase change system, VO2. Future efforts in making this material in the thin film form that features phase transitions and is potentially compatible with our semiconductor manufacturing could be of great interest to both the research community and the semiconductor industry," said Chen.

This work in Advanced Electronic Materials was done in collaboration with Robert G. and Mary G. Lane Endowed Early Career Chair Han Wang's group in USC's Ming Hsieh Department of Electrical and Computer Engineering. Other authors include Materials Science Ph.D. candidate Nan Wang and Electrical Engineering Ph.D. candidate Hefei Liu.

Ravichandran serves as a co-director for the Core Center of Excellence in NanoImaging (CNI).

Ravichandran and his research team at USC are supported by the MURI program of the Army Research Office and the U.S. National Science Foundation's Ceramics Program.

Published on November 14th, 2023

Last updated on November 14th, 2023


Future of brain-inspired AI as Python code library passes major … – Science Daily

Four years ago, UC Santa Cruz's Jason Eshraghian developed a Python library that combines neuroscience with artificial intelligence to create spiking neural networks, a machine learning method that takes inspiration from the brain's ability to efficiently process data. Now, his open source code library, called "snnTorch," has surpassed 100,000 downloads and is used in a wide variety of projects, from NASA satellite tracking efforts to semiconductor companies optimizing chips for AI.

A new paper published in the journal Proceedings of the IEEE documents the coding library but also is intended to be a candid educational resource for students and any other programmers interested in learning about brain-inspired AI.

"It's exciting because it shows people are interested in the brain, and that people have identified that neural networks are really inefficient compared to the brain," said Eshraghian, an assistant professor of electrical and computer engineering. "People are concerned about the environmental impact [of the costly power demands] of neural networks and large language models, and so this is a very plausible direction forward."

Building snnTorch

Spiking neural networks emulate the brain and biological systems to process information more efficiently. The brain's neurons are at rest until there is a piece of information for them to process, which causes their activity to spike. Similarly, a spiking neural network only begins processing data when there is an input into the system, rather than constantly processing data like traditional neural networks.

"We want to take all the benefits of the brain and its power efficiency and smush them into the functionality of artificial intelligence -- so taking the best of both worlds," Eshraghian said.

Eshraghian began building the code for a spiking neural network in Python as a passion project during the pandemic, somewhat as a method to teach himself the coding language Python. A chip designer by training, he became interested in learning to code when considering that computing chips could be optimized for power efficiency by co-designing the software and the hardware to ensure they best complement each other.

Now, snnTorch is being used by thousands of programmers around the world on a variety of projects, supporting everything from NASA's satellite tracking projects to major chip designers such as Graphcore.

While building the Python library, Eshraghian created code documentation and educational materials, which came naturally to him in the process of teaching himself the coding language. The documents, tutorials, and interactive coding notebooks he made later exploded in the community and became the first point of entry for many people learning about the topics of neuromorphic engineering and spiking neural networks, which he sees as one of the major reasons that his library became so popular.

An honest resource

Knowing that these educational materials could be very valuable to the growing community of computer scientists and beyond who were interested in the field, Eshraghian began compiling his extensive documentation into a paper, which has now been published in the Proceedings of the IEEE, a leading computing journal.

The paper acts as a companion to the snnTorch code library and is structured like a tutorial, and an opinionated one at that, discussing uncertainty among brain-inspired deep learning researchers and offering a perspective on the future of the field. Eshraghian said that the paper is intentionally upfront to its readers that the field of neuromorphic computing is evolving and unsettled in an effort to save students the frustration of trying to find the theoretical basis for code decision-making that the research community doesn't even understand.

"This paper is painfully honest, because students deserve that," Eshraghian said. "There's a lot of things that we do in deep learning, and we just don't know why they work. A lot of times we want to claim that we did something intentionally, and we published because we went through a series of rigorous experiments, but here we say just: this is what works best and we have no idea why."

The paper contains blocks of code, a format unusual to typical research papers. These code blocks are sometimes accompanied by explanations that certain areas may be vastly unsettled, but provide insight into why researchers think certain approaches may be successful. Eshraghian said he has seen a positive reception to this honest approach in the community, and has even been told that the paper is being used in onboarding materials at neuromorphic hardware startups.

"I don't want my research to put people through the same pain I went through," he said.

Learning from and about the brain

The paper offers a perspective on how researchers in the field might navigate some of the limitations of brain-inspired deep learning that stem from the fact that overall, our understanding of how the brain functions and processes information is quite limited.

For AI researchers to move toward more brain-like learning mechanisms for their deep learning models, they need to identify the correlations and discrepancies between deep learning and biology, Eshraghian said. One of these key differences is that brains can't survey all of the data they've ever inputted in the way that AI models can, and instead focus on the real-time data that comes their way, which could offer opportunities for enhanced energy efficiency.

"Brains aren't time machines, they can't go back -- all your memories are pushed forward as you experience the world, so training and processing are coupled together," Eshraghian said. "One of the things that I make a big deal of in the paper is how we can apply learning in real time."

Another area of exploration in the paper is a fundamental concept in neuroscience that states that neurons that fire together are wired together -- meaning when two neurons are triggered to send out a signal at the same time, the pathway between the two neurons is strengthened. However, the ways in which the brain learns on an organ-wide scale still remains mysterious.

The "fire together, wired together" concept has been traditionally seen as in opposition to deep learning's model training method known as backpropagation, but Eshraghian suggests that these processes may be complementary, opening up new areas of exploration for the field.

Eshraghian is also excited about working with cerebral organoids, which are models of brain tissue grown from stem cells, to learn more about how the brain processes information. He's currently collaborating with biomolecular engineering researchers in the UCSC Genomics Institute's Braingeneers group to explore these questions with organoid models. This is a unique opportunity for UC Santa Cruz engineers to incorporate "wetware" -- a term referring to biological models for computing research -- into the software/hardware co-design paradigm that is prevalent in the field. The snnTorch code could even provide a platform for simulating organoids, which can be difficult to maintain in the lab.

"[The Braingeneers] are building the biological instruments and tools that we can use to get a better feel for how learning can happen, and how that might translate in order to make deep learning more efficient," Eshraghian said.

Brain-inspired learning at UCSC and beyond

Eshraghian is now using the concepts developed in his library and the recent paper in his class on neuromorphic computing at UC Santa Cruz called "Brain-Inspired Deep Learning." Undergraduate and graduate students across a range of academic disciplines are taking the class to learn the basics of deep learning and complete a project in which they write their own tutorial for, and potentially contribute to, snnTorch.

"It's not just kind of coming out of the class with an exam or getting an A plus, it's now making a contribution to something, and being able to say that you've done something tangible," Eshraghian said.

Meanwhile, the preprint version of the recent IEEE paper continues to receive contributions from researchers around the world, a reflection of the dynamic, open-source nature of the field. A new NSF grant he is a co-principal investigator on will support students' ability to attend the month-long Telluride Neuromorphic & Cognition Engineering workshop.

Eshraghian is collaborating with people to push the field in a number of ways, from making biological discoveries about the brain, to pushing the limits of neuromorphic chips to handle low-power AI workloads, to facilitating collaboration to bring the spiking neural network-style of computing to other domains such as natural physics.

Discord and Slack channels dedicated to discussing the spiking neural network code support a thriving environment of collaboration across industry and academia. Eshraghian even recently came across a job posting that listed proficiency in snnTorch as a desired quality.


For the first time, AI produces better weather predictions — and it’s … – ZME Science

AI-generated image.

Predicting the weather is notoriously difficult. Not only are there a million and one parameters to consider but there's also a good degree of chaotic behavior in the atmosphere. But DeepMind's scientists (the same group that brought us AlphaGo and AlphaFold) have developed a system that can revolutionize weather forecasting. This advanced AI model leverages vast amounts of data to generate highly accurate predictions.

Weather forecasting, an indispensable tool in our daily lives, has undergone tremendous advancements over the years. Today's 6-day forecast is as good (if not better) than the 3-day forecast from 30 years ago. Storms and extreme weather events rarely catch people off-guard. You may not notice it because the improvement is gradual, but weather forecasting has progressed greatly.

This is more than just a convenience; it's a lifesaver. Weather forecasts help people prepare for extreme events, saving lives and money. They are indispensable for farmers protecting their crops, and they significantly impact the global economy.

This is exactly where AI enters the room.

DeepMind scientists now claim they've made a remarkable leap in weather forecasting with their GraphCast model. GraphCast is a sophisticated machine-learning algorithm that outperforms conventional weather forecasting around 90% of the time.

"We believe this marks a turning point in weather forecasting," Google's researchers wrote in a study published Tuesday.

Crucially, GraphCast offers warnings much faster than standard models. For instance, in September, GraphCast accurately predicted that Hurricane Lee would make landfall in Nova Scotia nine days in advance. Currently used models predicted it only six days in advance.

The method that GraphCast uses is significantly different. Current forecasts typically use a lot of carefully defined physics equations. These are then transformed into algorithms and run on supercomputers, where models are simulated. As mentioned, scientists have used this approach with great results so far.

However, this approach requires a lot of expertise and computation power. Machine learning offers a different approach. Instead of running equations on the current weather conditions, you look at the historical data. You see what type of conditions led to what type of weather. It gets even better: you can mix conventional methods with this new AI approach, and get accurate, fast readings.

"Crucially, GraphCast and traditional approaches go hand-in-hand: we trained GraphCast on four decades of weather reanalysis data, from the ECMWF's ERA5 dataset. This trove is based on historical weather observations such as satellite images, radar, and weather stations using a traditional numerical weather prediction (NWP) to fill in the blanks where the observations are incomplete, to reconstruct a rich record of global historical weather," writes lead author Remi Lam, from DeepMind.

While GraphCast's training was computationally intensive, the resulting forecasting model is highly efficient. Making 10-day forecasts with GraphCast takes less than a minute on a single Google TPU v4 machine. For comparison, a 10-day forecast using a conventional approach can take hours of computation in a supercomputer with hundreds of machines.
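The open-source GraphCast release (at github.com/google-deepmind/graphcast) is a full graph neural network over a global mesh; the toy below strips that down to the two ideas the article describes: learn a one-step-ahead update from historical states, then roll it forward autoregressively. The tiny model and placeholder data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class OneStepForecaster(nn.Module):
    """Toy stand-in for a learned weather model: given the current state,
    predict the state one step (e.g., six hours) ahead."""
    def __init__(self, n_vars: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, 64), nn.ReLU(), nn.Linear(64, n_vars)
        )

    def forward(self, state):
        return state + self.net(state)   # predict the *change* over one step

model = OneStepForecaster()

# Training pairs would come from decades of reanalysis: (state_t, state_t+6h).
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
state_t, state_next = torch.randn(32, 16), torch.randn(32, 16)  # placeholder data
loss = nn.functional.mse_loss(model(state_t), state_next)
opt.zero_grad(); loss.backward(); opt.step()

# Inference: a 10-day forecast is 40 successive 6-hour steps, each feeding on
# the previous prediction -- cheap compared with re-running physics equations.
state = torch.randn(1, 16)
with torch.no_grad():
    for _ in range(40):
        state = model(state)
```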

The algorithm isn't perfect; it still lags behind conventional models in some regards (especially in precipitation forecasting). But considering how easy it is to use, it's at least an excellent complement to existing forecasting tools. There's another exciting bit about it: it's open source. This means that companies and researchers can use and change it to better suit their needs.

"By open-sourcing the model code for GraphCast, we are enabling scientists and forecasters around the world to benefit billions of people in their everyday lives. GraphCast is already being used by weather agencies," adds Lam.

The significance of this development cannot be overstated. As our planet faces increasingly unpredictable weather patterns due to climate change, the ability to accurately and quickly predict weather events becomes a critical tool in mitigating risks. The implications are far-reaching, from urban planning and disaster management to agriculture and air travel.

Moreover, the open-source nature of GraphCast democratizes access to cutting-edge forecasting technology. By making this powerful tool available to a wide range of users, from small-scale farmers in remote areas to large meteorological organizations, the potential for innovation and localized weather solutions increases exponentially.

No doubt, we're witnessing another field where machine learning is making a difference. The marriage of AI and weather forecasting is not just a fleeting trend but a fundamental shift in how we understand and anticipate the whims of nature.
