
Grove School engineer Samah Saeed is beneficiary of $4.6m DoE … – The City College of New York News

City College of New York computer engineer and scientist Samah M. Saeed is the co-recipient of a $4.6 million U.S. Department of Energy (DoE) grant to advance quantum computing. The funding is for her project, "Toward Efficient Quantum Algorithm Execution on Noisy Intermediate-Scale Quantum Hardware."

An assistant professor of electrical engineering in CCNY's Grove School of Engineering, Saeed will focus on resolving the issues currently affecting the development of quantum computing. The ultimate goal is to develop research and training programs that enable efficient and reliable execution of quantum algorithms on large-scale quantum computers.

"The future of computing is quantum, an emerging computing paradigm that will offer a computational speedup for critical applications," said Saeed. "Near-term quantum computers, referred to as Noisy Intermediate-Scale Quantum (NISQ) computers, are expected to have a transformative impact on applications demanding intense computation, such as machine learning and physical and chemical simulations."

"While these computers are very promising," Saeed added, "they are fragile and operate in the presence of errors. As a result, there is a gap between current and near-term quantum hardware capabilities and quantum algorithms, which should be addressed to exploit the power of quantum computers. Although error correction is the ultimate solution to suppress errors and enable the correct execution of quantum algorithms, it is infeasible for near-term quantum computers due to the massive number of physical qubits required to correct errors."
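The qubit overhead Saeed points to can be made concrete with a back-of-the-envelope sketch. The scaling model below is an assumption for illustration only (a surface-code-style estimate in which each logical qubit costs roughly 2 * d^2 physical qubits at code distance d); it is not a figure from the project.

```python
def physical_qubits_needed(logical_qubits: int, code_distance: int) -> int:
    """Rough illustrative estimate: assume each error-corrected logical
    qubit costs about 2 * d**2 physical qubits at code distance d
    (an assumed surface-code-style scaling, not a hardware datasheet)."""
    return logical_qubits * 2 * code_distance ** 2

# Even a modest 100-logical-qubit algorithm at distance 17 would need
# tens of thousands of physical qubits:
print(physical_qubits_needed(100, 17))  # 57800
```

With NISQ machines offering only tens to hundreds of physical qubits, the gap this arithmetic exposes is why full error correction is considered infeasible in the near term.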

Other objectives of Saeed's project include:

In addition, it will build a strong foundation in quantum information science and quantum computing at CCNY through collaboration with the co-PI from Lawrence Berkeley National Laboratory (LBNL). The project will provide an extensive two-pronged training program: onsite training at CCNY, open to the entire college community, to increase participation of underrepresented groups in the quantum computing workforce, and summer research at LBNL. The idea is to enable interaction with a broader team of quantum-focused researchers at LBNL with diverse backgrounds, including physics, computer science, and applied mathematics.

About the City College of New York

Since 1847, The City College of New York has provided a high-quality and affordable education to generations of New Yorkers in a wide variety of disciplines. CCNY embraces its position at the forefront of social change. It is ranked #1 by the Harvard-based Opportunity Insights out of 369 selective public colleges in the United States on the overall mobility index. This measure reflects both access and outcomes, representing the likelihood that a student at CCNY can move up two or more income quintiles. Education research organization Degree Choices ranks CCNY #1 nationally among universities for economic return on investment. In addition, the Center for World University Rankings places CCNY in the top 1.8% of universities worldwide in terms of academic excellence. Labor analytics firm Emsi puts CCNY's annual economic impact on the regional economy (5 boroughs and 5 adjacent counties) at $1.9 billion and quantifies the return on investment to students, taxpayers and society. At City College, more than 15,000 students pursue undergraduate and graduate degrees in eight schools and divisions, driven by significant funded research, creativity and scholarship. This year, CCNY launched its most expansive fundraising campaign ever. The campaign, titled "Doing Remarkable Things Together," seeks to bring the College's Foundation to more than $1 billion in total assets in support of the College's mission. CCNY is as diverse, dynamic and visionary as New York City itself.


Rose Mutiso ’08 Wins Second Annual McGuire Prize – Dartmouth News

Energy technology and policy expert Rose Mutiso '08, Thayer '08, has been named the winner of the 2023 McGuire Family Prize for Societal Impact.

Mutiso is the research director for the Washington, D.C.-based think tank Energy for Growth Hub and the co-founder and former CEO of the Nairobi, Kenya-based Mawazo Institute.

The $100,000 prize, established through a gift from Terry McGuire, Thayer '82, and Carolyn Carr McGuire, Tuck '83, recognizes Dartmouth students, faculty, staff, alumni, or friends who are making a significant positive impact on humanity, society, or the environment.

"Rose Mutiso has built her career at the intersections of science, policy, gender equity, and international development," says President Philip J. Hanlon '77. "She brings to bear the power of diversity and inclusion in creating a sustainable energy future, bringing voices to the table who haven't traditionally been heard. This is exactly the kind of societal impact the McGuire Prize was founded to amplify."

Each year, the recipient is invited to campus to formally receive the prize and engage the community in a discussion of their work. This years McGuire Prize presentation will take place on campus in the fall.

"The thing that connects everything up for me is the power of science and innovation to solve big problems."

Rose Mutiso '08, winner of the McGuire Prize

Of receiving the McGuire Prize, Mutiso says, "It's humbling. And it's inspired me to do quite a lot of reflection. I'm hopeful the prize can help bring more attention to the issues I work on: amplifying African voices and agency in the shaping of Africa's climate and energy future."

Mutiso grew up in Nairobi, where, she says, her natural curiosity wasn't always encouraged. She first heard of Dartmouth, and the term "liberal arts," while studying in the United States on a high school exchange program.

"It completely blew my mind. There's this education system where, if you are curious about many things, you don't have to pick one? I knew that was exactly what I wanted to do," she says.

As an undergraduate, she majored in engineering and threw herself into everything the liberal arts have to offer. "It is just mind-boggling when I reflect on the intellectual journey I went on, and that's because of the environment, the professors," she says. "I'd never written a paper before, and by the end I was the head tutor of the Dartmouth writing center. There were so many opportunities for growth."

She went on to earn her PhD in materials science and engineering at the University of Pennsylvania and pursued a postdoctoral fellowship through the American Institute of Physics and the American Association for the Advancement of Science, working on energy and innovation policy issues in the U.S. Senate, followed by a role as a senior fellow in the Office of International Climate and Clean Energy at the U.S. Department of Energy.

"That was great, because it tied my engineering background to science and innovation policy," she says. "I learned how science and energy research is funded and the kind of advocacy that scientists and others have to do to engage policymakers and the public."

But Mutiso never forgot her Kenyan roots. With classmate Rachel Strohm '08, she co-founded the nonprofit Mawazo Institute, which supports early-career African women researchers.

"I'm passionate about women in science and supporting women generally," she says. "How can we have more female scientists in Africa who pursue their academic work at the highest level, but also have platforms to engage with society and use their expertise to inform public debates and policy decisions?"

Her current work at the Energy for Growth Hub centers on using data and evidence to help solve the twin crises of climate change and energy poverty in developing countries, she says. "The thing that connects everything up for me is the power of science and innovation to solve big problems."

Activating the potential of science in society requires diversifying what are traditionally male- and Western-dominated fields, she says. "Women are 50% of the potential talent pool. We need to be part of science as this crucial driver for change. And from the African perspective, we're at the forefront of climate impacts, and so we need to be able to leverage science and technology to build our economies and be resilient."

Mutiso sees the McGuire Prize less as an award for past accomplishments than as a jumping-off point for what comes next.

"I don't see this as simply a recognition of work done," she says. "This is a very serious opportunity to share my work and my ideas. This incredible honor inspires me to look forward and ask myself: What can I do with this, to inspire others, in particular those with nontraditional backgrounds like me, and to further the work?"

In 2022, the inaugural prize went to former Geisel School of Medicine professor Jason McLellan, whose research on coronavirus spike proteins laid the groundwork for the development of effective COVID-19 vaccines.

This year's prize selection committee was chaired by psychological and brain sciences professor Thalia Wheatley, the Lincoln Filene Professor in Human Relations. The committee included Scott Brown '78, a partner and founder of New Energy Capital; Sydney Finkelstein, the Steven Roth Professor of Management at the Tuck School of Business; Lorie Loeb, a professor of computer science; Mathieu Morlighem, the Evans Family Distinguished Professor of Earth Sciences; and Sandra Wong, the William N. and Bessie Allyn Professor of Surgery at the Geisel School of Medicine.

Learn more about the McGuire Family Prize for Societal Impact.


Ashby, Fu Share Their Shining Examples as Winners of the 2023 … – Illinois Computer Science News

Featured during a conversation geared toward students who were watching both in-person and virtually, two Illinois Computer Science alumni joined Rashid Bashir, Dean of The Grainger College of Engineering, for an inspiring panel discussion honoring seven winners of the 2023 Alumni Award for Distinguished Service.

Two of the seven winners named by Grainger Engineering on March 31 at the NCSA Auditorium were Illinois CS graduates: Steven Ashby (MS '85, PhD '88) and Ping Fu (MS '90).

Delving further than the honorees' own experiences in computing, Bashir steered the discussion toward several relevant topics: career preparation at the University of Illinois Urbana-Champaign, how to overcome adversity, and grand challenges in engineering that inspire future change for the betterment of all.

The event began with a video that introduced each award winner and shed light on their personal and professional paths. Fu concluded this video with a thought that resonated, in an altogether unsurprising development considering her memoir, "Bend, Not Break: Life in Two Worlds," is a New York Times best seller.

"If I would have to say one thing, I would say to keep a mental image of mountain ranges, not peaks. The peak is a mental image that our society keeps giving to people, as if going up is the success and going down is the failure," said Fu, who is also a venture and angel investor and co-founder of the software company Geomagic. "But you cannot go on to another peak without going down first. Life is a mountain range, not a peak."

In the same video, Ashby encouraged students to expand their horizons intellectually throughout their collegiate experience.

"Invest in your education and yourself," Ashby said. "You think you are already doing that, but you need to seek opportunities beyond your current studies to learn about different areas, to explore different career paths, so you can make an informed decision about what you want to do with your future."

Ashby currently serves as director of the Department of Energy's Pacific Northwest National Laboratory (PNNL), a position he said he could never have anticipated if not for the influence of his PhD advisor, the late Illinois CS professor Paul Saylor.

His own boundaries expanded literally when Saylor recruited him to UIUC; Ashby had never lived outside his native California. Then the professor expanded his pupil's boundaries figuratively, by challenging Ashby academically and professionally.

"Paul Saylor recruited me here from Santa Clara University to the cornfields of Illinois," Ashby said. "He had a passion for numerical analysis and for his students, and it's through him and his tutelage that I had a chance to be successful as a computational scientist. This wasn't just about academic influence; it was a lifelong friendship.

"I want to mention this, because when you come back and you have these opportunities, it's really about the people you're meeting and the connections you make. It's not just a professional engagement, it's personal."

Ashby said this earlier in the day when he spoke at the Thomas M. Siebel Center for Computer Science as part of the Illinois CS Speaker Series. His presentation, entitled "The Breadth of R&D at Pacific Northwest National Laboratory," focused on the opportunities PNNL offers students and expanded to explain his own experiences.

At both the Grainger Engineering panel and his presentation at Illinois CS, Ashby emphasized the role connections have played in his career, which spans his current leadership position at PNNL, a previous position as PNNL's deputy director for science and technology, and nearly 21 years with Lawrence Livermore National Laboratory.

While Ashby connected with others to advance his own career and education, Fu explored all that computing has to offer in several different areas to craft her own impact.

Her desire to do so came from an internal drive dedicated to overcoming the adversity she faced in life. Beginning with the moment she came to the United States unable to speak the language, and thus unable to study her first choice in college, Fu has looked to CS to prove how much she can accomplish.

"I've really had many moments of adversity in my life, and, when I look back, adversity really is either the best teacher or best opportunity," Fu said. "My story about studying computer science began when I first came to the United States. I did not speak much English, and I wanted to study comparative literature. But I couldn't. Computer science was a new science, a new language, and I felt like I was going to be in the same standing as other students.

"That's how I got into computer science, which has pretty much defined my life. Illinois has been such a great place for me. I got my degree there, I got married there, I gave birth to my daughter there, and I started my company there."

From their first days studying at Illinois CS through graduation and beyond, Fu and Ashby's examples have shone, which is why Bashir brought these seven alumni together to celebrate their awards.

"A very, very special thank you to our award recipients for sharing your Grainger Engineering memories, industry insights, advice, and grand challenges," Bashir said. "We're really so fortunate to have such brilliant, accomplished individuals as part of our community, who make us so proud and who are changing the world. The continued success of Grainger Engineering lies in our people."

Read the original announcement of this award from The Grainger College of Engineering.


Congressman Don Beyer, Mason student and lifelong learner – George Mason University

Body

One of George Mason University's most popular students is also a nontraditional one: U.S. Congressman Don Beyer. And like many Mason students, the 72-year-old is also juggling work and classes.

So far a TV crew from ABC News and a photographer from the Washington Post have accompanied the former Virginia lieutenant governor to his math classes to capture his education journey.

Beyer told the Washington Post that hearing about the work of Mason's Institute for Digital Innovation and the plans for Fuse during a visit to Mason Square is what spurred his interest in furthering his studies.

"It was so impressive. I said, 'Can I take courses here?'" Beyer is quoted in the Post as saying.

Beyer, whose full-time job involves representing Virginia's 8th District and chairing the House science, space, and technology subcommittee, enrolled at Mason in fall 2022 to begin work toward a master's degree in computer science with a concentration in machine learning. Right now, he is working to complete several prerequisites in math and computer science before he can start his graduate program.

Beyer feels strongly that before the U.S. Congress can regulate or enact laws regarding technologies such as artificial intelligence, its members need a better understanding of the tools and their potential. And like a lot of Mason students, he hopes to one day apply what he is learning in the classroom to his day job, using his AI knowledge in his legislative work.


More than Smart: Computer Science Research Aims to Make … – Georgia State University News

In Haoxin Wang's eyes, once-futuristic visions of self-driving vehicles are closer to reality than many people think. His research is focused on trying to help revolutionize the design of those autonomous vehicles.

Advancements in self-driving technology in recent years have ranged from adaptive cruise control and braking to efforts to create fully autonomous vehicles.

Wang, an assistant professor in the Department of Computer Science at Georgia State University, is addressing the need for advanced vehicle computing power to make self-driving technology more accessible and efficient.

The key, Wang says, may lie in edge computing and sustainable artificial intelligence (AI) to improve the way smart vehicles operate.

"We want to make sure that every vehicle has sufficient computation power when they're running AI applications like self-driving algorithms or image processing," said Wang, who refers to the work as protocol and algorithm design.

"In the future," he said, "our vision is that all vehicles can have fully autonomous driving functions."

Assistant Professor of Computer Science Haoxin Wang envisions a future in which all vehicles have fully autonomous driving functions.

That would mean a more equal and fair computing environment, making it possible for vehicles to have the most advanced technology regardless of price.

Through their research, Wang and Jiang Xie, of the University of North Carolina at Charlotte, could change the way intelligent vehicles function through automotive edge computing.

Edge computing refers to a system of servers and wireless technology that works like a remote computer for a vehicle. This approach allows the computational work to be offloaded from the car to external resources.

Recent advances in car technology rely on downlink connections (transmission from a computer server to a vehicle) or uplink connections (transmission of information from a vehicle to a server). Downlink connections support features such as streaming entertainment. Meanwhile, features that require cars to take in information from the environment, such as self-driving technology, are supported by uplink connections.

In some cases, both downlink and uplink connections are used. For example, navigation technology may use uplink technology to transmit location data from the car to the remote server, and then use downlink technology to send the updated route information back to the car.

However, current vehicles are limited in computing power. As a result, manufacturers must choose how to allocate resources that support each of these functions.

Traditional network resources are considered asymmetrical. That is, they allocate more resources to downlink applications than uplink applications. In the future, this could become an issue, as uplink applications are vital for self-driving technologies.

This is where Wang sees an opportunity for improvement.

Using an algorithm to enhance the performance of existing edge computing technology, his research proposes a new model of resource allocation that includes external edge servers.

"The vehicle will be like the data collector," Wang said. "It collects data from the surrounding environment and transmits the data to the edge server for processing. After it finishes processing, the edge server will return the results back to the car."

Offloading the computational process to an external server would make the vehicle's hardware limitations less restrictive. This technology could provide more equality across vehicles, regardless of their built-in computing power.
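The collect-transmit-process-return loop described above boils down to a latency trade-off: offloading pays off when the time to ship the data to the edge server plus the server's compute time beats the car's own compute time. The sketch below is a hypothetical back-of-the-envelope model (the function name, parameters, and numbers are illustrative assumptions, not the researchers' actual algorithm):

```python
def should_offload(data_mb: float, task_flops: float,
                   local_flops_per_s: float,
                   uplink_mbps: float, server_flops_per_s: float) -> bool:
    """Compare rough end-to-end latency of running a task on the vehicle
    versus offloading it to an edge server (toy model; ignores queuing,
    downlink time for results, and energy cost)."""
    local_s = task_flops / local_flops_per_s
    offload_s = data_mb * 8 / uplink_mbps + task_flops / server_flops_per_s
    return offload_s < local_s

# A heavy perception task on a weak onboard chip: offloading wins.
print(should_offload(2, 5e10, 1e9, 100, 1e12))    # True
# A tiny task with a huge payload over a slow uplink: stay local.
print(should_offload(500, 1e9, 1e11, 10, 1e12))   # False
```

Note that the uplink term dominates the offload cost in this model, which is exactly why asymmetric, downlink-heavy resource allocation becomes a bottleneck for self-driving workloads.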

In addition to making these forms of technology more accessible, Wang wants to make them better for the environment. He speaks in terms of a sustainable artificial intelligence ecosystem.

"Most researchers today care most about the performance of AI, what the AI can provide," Wang said. "What we're caring about right now is the sustainability of AI."

AI applications require a large amount of data and power, resulting in substantial energy consumption and carbon emissions. Wang is striving to make AI more environmentally friendly so that, as it becomes more prevalent in the future (perhaps with the help of his research), it can become more sustainable.

"This kind of environment-friendly AI is very important for the future of AI. We need to make it environment-friendly and sustainable enough to support our community," Wang said.


MIT CSAIL researchers discuss frontiers of generative AI – MIT News

The emergence of generative artificial intelligence has ignited a deep philosophical exploration into the nature of consciousness, creativity, and authorship. As we bear witness to new advances in the field, it's increasingly apparent that these synthetic agents possess a remarkable capacity to create, iterate, and challenge our traditional notions of intelligence. But what does it really mean for an AI system to be generative, with newfound blurred boundaries of creative expression between humans and machines?

For those who feel as if generative artificial intelligence, a type of AI that can cook up new and original data or content similar to what it's been trained on, cascaded into existence like an overnight sensation: while the new capabilities have indeed surprised many, the underlying technology has been in the making for some time.

But understanding true capacity can be as indistinct as some of the generative content these models produce. To that end, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) convened to discuss the capabilities and limitations of generative AI, as well as its potential impacts on society and industries, with regard to language, images, and code.

There are various models of generative AI, each with its own approaches and techniques. These include generative adversarial networks (GANs), variational autoencoders (VAEs), and diffusion models, which have all shown exceptional power in various industries and fields, from art to music and medicine. With that has also come a slew of ethical and social conundrums, such as the potential for generating fake news, deepfakes, and misinformation. Weighing these considerations is critical, the researchers say, to continue studying the capabilities and limitations of generative AI and ensure ethical use and responsibility.

During opening remarks, to illustrate the visual prowess of these models, MIT professor of electrical engineering and computer science (EECS) and CSAIL Director Daniela Rus pulled out a special gift her students recently bestowed upon her: a collage of AI portraits full of smiling shots of Rus, running a spectrum of mirror-like reflections. Yet there was no commissioned artist in sight.

The machine was to thank.

Generative models learn to make imagery by downloading many photos from the internet and trying to make the output image look like the sample training data. There are many ways to train a neural network generator, and diffusion models are just one popular way. These models, explained by MIT associate professor of EECS and CSAIL principal investigator Phillip Isola, map from random noise to imagery. Using a process called diffusion, the model will convert structured objects like images into random noise, and the process is inverted by training a neural net to remove noise step by step until a noiseless image is obtained. If you've ever tried your hand at using DALL-E 2, where a sentence and random noise are input and the noise congeals into images, you've used a diffusion model.
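The denoise-step-by-step idea can be illustrated with a deliberately tiny toy. This is a sketch only: a real diffusion model replaces the hand-coded nudge toward a known target with a trained neural network's noise prediction, and the "image" here is just a short list of numbers.

```python
import random

def toy_reverse_diffusion(target, steps=50, seed=0):
    """Start from pure Gaussian noise and repeatedly remove a little
    'noise' per step, here by nudging each value toward the target.
    In a real diffusion model the nudge comes from a trained denoiser,
    not from knowing the target in advance."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in target]  # begin as pure noise
    for t in range(steps):
        # each step removes 1/(steps - t) of the remaining "noise"
        x = [xi + (ti - xi) / (steps - t) for xi, ti in zip(x, target)]
    return x

sample = toy_reverse_diffusion([0.2, -0.5, 1.0])
print([round(v, 3) for v in sample])  # converges to [0.2, -0.5, 1.0]
```

The loop makes the direction of the process concrete: structured data is destroyed into noise during training, and generation runs that corruption backward, one small denoising step at a time.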

"To me, the most thrilling aspect of generative data is not its ability to create photorealistic images, but rather the unprecedented level of control it affords us. It offers us new knobs to turn and dials to adjust, giving rise to exciting possibilities. Language has emerged as a particularly powerful interface for image generation, allowing us to input a description such as 'Van Gogh style' and have the model produce an image that matches that description," says Isola. "Yet, language is not all-encompassing; some things are difficult to convey solely through words. For instance, it might be challenging to communicate the precise location of a mountain in the background of a portrait. In such cases, alternative techniques like sketching can be used to provide more specific input to the model and achieve the desired output."

Isola then used a bird's image to show how different factors that control the various aspects of an image created by a computer are like dice rolls. By changing these factors, such as the color or shape of the bird, the computer can generate many different variations of the image.

And if you haven't used an image generator, there's a chance you might have used similar models for text. Jacob Andreas, MIT assistant professor of EECS and CSAIL principal investigator, brought the audience from images into the world of generated words, acknowledging the impressive nature of models that can write poetry, have conversations, and do targeted generation of specific documents, all in the same hour.

How do these models seem to express things that look like desires and beliefs? They leverage the power of word embeddings, Andreas explains, where words with similar meanings are assigned numerical values (vectors) and are placed in a space with many different dimensions. When these values are plotted, words that have similar meanings end up close to each other in this space. The proximity of those values shows how closely related the words are in meaning. (For example, perhaps Romeo is usually close to Juliet, and so on). Transformer models, in particular, use something called an attention mechanism that selectively focuses on specific parts of the input sequence, allowing for multiple rounds of dynamic interactions between different elements. This iterative process can be likened to a series of "wiggles" or fluctuations between the different points, leading to the predicted next word in the sequence.
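The two ideas in the paragraph above, word vectors whose proximity encodes relatedness and an attention step that blends values by query-key similarity, can be sketched numerically. The vectors and dimensions below are toy assumptions, not outputs of any real embedding model:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """One step of scaled dot-product attention for a single query:
    score each key against the query, turn scores into weights via
    softmax, and return the weight-blended values. A sketch of the
    mechanism, not a full multi-head transformer layer."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    blended = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return blended, weights

# The query resembles the first key, so the first value dominates the blend.
out, w = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
print(w[0] > w[1], out[0] > out[1])  # True True
```

Stacking many such steps, with learned projections producing the queries, keys, and values, is what lets a transformer carry out the "multiple rounds of dynamic interactions" that lead to the predicted next word.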

"Imagine being in your text editor and having a magical button in the top right corner that you could press to transform your sentences into beautiful and accurate English. We have had grammar and spell checking for a while, sure, but we can now explore many other ways to incorporate these magical features into our apps," says Andreas. "For instance, we can shorten a lengthy passage, just like how we shrink an image in our image editor, and have the words appear as we desire. We can even push the boundaries further by helping users find sources and citations as they're developing an argument. However, we must keep in mind that even the best models today are far from being able to do this in a reliable or trustworthy way, and there's a huge amount of work left to do to make these sources reliable and unbiased. Nonetheless, there's a massive space of possibilities where we can explore and create with this technology."

Another feat of large language models, which can at times feel quite meta, was also explored: models that write code, sort of like little magic wands, except instead of spells they conjure up lines of code, bringing (some) software developer dreams to life. MIT professor of EECS and CSAIL principal investigator Armando Solar-Lezama recalls some history from 2014, explaining how, at the time, there was a significant advancement in using long short-term memory (LSTM), a technology for language translation that could be used to correct programming assignments for predictable text with a well-defined task. Two years later, everyone's favorite basic human need came on the scene: attention, ushered in by the 2017 Google paper introducing the mechanism, "Attention Is All You Need." Shortly thereafter, a former CSAILer, Rishabh Singh, was part of a team that used attention to construct whole programs for relatively simple tasks in an automated way. Soon after, transformers emerged, leading to an explosion of research on using text-to-text mapping to generate code.

"Code can be run, tested, and analyzed for vulnerabilities, making it very powerful. However, code is also very brittle, and small errors can have a significant impact on its functionality or security," says Solar-Lezama. "Another challenge is the sheer size and complexity of commercial software, which can be difficult for even the largest models to handle. Additionally, the diversity of coding styles and libraries used by different companies means that the bar for accuracy when working with code can be very high."

In the ensuing question-and-answer discussion, Rus opened with one on content: How can we make the output of generative AI more powerful by incorporating domain-specific knowledge and constraints into the models? "Models for processing complex visual data such as 3-D models, videos, and light fields, which resemble the holodeck in Star Trek, still heavily rely on domain knowledge to function efficiently," says Isola. "These models incorporate equations of projection and optics into their objective functions and optimization routines. However, with the increasing availability of data, it's possible that some of the domain knowledge could be replaced by the data itself, which will provide sufficient constraints for learning. While we cannot predict the future, it's plausible that as we move forward, we might need less structured data. Even so, for now, domain knowledge remains a crucial aspect of working with structured data."

The panel also discussed the crucial nature of assessing the validity of generative content. Many benchmarks have been constructed to show that models are capable of achieving human-level accuracy in certain tests or tasks that require advanced linguistic abilities. However, upon closer inspection, simply paraphrasing the examples can cause the models to fail completely. Identifying modes of failure has become just as crucial, if not more so, than training the models themselves.

Acknowledging the stage for the conversation, academia, Solar-Lezama talked about progress in developing large language models against the deep and mighty pockets of industry. Models in academia, he says, need really big computers to create desired technologies that don't rely too heavily on industry support.

Beyond technical capabilities, limitations, and how it's all evolving, Rus also brought up the moral stakes of living in an AI-generated world, in relation to deepfakes, misinformation, and bias. Isola mentioned newer technical solutions focused on watermarking, which could help users subtly tell whether an image or a piece of text was generated by a machine. "One of the things to watch out for here is that this is a problem that's not going to be solved purely with technical solutions. We can provide the space of solutions and also raise awareness about the capabilities of these models, but it is very important for the broader public to be aware of what these models can actually do," says Solar-Lezama. "At the end of the day, this has to be a broader conversation. This should not be limited to technologists, because it is a pretty big social problem that goes beyond the technology itself."

Another topic around chatbots, robots, and a favored trope in many dystopian pop-culture settings was discussed: the seduction of anthropomorphization. Why, for many, is there a natural tendency to project human-like qualities onto nonhuman entities? Andreas explained the opposing schools of thought around these large language models and their seemingly superhuman capabilities.

"Some believe that models like ChatGPT have already achieved human-level intelligence and may even be conscious," Andreas said, "but in reality these models still lack true human-like capabilities: they fail to comprehend nuance, and sometimes they behave in extremely conspicuous, weird, nonhuman-like ways. On the other hand, some argue that these models are just shallow pattern-recognition tools that can't learn the true meaning of language. But this view also underestimates the level of understanding they can acquire from text. While we should be cautious of overstating their capabilities, we should also not overlook the potential harms of underestimating their impact. In the end, we should approach these models with humility and recognize that there is still much to learn about what they can and can't do."

More:

MIT CSAIL researchers discuss frontiers of generative AI - MIT News


SheCode- One Day Introduction to Programming Event at SIUE … – RiverBender.com

EDWARDSVILLE - Computer science is a growing career path that is fun, challenging and important. The Southern Illinois University Edwardsville School of Engineering Department of Computer Science (CS) wants to ensure females are a part of the field's surging growth and success by offering a one-day introduction to programming event.


The department will host SheCode from 10 a.m. to 2 p.m. on Saturday, April 23, in the Engineering Building. The free event sparks interest and inspires more females to pursue computer science through an interactive programming project and mentorship from an SIUE CS alumna and professional in the technology field.

"This event is designed to give young women a chance to try programming and learn about computing," said Dennis Bouvier, PhD, professor in the CS department. "The event is designed for those who have no programming experience, but those with some experience are welcome to attend."

The School of Engineering offers one of the most comprehensive and affordable engineering programs in the St. Louis region with eight undergraduate degrees, five master's degrees and two cooperative doctoral programs, all housed in a state-of-the-art facility. Students learn from expert faculty, perform cutting-edge research and participate in intercollegiate design competitions. Companies in the metropolitan St. Louis area provide students challenging internships and co-op opportunities, which often turn into permanent employment. All undergraduate programs are accredited by their respective accreditation agencies.

See the rest here:

SheCode- One Day Introduction to Programming Event at SIUE ... - RiverBender.com


Transformation Through Innovation: Broadening Participation in … – University of Arkansas

About this Event

Join us for the second event in our Institute's Speaker Series featuring Dr. Trina L. Fletcher: Transformation through Innovation: Broadening Participation in Engineering and Computing Education

This event is sponsored by the Institute for Integrative and Innovative Research.

To join via zoom remotely:

Please click the link below to join the webinar: https://uark.zoom.us/j/86970178738?pwd=MjROc2I1WU9CZ3FEM0JnYW5vdytjQT09 Passcode: AhBFk*&3

Abstract

Engineers and computer scientists play a critical role in our ability to be innovative and positively impact the workforce, economy, and society. Over the past 20 years, the number of degree programs and research funding within engineering and computing education has risen alongside increased spending and programmatic efforts in science, technology, engineering, and mathematics (STEM) education at the K-12 level. Yet, enrollment and degrees attained by diverse groups have remained stagnant or declined. Dr. Fletcher argues that engineering and computing education should elevate the notion of transformation through innovation to move the needle for student academic success and persistence. This talk will take a journey along Dr. Fletcher's path, personal and professional, covering experiences from industry and the non-profit sector that she incorporates throughout her STEM READi Lab research portfolio. She will share how she uses asset-based strategies such as equity-centered collaborations, industry-proven best practices, innovative approaches to problem-solving, and continuous process improvement (CPI) as part of her contributions to transforming engineering and computing education.

Follow this link:

Transformation Through Innovation: Broadening Participation in ... - University of Arkansas


ACM Prize in Computing Recognizes Yael Tauman Kalai for Fundamental Contributions to Cryptography – Yahoo Finance

Verifiable Delegation and Other Breakthrough Works Have Advanced the Field

NEW YORK, April 12, 2023 /PRNewswire/ --ACM, the Association for Computing Machinery, today named Yael Tauman Kalai the recipient of the 2022 ACM Prize in Computing for breakthroughs in verifiable delegation of computation and fundamental contributions to cryptography. Kalai's contributions have helped shape modern cryptographic practices and provided a strong foundation for further advancements.

Yael Tauman Kalai of Microsoft Research and MIT has been named the recipient of the ACM Prize in Computing.

The ACM Prize in Computing recognizes early-to-mid-career computer scientists whose research contributions have fundamental impact and broad implications. The award carries a prize of $250,000, from an endowment provided by Infosys Ltd.

Verifiable Delegation of Computation

Kalai has developed methods for producing succinct proofs that certify the correctness of any computation. This method enables a weak device to offload any computation to a stronger device in a way that enables the results to be efficiently checked for correctness. Such succinct proofs have been used by numerous blockchain companies (including Ethereum) to certify transaction validity and thereby overcome key obstacles in blockchain scalability, enabling faster and more reliable transactions. Kalai's research has provided essential definitions, key concepts, and inventive techniques to this domain.

More specifically, Kalai's work pioneered the study of "doubly efficient" interactive proofs, which ensure that the computational overhead placed on the strong device is small (nearly linear in the running time of the computation being proved). In contrast, previous constructions incurred an overhead that is super-exponential in the space of the computation. Kalai's work transformed the concept of delegation from a theoretical curiosity to a reality in practice. Her subsequent work used cryptography to develop certificates of computation, eliminating the need for back-and-forth interaction. This work used insights from quantum information theory, specifically "non-signaling" strategies, to construct a one-round delegation scheme for any computation. These schemes have led to a body of work on delegation including theoretical advancements, applied implementations, and real-world deployment.
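The asymmetry at the heart of delegation, verifying a result far more cheaply than recomputing it, can be illustrated with a classic toy example: Freivalds' probabilistic check of a matrix product. This is not one of Kalai's constructions, merely a minimal sketch of the weak-client/strong-server pattern described above, where each check costs O(n^2) operations instead of the O(n^3) needed to redo the multiplication.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C. Each trial costs O(n^2)
    (three matrix-vector products), versus O(n^3) to recompute A @ B."""
    n = len(A)
    for _ in range(trials):
        # Pick a random 0/1 vector r and compare A(Br) with Cr.
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught an incorrect (or cheating) result
    # A wrong C survives each trial with probability at most 1/2.
    return True

# A weak "client" offloads a matrix product to a "server" and spot-checks it.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
honest = [[19, 22], [43, 50]]      # the correct product A @ B
corrupted = [[19, 22], [43, 51]]   # one tampered entry
print(freivalds_check(A, B, honest))     # True
print(freivalds_check(A, B, corrupted))  # False, except with probability 2**-20
```

Kalai's schemes achieve something far stronger: a single succinct, non-interactive proof for an arbitrary computation, with nearly linear overhead for the prover; the sketch only conveys why cheap verification is possible at all.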


Additional Contributions to Cryptography

Kalai's other important contributions include her breakthrough work on the security of the "Fiat-Shamir paradigm," a general technique for eliminating interaction from interactive protocols. This paradigm is extensively utilized in real-world applications including in the most prevalent digital signature scheme (ECDSA) which is used by all iOS and Android mobile devices. Despite its widespread adoption, its security has been poorly understood. Kalai's research established a solid foundation for understanding the security of this paradigm. In addition, she co-pioneered the field of leakage resilient cryptography and solved a long-standing open problem in interactive coding theory, showing how to convert any interactive protocol into one that is resilient to a constant fraction of adversarial errors while increasing the communication complexity by at most a constant factor and the running time by at most a polynomial factor. Kalai's extensive work in the field of cryptography has helped shape modern cryptographic practices and provided a strong foundation for further advancements.
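The Fiat-Shamir paradigm itself is simple to sketch: the verifier's random challenge is replaced by a hash of the prover's first message, turning an interactive identification protocol into a non-interactive signature. Below is a toy Schnorr-style signature built this way. The group parameters are deliberately tiny and offer no real security; this is an illustration of the transform, not of any scheme Kalai analyzed.

```python
import hashlib
import random

# Toy group parameters (far too small for real use): p = 2q + 1 with p, q
# prime, and g = 4 generating the order-q subgroup of Z_p^*.
p, q, g = 10007, 5003, 4

def fiat_shamir_challenge(r, msg):
    """The interactive verifier's random challenge is replaced by a hash of
    the prover's commitment r and the message: this is the Fiat-Shamir step."""
    digest = hashlib.sha256(f"{r}|{msg}".encode()).hexdigest()
    return int(digest, 16) % q

def keygen():
    x = random.randrange(1, q)        # secret key
    return x, pow(g, x, p)            # (secret x, public y = g^x mod p)

def sign(x, msg):
    k = random.randrange(1, q)        # ephemeral nonce
    r = pow(g, k, p)                  # commitment (prover's first message)
    c = fiat_shamir_challenge(r, msg) # hashed challenge, no verifier needed
    s = (k + c * x) % q               # response
    return c, s

def verify(y, msg, sig):
    c, s = sig
    # Recompute the commitment: g^s * y^(-c) = g^(k + cx - cx) = g^k = r.
    r = (pow(g, s, p) * pow(y, -c, p)) % p
    return fiat_shamir_challenge(r, msg) == c

x, y = keygen()
sig = sign(x, "hello")
print(verify(y, "hello", sig))    # True
print(verify(y, "goodbye", sig))  # False: signature bound to the message
```

Because the challenge is derived from a hash rather than from a live verifier, anyone can check the signature offline. Whether this substitution preserves security in general is exactly the question Kalai's work on the paradigm addresses.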

"As data is the currency of our digital age, the work of cryptographers, who encrypt and decrypt coded language, is essential to keeping our technological systems secure and our data private, as necessary," said ACM President Yannis Ioannidis. "Kalai has not only made astonishing breakthroughs in the mathematical foundations of cryptography, but her proofs have been practically useful in areas such as blockchain and cryptocurrencies. Her research addresses complex problems whose solution opens new directions to where the field is heading: keeping small computers (such as smartphones) secure from potentially malicious cloud servers. A true star all around, she has also established herself as a respected mentor, inspiring and cultivating the next generation of cryptographers."

"We are pleased to see one of the world's leading cryptographers recognized," said Salil Parekh, Chief Executive Officer, Infosys. "Kalai's technical depth and innovation of her work has definitely made a tremendous mark in this field and will inspire aspiring cryptographers. We are thankful for her contributions to date and can only imagine what she has in store in the coming years. Infosys has been proud to sponsor the ACM Prize since its inception. Recognizing the achievements of young professionals is especially important in computing, as bold innovations from people early in their careers have a tremendous impact on our field."

Kalai will be formally presented with the ACM Prize in Computing at the annual ACM Awards Banquet, which will be held this year on Saturday, June 10 at the Palace Hotel in San Francisco.

Biographical Background

Yael Tauman Kalai is a Senior Principal Researcher at Microsoft Research and an Adjunct Professor at the Massachusetts Institute of Technology (MIT). Kalai earned a BSc in Mathematics from the Hebrew University of Jerusalem, an MS in Computer Science and Applied Mathematics from The Weizmann Institute of Science, and a PhD in Computer Science from the Massachusetts Institute of Technology.

Kalai's honors include the George M. Sprowls Award for Best Doctoral Thesis in Computer Science (MIT, 2007), an IBM PhD Fellowship (2004-2006), an MIT Presidential Graduate Fellowship (2003-2006), and an Outstanding Master's Thesis Prize (Weizmann Institute of Science, 2001). She is a Fellow of the International Association for Cryptologic Research (IACR). Additionally, Kalai gave an Invited Talk at the International Congress of Mathematics (ICM, 2018).

About the ACM Prize in Computing

The ACM Prize in Computing recognizes an early-to-mid-career fundamental innovative contribution in computing that, through its depth, impact, and broad implications, exemplifies the greatest achievements in the discipline. The award carries a prize of $250,000. Financial support is provided by an endowment from Infosys Ltd. The ACM Prize in Computing was previously known as the ACM-Infosys Foundation Award in the Computing Sciences from 2007 through 2015. ACM Prize recipients are invited to participate in the Heidelberg Laureate Forum, an annual networking event that brings together young researchers from around the world with recipients of the ACM A.M. Turing Award, the Abel Prize, the Fields Medal, and the IMU Abacus Medal (a continuation of the Rolf Nevanlinna Prize).

About ACM

ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About Infosys

Infosys is a global leader in next-generation digital services and consulting. We enable clients in 46 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.



View original content to download multimedia:https://www.prnewswire.com/news-releases/acm-prize-in-computing-recognizes-yael-tauman-kalai-for-fundamental-contributions-to-cryptography-301794755.html

SOURCE Association For Computing Machinery, Inc.

The rest is here:

ACM Prize in Computing Recognizes Yael Tauman Kalai for Fundamental Contributions to Cryptography - Yahoo Finance


Advances in generalizable medical AI | Stanford News – Stanford University News

A patient is lying on the operating table as the surgical team reaches an impasse. They can't find the intestinal rupture. A surgeon asks aloud: "Check whether we missed a view of any intestinal section in the visual feed of the last 15 minutes." An artificial intelligence medical assistant gets to work, reviewing the patient's past scans and highlighting video streams of the procedure in real time. It alerts the team when they've skipped a step in the procedure and reads out relevant medical literature when surgeons encounter a rare anatomical phenomenon.

Generalist medical AI models could accomplish a wide variety of tasks within and across disciplines even without having been trained specifically on the assigned tasks. (Image credit: iStock/metamorworks)

Doctors across all disciplines, with assistance from artificial intelligence, may soon have the ability to quickly consult a patient's entire medical file against the backdrop of all medical health care data and every published piece of medical literature online. This potential versatility in the doctor's office is only now possible due to the latest generation of AI models.

"We see a paradigm shift coming in the field of medical AI," said Jure Leskovec, professor of computer science at Stanford Engineering. "Previously, medical AI models could only address very small, narrow pieces of the health care puzzle. Now we are entering a new era, where it's much more about larger pieces of the puzzle in this high-stakes field."

Stanford researchers and their collaborators describe generalist medical artificial intelligence, or GMAI, as a new class of medical AI models that are knowledgeable, flexible, and reusable across many medical applications and data types. Their perspective on this advance is published in the April 12 issue of Nature.

Leskovec and his collaborators chronicle how GMAI will interpret varying combinations of data from imaging, electronic health records, lab results, genomics, and medical text, well beyond the abilities of current models like ChatGPT. These GMAI models will provide spoken explanations, offer recommendations, draw sketches, and annotate images.

"A lot of inefficiencies and errors that happen in medicine today occur because of the hyper-specialization of human doctors and the slow and spotty flow of information," said co-first author Michael Moor, an MD and now postdoctoral scholar at Stanford Engineering. "The potential impact of generalist medical AI models could be profound because they wouldn't be just an expert in their own narrow area, but would have more abilities across specialties."

Of the more than 500 AI models for clinical medicine approved by the FDA, most only perform one or two narrow tasks, such as scanning a chest X-ray for signs of pneumonia. But recent advances in foundation model research promise to solve more diverse and challenging tasks. "The exciting and the groundbreaking part is that generalist medical AI models will be able to ingest different types of medical information (for example, imaging studies, lab results, and genomics data) to then perform tasks that we instruct them to do on the fly," said Leskovec.

"We expect to see a significant change in the way medical AI will operate," continued Moor. "Next, we will have devices that, rather than doing just a single task, can do maybe a thousand tasks, some of which were not even anticipated during model development."

The authors, who also include Oishi Banerjee and Pranav Rajpurkar from Harvard University, Harlan Krumholz from Yale, Zahra Shakeri Hossein Abad from the University of Toronto, and Eric Topol at the Scripps Research Translational Institute, outline how GMAI could tackle a variety of applications, from chatbots with patients to note-taking, all the way to bedside decision support for doctors.

In the radiology department, the authors propose, models could draft radiology reports that visually point out abnormalities, while taking the patient's history into account. Radiologists could improve their understanding of cases by chatting with GMAI models: "Can you highlight any new multiple sclerosis lesions that were not present in the previous image?"

In their paper, the scientists describe additional requirements and capabilities that are needed to develop GMAI into a trustworthy technology. They point out that the model needs to consume all of the personal medical data, as well as historical medical knowledge, and refer to it only when interacting with authorized users. It then needs to be able to hold a conversation with a patient, much like a triage nurse or doctor, to collect new evidence and data or suggest various treatment plans.

In their research paper, the co-authors address the implications of a model capable of 1,000 medical assignments with the potential to learn even more. "We think the biggest problem for generalist models in medicine is verification. How do we know that the model is correct and not just making things up?" Leskovec said.

They point to the flaws already being caught in the ChatGPT language model. An AI-generated image of the pope wearing a designer puffy coat may be funny. "But if there's a high-stakes scenario and the AI system decides about life and death, verification becomes really important," said Moor.

The authors continue that safeguarding privacy is also a necessity. "This is a huge problem because with models like ChatGPT and GPT-4, the online community has already identified ways to jailbreak the current safeguards in place," Moor said.

Deciphering between the data and social biases also poses a grand challenge for GMAI, Leskovec added. GMAI models need the ability to focus on signals that are causal for a given disease and ignore spurious signals that only tend to correlate with the outcome. Assuming that model size is only going to get bigger, Moor points to early research showing that larger models tend to exhibit more social biases than smaller ones. "It is the responsibility of the owners and developers of such models and vendors, especially if they're deploying them in hospitals, to really make sure that those biases are identified and addressed early on," said Moor.

"The current technology is very promising, but there's still a lot missing," Leskovec agreed. "The question is: can we identify the current missing pieces, like verification of facts, understanding of biases, and explainability/justification of answers, so that we give the community an agenda for how to make progress to fully realize the profound potential of GMAI?"

Rajpurkar, co-senior author of the paper, is a former computer science PhD student at Stanford School of Engineering, and Banerjee, co-first author, is a former masters student in computer science at Stanford School of Engineering. Leskovec is also a member of Stanford Bio-X, a member of the Wu Tsai Neurosciences Institute, and a faculty affiliate in the Institute for Human-Centered Artificial Intelligence.

This research was funded by the National Institutes of Health, the Defense Advanced Research Projects Agency, GSK, Wu Tsai Neurosciences Institute, the Army Research Office, the National Science Foundation, the Stanford Data Science Initiative, Amazon, Docomo, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. In the past three years, Krumholz received expenses and/or personal fees from UnitedHealth, Element Science, Eyedentifeye, and F-Prime; is a co-founder of Refactor Health and HugoHealth; and is associated with contracts, through Yale New Haven Hospital, from the Centers for Medicare & Medicaid Services and through Yale University from the Food and Drug Administration, Johnson & Johnson, Google and Pfizer. The other authors declare no competing interests.

Excerpt from:

Advances in generalizable medical AI | Stanford News - Stanford University News
