
Making Computing Sustainable, With Help from NSF Grant – Yale University

With research projects - including one that recently received a $1.3 million grant - and an upcoming course, Prof. Robert Soulé is looking at new ways to make computing more sustainable.

Working with Prof. Noa Zilberman from Oxford University, Soulé has received a grant jointly funded by the United Kingdom's Engineering and Physical Sciences Research Council (EPSRC) and the United States National Science Foundation (NSF) for work that aims to reduce the energy consumption of computing. Specifically, it sets its sights on computer networks, which consume an estimated one-and-a-half times the energy of all data centers, according to some reports. In contrast to other large-scale computer infrastructures, accounting for the carbon emissions of the network is extremely hard.

The project is designed to collect information about the power consumption of network devices, specifically the computer hardware involved in connecting users to computer networks. This includes switches, which connect different computers together, and the network interface cards in computers or servers that connect users to the network. Traditional computer networks try to optimize the paths to reduce latency and achieve the fastest response possible.

"But what we're hypothesizing is that you could actually instead choose paths that would result in the lowest amount of power consumed, or maybe the greenest path," said Soulé, associate professor of computer science & electrical engineering. "We want to measure how much power they are consuming and the quality of the power that they're consuming. For example, did they come from a green energy source? So we're collecting the data that would allow you to make these informed decisions, and designing the network algorithms that would change routing behavior in order to reduce the overall carbon footprint."

One possible way to do that is to develop systems that send computer traffic to a path that consumes energy from a green energy source. Another is a system that chooses a path that minimizes overall power consumption.
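To make the idea concrete, here is a minimal, hypothetical sketch of carbon-aware path selection: a standard shortest-path search where link weights are an estimated carbon cost rather than latency. The topology, names, and per-link carbon figures below are invented for illustration; the project's actual measurement data and routing algorithms are not published here.

```python
# A minimal sketch of carbon-aware routing (not the project's actual algorithm).
# Link names and carbon intensities below are made-up illustrative values.
import heapq

def greenest_path(graph, src, dst):
    """Dijkstra's algorithm with edge weights expressed as estimated
    grams of CO2 per transferred gigabyte, rather than latency."""
    best = {src: 0.0}
    queue = [(0.0, src, [src])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        for neighbor, carbon_per_gb in graph.get(node, []):
            new_cost = cost + carbon_per_gb
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical topology: (neighbor, gCO2 per GB on that link)
network = {
    "user":     [("switch_a", 4.0), ("switch_b", 1.5)],   # switch_b runs on solar
    "switch_a": [("server", 2.0)],
    "switch_b": [("server", 3.0)],
}
print(greenest_path(network, "user", "server"))  # -> (4.5, ['user', 'switch_b', 'server'])
```

Swapping the edge weight between carbon intensity and total watts drawn would correspond to the two variants described above: the "greenest path" and the path that minimizes overall power consumption.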

Another component of Soulé's work in this area is a collaboration with Prof. Rajit Manohar, the John C. Malone Professor of Electrical Engineering and Computer Science. They're developing network hardware that can go into idle mode when it's inactive, very much like some cars with engines that automatically turn off at red lights.

"There's a problem with current network hardware in that it's not really able to go into idle mode, because a part of it is always running to see if information is arriving," he said. "So Rajit and I have been looking at whether we can design hardware devices - a new network switch - that would consume energy in proportion to the amount of traffic that it's seeing. And if it did see less traffic, it would go into idle mode."
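The payoff of energy-proportional hardware is easiest to see with a toy power model. The sketch below contrasts a conventional always-listening switch with an idealized energy-proportional one; all wattage figures are assumptions made up for illustration, not measurements from the project.

```python
# A toy power model contrasting today's switches with the energy-proportional
# design described above. All wattage figures are illustrative assumptions.

def conventional_switch_power(load_fraction, idle_w=150.0, max_extra_w=30.0):
    # Most of the power is burned just listening for traffic, even at zero load.
    return idle_w + max_extra_w * load_fraction

def energy_proportional_power(load_fraction, sleep_w=5.0, max_w=180.0):
    # Idealized: consumption scales with the traffic the switch actually sees.
    return sleep_w + (max_w - sleep_w) * load_fraction

for load in (0.0, 0.1, 0.5, 1.0):
    print(f"load {load:>4}: conventional {conventional_switch_power(load):6.1f} W, "
          f"proportional {energy_proportional_power(load):6.1f} W")
```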

Soulé is also co-teaching a course next year on sustainable computing with Dr. Eve Schooler, an IEEE Fellow and Yale alum. The course, they said, takes a broader view of the subject.

"We're trying to do more of a survey of different approaches to improving the carbon efficiency of computer networks in general," Soulé said. "But even beyond that, we're also looking at a broader discussion at the policy level, where the intersection of sustainability and technology meet."

Schooler said the course will cover a large swath of topics. For instance, it might explore issues like the role that computing can play in the Intergovernmental Panel on Climate Change or how large institutions perform carbon accounting.

"We'll also focus on networking, and on topics around the streaming infrastructure, the content distribution networks," she said. "Other topics will be about large algorithms, like large language models - the ChatGPTs of the world - and Bitcoin or some of the cryptocurrencies that are also large consumers of electricity."


Op-ed: Remember moderate opinions exist but are often concealed … – The Huntington News

After spending many hours grinding through my work, there's nothing I find more refreshing than opening up my laptop and scrolling through Reddit in the evening. My 2022 Reddit Recap affirmed that I spent the most time scrolling through the Northeastern subreddit, which makes sense considering how much I use it to navigate my academic journey.

Besides all the sh*t posts and yet another entertaining complaint about finding rats in an apartment, I take advantage of the fact that many Northeastern students go on Reddit to describe their experiences there. Hence, when building my schedule or figuring out which professors best align with how I learn, I read through subreddit posts meticulously.

Even though I use Reddit to, supposedly, relieve my stress about the unknowns of my college career, I reached a point where I got so caught up in the opinions of other students about various classes that I lost my sense of identity and confidence about the path I could pursue in college. I let my potential be defined by the experiences of people on a social media platform.

Particularly, when I was stressed about which major I wanted to pursue after deciding that a path toward the medical field would be too stressful, I decided to browse through Reddit to learn about people's experiences with other majors at Northeastern. If you're an avid visitor of the Northeastern subreddit, you would know that there are countless posts about people's frustrations with Northeastern's computer science classes. The idea of spending at least 10 hours of work on classes with unsupportive professors, combined with the fact that computer science did not come naturally to me in high school, made me immediately dismiss the major as an option.

I didn't even bother to talk to people in real life about their experiences with the major, let alone discuss the major with an academic advisor from Khoury College of Computer Sciences. I had let the fear I felt from seeing many people complain about Fundamentals of Computer Science 1, or Fundies, on Reddit prevent me from potentially pursuing the field. In high school, there was no subreddit to prevent me from taking four AP classes during my senior year. I had a slight glimmer of confidence to get me through that pursuit.

Looking back at my initial doubts about Fundies, I realize I should've been more open toward my friend from another school who affirmed I would be fine, considering my strong work ethic. I consulted with my friend, a computer science and behavioral neuroscience major, over the summer about what she thought about the Fundies posts on Reddit, and talking to her just reaffirmed the fact that people tend to post online only if they have strong opinions.

It seems that people tend to post on Reddit, and even TRACE, another online resource I spend too much time stressing over, only if they have extremely positive or (more likely) extremely negative experiences with a class or professor. Reddit and TRACE are double-edged swords. On one hand, they are accessible resources on which to gather up-to-date information about classes I'm interested in from a large pool of students. On the other hand, the opinions contained in those resources are biased and often seem to come out of spite.

As a data science and psychology combined major, I remember feeling distraught when I learned I had to take Discrete Structures. Discrete is another computer science class that gets complained about on Reddit. I mentally prepared myself to receive a bad grade in the class but found that I was much more successful in it than expected. I definitely do not think that Discrete is an easy class. Still, I found it manageable if you're willing to review the lecture videos as needed, go to office hours as early and as often as needed, attend recitations and generally just work hard and responsibly.

After completing Discrete, I truly started to contemplate what my life would have been like if I had chosen to become a computer science major instead. I adore data science, but sometimes I do wonder what it would have been like if I had been a computer science and music technology combined major instead.

Northeastern's Reddit community does have its empowering moments. About a month ago, a Northeastern student posted alleging the school had commented out code that would have enabled students to automatically donate their leftover meal swipes for a week. I also often see posts about people expressing mental health concerns that receive heartwarming and reassuring comments. And the infinite comedic posts complaining about the quality of Snell's study environment always release any tension I feel after a rough workday.

Social media sites such as Reddit can be fun and even uplifting, but it's important to establish boundaries with them and not let them control the trajectory of your life.

Even though I'm fortunate enough to be majoring in a field I can see myself enjoying in the future, I worry for other people who have the potential to excel at computer science, or other fields with a reputation for being difficult at Northeastern, but end up letting biased online opinions deteriorate their confidence.

Reddit and TRACE are accessible resources to gauge the difficulty of a class, but just because an overwhelming number of students complain about a class does not mean you won't turn out fine. Northeastern is a school that encourages students to engage in exploratory learning, and I don't wish for people's journeys at this school to be shaped by what they have seen other students post on the internet. Talk to your friends, advisors and other in-person resources if you're worried about what kind of path you want to pursue at this school. Once you've read a post, try not to dwell on it.

Jethro R. Lee is a third-year data science and psychology combined major. He can be reached at [emailprotected].


Carnegie Mellon Honors Three Faculty With Professorships – Carnegie Mellon University

Alex John London

London is an internationally recognized ethicist from the Department of Philosophy who is frequently called upon to address critical societal problems. He brings deep disciplinary expertise to bear and collaborates with the best technical and scientific minds to make an impact on policy, technology, medicine and science.

London joined CMU in 2000 and in 2016 was named the Clara L. West Professor of Ethics and Philosophy. He is the director of the Center for Ethics and Policy and chief ethicist at the Block Center for Technology and Society. An elected Fellow of the Hastings Center, London's work focuses on ethical and policy issues surrounding the development and deployment of novel technologies in medicine, biotechnology and artificial intelligence, on methodological issues in theoretical and practical ethics, and on cross-national issues of justice and fairness.

In 2022, Oxford University Press published his book, For the Common Good: Philosophical Foundations of Research Ethics. It has been called "a philosophical tour de force," "a remarkable achievement" and "a vital foundation on which policy progress should, indeed must, be built." He also is co-editor of Ethical Issues in Modern Medicine, one of the most widely used textbooks in medical ethics.

London is a member of the World Health Organization Expert Group on Ethics and Governance of AI. In addition, he is a member of the U.S. National Academy of Medicine Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine. He also co-leads the ethics core for the National Science Foundation AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups.

For more than a decade, London has helped to shape key ethical guidelines for the oversight of research with human participants. He is currently a member of the U.S. National Science Advisory Board for Biosecurity, and he has served as an ethics expert in consultations with organizations including the U.S. National Institutes of Health, the World Medical Association and the World Bank.

Hoda Heidari

Heidari is faculty in the Department of Machine Learning and the Software and Societal Systems Department (S3D). She is also affiliated with the HCII, CyLab, the Block Center, Heinz College of Information Systems and Public Policy and the Carnegie Mellon Institute for Strategy and Technology.

Heidari's research broadly concerns the social, ethical and economic implications of artificial intelligence, and in particular, issues of fairness and accountability through the use of machine learning in socially consequential domains. Her work in this area has won a best paper award at the Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency; an exemplary track award at the ACM Conference on Economics and Computation; and a best paper award at the IEEE Conference on Secure and Trustworthy Machine Learning.

Heidari co-founded and co-leads the university-wide Responsible AI Initiative. She has organized several scholarly events on topics related to responsible and trustworthy AI, including multiple tutorials and workshops at top-tier academic venues specializing in artificial intelligence.

She is particularly interested in translating research contributions into positive impact on AI policy and practice. She has organized multiple campus-wide events and policy convenings, bringing together diverse groups of experts to address such topics as AI governance and accountability and contribute to ongoing efforts in this area at various levels of government.

Brad Myers

Myers is director of the HCII in the School of Computer Science with an affiliated appointment in S3D. He received the ACM SIGCHI Lifetime Achievement Award in Research in 2017 for "outstanding fundamental and influential research contributions to the study of human-computer interaction," and in 2022 SCS honored him with its Alan J. Perlis Award for Imagination in Computer Science "for pioneering human-centered methods to democratize programming." He is an IEEE Fellow, an ACM Fellow, a member of the CHI Academy, and a winner of numerous best paper awards and most influential paper awards.

Myers has authored or edited more than 550 publications, and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 90 companies and regularly teaches courses on user interface design and software. Myers received a Ph.D. in computer science at the University of Toronto, where he developed the Peridot user interface tool. He received master's and bachelor's degrees from the Massachusetts Institute of Technology, and belongs to the ACM, SIGCHI, IEEE and the IEEE Computer Society.


This 3D printer can watch itself fabricate objects – MIT News

With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans.

These multimaterial 3D printing systems utilize thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used.

Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer utilizes computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.

Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.

In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.

The researchers used this printer to create complex, robotic devices that combine soft and rigid materials. For example, they made a completely 3D-printed robotic gripper shaped like a human hand and controlled by a set of reinforced, yet flexible, tendons.

"Our key insight here was to develop a machine-vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next," says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich; co-corresponding author Robert Katzschmann PhD '18, assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; as well as others at ETH Zurich and Inkbit. The research appears today in Nature.

Contact free

This paper builds off a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.

With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices.

They developed a technique, known as vision-controlled jetting, which utilizes four high-frame-rate cameras and two lasers that rapidly and continuously scan the print surface. The cameras capture images as thousands of nozzles deposit tiny droplets of resin.

The computer vision system converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
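A toy sketch can make the closed loop easier to picture: compare the scanned surface height with the target height from the CAD model at each nozzle position, then deposit more where material is missing and less where it has pooled. This is an illustration of the idea only, not Inkbit's controller; the array sizes, units, gain, and droplet limits below are assumptions.

```python
# A toy illustration of the feedback idea described above, not Inkbit's controller.
# For each nozzle position, compare the scanned depth map with the target height
# from the CAD model and scale the next droplet command accordingly.
import numpy as np

def plan_next_layer(depth_map, cad_target, nominal_droplet=1.0, gain=0.5):
    """Return a per-nozzle deposition command (arbitrary units).
    depth_map, cad_target: 2D arrays of surface height in micrometers,
    one entry per nozzle position (16,000 nozzles in the real printer)."""
    error = cad_target - depth_map            # positive where material is missing
    command = nominal_droplet + gain * error  # deposit more where under-filled
    return np.clip(command, 0.0, 2.0 * nominal_droplet)  # a nozzle can't "un-print"

# Hypothetical 1 x 4 strip of nozzle positions: one spot over-filled, one under-filled.
measured = np.array([[10.0, 12.0, 10.0,  8.0]])
target   = np.array([[10.0, 10.0, 10.0, 10.0]])
print(plan_next_layer(measured, target))  # -> [[1. 0. 1. 2.]]
```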

"Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting," says Katzschmann.

The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.

Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn't need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually, and would be smeared by a scraper.

Superior materials

The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don't break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don't degrade as quickly when exposed to sunlight.

"These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment," says Katzschmann.

The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.

"We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system's ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure," says Buchner.

The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties.

"This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn't be used in 3D printing before," Matusik says.

The researchers are now looking at using the system to print with hydrogels, which are used in tissue-engineering applications, as well as silicone materials, epoxies, and special types of durable polymers.

They also want to explore new application areas, such as printing customizable medical devices, semiconductor polishing pads, and even more complex robots.

This research was funded, in part, by Credit Suisse, the Swiss National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the U.S. National Science Foundation.


New Tool for Building and Fixing Roads and Bridges: Artificial … – The New York Times

In Pennsylvania, where 13 percent of the bridges have been classified as structurally deficient, engineers are using artificial intelligence to create lighter concrete blocks for new construction. Another project is using A.I. to develop a highway wall that can absorb noise from cars and some of the greenhouse gas emissions that traffic releases as well.

At a time when the federal allocation of billions of dollars toward infrastructure projects would help with only a fraction of the cost needed to repair or replace the nation's aging bridges, tunnels, buildings and roads, some engineers are looking to A.I. to help build more resilient projects for less money.

"These are structures, with the tools that we have, that save materials, save costs, save everything," said Amir Alavi, an engineering professor at the University of Pittsburgh and a member of the consortium developing the two A.I. projects in conjunction with the Pennsylvania Department of Transportation and the Pennsylvania Turnpike Commission.

The potential is enormous. The manufacturing of cement alone makes up at least 8 percent of the world's carbon emissions, and 30 billion tons of concrete are used worldwide each year, so more efficient production of concrete would have immense environmental implications.

And A.I., essentially machines that can synthesize information and find patterns and conclusions much as the human mind can, could have the ability to speed up and improve tasks like engineering challenges to an incalculable degree. It works by analyzing vast amounts of data and offering options that give humans better information, models and alternatives for making decisions.

It has the potential to be both more cost effective, with one machine doing the work of dozens of engineers, and more creative in coming up with new approaches to familiar tasks.

But experts caution against embracing the technology too quickly when it is largely unregulated and its payoffs remain largely unproven. In particular, some worry about A.I.'s ability to design infrastructure in a process with several regulators and participants operating over a long period of time. Others worry that A.I.'s ability to draw instantly from the entirety of the internet could lead to flawed data that produces unreliable results.

American infrastructure challenges have become all the more apparent in recent years: Texas's power grid failed during devastating ice storms in 2021 and continues to grapple with the state's needs; communities across the country, from Flint, Mich., to Jackson, Miss., have struggled with failing water supplies; and more than 42,000 bridges are in poor condition nationwide.

"A vast majority of the country's roadways and bridges were built several decades ago, and as a result infrastructure challenges are significant in many dimensions," said Abdollah Shafieezadeh, a professor of civil, environmental and geodetic engineering at Ohio State University.

The collaborations in Pennsylvania reflect A.I.'s potential to address some of these issues.

In the bridge project, engineers are using A.I. technology to develop new shapes for concrete blocks that use 20 percent less material while maintaining durability. The Pennsylvania Department of Transportation will use the blocks to construct a bridge; there are more than 12,000 in the state that need repair, according to the American Road & Transportation Builders Association.

Engineers in Pittsburgh are also working with the Pennsylvania Turnpike Commission to design a more efficient noise-absorbing wall that will also capture some of the nitrous oxide emitted from vehicles. They are planning to build it in an area that is disproportionately affected by highway sound pollution. The designs will save about 30 percent of material costs.

"These new projects have not been tested in the field, but they have been successful in the lab environment," Dr. Alavi said.

In addition to A.I.'s speed at developing new designs, one of its largest draws in civil engineering is its potential to prevent and detect damage.

Instead of investing large sums of money in repair projects, engineers and transportation agencies could identify problems early on, experts say, such as a crack forming in a bridge before the structure itself buckled.

This technology is capable of providing an analysis of what is happening in real time in incidents like the bridge collapse on Interstate 95 in Philadelphia this summer or the fire that shut down a portion of Interstate 10 in Los Angeles this month, and could be developed to deploy automated emergency responses, said Seyede Fatemeh Ghoreishi, an engineering and computer science professor at Northeastern University.

But, as in many fields, there are increasingly more conversations and concerns about the relationship between A.I., human work and physical safety.

Although A.I. has proved helpful in many uses, tech leaders have testified before Congress, pushing for regulations. And last month, President Biden issued an executive order for a range of A.I. standards, including safety, privacy and support for workers.

Experts are also worried about the spread of disinformation from A.I. systems. A.I. operates by integrating already available data, so if that data is incorrect or biased, the A.I. will generate faulty conclusions.

"It really is a great tool, but it really is a tool you should use just for a first draft at this point," said Norma Jean Mattei, a former president of the American Society of Civil Engineers.

Dr. Mattei, who has worked in education and ethics for engineering throughout her career, added: "Once it develops, I'm confident that we'll get to a point where you're less likely to get issues. We're not there yet."

Also worrisome is a lack of standards for A.I. The Occupational Safety and Health Administration, for example, does not have standards for the robotics industry. There is rising concern about car crashes involving autonomous vehicles, but for now, automakers do not have to abide by any federal software safety testing regulations.

Lola Ben-Alon, an assistant professor of architecture technology at Columbia University, also takes a cautionary approach when using A.I. She stressed the need to take the time to understand how it should be employed, but she said that she was not "condemning it" and that it had many great potentials.

Few doubt that in infrastructure projects and elsewhere, A.I. exists as a tool to be used by humans, not as a substitute for them.

"There's still a strong and important place for human existence and experience in the field of engineering," Dr. Ben-Alon said.

The uncertainty around A.I. could cause more difficulties for funding projects like those in Pittsburgh. But a spokesman for the Pennsylvania Department of Transportation said the agency was excited to see how the concrete that Dr. Alavi and his team are designing could expand the field of bridge construction.

Dr. Alavi said his work throughout his career had shown him just how serious the potential risks from A.I. are.

But he is confident about the safety of the designs he and his team are making, and he is excited for the technologys future.

"After 10, 12 years, this is going to change our lives," Dr. Alavi said.


Building next generation autonomous robots to serve humanity – CU Boulder’s College of Engineering & Applied Science

Featured on CBS Sunday Morning

Sean Humbert discusses the team's award-winning research developing autonomous robots that can navigate challenging conditions. The team demonstrated the robots for CBS during a recent visit to the Edgar Mine in Idaho Springs, CO.


Since completion of the Subterranean Challenge, faculty and students have been conducting follow-on research and competitions with multiple corporate and government partners.

Research further advancing the capabilities of the Subterranean Challenge Robots is being led by numerous CU Boulder laboratories.

One thousand feet underground, a four-legged creature scavenges through tunnels in pitch darkness. With vision that cuts through the blackness, it explores a spider web of paths, remembering its every step and navigating with precision. The sound of its movements echoes eerily off the walls, but it is not to be feared: this is no wild animal; it is an autonomous rescue robot.

Though it was initially designed to find survivors in collapsed mines, caves, and damaged buildings, that is only part of what it can do.

Created by a team of University of Colorado Boulder researchers and students, the robots placed third as the top U.S. entry and earned $500,000 in prize money at a Defense Advanced Research Projects Agency Subterranean Challenge competition in 2021.

Two years later, they are pushing the technology even further, earning new research grants to expand the technology and create new applications in the rapidly growing world of autonomous systems.

"Ideally you don't want to put humans in harm's way in disaster situations like mines or buildings after earthquakes; the walls or ceilings could collapse and maybe some already have," said Sean Humbert, a professor of mechanical engineering and director of the Robotics Program at CU Boulder. "These robots can be disposable while still providing situational awareness."

The team developed an advanced system of sensors and algorithms to allow the robots to function on their own: once given an assignment, they make decisions autonomously on how to best complete it.

A major goal is to get them from engineers directly into the hands of first responders. Success requires simplifying the way the robots transmit data into something approximating plain English, according to Kyle Harlow, a computer science PhD student.

"The robots communicate in pure math. We do a lot of work on top of that to interpret the data right now, but a firefighter doesn't have that kind of time," Harlow said.

To make that happen, Humbert is collaborating with Chris Heckman, an associate professor of computer science, to change both how the robots communicate and how they represent the world. The robots' eyes, a LiDAR sensor, create highly detailed 3D maps of an environment, 15 cm at a time. That's a problem when they try to relay information: the sheer amount of data clogs up the network.

"Humans don't interpret the environment in 15 cm blocks," Humbert said. "We're now working on what's called semantic mapping, which is a way to combine contextual and spatial information. This is closer to how the human brain represents the world and is much less memory intensive."
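A rough, back-of-the-envelope sketch shows why per-voxel maps strain a network while semantic maps do not. The dimensions, byte counts, and labels below are invented for illustration and are not figures from the CU Boulder system.

```python
# A back-of-the-envelope sketch of why per-voxel maps are memory-hungry compared
# with semantic descriptions. All dimensions and byte counts are assumptions.

VOXEL_EDGE_M = 0.15              # 15 cm voxels, as described above
BYTES_PER_VOXEL = 4              # e.g., occupancy probability stored as a float32

def voxel_map_bytes(x_m, y_m, z_m):
    voxels = (x_m / VOXEL_EDGE_M) * (y_m / VOXEL_EDGE_M) * (z_m / VOXEL_EDGE_M)
    return int(voxels * BYTES_PER_VOXEL)

# A hypothetical 200 m x 50 m x 4 m mine drift:
print(voxel_map_bytes(200, 50, 4) / 1e6, "MB of raw voxels")   # ~47 MB

# A semantic summary of the same space might be a short list of labeled objects:
semantic_map = [
    {"label": "tunnel junction", "position": (34.0, 12.5, 0.0)},
    {"label": "doorway",         "position": (61.2,  8.3, 0.0)},
    {"label": "survivor",        "position": (88.7, 10.1, 0.0)},
]
# A few hundred bytes instead of tens of megabytes, at the cost of fine geometry.
```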

The team is also integrating new sensors to make the robots more effective in challenging environments. The robots excel in clear conditions but struggle with visual obstacles like dust, fog, and snow. Harlow is leading an effort to incorporate millimeter wave radar to change that.

"We have all these sensors that work well in the lab and in clean environments, but we need to be able to go out in places such as Colorado where it snows sometimes," Harlow said.

Where some researchers are forced to suspend work when a grant ends, members of the subterranean robotics team keep finding new partners to push the technology further.

Eric Frew, a professor of aerospace engineering at CU Boulder, is using the technology for a new National Institute of Standards and Technology competition to develop aerial robots, drones instead of ground robots, to autonomously map disaster areas indoors and outside.

"Our entry is based directly on the Subterranean Challenge experience and the systems developed there," Frew said.

Some teams in the competition will be relying on drones navigated by human operators, but Frew said CU Boulders project is aiming for an autonomous solution that allows humans to focus on more critical tasks.

Although numerous universities and private businesses are advancing autonomous robotic systems, Humbert said other organizations often focus on individual aspects of the technology. The students and faculty at CU Boulder are working on all avenues of the systems and for uses in environments that present extreme challenges.

"We've built world-class platforms that incorporate mapping, localization, planning, coordination, all the high-level stuff; the autonomy, that's all us," Humbert said. "There are only a handful of teams across the world that can do that. It's a huge advantage that CU Boulder has."


Doctoral Candidate One of Only 30 Young Researchers Invited to … – Yeshiva University

Dear Students, Faculty, Staff and Friends,

I am pleased to present to you this Guide to our plans for the upcoming fall semester and reopening of our campuses. In form and in content, this coming semester will be like no other. We will live differently, work differently and learn differently. But in its very difference rests its enormous power.

The mission of Yeshiva University is to enrich the moral, intellectual and spiritual development of each of our students, empowering them with the knowledge and abilities to become people of impact and leaders of tomorrow. Next year's studies will be especially instrumental in shaping the course of our students' lives. Character is formed and developed in times of deep adversity. This is the kind of teachable moment that Yeshiva University was made for. As such, we have developed an educational plan for next year that features a high-quality student experience and prioritizes personal growth during this Coronavirus era. Our students will be able to work through the difficulties, issues and opportunities posed by our COVID-19 era with our stellar rabbis and faculty, as well as their close friends and peers at Yeshiva.

To develop our plans for the fall, we have convened a Scenario Planning Task Force made up of representatives across the major areas of our campus. Their planning has been guided by the latest medical information, government directives, direct input from our rabbis, faculty and students, and best practices from industry and university leaders across the country. I am deeply thankful to our task force members and all who supported them for their tireless work in addressing the myriad details involved in bringing students back to campus and restarting our educational enterprise.

In concert with the recommendations from our task force, I am announcing today that our fall semester will reflect a hybrid model. It will allow many students to return in a careful way by incorporating online and virtual learning with on-campus classroom instruction. It also enables students who prefer to not be on campus to have a rich student experience by continuing their studies online and benefitting from a full range of online student services and extracurricular programs.

In bringing our students back to campus, safety is our first priority. Many aspects of campus life will change for this coming semester. Gatherings will be limited, and larger courses will move completely online. Throughout campus, everyone will need to adhere to our medical guidelines, including social distancing, wearing facemasks, and our testing and contact tracing policies. Due to our focus on minimizing risk, our undergraduate students will begin the first few weeks of the fall semester online and move onto the campus after the Jewish holidays. This schedule will limit the amount of back and forth travel for our students by concentrating the on-campus component of the fall semester to one consecutive segment.

Throughout our planning, we have used the analogy of a dimmer switch. Reopening our campuses will not be a simple binary, like an on/off light switch, but more like a dimmer in which we have the flexibility to scale backwards and forwards to properly respond as the health situation evolves. It is very possible that some plans could change, depending upon the progression of the virus and/or applicable state and local government guidance.

Before our semester begins, we will provide more updates reflecting our most current guidance. Please check our website, yu.edu/fall2020, for regular updates. We understand that even after reading through this guide, you might have many additional questions, so we will be posting an extensive FAQ section online as well. Additionally, we will be holding community calls for faculty, students, staff and parents over the next couple of months.

Planning for the future during this moment has certainly been humbling. This Coronavirus has reminded us time and time again of the lessons from our Jewish tradition that we are not in full control of our circumstances. But our tradition also teaches us that we are in control of our response to our circumstances. Next semester will present significant challenges and changes. There will be some compromises and minor inconveniences; not every issue has a perfect solution. But faith and fortitude, mutual cooperation and resilience are essential life lessons that are accentuated during this period. And if we all commit to respond with graciousness, kindness, and love, we can transform new campus realities into profound life lessons for our future.

Deeply rooted in our Jewish values and forward focused in preparing for the careers and competencies of the future, we journey together with you, our Yeshiva University community, through these uncharted waters. Next year will be a formative year in the lives of our students, and together we will rise to the moment so that our students will emerge stronger and better prepared to be leaders of the world of tomorrow.

Best Wishes,

Ari Berman


Opera gives voice to Alan Turing with help of artificial intelligence – Yale News

A few years ago, composer Matthew Suttor was exploring Alan Turing's archives at King's College, Cambridge, when he happened upon a typed draft of a lecture the pioneering computer scientist and World War II codebreaker gave in 1951 foreseeing the rise of artificial intelligence.

In the lecture, "Intelligent Machinery, a Heretical Theory," Turing posits that intellectuals would oppose the advent of artificial intelligence out of fear that machines would replace them.

"It is probable though that the intellectuals would be mistaken about this," Turing writes in a passage that includes his handwritten edits. "There would be plenty to do, trying to understand what the machines were trying to say, i.e., in trying to keep ones (sic) intelligence up to the standard set by the machines."

To Suttor, the passage underscores Turings visionary brilliance.

"Reading it was kind of a mind-blowing moment as we're now on the precipice of Turing's vision becoming our reality," said Suttor, program manager at Yale's Center for Collaborative Arts and Media (CCAM), a campus interdisciplinary center engaged in creative research and practice across disciplines, and a senior lecturer in the Department of Theater and Performance Studies in Yale's Faculty of Arts and Sciences.

Inspired by Turing's 1951 lecture, and other revelations from his papers, Suttor is working with a team of musicians, theater makers, and computer programmers (including several alumni from the David Geffen School of Drama at Yale) to create an experimental opera, called I AM ALAN TURING, which explores his visionary ideas, legacy, and private life.

"I didn't envision a chronological biographical operatic piece... To me, it was much more interesting to investigate Turing's ideas."

Matthew Suttor

In keeping with Turing's vision, the team has partnered with artificial intelligence on the project, using successive versions of GPT, a large language model, to help write the opera's libretto and spoken text.

Three work-in-progress performances of the opera formed the centerpiece of the Machine as Medium Symposium: Matter and Spirit, a recent two-day event produced by CCAM that investigated how AI and other technologies intersect with creativity and alter how people approach timeless questions on the nature of existence.

The symposium, whose theme, Matter and Spirit, was derived from Turing's writings, included panel discussions with artists and scientists, an exhibition of artworks made with the help of machines or inspired by technology, and a tour of the Yale School of Architecture's robotics lab led by Hakim Hasan, a lecturer at the school who specializes in robotic fabrication and computational design research.

"All sorts of projects across fields and disciplines are using AI in some capacity," said Dana Karwas, CCAM's director. "With the opera, Matthew and his team are using it as a collaborative tool in bringing Alan Turing's ideas and story into a performance setting and creating a new model for opera and other types of live performance.

"It's also an effective platform for inviting further discussion about technology that many people are excited about or questioning right now, and is a great example of the kind of work we're encouraging at CCAM."

Turing is widely known for his work at Bletchley Park, Great Britain's codebreaking center during World War II, where he cracked intercepted Nazi ciphers. But he was also a path-breaking scholar whose work set the stage for the development of modern computing and artificial intelligence.

His Turing Machine, developed in 1936, was an early computational device that could implement algorithms. In 1950, he published an article in the journal Mind that asked: "Can machines think?" He also made significant contributions to theoretical biology, which uses mathematical abstractions in seeking to better understand the structures and systems within living organisms.

A gay man, Turing was prosecuted in 1952 for "gross indecency" after acknowledging a sexual relationship with a man, which was then illegal in Great Britain, and underwent chemical castration in lieu of a prison sentence. He died by suicide in 1954, age 41.

Before visiting Turing's archive, Suttor had read Alan Turing: The Enigma, Andrew Hodges' authoritative 1983 biography, and believed the mathematician's life possessed an operatic scale.

"I didn't envision a chronological biographical operatic piece, which frankly is a pretty dull proposition," Suttor said. "To me, it was much more interesting to investigate Turing's ideas. How do you put those on stage and sing about them in a way that is moving, relevant, and dramatically exciting?"

That's when Smita Krishnaswamy, an associate professor of genetics and computer science at Yale, introduced Suttor and his team to OpenAI, and several Zoom conversations with representatives of the company about the emerging technology followed. Working with Yale University Library's Digital Humanities Lab, the team built an interface to interact with an instance, or single occurrence, of GPT-2, training it with materials from Turing's archive and the text of books he's known to have read. For example, they knew Turing enjoyed George Bernard Shaw's play Back to Methuselah, and Snow White, the Brothers Grimm fairytale, so they shared those texts with the AI.

The team began asking GPT-2 the kinds of questions that Turing had investigated, such as "Can machines think?" They could control the temperature of the model's answers, or the creativity or randomness, and the number of characters the responses contained. They continually adjusted the settings on those controls and honed their questions to vary the answers.
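For readers unfamiliar with these controls, here is a generic sketch of the kind of sampling knobs described (temperature and response length), using the publicly available GPT-2 model through the Hugging Face transformers library. This is an assumption-laden illustration: the team's custom interface and its archive-trained instance of the model are not reproduced here.

```python
# A generic sketch of the sampling controls described above (temperature and
# response length), using the public GPT-2 model via Hugging Face transformers.
# The team's actual interface and archive-trained model are not shown here.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Can machines think?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,          # sample rather than always pick the most likely token
    temperature=1.1,         # higher values give more surprising, "creative" text
    max_new_tokens=60,       # rough control over the length of the response
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```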

"Some of the responses are just jaw-droppingly beautiful," Suttor said. "You are the applause of the galaxy," for instance, is something you might print on a T-shirt.

In one prompt, the team asked the AI technology to generate lyrics for a sexy song about the opera's subject, which yielded the lyrics to "I'm a Turing Machine, Baby."

In composing the opera's music, Suttor and his team incorporated elements of Turing's work on morphogenesis, the biological process that develops cells and tissues, and phyllotaxis, the botanical study of mathematical patterns found in stems, leaves, and seeds. For instance, Suttor found that diagrams Turing had produced showing the spiral patterns of seeds in a sunflower head conform to a Fibonacci sequence, in which each number is the sum of the two before it. Suttor superimposed the circle of fifths, a method in music theory of organizing the 12 chromatic pitches as a sequence of perfect fifths, onto Turing's diagram, producing a unique mathematical, harmonic progression.

Suttor repeated the process using prime numbers, numbers greater than 1 that are not the product of two smaller numbers, in place of the Fibonacci sequence, which also produced a harmonic series. The team sequenced analog synthesizers to these harmonic progressions.
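The article does not spell out the exact mapping Suttor used, but one plausible toy version makes the idea tangible: take each term of a numeric sequence modulo 12 and step that many perfect fifths around the circle to get a pitch class. Everything below, including the choice of starting note, is an assumption for illustration only.

```python
# A toy sketch of mapping number sequences onto the circle of fifths.
# This is an assumed mapping for illustration, not Suttor's actual method.

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def fibonacci(n):
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def primes(n):
    seq, candidate = [], 2
    while len(seq) < n:
        if all(candidate % p for p in seq):
            seq.append(candidate)
        candidate += 1
    return seq

def to_circle_of_fifths(sequence):
    # A perfect fifth is 7 semitones; step that many fifths from C for each term.
    return [PITCH_CLASSES[(7 * term) % 12] for term in sequence]

print(to_circle_of_fifths(fibonacci(10)))  # harmonic progression from Fibonacci terms
print(to_circle_of_fifths(primes(10)))     # and the same mapping applied to primes
```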

"It sounds a little like Handel on acid," he said.

The workshop version of I AM ALAN TURING was performed on three consecutive nights before a packed house in the CCAM Leeds Studio. The show, in its current form, consists of eight pieces of music that cross genres. Some are operatic with a chorus and soloist, some sound like pop music, and some evoke musical theater. While Suttor composed key structural pieces, the entire team has collaborated like a band while creating the music.

At the same time, the show's storytelling is delivered through various modes: opera, pop, and acted drama. At the beginning, an actor portraying Turing stands at a chalkboard drawing the sunflower's spiral pattern.

Another scene is drawn from a transcript of Turing's comments during a panel discussion, broadcast by the BBC, about the potential of artificial intelligence. In that conversation, Turing spars with a skeptical colleague who doesn't believe machines could reach or exceed human levels of intelligence.

"Turing made that point during that BBC panel that he'd trained machines to do things, which took a lot of work, and they both learned something from the process," Suttor said. "I think that captures our experience working with GPT to draft the script."

The show also contemplates Turing's sexuality and the persecution he endured because of it. One sequence shows Turing enjoying a serene morning in his kitchen beside a partner, sipping tea and eating toast. His partner reads the paper. Turing scribbles in a notebook. A housecat makes its presence felt.

"It's the life that Turing never had," Suttor said.

In high school, Turing had a close friendship with classmate Christopher Morcom, who succumbed to tuberculosis while both young men were preparing to attend Cambridge. Morcom has been described as Turing's first true love.

Turing wrote a letter called "Nature of Spirit" to Christopher's mother in which he imagines the possibility of multiple universes and how the soul and the body are intrinsically linked.

In the opera, a line from the letter is recited following the scene, in Turing's kitchen, that showed a glimmer of domestic tranquility: "Personally, I think that spirit is really eternally connected with matter but certainly not always by the same kind of body."

The show closed with an AI-generated text, seemingly influenced by Snow White: "Look in the mirror, do you realize how beautiful you are? You are the applause of the galaxy."

The I AM ALAN TURING experimental opera was just one of many projects presented during Machine as Medium: Matter and Spirit, a two-day symposium that demonstrated the kinds of interdisciplinary collaborations driven by Yale's Center for Collaborative Arts and Media (CCAM).

An exhibition at the center's York Street headquarters highlighted works created with, or inspired by, various kinds of machines and technology, including holograms, motion capture, film and immersive media, virtual reality, and even an enormous robotic chisel. An exhibition tour allowed the artists to connect while describing their work to the public. The discussion among the artists and guests typifies the sense of community that CCAM aims to provide, said Lauren Dubowski '14 M.F.A., '23 D.F.A., CCAM's assistant director, who designed and led the event.

"We work to create an environment where anyone can come in and be a part of the conversation," Dubowski said. "CCAM is a space where people can see work that they might not otherwise see, meet people they might not otherwise meet and talk about the unique things happening here."


U.S. Education Departments Office for Civil Rights Releases New … – US Department of Education

The U.S. Department of Education's (Department) Office for Civil Rights (OCR) today released new civil rights data from the 2020-21 school year, offering critical insight regarding civil rights indicators during that coronavirus pandemic year. OCR also released seven data reports and snapshots, including A First Look: Students' Access to Educational Opportunities in the Nation's Public Schools, which provides an overview of these data and information.

"In America, talent and creativity can come from anywhere, but only if we provide equitable educational opportunities to students everywhere," said U.S. Secretary of Education Miguel Cardona. "We cannot be complacent when the data repeatedly tells us that the race, sex, or disability of students continue to dramatically impact everything from access to advanced placement courses to the availability of school counselors to the use of exclusionary and traumatic disciplinary practices. The Biden-Harris Administration has prioritized equity for underserved students throughout our historic investments in education, and we will continue to partner with states, districts, and schools to Raise the Bar and provide all students with access to an academically rigorous education in safe, supportive, and inclusive learning environments."

OCR's Civil Rights Data Collection (CRDC) is a mandatory survey of public schools serving students from preschool to grade 12. The purpose of the CRDC is to provide the federal government and members of the public with vital data about the extent to which students have equal educational opportunities required by federal civil rights laws. While OCR generally collects the CRDC biennially, the 2020-21 CRDC is the first published since the 2017-18 collection (which was released in 2020), because OCR paused the collection due to the pandemic. OCR's 2020-21 CRDC contains information collected from over 17,000 school districts and over 97,000 schools. These data include student enrollment; access to courses, teachers, other school staff, and the Internet and devices; and school climate factors, such as student discipline, harassment or bullying, and school offenses.

The 2020-21 CRDC reflects stark inequities in education access throughout the nation. For example, high schools with high enrollments of Black and Latino students offered fewer courses in mathematics, science, and computer science than schools with low enrollments of Black and Latino students. English learner students and students with disabilities, who received services under the federal Individuals with Disabilities Education Act, had a lower rate of enrollment in mathematics and science courses when compared to enrollment rates of all high school students.

"These new CRDC data reflect troubling differences in students' experiences in our nation's schools," said Assistant Secretary for Civil Rights Catherine E. Lhamon. "We remain committed to working with school communities to ensure the full civil rights protections that federal law demands."

As part of today's release of the 2020-21 CRDC, OCR launched a redesigned CRDC website that now includes an archival tool with access to historical civil rights data from 1968 to 1998, which can be found here. The 2020-21 CRDC public-use data file, reports, and snapshots are available on the Department's redesigned CRDC website. Additional reports and snapshots will be posted periodically on the website.

Key data points from the 2020-21 CRDC are below and highlighted in one or more of the data reports or snapshots.

School Offenses

Student Discipline

Restraint and Seclusion

The Department's data reports and snapshots are available here and listed below.

The Department will release additional data reports and snapshots on key topics such as student access to courses and programs and data specific to English learner students and to students with disabilities.


Computational imaging researcher attended a lecture, found her … – MIT News

Soon after Kristina Monakhova started graduate school, she attended a lecture by Professor Laura Waller '04, MEng '05, PhD '10, director of the University of California at Berkeley's Computational Imaging Lab, who described a kind of computational microscopy with extremely high image resolution.

"The talk blew me away," says Monakhova, who is currently an MIT-Boeing Distinguished Postdoctoral Fellow in MIT's Department of Electrical Engineering and Computer Science. "It definitely changed my trajectory and put me on the path to where I am now. I knew right away that this is what I wanted to do. It was the perfect combination of signal processing, hardware, and algorithms, and I could use it to make more capable imaging sensors for diverse applications."

Today, Monakhova's research involves creating cameras and microscopes designed to produce not high-resolution images for human consumption, but rather information-dense images to be used by algorithms. She aspires to combine imaging system physics with deep learning.

She points out that the purpose of cameras has been fundamentally changed by automation. "In many contexts, people don't look at the images; algorithms do," she explains.

A good example of when the data in an image is more important than its visual representation or sharpness is in skin cancer diagnosis, where measuring specific light wavelengths using a hyperspectral camera can help determine whether a certain skin lesion is cancerous and, if so, malignant. While hyperspectral cameras generally cost more than $20,000, Monakhova has designed a cheap computational camera that could be adapted for such diagnosis.

Monakhova says she inherited her early academic ambition from her mother, who brought her to this country from Russia when she was 4 years old.

"My mother is my role model and inspiration. She immigrated to the U.S. as a single mother and raised me while completing her PhD in electrical engineering," Monakhova says. "I remember spending my elementary school holidays sitting in her classes, drawing. She tried to get me excited about math and science as a child and I guess she succeeded!"

By middle school, Monakhova had discovered her interest in engineering after joining a robotics team. When many years later she started graduate school at UC Berkeley, she chose robotics as her first lab, although Waller's computational microscopy lecture drew her away to Waller's lab and to her current field of research.

Starting in the MIT Postdoctoral Fellowship Program for Engineering Excellence in fall 2022, Monakhova experienced another life-changing event.

"My daughter was born on the first day of work at MIT, making for a particularly exciting first day," she says.

Born four weeks early, the baby required an elaborate system of feeding, a process that took almost two hours and needed to be repeated in three-hour increments, which left the parents just one out of every three hours to do everything else.

"The first four or five months were a whirlwind of challenges and emotions and doctor appointments," Monakhova says.

Despite those challenges, the new mother continued with her fellowship. Knowing that a postdoc is often a bridge to a faculty position, she took special advantage of a series of program presentations focused on what it's like to be a professor and the academic job search process. Although the presentations took place while she was on maternity leave and she wasn't required to participate, Monakhova still attended via Zoom.

"I could call in and listen while breastfeeding my newborn infant," she says. "I went on the academic job market, and this series was useful to help me get my job materials together and prepare for my interviews."

Monakhova says she is "thankful that MIT has a relatively good maternity and family leave policy, as well as crucial resources, such as lactation rooms, back-up daycare, and a fantastic on-campus daycare program with financial aid available. Without these resources and support, I would have had to quit my career. In order to attract and retain women in science and engineering, we need family-friendly policies that don't penalize women for having babies."

By June, Monakhova had landed a position as an assistant professor in Cornell University's Department of Computer Science. Having deferred the appointment, she'll start in fall 2024.

Referring to her upcoming work as a professor and lab leader at Cornell, Monakhova says, "I'm particularly excited to try to set up a collaborative, friendly lab culture where mental health and work-life balance are prioritized, and failure is seen as an important step in the research process."

Throughout her academic career, Monakhova says, community has been extremely important. The MIT Postdoctoral Fellowship Program for Engineering Excellence, which was designed to develop the next generation of faculty leaders and help guide MIT's School of Engineering toward supporting more women and others who are underrepresented in engineering, allowed her to explore new research questions in a different area and work with some amazing MIT students on some exciting projects.

"I believe it's important to help each other out and create a welcoming environment where everyone has the support and resources they need to thrive," says Monakhova, who has an exemplary record of mentoring and giving back. "Research and breakthroughs don't happen in isolation; they're the result of teams and communities of people working together."
