
How Nvidia co-founder plans to turn Hudson Valley into a tech powerhouse greater than Silicon Valley – New York Post

A co-founder of chip maker Nvidia is bankrolling a futuristic quantum computer system at Rensselaer Polytechnic Institute and wants to turn New York's Hudson Valley into a tech powerhouse.

Curtis Priem, 64, donated more than $75 million so that the Albany-area college could obtain the IBM-made computer, the first such device on a university campus anywhere in the world, the Wall Street Journal reported.

The former tech executive and RPI alum said his goal is to turn the area around the school, based in Troy, into a hub of talent and business as quantum computing becomes more mainstream in the years ahead.

"We've renamed Hudson Valley as Quantum Valley," Priem told the Journal. "It's up to New York whether they want to become Silicon State, not just a valley."

The burgeoning technology uses subatomic quantum bits, or qubits, to process data much faster than conventional binary computers. The devices are expected to play a key role in the development of advanced AI systems.

Priem will reportedly fund the whopping $15 million per year required to rent the computer, which is kept in a building that used to be a chapel on RPI's campus.

RPI President Martin Schmidt told the newspaper that the school will begin integrating the device into its curriculum and ensure it is accessible to the student body.

Representatives for IBM and RPI did not immediately return The Post's request for comment.

An electrical engineer by trade, Priem co-founded Nvidia alongside its current CEO Jensen Huang and Chris Malachowsky in 1993. He served as the company's chief technology officer until retiring in 2003.

Priem sold most of his stock in retirement and used the money to start a charitable foundation.

He serves as vice chair of the board at RPI and has reportedly donated hundreds of millions of dollars to the university.

Nvidia has surged in value as various tech firms rely on its computer chips to fuel the race to develop artificial intelligence.

The company's stock has surged 95% to nearly $942 per share since January alone. Nvidia's market cap exceeds $2.3 trillion, making it the world's third-most valuable company behind Microsoft and Apple.

In November 2023, Forbes estimated that Priem would be one of the world's richest people, with a personal fortune of $70 billion, if he hadn't sold off most of his Nvidia shares.


Alice & Bob’s Cat Qubit Research Published in Nature – HPCwire

PARIS and BOSTON, May 23, 2024 Alice & Bob, a global leader in the race for fault-tolerant quantum computing, today announced the publication of its foundational research in Nature, showcasing significant advancements in cat qubit technology.

The study, "Quantum control of a cat-qubit with bit-flip times exceeding ten seconds," realized in collaboration with the QUANTIC Team (Mines Paris - PSL, École Normale Supérieure and INRIA), demonstrates an unprecedented improvement in the stability of superconducting qubits, marking a critical milestone towards useful fault-tolerant quantum computing.

The researchers have significantly extended the bit-flip times from milliseconds to tens of seconds, thousands of times better than any other superconducting qubit type.

Quantum computers face two types of errors: bit-flips and phase-flips. Cat qubits exponentially reduce bit-flips, which are analogous to classical bit flips in digital computing. As a result, the remaining phase-flips can be addressed more efficiently with simpler error correcting codes.
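To make the distinction concrete, here is a minimal, hypothetical sketch (not Alice & Bob's code) using numpy to show how a bit-flip error (Pauli-X) and a phase-flip error (Pauli-Z) act on single-qubit states:

```python
# Illustrative sketch: how bit-flip (Pauli-X) and phase-flip (Pauli-Z)
# errors act on a single-qubit state vector.
import numpy as np

# Computational basis states |0> and |1>
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# Pauli operators: X flips the bit, Z flips the relative phase
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A superposition state (|0> + |1>) / sqrt(2)
plus = (ket0 + ket1) / np.sqrt(2)

print("bit-flip on |0>  :", X @ ket0)   # -> |1>, the analogue of a classical bit flip
print("phase-flip on |0>:", Z @ ket0)   # -> |0>, no visible effect on a basis state
print("phase-flip on |+>:", Z @ plus)   # -> (|0> - |1>)/sqrt(2), the sign of the superposition flips
```

A cat qubit suppresses the first kind of error exponentially, leaving only phase-flips for the error-correcting code to handle.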

The researchers used Alice & Bob's Boson 3 chipset for this record-breaking result, which features a cat qubit design named TomCat. TomCat employs an efficient quantum tomography (measurement) protocol that allows for the control of quantum states without the use of a transmon, a common circuit used by many quantum companies but one of the major sources of bit-flips for cat qubits. This design also minimizes the footprint of the qubit on the chip, removing drivelines, cables, and instruments, making this stable qubit scalable. Recently, Alice & Bob made publicly available their new Boson 4 chipset, which reaches over 7 minutes of bit-flip lifetime. The results from this Nature publication can therefore be reproduced by users on Boson 4 over Google Cloud.

Although Alice & Bob's latest Boson chips are getting closer to the company's bit-flip protection targets, Alice & Bob plans to further advance its technology. The next iterations will focus on boosting the cat qubit phase-flip time and readout fidelity to reach the requirements of its latest architecture for delivering a 100-logical-qubit quantum computer.

Key advances highlighted in the research include:

About Alice & Bob

Alice & Bob is a quantum computing company based in Paris and Boston whose goal is to create the first universal, fault-tolerant quantum computer. Founded in 2020, Alice & Bob has already raised €30 million in funding, hired over 95 employees and demonstrated experimental results surpassing those of technology giants such as Google or IBM. Alice & Bob specializes in cat qubits, a pioneering technology developed by the company's founders and later adopted by Amazon. Demonstrating the power of its cat architecture, Alice & Bob recently showed that it could reduce the hardware requirements for building a useful large-scale quantum computer by up to 200 times compared with competing approaches. Alice & Bob's cat qubit is available for anyone to test through cloud access.

Source: Alice & Bob


NIST quantum-resistant algorithms to be published within weeks, top White House advisor says – The Record from Recorded Future News

Update, May 24: Includes correction from NIST about the number of algorithms to be released.

The U.S. National Institute of Standards and Technology (NIST) will release post-quantum cryptographic algorithms in the next few weeks, a senior White House official said on Monday.

Anne Neuberger, the White House's top cyber advisor, told an audience at the Royal United Services Institute (RUSI) in London that the release of the algorithms was a "momentous moment," as they marked a major step in the transition to the next generation of cryptography.

The transition is being made in apprehension of what is called a cryptographically relevant quantum computer (CRQC), a device theoretically capable of breaking the encryption that's at the root of protecting both corporate and national security secrets, said Neuberger. NIST made a preliminary announcement of the algorithms in 2022.

Following publication, a spokesperson for NIST told Recorded Future News it was planning to release three finalized algorithms this summer and not four, as Neuberger had said in London.

Conrad Prince, a former official at GCHQ and now a distinguished fellow at RUSI, told Neuberger that during his previous career there had consistently been a concern about hostile states having the capability to decrypt the plaintext of secure messages, although that capability had been estimated as roughly a decade away for the last 20 years.

Neuberger said the U.S. intelligence community's estimate is similar, the early 2030s, for when a CRQC would be operational. But the time-frame is relevant, said the White House advisor, because there is national security data collected today that, even if decrypted eight years from now, can still be damaging.

Britain's NCSC has warned that contemporary threat actors could be collecting and storing intelligence data today for decryption at some point in the future.

"Given the cost of storing vast amounts of old data for decades, such an attack is only likely to be worthwhile for very high-value information," stated the NCSC. "As such, the possibility of a CRQC existing at some point in the next decade is a very relevant threat right now."

Neuberger added: "Certainly there's some data that's time sensitive, you know, a ship that looks to be transporting weapons to a sanctioned country, probably in eight years we don't care about that anymore."

Publishing the new NIST algorithms is a protection against adversaries collecting the most sensitive kinds of data today, Neuberger added.

A spokesperson for NIST told Recorded Future News: "The plan is to release the algorithms this summer. We don't have anything more specific to offer at this time."

But publishing the algorithms is not the last step in moving to a quantum-resistant computing world. The NCSC has warned it is actually just the second step in what will be a very complicated undertaking.

Even if any one of the algorithms proposed by NIST achieves universal acceptance as something that is unbreakable by a quantum computer, it would not be a simple matter of just swapping those algorithms in for the old-fashioned ones.

Part of the challenge is that most systems that currently depend on public-key cryptography for their security are not necessarily capable of running the resource-heavy software used in post-quantum cryptography.

Ultimately, the security of public-key cryptographic systems relies on the mathematical difficulty of factoring very large numbers into their prime factors, something that traditional computers find prohibitively difficult.

However, research by American mathematician Peter Shor, published in 1994, proposed an algorithm that could be run on a quantum computer for finding these prime factors with far more ease, potentially undermining some of the key assumptions about what makes public-key cryptography secure.
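As a rough illustration of the classical difficulty (this is not Shor's algorithm, just a toy sketch), the snippet below factors a small semiprime by trial division; the search space grows with the square root of N, which is what puts the enormous moduli used in real public-key cryptography out of reach for conventional computers:

```python
# Toy illustration: classical trial division must, in the worst case, test on
# the order of sqrt(N) candidate divisors, which grows exponentially in the
# number of digits of N. Shor's algorithm on a quantum computer would find
# the factors in polynomial time instead.
def trial_division(n: int) -> tuple[int, int]:
    """Return a non-trivial factor pair (p, q) of n, assuming one exists."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime or 1; no non-trivial factors")

# A small semiprime is factored instantly; RSA moduli are hundreds of digits
# long, which puts this kind of brute-force search far beyond classical reach.
print(trial_division(15))           # (3, 5)
print(trial_division(1009 * 1013))  # (1009, 1013)
```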

The good news, according to the NCSC, is that while advances in quantum computing are continuing to be made, "the machines that exist today are still limited, and suffer from relatively high error rates in each operation they perform," the agency stated.

But the NCSC warned that in the future, "it is possible that error rates can be lowered such that a large, general-purpose quantum computer could exist, but it is impossible to predict when this may happen."


Alexander Martin is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.


Exploring new frontiers with Fujitsu’s quantum computing research and development – Fujitsu

Fujitsu and RIKEN have already successfully developed a 64-qubit superconducting quantum computer at the RIKEN-RQC-Fujitsu Collaboration Center, which was jointly established by the two organizations (*1). Our interviewee, researcher Shingo Tokunaga, is currently participating in a joint research project with RIKEN. He majored in electronic engineering at university and worked on microwave-related research topics. After joining Fujitsu, he worked in a variety of software fields, including network firmware development as well as platform development for communication robots. Currently, he is applying his past experience in the Quantum Hardware Team at the Quantum Laboratory to embark on new challenges.

In what fields do you think quantum computing can be applied?

Shingo: Quantum computing has many potential applications, such as finance and healthcare, but especially in quantum chemistry calculations used in drug development. If we can use it for these calculations, we can realize efficient and high-precision simulations in a short period of time. Complex calculations that traditionally take a long time to solve on conventional computers are expected to be solved quickly by quantum computers. One such example is finding solutions for combinatorial optimization problems such as molecular structure patterns. The spread of the novel coronavirus has made the development of vaccines and therapeutics urgent, and in such situations where rapid responses are needed, I believe the time will come when quantum computers can be utilized.

Fujitsu is collaborating with world-leading research institutions to advance research and development in all technology areas, from quantum devices to foundational software and applications, with the aim of realizing practical quantum computers. Additionally, we are advancing the development of hybrid technologies (*2) that combine quantum computers with high-performance computing, represented by the supercomputer Fugaku, which will be necessary for large-scale calculations until the full practicality of quantum computers is achieved.

What themes are you researching? What are your challenges and goals?

Shingo: One of the achievements of our collaborative research with RIKEN is the construction of a 64-qubit superconducting quantum computer. Superconducting quantum computers operate by manipulating quantum bits on quantum chips cooled to under 20 mK using ultra-low-temperature refrigerators, driving them with microwave signals of around 8 GHz, and reading out the state of the bits. However, since both bit operations and readouts are analog operations, errors are inherent. Our goal is to achieve higher fidelity in the control and readout of quantum bits, providing an environment where quantum algorithms can be executed with high computational accuracy, ultimately solving our customers' challenges.

What role do you play in the team?

Shingo: The Quantum Hardware Team consists of many members responsible for tasks such as designing quantum chips, improving semiconductor manufacturing processes, designing and constructing components inside refrigerators, as well as designing and constructing control devices outside refrigerators. I am responsible for building control devices and controlling quantum bits. While much attention is often given to the development of the main body of quantum computers or quantum chips, by controlling and reading quantum bits with high precision, we can deliver the results of the development team to users, and that's my role.

How do you control the quantum bits, and in what sequence or process?

Shingo: The first step is the basic evaluation of the quantum chip, followed by calibration for controlling the quantum bits. First, we receive the quantum chip from the manufacturing team and perform performance measurements. To evaluate the chip, it is placed inside the refrigerator, and after closing the cover of the refrigerator, which is multilayered for insulation, the inside is vacuumed and cooling begins. It usually takes about two days to cool from room temperature to 20 mK. In the basic evaluation, we confirm parameters such as the resonance frequency of the quantum bits and the coherence time called T1 (the time it takes for a qubit to relax back to its initial state). Then, we perform calibration for quantum bit operations and readouts. Bit operations and readouts may not always yield the desired results, because there are interactions between the bits. The bit to be controlled may be affected by the neighboring bits, so it is necessary to control based on the overall situation of the bits. Therefore, we investigate why the results did not meet expectations, consult with researchers at RIKEN, and make further efforts to minimize errors.
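As an aside, the following hypothetical sketch (not Fujitsu's actual tooling; the numbers are invented) shows how a T1 value of the kind described above could be estimated by fitting an exponential decay to excited-state populations measured at increasing delays:

```python
# Hypothetical sketch: estimating a qubit's T1 relaxation time by fitting
# an exponential decay to measured excited-state populations.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, t1, a, b):
    """Exponential decay model: P(t) = a * exp(-t / T1) + b."""
    return a * np.exp(-t / t1) + b

# Simulated measurement record: delay times (microseconds) and the fraction
# of shots in which the qubit was still found in the excited state.
delays_us = np.linspace(0, 300, 31)
true_t1 = 80.0
rng = np.random.default_rng(0)
populations = 0.95 * np.exp(-delays_us / true_t1) + 0.03
populations += rng.normal(0, 0.01, size=delays_us.size)  # readout noise

(t1_fit, a_fit, b_fit), _ = curve_fit(decay, delays_us, populations, p0=[50, 1, 0])
print(f"Estimated T1 = {t1_fit:.1f} us")
```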

How do you approach the challenge of insufficient accuracy in bit operations and readouts?

Shingo: There are various approaches we can try, such as improving semiconductor processes, implementing noise reduction measures in control electronics, and changing the method of microwave signal irradiation. Our team conducts studies on the waveform, intensity, phase, and irradiation timing of the microwave signals necessary to improve the accuracy of quantum bit control. Initially, we try existing methods described in papers on our quantum chip and then work to improve accuracy further from there.
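For illustration, one commonly published waveform of the kind referred to here is a Gaussian envelope with a DRAG correction. The sketch below (assumed parameters, not Fujitsu's calibration values) generates such a pulse envelope:

```python
# Minimal sketch of a Gaussian drive envelope with a DRAG correction, a
# common waveform for reducing leakage when driving superconducting qubits
# with microwave pulses. Parameters here are placeholders.
import numpy as np

def drag_envelope(duration_ns=40, sigma_ns=10, amplitude=0.5, beta=0.2, dt_ns=0.5):
    """Return time samples and the complex I/Q envelope of a DRAG pulse."""
    t = np.arange(0, duration_ns, dt_ns)
    center = duration_ns / 2
    gauss = amplitude * np.exp(-((t - center) ** 2) / (2 * sigma_ns ** 2))
    # DRAG: add the scaled derivative of the Gaussian on the quadrature channel
    d_gauss = -(t - center) / sigma_ns ** 2 * gauss
    return t, gauss + 1j * beta * d_gauss

t, envelope = drag_envelope()
print(f"{len(t)} samples, peak amplitude {abs(envelope).max():.3f}")
```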

What other areas do you focus on or innovate in, outside of your main responsibilities? Can you also explain the reasons for this?

Shingo: I am actively advancing tasks that contribute to further improving the performance of quantum computer hardware. The performance of the created quantum chip can only be evaluated by cooling it in a refrigerator and conducting measurements. Based on these results, it is important to determine what is needed to improve the performance of quantum computer hardware and provide feedback to the quantum chip design and manufacturing teams.

For Fujitsu, the development of quantum computers marks a first-time challenge. Do you have any concerns?

Shingo: I believe that venturing into unknown territories is precisely where the value of a challenge lies, presenting opportunities for new discoveries and growth. Fujitsu is tackling quantum computer research and development by combining various technologies it has cultivated over the years. I aim to address challenges one by one and work towards achieving stable operation. Once stable operation is achieved, I hope to conduct research on new control methods.

What kind of activities are you undertaking to accelerate your research on quantum computers?

Shingo: Quantum computing is an unknown field even for me, so I am advancing development while consulting with researchers at RIKEN, our collaborative research partner. I aim to build a relationship of give and take, so I actively strive to cooperate wherever I can contribute to RIKEN's research.

What is your outlook for future research?

Shingo: Ultimately, our goal is to utilize quantum computers to solve societal issues, but quantum computing is still in its early stages of development. I believe it is the responsibility of our Quantum Hardware Team to urgently provide application development teams with qubits and quantum gates, in large numbers and with high fidelity. In particular, improving the fidelity of two-qubit gate operations is a challenge in the field of control, and I aim to work on it. Additionally, I want to explore the development of a quantum platform that allows customers to maximize their utilization of quantum computers.

We use technology to make people's lives happier. As a result of this belief, we have created various technologies and contributed to the development of society and our customers. At the Fujitsu Technology Hall located in the Fujitsu Technology Park, you can visit mock-ups of Fujitsu's quantum computers, as well as experience the latest technologies such as AI.

Mock-up of a quantum computer exhibited at the Fujitsu Technology Hall


Glimpse of next-generation internet – Harvard Office of Technology Development

May 20th, 2024

By Anne Manning, Harvard Staff Writer Published in the Harvard Gazette

An up-close photo of the diamond silicon-vacancy center.

It's one thing to dream up a next-generation quantum internet capable of sending highly complex, hacker-proof information around the world at ultra-fast speeds. It's quite another to physically show it's possible.

That's exactly what Harvard physicists have done, using existing Boston-area telecommunication fiber, in a demonstration of the world's longest fiber distance between two quantum memory nodes. Think of it as a simple, closed internet carrying a signal encoded not by classical bits like the existing internet, but by perfectly secure, individual particles of light.

The groundbreaking work, published in Nature, was led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in the Department of Physics, in collaboration with Harvard professors Marko Lončar and Hongkun Park, who are all members of the Harvard Quantum Initiative. The Nature work was carried out with researchers at Amazon Web Services.

The Harvard team established the practical makings of the first quantum internet by entangling two quantum memory nodes separated by an optical fiber link deployed over a roughly 22-mile loop through Cambridge, Somerville, Watertown, and Boston. The two nodes were located a floor apart in Harvard's Laboratory for Integrated Science and Engineering.


Quantum memory, analogous to classical computer memory, is an important component of a quantum computing future because it allows for complex network operations and information storage and retrieval. While other quantum networks have been created in the past, the Harvard team's is the longest fiber network between devices that can store, process, and move information.

Each node is a very small quantum computer, made out of a sliver of diamond that has a defect in its atomic structure called a silicon-vacancy center. Inside the diamond, carved structures smaller than a hundredth the width of a human hair enhance the interaction between the silicon-vacancy center and light.

The silicon-vacancy center contains two qubits, or bits of quantum information: one in the form of an electron spin used for communication, and the other in a longer-lived nuclear spin used as a memory qubit to store entanglement, the quantum-mechanical property that allows information to be perfectly correlated across any distance.

(In classical computing, information is stored and transmitted as a series of discrete binary signals, say on/off, that form a kind of decision tree. Quantum computing is more fluid, as information can exist in stages between on and off, and is stored and transferred as shifting patterns of particle movement across two entangled points.)
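To illustrate the correlation that entanglement provides, here is a minimal numpy sketch (not the Harvard group's code) that samples measurements of a two-qubit Bell state; the two simulated nodes always return matching bits:

```python
# Illustrative sketch: a two-qubit Bell state exhibits the "perfectly
# correlated across any distance" behavior described above -- measuring
# both qubits always yields matching results.
import numpy as np

rng = np.random.default_rng(1)

# Bell state (|00> + |11>) / sqrt(2) in the 4-dimensional two-qubit basis
bell = np.zeros(4, dtype=complex)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)

probabilities = np.abs(bell) ** 2           # Born rule
outcomes = rng.choice(4, size=10, p=probabilities)

for o in outcomes:
    qubit_a, qubit_b = (o >> 1) & 1, o & 1  # unpack the two measured bits
    print(f"node A measured {qubit_a}, node B measured {qubit_b}")  # always equal
```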

Map showing path of two-node quantum network through Boston and Cambridge. Credit: Can Knaut via OpenStreetMap

Using silicon-vacancy centers as quantum memory devices for single photons has been a multiyear research program at Harvard. The technology solves a major problem in the theorized quantum internet: signal loss that can't be boosted in traditional ways.

A quantum network cannot use standard optical-fiber signal repeaters because simple copying of quantum information as discrete bits is impossible, which makes the information secure but also very hard to transport over long distances.

Silicon-vacancy-center-based network nodes can catch, store, and entangle bits of quantum information while correcting for signal loss. After cooling the nodes to close to absolute zero, light is sent through the first node and, by nature of the silicon-vacancy center's atomic structure, becomes entangled with it, and so is able to carry the information.

"Since the light is already entangled with the first node, it can transfer this entanglement to the second node," explained first author Can Knaut, a Kenneth C. Griffin Graduate School of Arts and Sciences student in Lukin's lab. "We call this photon-mediated entanglement."

Over the last several years, the researchers have leased optical fiber from a company in Boston to run their experiments, fitting their demonstration network on top of the existing fiber to indicate that creating a quantum internet with similar network lines would be possible.

"Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers," Lukin said.

A two-node quantum network is only the beginning. The researchers are working diligently to extend the performance of their network by adding nodes and experimenting with more networking protocols.

The paper is titled "Entanglement of Nanophotonic Quantum Memory Nodes in a Telecom Network." The work was supported by the AWS Center for Quantum Networking's research alliance with the Harvard Quantum Initiative, the National Science Foundation, the Center for Ultracold Atoms (an NSF Physics Frontiers Center), the Center for Quantum Networks (an NSF Engineering Research Center), the Air Force Office of Scientific Research, and other sources.

Harvard Office of Technology Development enabled the strategic alliance between Harvard University and Amazon Web Services (AWS) to advance fundamental research and innovation in quantum networking.


Press Contact: Kirsten Mabry | (617) 495-4157


DeepMind’s AI program AlphaFold3 can predict the structure of every protein in the universe and show how they … – Livescience.com

DeepMind has unveiled the third version of its artificial intelligence (AI)-powered structural biology software, AlphaFold, which models how proteins fold.

Structural biology is the study of the molecular basis of biological materials, including proteins and nucleic acids, and aims to reveal how they are structured, how they work, and how they interact.

AlphaFold3 helps scientists more accurately predict how proteins (large molecules that play a critical role in all life forms, from plants and animals to human cells) interact with other biological molecules, including DNA and RNA. Doing so will enable scientists to truly understand life's processes, DeepMind representatives wrote in a blog post.

By comparison, its predecessors, AlphaFold and AlphaFold2, could only predict the shapes that proteins fold into. That was still a major scientific breakthrough at the time.

AlphaFold3's predictions could help scientists develop bio-renewable materials, crops with greater resistance, new drugs and more, the research team wrote in a study published May 8 in the journal Nature.


Given a list of molecules, the AI program can show how they fit together. It does this not only for large molecules like proteins, DNA, and RNA but also for small molecules known as ligands, which bind to receptors on large proteins like a key fitting into a lock.


AlphaFold3 also models how some of these biomolecules (organic molecules produced by living things) are chemically modified. Disruptions in these chemical modifications can play a role in diseases, according to the blog post.

AlphaFold3 can perform these calculations because its underlying machine-learning architecture and training data encompass every type of biomolecule.

The researchers claim that AlphaFold3 is 50% more accurate than current software-based methods of predicting protein structures and their interactions with other molecules.

For example, in drug discovery, Nature reported that AlphaFold3 outperformed two docking programs, which researchers use to model the affinity of small molecules and proteins when they bind together, as well as RoseTTAFold All-Atom, a neural network for predicting biomolecular structures.

Frank Uhlmann, a biochemist at the Francis Crick Institute in London, told Nature that he has been using the tool to predict the structure of proteins that interact with DNA when copying genomes, and that experiments show the predictions are mostly accurate.

However, unlike its predecessors, AlphaFold3 is no longer open source. This means scientists cannot use custom versions of the AI model, or access its code or training data publicly, for their research work.

Scientists looking to use AlphaFold3 for non-commercial research can access it for free via the recently launched AlphaFold Server. They can input their desired molecular sequences and gain predictions within minutes. But they can only perform 20 jobs per day.


Looking ahead to the AI Seoul Summit – Google DeepMind

How summits in Seoul, France and beyond can galvanize international cooperation on frontier AI safety

Last year, the UK Government hosted the first major global Summit on frontier AI safety at Bletchley Park. It focused the world's attention on rapid progress at the frontier of AI development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration; new AI Safety Institutes; and the International Scientific Report on Advanced AI Safety.

Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week's AI Seoul Summit. We share below some thoughts on how this summit and future ones can drive progress towards a common, global approach to frontier AI safety.

Since Bletchley, there has been strong innovation and progress across the entire field, including from Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of all life's molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We've also been working to improve how our models perceive, reason and interact, and recently shared our progress in building the future of AI assistants with Project Astra.

This progress on AI capabilities promises to improve many people's lives, but also raises novel questions that need to be tackled collaboratively in a number of key safety domains. Google DeepMind is working to identify and address these challenges through pioneering safety research. In the past few months alone, we've shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cyber-security, self-proliferation, and self-reasoning. We also released an in-depth exploration into aligning future advanced AI assistants with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.

This work is driven by our conviction that we need to innovate on safety and governance as fast as we innovate on capabilities - and that both things must be done in tandem, continuously informing and strengthening each other.

Maximizing the benefits from advanced AI systems requires building international consensus on critical frontier safety issues, including anticipating and preparing for new risks beyond those posed by present day models. However, given the high degree of uncertainty about these potential future risks, there is clear demand from policymakers for an independent, scientifically-grounded view.

That's why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit - and we look forward to submitting evidence from our research later this year. Over time, this type of effort could become a central input to the summit process and, if successful, we believe it should be given a more permanent status, loosely modeled on the function of the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.

We believe these AI summits can provide a regular forum dedicated to building international consensus and a common, coordinated approach to governance. Keeping a unique focus on frontier safety will also ensure these convenings are complementary and not duplicative of other international governance efforts.

Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior and impact of an AI system, and are an important input for risk assessments and designing appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.

This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with AI Safety Institutes in the US and UK and other stakeholders on best practices for evaluating frontier models. The AI summits could help scale this work internationally and help avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It's critical that we avoid fragmentation that could inadvertently harm safety or innovation.

The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We think there is an opportunity over time to build on this towards a common, global approach. An initial priority from the Seoul Summit could be to agree a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI evaluation benchmarks and approaches.

It will also be important to develop shared frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.

Many of the potential risks that could arise from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit, and look ahead to future summits in France and beyond, we're excited for the opportunity to advance global cooperation on frontier AI safety. It's our hope that these summits will provide a dedicated forum for progress towards a common, global approach. Getting this right is a critical step towards unlocking the tremendous benefits of AI for society.


Bolstering environmental data science with equity-centered approaches – EurekAlert

Image: Graphical abstract. Credit: Joe F. Bozeman III

A paradigm shift towards integrating socioecological equity into environmental data science and machine learning (ML) is advocated in a new perspective article (DOI: 10.1007/s11783-024-1825-2) published in Frontiers of Environmental Science & Engineering. Authored by Joe F. Bozeman III from the Georgia Institute of Technology, the paper emphasizes the importance of understanding and addressing socioecological inequity to enhance the integrity of environmental data science.

This study introduces and validates the Systemic Equity Framework and the Wells-Du Bois Protocol, essential tools for integrating equity in environmental data science and machine learning. These methodologies extend beyond traditional approaches by emphasizing socioecological impacts alongside technical accuracy. The Systemic Equity Framework focuses on the concurrent consideration of distributive, procedural, and recognitional equity, ensuring fair benefits for all communities, particularly the marginalized. It encourages researchers to embed equity throughout the project lifecycle, from inception to implementation. The Wells-Du Bois Protocol offers a structured method to assess and mitigate biases in datasets and algorithms, guiding researchers to critically evaluate potential societal bias reinforcement in their work, which could lead to skewed outcomes.

Highlights

Socioecological inequity must be understood to improve environmental data science.

The Systemic Equity Framework and Wells-Du Bois Protocol mitigate inequity.

Addressing irreproducibility in machine learning is vital for bolstering integrity.

Future directions include policy enforcement and systematic programming.

"Our work is not just about improving technology but ensuring it serves everyone justly," said Joe F. Bozeman III, lead researcher and professor at Georgia Institute of Technology. "Incorporating an equity lens into environmental data science is crucial for the integrity and relevance of our research in real-world settings."

This pioneering research not only highlights existing challenges in environmental data science and machine learning but also offers practical solutions to overcome them. It sets a new standard for conducting research that is just, equitable, and inclusive, thereby paving the way for more responsible and impactful environmental science practices.

Journal: Frontiers of Environmental Science & Engineering

Method of Research: Experimental study

Subject of Research: Not applicable

Article Title: Bolstering integrity in environmental data science and machine learning requires understanding socioecological inequity

Article Publication Date: 8-Feb-2024



Going global as Mason Korea’s first computational and data sciences graduate – George Mason University

Traveling abroad has been part of Jimin Jeon's life for as long as she can remember. She traveled with her mom during every school vacation, which allowed her to visit 23 countries by the time she was a college student. Being exposed to different cultures from a young age helped her develop a desire to pursue her college education abroad. That brought her to Mason Korea after 12 years of Korean public school education.

"While the thought of studying abroad was exciting, I felt burdened by the language barrier to study abroad in the U.S. right after graduating high school," Jeon said. "Mason Korea was an alternative to ease that transition by improving my English skills in a more familiar setting in South Korea."

Jeon was part of the first cohort in the newly established Computational and Data Sciences Department at Mason Korea. Although her frequent travels around the world prompted her to major in global affairs, she had her mind set on the world of big data since high school. Thus, after her freshman year, once the new major was opened, she made the jump to the STEM field.

Jeon found she had direct opportunities to engage in data analysis. Her favorite part of Mason Korea was the Career Development Center, which allowed students like her to be exposed to opportunities in data analytics to gain technical hands-on experiences. Her first work experience was through the center as a data science intern at a real estate AI valuation startup during her junior year.

"It was a special opportunity to see how the knowledge about programming languages I acquired in the classroom could be applied in the real workforce and to identify the areas that I need to continue to improve to be a more competent data scientist," said Jeon.

Transitioning to the Fairfax Campus in the fall semester of 2023, Jeon stayed true to her goal of diversifying her experiences. Her last semester at George Mason included working as a teaching assistant for the Computational and Data Sciences Department in the College of Science, performing data cleaning for an on-campus project, and helping students practice their Korean through the language exchange program. She took advantage of the language environment so that she could build her English skills.

Jeon is now a proud graduate in computational and data sciences, one of the few who enrolled in the major in 2020. She is excited about the job opportunities she has and wants to encourage all those who have just closed their four-year journey.

"For students just like myself, who have spent their whole life in the Korean education system, going to Mason Korea alone is a challenge," she said. "Learning about various topics at a more sophisticated level in a language that you are not familiar with was also not an easy task for me. Yet, the four-year voyage of diverse experiences and success itself shows that I can take on any challenge at any point in my life."


Aristotle, AI and the Data Scientist | The ILR School – Cornell University | ILR School

Nearly two and a half millennia since his time, Aristotle and his "virtue ethics" (in short, to live a life of good character) are every bit as relevant to budding statisticians as the technical skills they learn to build AI models, according to Elizabeth Karns, senior lecturer of statistics and data science at Cornell Bowers CIS and at the ILR School.

An epidemiologist and lawyer, Karns launched Integrated Ethics in Data Science (STSCI 3600), a seven-week course offered twice each spring semester, several years ago in response to what she viewed as a disconnect between statisticians and the high-powered, high-consequence statistical models they were being asked to build.

"I started thinking more about algorithms and how we are not preparing students sufficiently to be confronted with workplace pressures to just get the model done: put in the data, don't question it, and just use it," she said.

The problem, as she sees it, is that these models are largely unregulated, have no governing body, and thus skirt rigorous scientific testing and evaluation. Lacking such oversight, ethics and fairness become a matter of discretion on the part of the statisticians developing the models; personal values and virtues are brought into the equation, and this is where Aristotle's wisdom proves vital, she said.

"At this point in our lack of regulation, we need to depend on ethical people," Karns said. "I want students to learn to pause and reflect before making decisions, and to ask: How well does this align with my values? Is this a situation that could lead to problems for the company or users? Is this something I want to be associated with? That's the core of the class."

For the course, Karns, with the help of Cornell's Center for Teaching Innovation (CTI), developed an immersive video, "Nobody's Fault: An Interactive Experience in Data Science Practice," which challenges students to consider a moral conflict brought about by a bad model.

"I tell my students that we're going to be in situations in this class where there's not a clear right or wrong answer," she said. "And that's the point: to struggle with that ambiguity now and get some comfort in that gray space. That way, when they get out into the workplace, they can be more effective."

To read more about the work Bowers CIS is doing to develop responsible AI, click here. Louis DiPietro is a public relations and content specialist for Cornell Bowers CIS.
