
Call for Immediate Review of AI Safety Standards Following Research on Large Language Models – Cision News

Recent findings by Anthropic, an AI safety start-up, have highlighted the risks associated with large language models (LLMs), prompting calls for a swift review of AI safety standards.

Valentin Rusu, lead machine learning engineer at Heimdal Security and holder of a Ph.D. in AI, insists these findings demand immediate attention.

"It undermines the foundation of trust the AI industry is built on and raises questions about the responsibility of AI developers," said Rusu.

The Anthropic team found that LLMs could become "sleeper agents," evading safety measures designed to prevent negative behaviors.

AI systems that act deceptively to trick people pose a problem for current safety training methods.

"Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety," the authors noted, emphasizing the need for a revised approach to AI safety training.

Rusu argues for smarter, forward-thinking safety protocols that anticipate and neutralize emerging threats within AI technologies.

"The AI community must push for more sophisticated and nuanced safety mechanisms that are not just reactive but predictive," he said.

"Current methodologies, while impressive, are not foolproof. There is a pressing need to forge a more dynamic and intelligent approach to safety."

The task of ensuring AI safety is widely distributed, lacking a single governing body.

While organizations like the National Institute of Standards and Technology in the U.S., the UK's National Cyber Security Centre, and the Cybersecurity and Infrastructure Security Agency are instrumental in setting safety guidelines, the primary responsibility falls to the creators and developers of AI systems.

They hold the expertise and capacity to embed safety from the outset.

In response to growing safety concerns, collaborative efforts are being made across the board.

From the OWASP Foundation's work on identifying AI vulnerabilities to the establishment of the 'AI Safety Institute Consortium' by over 200 members, including tech giants and research bodies, there is a concerted push towards creating a safer AI ecosystem.

Ross Lazerowitz from Mirage Security comments on the precarious state of AI security, likening it to the "wild west" and underscoring the importance of choosing trustworthy AI models and data sources.

This sentiment is echoed by Rusu: "We need to pivot so AI serves, rather than betrays, human progress."

He also notes the unique challenges AI presents to cybersecurity efforts: "Ensuring AI systems, particularly neural networks, are robust and reliable remains paramount."

The concerns raised by the recent study on LLMs show the urgent need for a comprehensive strategy toward AI safety, calling on industry leaders and policymakers to step up their efforts in protecting the future of AI development.

For more on Valentin Rusu's take on LLM risks and the imperative for enhanced AI safety measures, read the full article here: https://heimdalsecurity.com/blog/llms-can-turn-nasty-machine-learning/.

Maria Madalina Popovici, Media Relations Manager

Email: mpo@heimdalsecurity.com | Phone: +40 746 923 883

About Valentin Rusu

Valentin Rusu is the lead Machine Learning Research Engineer at Heimdal, holding a Ph.D. in Artificial Intelligence. His expertise in machine learning and computer vision significantly contributes to advancing cybersecurity measures.

About Heimdal

Founded in Copenhagen, Denmark, in 2014, Heimdal empowers CISOs, Security Teams, and IT admins to enhance their SecOps, reduce alert fatigue, and be proactive using one seamless command and control platform.

Heimdal's award-winning line-up of more than 10 fully integrated cybersecurity solutions spans the entire IT estate, enabling organizations to be proactive, whether remotely or onsite.

This is why their range of products and managed services offers a solution for every challenge, whether at the endpoint or network level, in vulnerability management, privileged access, implementing Zero Trust, thwarting ransomware, preventing BECs, and much more.


See original here:
Call for Immediate Review of AI Safety Standards Following Research on Large Language Models - Cision News

Read More..

ACFE report says generative AI and biometrics key to fighting fraud, now and in future – Biometric Update

It could not be put more plainly: in 2024, using technology as part of an anti-fraud program is a necessity. So says the 2024 Anti-Fraud Technology Benchmarking Report, newly released by the Association of Certified Fraud Examiners (ACFE) in partnership with SAS. Recent attention paid to generative AI has often cast it as a threat: enabler of sophisticated biometric fraud, crippler of free and fair elections, toxic solvent of reality. But the 2024 benchmarking report shows that a huge majority of anti-fraud professionals see AI in a different light: as an integral part of their future operations.

According to the report, which draws on survey data from 1,187 ACFE members, 83 percent of organizations expect to implement generative AI in their anti-fraud programs over the next two years. That means use of GenAI and machine learning is expected to triple in that time and that interest is the highest it has been since the ACFE survey began.

Furthermore, the use of biometrics and robotics in anti-fraud measures is on the rise; biometrics have seen a 14-percentage-point rise, from 26 percent of organizations implementing them in 2019 to 40 percent in 2024. "The emerging technology currently used by the most organizations is physical biometrics, which is used to identify individuals based on physical attributes such as fingerprints and facial or vocal features," reads the report. Two in five organizations (40 percent) currently use physical biometrics as part of their anti-fraud program, and another 17 percent expect to adopt the technology in the next two years.

Data analysis and threat detection are among the top priorities driving technological uptake of biometrics, AI and other digital identity tools. "Automated red flags, machine learning, and predictive analytics can be useful these days due to the high volume of cyberattacks and the increased use of technology by criminals," says an anonymous survey respondent.

While only 20 percent of organizations use behavioral biometrics, that number is still a stark contrast to what respondents say about other evolving technologies like blockchain and mixed reality. More than half of respondents, says the report, indicated that they do not expect their organizations to ever use blockchain/distributed ledger technology or virtual/augmented reality as part of their anti-fraud programs.

While much in the report is familiar territory (organizational silos, budget restrictions, increasing security risks), the prevalence of AI is new, reflecting how quickly it has become a top-level priority for many to address. However, the report hits a mild cautionary note in pointing out that accuracy matters, and that 85 percent of organizations consider the accuracy of the results achieved by generative AI a very important or important factor in the decision to adopt it. Staffing also matters: 77 percent of organizations still consider in-house skills related to the technology an important or very important factor in determining whether to implement it.

SAS, which co-sponsored the report, offers a complementary online data dashboard, providing in-depth interactive analysis of anti-fraud trends across industries and regions.

behavioral biometrics | biometrics | cybersecurity | enterprise | fraud prevention | generative AI

Originally posted here:
ACFE report says generative AI and biometrics key to fighting fraud, now and in future - Biometric Update

Read More..

Artificial intelligence turbocharges the pace of discovery and processes – bestmag

As artificial intelligence (AI) continues its rapid advance in all areas of life, the battery industry and scientists are embracing its power for faster testing and processing at mind-boggling speeds previously unimagined. Andrew Draper reports.

Richard Ahlfeld, founder of German start-up Monolith, said that by embracing AI and machine learning principles, engineers can more easily navigate the intricate challenge of understanding and validating the intractable physics of complex products. "This leads to streamlined development, optimised designs, and faster time to market," he said.

Monolith provides battery and other engineering testing solutions to OEMs and others, using AI. He said: "Using charging voltage and temperature curves from early cycles that are yet to exhibit symptoms of battery failure, we apply data-driven models to both predict and classify the sample data by health condition, based on the observational, empirical, physical, and statistical understanding of the multiscale systems."
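
To make the idea concrete, the sketch below shows the general shape of such an early-cycle health classifier: features summarising the first charge cycles feed a standard classifier that labels cells by health condition. The feature names and synthetic data are assumptions for illustration only; this is not Monolith's model.

    # Illustrative sketch only: a generic early-cycle battery health classifier of the
    # kind described above. Features and data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical per-cell features from early cycles: mean charging voltage,
    # voltage-curve spread, temperature rise, early capacity-fade slope.
    n_cells = 500
    X = rng.normal(size=(n_cells, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_cells) > 0).astype(int)  # 1 = at risk

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))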

The company offers a product called Next Test Recommender (NTR). It gives the example of an engineer trying to configure a fan to provide optimal cooling for all driving conditions. They had a test plan for this highly complex application that included a series of 129 tests.

When this test plan was fed into NTR, it returned a ranked list of which tests should be run first. Of the 129 tests, NTR recommended that the last test, number 129, should actually be among the first five to run, and that 60 tests would be sufficient to characterise the full performance of the fan. That would give a 53% reduction in testing.
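
The article does not describe how NTR ranks tests, so the snippet below only illustrates one common way such a recommendation can work: rank the remaining tests by how much the current model disagrees about them, an active-learning heuristic. The data and the ensemble-disagreement criterion are assumptions, not the NTR algorithm.

    # Active-learning sketch: rank candidate tests by ensemble disagreement, a common
    # proxy for how informative a new test would be. Synthetic data; not the actual NTR.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    X_done = rng.uniform(size=(20, 3))                  # operating points already tested
    y_done = X_done @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.05, size=20)
    X_candidates = rng.uniform(size=(109, 3))           # remaining planned tests

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_done, y_done)
    per_tree = np.stack([tree.predict(X_candidates) for tree in model.estimators_])
    uncertainty = per_tree.std(axis=0)                  # disagreement across trees

    ranking = np.argsort(-uncertainty)                  # most informative candidates first
    print("run these candidate tests first:", ranking[:5])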

Ahlfeld said the best well-integrated machine learning models achieve a verified classification accuracy of 96.3% and an average misclassification test error of 7.7%. "Our findings highlight the need for cloud-based artificial intelligence technology tailored to robustly and accurately predict battery failure in real-world applications," he said.

"It's time for engineers and researchers to embrace machine learning as a vital tool in their pursuit of safer, longer-lasting, and more efficient batteries. As battery technology continues to evolve, the synergy between human expertise and machine intelligence will drive us toward a future where energy storage solutions are both powerful and sustainable." Ahlfeld stresses the importance of clear plans for integrating machine learning solutions into existing testing processes.

The US Department of Energy's Pacific Northwest National Laboratory announced in December the establishment of its Center for AI @PNNL to coordinate its research into projects focused on science, security, and energy resilience.

PNNL said that while it has been working on AI for decades, the technology has only begun to surge in the past year with the ready availability of generative AI. This allows anyone to create persuasive content with small amounts of data. Sometimes it is errant, offering false facts, links that do not work, and photos that are made up; "scraped from the internet" is the term used.

At the same time, AI is a vital tool for serious researchers as well as a subject all its own for scientists to create, explore and validate new ideas, it said. AI also presents an exciting opportunity for PNNL scientists to advance a critical area of science and chart the path forward. Court Corley, the laboratory's chief scientist for AI and director of the new centre, said: "The field is moving at light speed, and we need to move quickly to keep PNNL at the frontier."

A priority of the centre is developing ways to keep AI secure and trustworthy, it said. It works with a network of academic and commercial partners: North Carolina State University, University of Washington, Washington State University, Microsoft, Micron, University of Texas at El Paso, Georgia Institute of Technology, Western Washington University, and other national laboratories and organisations. It also partners with the DOE's Office of Science Advanced Scientific Computing Research programme.

Many of the applications of AI research at PNNL are focused on energy resilience and security. Laboratory scientists use AI to improve the operation of the US electrical grid, keeping power flowing to homes and businesses. Others are using machine learning to explore new combinations of compounds that could power the next generation of lithium batteries.

PNNL and Microsoft scientists said a combination of advanced AI with advanced cloud computing is turbocharging the pace of discovery to speeds unimaginable just a few years ago.

They said PNNL scientists are testing a new battery material, a solid electrolyte, that was found in a matter of weeks, not years. They have been using advanced AI and high-performance computing (HPC), a type of cloud-based computing that combines large numbers of computers to solve complex scientific and mathematical tasks.

The new battery material was identified using Microsoft's Azure Quantum Elements cloud service, which winnowed 32 million potential inorganic materials down to 18 promising candidates that could be used in battery development. That took just 80 hours.
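
The screening funnel described here follows a familiar pattern: cheap ML-predicted properties filter an enormous candidate pool, and only a short list goes to the lab. The sketch below is a schematic of that funnel under made-up property names, thresholds and pool size; it is not the Azure Quantum Elements workflow.

    # Schematic screening funnel: filter a large candidate pool on predicted properties,
    # then shortlist the top few for synthesis. All numbers here are placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000                           # stand-in for the tens of millions screened
    stability = rng.normal(size=n)          # e.g. predicted energy above hull (lower is better)
    conductivity = rng.normal(size=n)       # e.g. predicted ionic conductivity (higher is better)

    mask = (stability < -2.5) & (conductivity > 2.5)    # cheap ML-predicted filters
    score = conductivity[mask] - stability[mask]        # simple combined ranking
    shortlist = np.argsort(-score)[:18]
    print(f"{mask.sum()} candidates pass the filters; shortlisting {len(shortlist)} for lab work")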

Once the shortlist was in hand, artisan work stepped in. "One of the first steps was to take solid precursors of the materials and to grind them by hand with a mortar and pestle," said PNNL materials scientist Shannon Lee. She uses a hydraulic press to compact the material into a small pellet. That goes into a vacuum tube and is heated to 450–650°C. It is then transferred to a box to keep it away from oxygen or water, and then ground into a powder for analysis.

Vijay Murugesan, material sciences group lead at PNNL, said a previous research project took several years to solve a problem and design a new material for use in a redox flow battery.

Nathan Baker, product leader for Microsoft, said: "At every step of the simulation where I had to run a quantum chemistry calculation, instead I'm calling the machine learning model. So I still get the insight and the detailed observations that come from running the simulation, but the simulation can be up to half a million times faster."

The bottleneck in the process is gaining access to the supercomputers, which are sometimes shared. Having AI tools in the cloud helps relieve this, as the cloud is always available. The project has now generated a battery material which has been synthesised and turned into a prototype battery for laboratory testing.

Murugesan said the story is not about this particular battery material, but rather the speed at which a new material was identified. It uses both lithium and sodium, as well as some other elements. The value of an alternative to lithium is obvious: it does not rely on mining concentrated in a single country or handful of countries, with the attendant geopolitical issues, and it can be manufactured at volume.

A panel discussion last November among the scientific advisory board of Breathe Battery Technologies, a UK company that makes physics-based battery management software, threw up a lot of questions. Its debate covered advancements in battery control and the rise of data. CTO Dr Yan Zhao said the burst of data and machine learning was super exciting, but it was not yet clear what the goal of leveraging that data should be.

Gregory Plett, Professor of Electrical and Computer Engineering at the University of Colorado, told the panel there is already synergy between physics-based and data-driven modelling. "I think there are already synergies between the two. I mean, the models that we can create now are on a pathway towards predicting (battery) lifetime.

"We're not all the way there yet, but they're already beginning to be useful, and we can already run them so fast that we can generate data faster than we can actually analyse it. So we already need machine learning methods just to analyse all the virtual data that we're generating, which can augment the experimental data we gather in the lab."

A model that describes a battery's life needs to be updated to account for aging and degradation. Getting it right, to ensure the accuracy of things like the remaining charge in an electric vehicle, is complex.

Plett said: "It is a very complicated thing to do well, and I agree that it's one that we need to solve, because if we use a single model to describe the battery through its entire life, it may work very well at the beginning of life, but as the battery ages, that model will not continue to describe its behaviours very well.

"And we may end up with a scenario of someone being broken down on the side of the road because the model failed. So somehow, we have to have a model that describes the battery cell well at its present state of life."

He said there are a couple of methods to do that. "One is if we can somehow use the data that we're continuously measuring to adjust the parameters of the model; that could provide a very satisfactory answer. Another is if we have a bank of battery models that are representative or descriptive of a battery at different stages of life and we simply select which model to use; that is another alternative. And at this point, I don't know what the right answer is between those."

A research team at Stanford University led by professors Stefano Ermon and William Chueh announced in 2020 they had developed a machine learning-based method that slashed battery testing times by 98% when applied to battery charge speed.

Researchers at the Basque government's research body CIC energiGUNE have studied how AI can help battery recycling. They said the technological recycling routes developed have to be profitable, industrialisable and sustainable.

The research centre, which specialises in electrochemical and thermal energy storage, said in terms of waste sorting, the opportunities include:

As for recovery of materials, they identified:

They said studies on classification capacity have shown how AI is able to determine the location and type of waste to be treated through image recognition techniques. This makes it possible to determine which method or route is best for managing and handling that waste according to factors such as its toxicity.

AI solutions offer the ability to monitor and automate processes such as waste sorting and grouping or the recycling activity itself, which is an advantage in terms of efficiency, cost and safety, the authors said.

See the original post:
Artificial intelligence turbocharges the pace of discovery and processes - bestmag

Read More..

In Adelaide they’re trying to build a deep learning machine that can reason – Cosmos

Large language models burst onto the scene a little over a year ago and transformed everything, yet the field is already facing a fork in the road: more of the same, or a venture into what is being called deep learning?

Professor Simon Lucey, Director of the Adelaide-based Australian Institute for Machine Learning, believes that path will lead to augmented reasoning.

It's a new and emerging field of AI that combines the ability of computers to recognise patterns through traditional machine learning with the ability to reason and learn from prior information and human interaction.

"Machines are great at sorting. Machines are great at deciding. They're just bad at putting the two together."

Part of the problem lies in teaching a machine something we don't fully understand ourselves: intelligence.

What is it?

Is it a vast library of knowledge?

Is it extracting clues and patterns from the clutter?

Is it common sense or cold-hard rationality?


The Australian Institute for Machine Learning's Professor Simon Lucey says it's all these things and much more. And that's why artificial intelligence (AI) desperately needs the ability to reason out what best applies where, when, why and how.

"Some people regard modern machine learning as glorified lookup tables, right? It's essentially a process of 'if I've got this, then that'."

The amazing thing, Lucey adds, is that raw processing power and big-data deep learning have managed to scale up to the level needed to mimic some types of intelligent behaviour.

"It's proven this can actually work for a lot of problems, and work really well."

But not all problems.

"We're seeing the emergence of a huge amount of low-risk AI and computer vision," Lucey says. But high-risk AI (say, looking for rare cancers, driving on a city street, flying a combat drone) isn't yet up to scratch.

Existing big data and big computing techniques rely on finding the closest possible related example. But gaps in those examples represent a trap.

"There's all these scenarios where we are coming up against issues where rote memorisation doesn't equate to reasoning," Lucey explains.

The human brain has been called an average machine. Or an expectation generator.

That's why we make so many mistakes while generally muddling our way through life.

But it's a byproduct of the way the networks of neurons in our brains configure themselves in paths based on experience and learning.

This produces mental shortcuts. Expectation biases. And these help balance effectiveness with efficiency in our brains.

"Intelligence isn't only about getting the right answer," says Lucey. "It's getting the right answer in a timely fashion."

For example, humans are genetically programmed to respond reflexively to the sight of a lion, bear or spider.


"You aren't going to think and reason," he explains. "You're going to react. You're going to get the hell out of there!"

But evolution can lead to these mental shortcuts working too well.

We can find ourselves jumping at shadows.

"Which is fine, right?" says Lucey. "Because if I make a mistake, it's okay; I just end up feeling a bit silly. But if I'm right, I'll stay alive! Act quick, think slow."

Machine intelligence is very good at doing quick things like detecting a face.

But it's that broader reasoning task, realising if you were right or wrong, where there's still a lot of work that needs to be done.

"Biological entities like humans don't need nearly as much data as AI to learn from," says Lucey. "They are much more data-efficient learners."

This is why a new approach is needed for machine learning.

"People decades ago realised that some tasks can be programmed into machines step by step, like when humans bake a cake," says Lucey. "But there are other tasks that require experience. If I'm going to teach my son how to catch and throw a ball, I'm not going to hand him an instruction book!"

Machines, however, can memorise enormous instruction books. And they can also bundle many sets of experiences into an algorithm. Machine learning enables computers to program themselves by example instead of relying on direct coding by humans.


But it's an outcome still limited by rigid programmed thinking.

"These classical if-this-then-that rule sets can be very brittle," says Lucey. "So how do I produce the rules behind an experience? How can I train AI to cope with the unexpected?"

This needs context.

For example, research has shown babies figure out the concept of object permanence (that something still exists when it moves out of sight) between four and seven months of age.

And that helps the baby to move on to extrapolate cause and effect.

"With machines, every time the ball moves or bounces in a way not covered by its set of rules, it breaks down," says Lucey. "But my kid can adapt and learn."

It's a problem facing autonomous cars.

Can we push every possible experience of driving through a city into an algorithm to teach it what to expect? Or can it instead learn relevant rules of behaviour and rationalise which applies when?

Albert Einstein said: "True education is about teaching how to think, not what to think."

Lucey equates this with the need for reasoning.

"What I'm talking about when it comes to reasoning, I guess, is that we all have these knee-jerk reactions over what should or should not happen. And this feeds up to a higher level of the brain for a decision.

"We don't know how to do that for machines at the moment."


It's about turning experience into knowledge. And being aware of that knowledge.

"The problem with current machine learning is it's only as good as the experiences it's been exposed to," he says. "And we have to keep shoving more and more experiences at it for it to identify something new."

An autonomous car is very good at its various sub-tasks. It can instantly categorise objects in video feeds. It can calculate distances and trajectories from sensors like LiDAR. And it can match these extremely quickly with its bible of programmed experiences.

"It's working out how to connect these different senses to produce a generalisation beyond the moment that AI still struggles with," Lucey explains.

The AIML is exploring potential solutions through simulating neural networks, modelled on the interconnected patterns of cells found in our brains.

In the world of AI, that's called deep learning.

Neural networks don't follow a set of rigid 'if this, then that' instructions.

Instead, the process balances the weight of what it perceives to guide it through what is essentially a wiring diagram. Experience wears trails into this diagram. But it also adds potential alternative paths.

"These pieces are all connected but have their own implicit bias," says Lucey. "They give the machine a suite of solutions, and the ability to prefer one solution over another.

"It's still early days. We've still got a lot to learn about deep learning."

"Neural network algorithms are great for quick reflex actions, like recognising a face," he adds. "But it's the broader reasoning task, like does that reflex fit the context of everything else going on around it, where there's still a lot of work that needs to be done."

The AIML has a Centre for Augmented Reasoning.


"I think the big opportunities in AI over the next couple of decades are around creating data-efficient learning for a system that can reason," Lucey explains.

And the various AIML research teams are already chalking up wins.

"We've successfully applied that approach to the autonomous car industry. We've also had a lot of success in other areas, such as recognising the geometry, shape and properties of new objects."

That is helping give machines a sense of object permanence. And that, in turn, is leading to solutions like AI-generated motion video that looks real.

The motive behind it all is to give AI the ability to extrapolate cause and effect.

"The reasoning we're trying to explore is the ability for a machine to go beyond what it's been trained upon," says Lucey. "That's something very special to humans that machines still struggle with."

Originally posted here:
In Adelaide they're trying to build a deep learning machine that can reason - Cosmos

Read More..

Avoiding fusion plasma tearing instability with deep reinforcement learning – Nature.com

DIII-D

The DIII-D National Fusion Facility, located at General Atomics in San Diego, USA, is a leading research facility dedicated to advancing the field of fusion energy through experimental and theoretical research. The facility is home to the DIII-D tokamak, which is the largest and most advanced magnetic fusion device in the United States. The major and minor radii of DIII-D are 1.67 m and 0.67 m, respectively. The toroidal magnetic field can reach up to 2.2 T, the plasma current is up to 2.0 MA and the external heating power is up to 23 MW. DIII-D is equipped with high-resolution real-time plasma diagnostic systems, including a Thomson scattering system45, charge-exchange recombination spectroscopy46 and magnetohydrodynamic reconstruction by EFIT37,39. These diagnostic tools allow for the real-time profiling of electron density, electron temperature, ion temperature, ion rotation, pressure, current density and safety factor. In addition, DIII-D can perform flexible total beam power and torque control through reliable high-frequency modulation of eight different neutral beams in different directions. Therefore, DIII-D is an optimal experimental device for verifying and utilizing our AI controller that observes the plasma state and manipulates the actuators in real time.

One of the unique features of the DIII-D tokamak is its advanced PCS47, which allows researchers to precisely control and manipulate the plasma in real time. This enables researchers to study the behaviour of the plasma under a wide range of conditions and to test ideas for controlling and stabilizing the plasma. The PCS consists of a hierarchical structure of real-time controllers, from the magnetic control system (low-level control) to the profile control system (high-level control). Our tearing-avoidance algorithm is also implemented in this hierarchical structure of the DIII-D PCS and is integrated with the existing lower-level controllers, such as the plasma boundary control algorithm39,41 and the individual beam control algorithm40.

Magnetic reconnection refers to the phenomenon in magnetized plasmas where the magnetic-field line is torn and reconnected owing to the diffusion of magnetic flux (ψ) by plasma resistivity. This magnetic reconnection is a ubiquitous event occurring in diverse environments such as the solar atmosphere, the Earth's magnetosphere, plasma thrusters and laboratory plasmas like tokamaks. In nested magnetic-field structures in tokamaks, magnetic reconnection at surfaces where q becomes a rational number leads to the formation of separated field lines creating magnetic islands. When these islands grow and become unstable, it is termed tearing instability. The growth rate of the tearing instability classically depends on the tearing stability index, Δ′, shown in equation (2).

$$\Delta^{\prime} \equiv \left[\frac{1}{\psi}\frac{\mathrm{d}\psi}{\mathrm{d}x}\right]_{x=0^{-}}^{x=0^{+}}$$

(2)

where x is the radial deviation from the rational surface. When Δ′ is positive, the magnetic topology becomes unstable, allowing (classical) tearing instability to develop. However, even when Δ′ is negative (classical tearing instability does not grow), neoclassical tearing instability can arise due to the effects of geometry or the drift of charged particles, which can amplify seed perturbations. Subsequently, the altered magnetic topology can either saturate, unable to grow further48,49, or can couple with other magnetohydrodynamic events or plasma turbulence50,51,52,53. Understanding and controlling these tearing instabilities is paramount for achieving stable and sustainable fusion reactions in a tokamak54.
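
Read numerically, equation (2) is the jump in the logarithmic derivative of the perturbed flux ψ across the rational surface at x = 0. The short sketch below evaluates that jump by finite differences for a made-up ψ(x) profile; it is purely a worked reading of the formula, not an equilibrium calculation.

    # Worked numerical reading of equation (2): Delta-prime is the jump in psi'/psi
    # across the rational surface at x = 0. The psi(x) profile here is invented for
    # illustration only, not a tokamak equilibrium solution.
    import numpy as np

    def delta_prime(x, psi):
        """Estimate [psi'/psi] just outside minus just inside x = 0 by finite differences."""
        inner = x < 0
        outer = x > 0
        d_in = np.gradient(psi[inner], x[inner])[-1]    # slope approaching 0 from below
        d_out = np.gradient(psi[outer], x[outer])[0]    # slope approaching 0 from above
        psi0 = np.interp(0.0, x, psi)
        return (d_out - d_in) / psi0

    x = np.linspace(-0.2, 0.2, 2001)
    psi = np.exp(-np.abs(x) / 0.05)                     # cusp-shaped trial perturbation
    print("Delta-prime estimate:", delta_prime(x, psi)) # negative here, i.e. classically stable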

The ITER baseline scenario (IBS) is an operational condition designed for ITER to achieve a fusion power of P_fusion = 500 MW and a fusion gain of Q ≡ P_fusion/P_external = 10 for a duration longer than 300 s (ref. 12). Compared with present tokamak experiments, the IBS condition is notable for its considerably low edge safety factor (q95 ≈ 3) and toroidal torque. With the PCS, DIII-D has a reliable capability to access this IBS condition compared with other devices; however, it has been observed that many of the IBS experiments are terminated by disruptive tearing instabilities19. This is because the tearing instability at the q = 2 surface appears too close to the wall when q95 is low, and it easily locks to the wall, leading to disruption when the plasma rotation frequency is low. Therefore, in this study, we conducted experiments to test the AI tearability controller under the conditions of q95 ≈ 3 and low toroidal torque (about 1 N m), where the disruptive tearing instability is easily excited.

However, in addition to the IBS, where the tearing instability is a critical issue, there are other scenarios, such as hybrid and non-inductive scenarios for ITER12. These different scenarios are less likely to be disrupted by tearing, but each has its own challenges, such as the no-wall stability limit or minimizing inductive current. Therefore, it is worth developing further AI controllers trained through modified observation, actuation and reward settings to address these different challenges. In addition, the flexibility of the actuators and sensors used in this work at DIII-D will differ from that in ITER and reactors. Control policies under more limited sensing and actuation conditions also need to be developed in the future.

To predict tearing events in DIII-D, we first labelled whether each phase was tearing-stable or not (0 or 1) based on the n = 1 Mirnov coil signal in the experiment. Using this labelled experimental data, we trained a DNN-based multimodal dynamic model that receives various plasma profiles and tokamak actuations as input and predicts the tearing likelihood 25 ms ahead as output. The trained dynamic model outputs a continuous value between 0 and 1 (so-called tearability), where a value closer to 1 indicates a higher likelihood of a tearing instability occurring after 25 ms. The architecture of this model is shown in Extended Data Fig. 1. The detailed descriptions of the input and output variables and hyperparameters of the dynamic prediction model can be found in ref. 5. Although this dynamic model is a black box and cannot explicitly provide the underlying cause of the induced tearing instability, it can be utilized as a surrogate for the response of stability, bypassing expensive real-world experiments. As an example, this dynamic model is used as a training environment for the RL of the tearing-avoidance controller in this work. During the RL training process, the dynamic model predicts future βN and tearability from the given plasma conditions and actuator values determined by the AI controller. Then the reward is estimated based on the predicted state using equation (1) and provided to the controller as feedback.

Figure 4b–d shows the contour plots of the estimated tearability for possible beam powers at the given plasma conditions of our control experiments. The actual beam power controlled by the AI is indicated by the black solid lines. The dashed lines are the contour lines of the threshold value set for each discharge, which roughly represent the stability limit of the beam power at each point. The plot shows that the trained AI controller proactively avoids touching the tearability threshold before the warning of instability.

The sensitivity of the tearability against the diagnostic errors of the electron temperature and density is shown in Extended Data Fig. 2. The filled areas in Extended Data Fig. 2 represent the range of tearability predictions when the electron temperature and density are increased and decreased by 10% from the measurements in discharge 193280. The uncertainty in tearability due to electron temperature error is estimated to be, on average, 10%, and the uncertainty due to electron density error is about 20%. However, even when considering diagnostic errors, the trend in tearing stability over time can still be observed to remain consistent.

The dynamic model used for predicting future tearing-instability dynamics is integrated with the OpenAI Gym library55, which allows it to interact with the controller as a training environment. The tearing-avoidance controller, another DNN model, is trained using the deep deterministic policy gradient56 method, which is implemented using Keras-RL (https://keras.io/)57.

The observation variables consist of 5 different plasma profiles mapped on 33 equally spaced grid points of the magnetic flux coordinate: electron density, electron temperature, ion rotation, safety factor and plasma pressure. The safety factor (q) can diverge to infinity at the plasma boundary when the plasma is diverted. Therefore, 1/q has been used for the observation variables to reduce numerical difficulties42. The action variables include the total beam power and the triangularity of the plasma boundary, and their controllable ranges were limited to be consistent with the IBS experiment of DIII-D. The AI-controlled plasma boundary shape has been confirmed to be achievable by the poloidal field coil system of ITER, as shown in Extended Data Fig. 3.
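
As a concrete reading of this paragraph, the sketch below assembles the observation vector (five profiles on 33 flux-grid points, with the safety factor passed as 1/q) and clips the two action variables to an allowed range. The numerical limits are placeholders, not the actual DIII-D settings.

    # Minimal sketch of the observation/action handling described above. The allowed
    # ranges are illustrative assumptions only.
    import numpy as np

    def build_observation(ne, te, rotation, q, pressure):
        """Stack the five profiles (each of length 33) into one observation vector."""
        profiles = [ne, te, rotation, 1.0 / q, pressure]   # 1/q instead of q
        assert all(p.shape == (33,) for p in profiles)
        return np.concatenate(profiles)                    # shape (165,)

    def clip_action(beam_power_mw, top_triangularity,
                    power_range=(0.0, 15.0), tri_range=(0.0, 0.6)):  # placeholder limits
        return (float(np.clip(beam_power_mw, *power_range)),
                float(np.clip(top_triangularity, *tri_range)))

    grid = np.linspace(0.0, 1.0, 33)
    q_profile = 1.0 + 3.0 * grid**2 + 1e-6                 # rises toward the edge
    obs = build_observation(np.ones(33), np.ones(33), np.ones(33), q_profile, np.ones(33))
    print(obs.shape, clip_action(20.0, 0.8))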

The RL training process of the AI controller is depicted in Extended Data Fig. 4. At each iteration, the observation variables (five different profiles) are randomly selected from experimental data. From this observation, the AI controller determines the desirable beam power and plasma triangularity. To reduce the possibility of local optimization, action noises based on the Ornstein-Uhlenbeck process are added to the control action during training. Then the dynamic model predicts βN and tearability after 25 ms based on the given plasma profiles and actuator values. The reward is evaluated according to equation (1) using the predicted states, and then given as feedback for the RL of the AI controller. As the controller and the dynamic model observe plasma profiles, they can reflect the change in tearing stability even when plasma profiles vary due to unpredictable factors such as wall conditions or impurities. In addition, although this paper focuses on IBS conditions where tearing instability is critical, the RL training itself was not restricted to any specific experimental conditions, ensuring its applicability across all conditions. After training, the Keras-based controller model is converted to C using the Keras2C library58 for the PCS integration.
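
The training loop can be summarised schematically: sample plasma profiles, let the controller pick actuator values, perturb them with Ornstein-Uhlenbeck noise, query the dynamic model for the predicted 25-ms-ahead state, and compute a reward from that prediction. The sketch below follows that structure only; the dynamic model and policy are placeholders, the reward is a schematic stand-in rather than the exact form of equation (1), and the actual policy update (DDPG via Keras-RL) is omitted.

    # Stripped-down sketch of the training loop described above. All models here are
    # stand-ins invented so the sketch runs; they are not the trained DIII-D networks.
    import numpy as np

    rng = np.random.default_rng(0)

    class OUNoise:
        """Ornstein-Uhlenbeck exploration noise on the two actuator channels."""
        def __init__(self, dim, theta=0.15, sigma=0.2):
            self.theta, self.sigma = theta, sigma
            self.state = np.zeros(dim)
        def sample(self):
            self.state += -self.theta * self.state + self.sigma * rng.normal(size=self.state.shape)
            return self.state

    def dynamic_model(obs, action):
        """Placeholder for the DNN predicting (beta_N, tearability) 25 ms ahead."""
        beta_n = 1.5 + 0.3 * action[0] + 0.1 * obs.mean()
        tearability = 1.0 / (1.0 + np.exp(2.5 - beta_n - 0.5 * action[1]))
        return beta_n, tearability

    def reward(beta_n, tearability, threshold=0.5, k=5.0):
        # schematic stand-in for equation (1): reward beta_N, penalise exceeding the threshold
        return beta_n - k * max(0.0, tearability - threshold)

    def policy(obs):
        """Placeholder actor: the real controller is a DNN trained with DDPG."""
        return np.array([0.5, 0.0])

    noise = OUNoise(dim=2)
    for step in range(5):
        obs = rng.uniform(0.0, 1.0, size=5 * 33)                     # 5 profiles x 33 grid points
        action = np.clip(policy(obs) + noise.sample(), -1.0, 1.0)    # scaled beam power, triangularity
        beta_n, t = dynamic_model(obs, action)
        r = reward(beta_n, t)
        print(f"step {step}: beta_N={beta_n:.2f} tearability={t:.2f} reward={r:.2f}")
        # in the real training, these transitions would feed the DDPG replay buffer here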

Previously, a related work17 employed a simple bang-bang control scheme using only beam power to handle tearability. Although our control performance may seem similar to that work in terms of βN, this is not the case when other operating conditions are considered. In ITER and future fusion devices, a higher normalized fusion gain (G ∝ Q) with core instabilities kept stable is critical. This requires a high βN and small q95, as G ∝ βN/q95². At the same time, owing to limited heating capability, high G has to be achieved with weak plasma rotation (or beam torque). Here, high βN, small q95 and low torque are all destabilizing conditions for tearing instability, highlighting tearing instability as a substantial bottleneck of ITER.

As shown in Extended Data Fig. 5, our control achieves tearing-stable operation at a much higher G than the test experiment shown in ref. 17. This is made possible by maintaining a higher (or similar) βN with a lower q95 (4 → 3), where tearing instability is more likely to occur. In addition, this is achieved with a much weaker torque, further highlighting the capability of our RL controller in harsher conditions. Therefore, this work shows more ITER-relevant performance, providing a closer and clearer path to high fusion gain with robust tearing avoidance in future devices.

In addition, the performance of RL control in achieving high fusion gain can be further highlighted when considering the non-monotonic effect of βN on tearing instability. Unlike q95 or torque, both increasing and decreasing βN can destabilize tearing instabilities. This leads to the existence of an optimal fusion gain (as G ∝ βN), which enables tearing-stable operation and makes system control more complicated. Here, Extended Data Fig. 6 shows the trace of the RL-controlled discharge in the space of fusion gain versus time, where the contour colour illustrates the tearability. This clearly shows that the RL controller successfully drives the plasma through the valley of tearability, ensuring stable operation and showing its remarkable performance in such a complicated system.

Such superior performance is made feasible by the advantages of RL over conventional approaches, which are described below.

By employing a multi-actuator (beam and shape), multi-objective (low tearability and high βN) controller using RL, we were able to enter a higher-βN region while maintaining tolerable tearability. As shown in Extended Data Fig. 5, our controlled discharge (193280) shows a higher βN and G than the one in the previous work (176757). This advantage of our controller arises because it adjusts the beam and plasma shape simultaneously to both increase βN and lower tearability. It is notable that our discharge has more unfavourable conditions (lower q95 and lower torque) in terms of both βN and tearing stability.

The previous tearability model evaluates the tearing likelihood based on current zero-dimensional measurements, without considering the upcoming actuation control. However, our model considers the detailed one-dimensional profiles and also the upcoming actuations, then predicts the future tearability response to the future control. This provides more flexible applicability in terms of control. Our RL controller has been trained to understand this tearability response and can consider future effects, while the previous controller only sees the current stability. By considering the future responses, ours offers more optimal actuation in the longer term instead of acting in a greedy manner.

This enables application in more generic situations beyond our experiments. For instance, as shown in Extended Data Fig. 7a, tearability is a nonlinear function of βN. In some cases (Extended Data Fig. 7b), this relation is also non-monotonic, making increasing the beam power the desired command to reduce tearability (as shown in Extended Data Fig. 7b with a right-directed arrow). This is due to the diversity of the tearing-instability sources, such as the βN limit and the current well. In such cases, using a simple control scheme like that shown in ref. 17 could result in oscillatory actuation or even further destabilization. In the case of RL control, there is less oscillation and it brings the plasma below the threshold more swiftly, achieving a higher βN through multi-actuator control, as shown in Extended Data Fig. 7c.

Plasma shape parameters are key control knobs that influence various types of plasma instability. In DIII-D, the shape parameters such as triangularity and elongation can be manipulated through proximity control41. In this study, we used the top triangularity as one of the action variables for the AI controller. The bottom triangularity remained fixed across our experiments because it is directly linked to the strike point on the inner wall.

We also note that the changes in top triangularity through AI control are quite large compared with typical adjustments. Therefore, it is necessary to verify whether such large plasma shape changes are within the capability of the magnetic coils in ITER. Additional analysis, as shown in Extended Data Fig. 3, confirms that the rescaled plasma shape for ITER can be achieved within the coil current limits.

The experiments in Figs. 3b and 4a have shown that the tearability can be maintained through appropriate AI-based control. However, it is necessary to verify whether it can robustly maintain low tearability when additional actuators are added and plasma conditions change. In particular, ITER plans to use not only 50 MW of beams but also 10–20 MW of radiofrequency actuators. Electron cyclotron radiofrequency heating directly changes the electron temperature profile, and the stability can vary sensitively with it. Therefore, we conducted an experiment to see whether the AI controller successfully maintains low tearability under new conditions where radiofrequency heating is added. In discharge 193282 (green lines in Extended Data Fig. 8), 1.8 MW of radiofrequency heating is preprogrammed to be steadily applied in the background while beam power and plasma triangularity are controlled via AI. Here, the radiofrequency heating is directed towards the core of the plasma and the current drive at the tearing location is negligible.

However, owing to the sudden loss of plasma current control at t = 3.1 s, q95 increased from 3 to 4, and the subsequent discharge did not proceed under the ITER baseline condition. It should be noted that this change in plasma current control was unintentional and not directly related to AI control. Such plasma current fluctuation sharply raised the tearability to exceed the threshold temporarily at t = 3.2 s, but it was immediately stabilized by continued AI control. Although the discharge was eventually disrupted owing to insufficient plasma current before the preprogrammed end of the flat top, this accidental experiment demonstrates the robustness of AI-based tearability control against additional heating actuators, a wider q95 range and accidental current fluctuation.

In normal plasma experiments, control parameters are kept stationary with a feed-forward set-up, so that each discharge is a single data point. However, in our experiments, both the plasma and the control are varying throughout the discharge. Thus, one discharge consists of multiple control cycles. Therefore, our results carry more weight than a simple discharge count would suggest compared with standard fixed-control plasma experiments, supporting the reliability of the control scheme.

In addition, the predicted plasma response due to RL control for 1,000 samples randomly selected from the experimental database, which includes not just the IBS but all experimental conditions, is shown in Extended Data Fig. 9a,b. When the tearability T > 0.5 (unstable, top), the controller tries to decrease T rather than affect βN, and when T < 0.5 (stable, bottom), it tries to increase βN. This matches the response expected from the reward in equation (1). In 98.6% of the unstable phases, the controller reduced the tearability, and in 90.7% of the stable phases, the controller increased βN.

Extended Data Fig. 9c shows the achieved time-integrated βN for the discharge sequences of our experiment session. Discharges up to 193276 either did not have the RL control applied or had tearing instability occurring before the control started, and discharges from 193277 onwards had the RL control applied. Before RL control, all shots except one (193266: the low-βN reference shown in Fig. 3b) were disrupted, but after RL control was applied, only two (193277 and 193282) were disrupted, which were discussed earlier. The average time-integrated βN also increased after the RL control. In addition, the input feature ranges of the controlled discharges are compared with the training database distribution in Extended Data Fig. 10, which indicates that our experiments are neither too centred (the model is not overfitted to our experimental conditions) nor too far out (confirming the applicability of our controller to the experiments).

View post:
Avoiding fusion plasma tearing instability with deep reinforcement learning - Nature.com

Read More..

US used AI to find targets for strikes on Syria and Yemen – The National

The US used artificial intelligence to identify targets hit by air strikes in the Middle East this month, a defence official told Bloomberg News, revealing a growing military use of the developing technology for combat.

Machine-learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on February 2, said Schuyler Moore, chief technology officer for US Central Command (Centcom), which runs military operations in the Middle East.

The Pentagon said those strikes were conducted by bombers and fighter aircraft on seven facilities in Iraq and Syria in retaliation for a deadly strike on US personnel at a Jordan base.

"We've been using computer vision to identify where there might be threats," Ms Moore told Bloomberg News.

"We've certainly had more opportunities to target in the last 60 to 90 days."

She said the US was currently looking for "an awful lot of rocket launchers" from hostile forces in the region.

The military has previously acknowledged using computer-vision algorithms for intelligence purposes but Ms Moore's comments mark the strongest known confirmation of the US military using the technology to identify enemy targets that were subsequently hit.

The US strikes, which the Pentagon said destroyed or damaged rockets, missiles, drone storage and militia operations centres, among other targets, were part of President Joe Biden's response to the killing of three service members in an attack on January 28 at a military base in Jordan.

The US attributed the attack to Iranian-backed militias.

Ms Moore said AI systems have also helped identify rocket launchers in Yemen and surface vessels in the Red Sea, several of which Centcom said it has destroyed in a number of weapons strikes this month.

Iran-supported Houthi militias in Yemen have repeatedly attacked commercial ships in the Red Sea with rockets.

The targeting algorithms were developed under Project Maven, a Pentagon initiative started in 2017 to accelerate the adoption of AI and machine learning throughout the Defence Department and to support defence intelligence, with the emphasis of its early prototypes on the US fight against ISIS militants.

Ms Moore said US forces in the Middle East have experimented with computer-vision algorithms that can locate and identify targets from imagery captured by satellite and other data sources, trying them out in exercises over the past year.

"October 7, everything changed," Ms Moore said, referring to the Hamas attack on Israel that preceded the war in Gaza.

"We immediately shifted into high gear and a much higher operational tempo than we had previously."

US forces were able to make "a pretty seamless shift" into using Maven after a year of digital exercises, she added.

Ms Moore emphasised Maven's AI capabilities were being used to help find potential targets but not to verify them or deploy weapons.

She said exercises late last year, in which Centcom experimented with an AI recommendation engine, showed such systems frequently fell short of humans in proposing the order of attack or the best weapon to use.

Humans constantly check the AI targeting recommendations, she said. US operators take seriously their responsibilities and the risk that AI could make mistakes, she said, and "it tends to be pretty obvious when something is off".

"There is never an algorithm that's just running, coming to a conclusion and then pushing on to the next step," she said.

"Every step that involves AI has a human checking in at the end."

Updated: February 26, 2024, 6:46 PM

Go here to read the rest:
US used AI to find targets for strikes on Syria and Yemen - The National

Read More..

Inside the Mind of a Hacker: Sparking Cybersecurity Innovation – AutoGPT

Cyberattacks and data breaches have become an unfortunate reality of our increasingly digital world. As more sensitive data is stored online and systems grow more interconnected, cybercriminals have more to gain and more vulnerabilities to exploit. The results of these attacks can be devastating, from substantial financial losses to leaks of private information.

To stay ahead of emerging threats, the cybersecurity industry must continuously enhance protections through new innovations and technologies. Recent years have seen promising developments that aim to bolster data security in novel ways.

The scale and impact of cyberattacks continue to intensify. According to IBM's 2022 Cost of a Data Breach Report, the average cost of a data breach has risen to a record $4.35 million, a 13% increase since 2019. For the healthcare sector, the average cost is even higher, at over $10 million per incident.

These costs include regulatory fines, legal fees, IT expenses, lost revenue, and reputational damage. Major data breaches often spur drops in stock prices as well. Equifax's high-profile 2017 breach resulted in a 36% stock drop, erasing $6 billion in market value.

As attacks grow more frequent and severe, traditional security methods are no longer sufficient. Hackers use increasingly sophisticated tactics, requiring defenders to stay one step ahead with new protective technologies.

Key focus areas for innovation include advanced threat detection, automated response systems, encrypted data transfers, user behavior analytics, and more robust access controls. Integrating these latest developments can significantly bolster resilience.

While cybersecurity threats will continue to evolve, dedicated innovation gives organizations the best chance of preventing attacks before they occur. Implementing layers of emerging technology also minimizes potential damage if breaches do happen.

Ultimately, continuous progress in cybersecurity is crucial for securing sensitive data and upholding operational integrity in today's hyperconnected landscape. The coming years will reveal whether defensive technologies can keep pace with determined adversaries.

Cyber threats are growing more advanced, exploiting vulnerabilities faster than many organizations can address them. By integrating cutting-edge cybersecurity innovations, businesses can get ahead of threats and protect their critical assets.

AI and machine learning algorithms enable real-time analysis of network activity to rapidly uncover threats. Integrating automation speeds up response times to quickly contain attacks before they escalate.

For example, automated systems can instantly block suspicious IP addresses or disable compromised accounts. Reducing the window for adversaries to operate minimizes potential damage.
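
As a small illustration of that kind of automation, the sketch below scans hypothetical parsed log entries for repeated failed logins and appends the offending IP addresses to a blocklist. The log format, threshold and blocklist file are assumptions made for the example; a production system would hand this decision off to a firewall or SOAR platform.

    # Illustrative automated-response sketch: block IPs with repeated failed logins.
    # The log entries, threshold and blocklist mechanism are placeholders for the example.
    from collections import Counter

    failed_logins = [            # hypothetical parsed log entries: (ip, outcome)
        ("203.0.113.7", "fail"), ("203.0.113.7", "fail"), ("203.0.113.7", "fail"),
        ("198.51.100.4", "fail"), ("203.0.113.7", "fail"), ("198.51.100.4", "ok"),
    ]

    THRESHOLD = 3
    counts = Counter(ip for ip, outcome in failed_logins if outcome == "fail")
    to_block = [ip for ip, n in counts.items() if n >= THRESHOLD]

    with open("blocklist.txt", "a") as f:        # stand-in for a real enforcement point
        for ip in to_block:
            f.write(ip + "\n")
    print("blocked:", to_block)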

The zero-trust model verifies each access request, even from users inside the network perimeter. It authenticates every user and device, authorizes access based on policies, and encrypts connections.

Segmenting access prevents attackers from gaining full network access if they compromise an account. It limits lateral movement and the blast radius from breaches.

Blockchain's decentralized structure means no central point of failure for attackers to target. Cryptography provides inherent confidentiality, integrity, and accountability across transactions.

Smart contracts automate actions like access revocation for suspicious activities. Consensus protocols enable collective threat evaluation by security tools across partner networks.

Instead of siloed point solutions, a cybersecurity mesh integrates tools into a unified whole. It centralizes policies, dashboards, and data to improve visibility and streamline administration.

Common identity fabrics authorize user access across applications. Shared analytics bolster threat detection, while open architectures allow incorporating new innovations.

Quantum computing poses a rising threat to crack current encryption. Quantum-resistant algorithms utilize lattice cryptography and can run on classical hardware.

Migrating to these future-proof ciphers ensures confidentiality of sensitive data. It protects critical assets from decryption if quantum attacks emerge.

Integrating leading-edge security innovations is crucial to counter increasingly sophisticated cyber threats. Advanced solutions enable organizations to reinforce defenses, respond swiftly to incidents, and secure critical systems.


Cyber threats are growing increasingly sophisticated, necessitating robust and proactive security measures. Implementing cutting-edge cybersecurity solutions can help organizations detect threats early and protect critical assets.

The first step is conducting cyber risk assessments to identify vulnerabilities and potential attack vectors. Comprehensive evaluations of networks, endpoints, cloud environments, and third parties reveal security gaps. Organizations can then prioritize remediation efforts based on risk levels. Staying updated on emerging attack techniques also enables adapting defenses accordingly.

With risks mapped, advanced safeguards can be deployed, from AI-driven threat detection and automated response to zero-trust access controls and quantum-resistant encryption.

As defenses expand, integrating them into a unified security mesh architecture enhances visibility and streamlines operations. Rather than siloed products, security tools connect into a centralized system with shared data and coordinated responses.

Despite advanced controls, employees remain a top attack vector. Comprehensive cybersecurity awareness training is thus essential. Education on phishing, strong passwords, social engineering, and data handling best practices empowers users and closes security gaps.

With robust safeguards implemented, ongoing monitoring, testing, and training help sustain defenses against evolving threats. Conducting simulations, audits, and risk analysis enables promptly identifying and addressing vulnerabilities before exploitation.

By taking a proactive stance (assessing risks, deploying layered controls, integrating systems, training users, and continually validating security), organizations can lock down critical assets and withstand sophisticated threats. Cybersecurity demands constant innovation and adaptation in today's complex digital landscape.

The cybersecurity landscape is continuously evolving as new technologies emerge and threat actors become more sophisticated. To keep pace, the industry must embrace innovations and collaboration. Three key areas will shape the future of cyber defense: AI and automation, hardware-enhanced security, and global cooperation.

Artificial intelligence (AI) and machine learning algorithms already analyze threats and automate responses. Their role will expand as talent shortages persist. By 2025, understaffed security teams could use AI assistants to handle 30-60% of their workflow. Intelligent algorithms will also take over mundane tasks like monitoring systems, freeing analysts to focus on higher-value investigations.

Another key innovation is automated vulnerability management to continuously find and patch flaws. With networks becoming more complex, these AI-powered tools will be vital for threat exposure management.
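
A production tool would pull advisories from live feeds such as the NVD; the offline sketch below simply matches a hypothetical software inventory against a hypothetical advisory list to show the continuous find-and-flag loop at the core of such tools.

```python
# Hypothetical installed-software inventory (name -> version).
inventory = {
    "openssl": "3.0.1",
    "nginx": "1.24.0",
    "log-shipper": "2.3.9",
}

# Hypothetical advisories: versions strictly below "fixed_in" are vulnerable.
advisories = [
    {"package": "openssl", "fixed_in": "3.0.7", "id": "ADV-0001"},
    {"package": "log-shipper", "fixed_in": "2.3.5", "id": "ADV-0002"},
]

def version_tuple(v: str) -> tuple[int, ...]:
    """Naive numeric version comparison -- fine for the illustration."""
    return tuple(int(part) for part in v.split("."))

def find_exposures(inv: dict, advs: list) -> list[str]:
    exposures = []
    for adv in advs:
        installed = inv.get(adv["package"])
        if installed and version_tuple(installed) < version_tuple(adv["fixed_in"]):
            exposures.append(f"{adv['package']} {installed} < {adv['fixed_in']} ({adv['id']})")
    return exposures

# In a continuous pipeline this would run on a schedule and open patch tickets.
for line in find_exposures(inventory, advisories):
    print("PATCH NEEDED:", line)
```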

Innovations like secure boot processes and cryptographic accelerators will shift more security to hardware. For example, Intel CPUs now feature control-flow enforcement to prevent code-reuse attacks.

As organizations implement zero-trust models, hardware-based device authentication will verify identities. Hardware security modules will also securely manage encryption keys.

Geopolitical tensions are hindering cooperation, but joint initiatives between allies are emerging.

Information sharing networks, public-private partnerships, and aligned policies are crucial for an effective cyber posture.

To counter threats from China, Russia, Iran, and North Korea, global powers like the U.S. must overcome fragmentation and build collective defense capabilities. Joint exercises between NATO and the EU boost preparedness for potential cyber warfare.

Cybersecurity has come a long way in recent years, with innovations in cybersecurity now enabling organizations to better combat increasingly sophisticated cyber threats. As the digital landscape continues to evolve rapidly, it's essential that cybersecurity strategies adapt accordingly to stay one step ahead.

Artificial intelligence and automation will take on more mundane tasks from security teams, allowing staff to focus their efforts on higher-value activities. Security awareness training is also being enhanced by AI to educate employees using data-driven insights into an organization's unique vulnerabilities.

AI and machine learning algorithms quickly analyze threats and automate responses to mitigate attacks faster than ever before.

Innovations in hardware security and encryption aim to protect the rising number of connected devices and ensure data remains secure, even with the emergence of quantum computing.

Hardware security features strengthen system integrity and support the containment of high-risk applications.

As cyber threats increasingly cross borders, global cooperation between public and private sector organizations will be vital for sharing intelligence and collectively improving defenses.

Increased collaboration and information sharing will strengthen collective defenses.

Innovations in cybersecurity equip organizations with the tools needed to detect and respond to threats more effectively. As attacks become more advanced, prioritizing security protects critical infrastructure and consumer data. Businesses must implement layered defenses and continuously assess and update their cybersecurity approach.

Read this article:
Inside the Mind of a Hacker: Sparking Cybersecurity Innovation - AutoGPT


Computer science professor leaves MIT ‘dream job’ for Yeshiva due to Jew-hatred – JNS.org

(February 26, 2024 / JNS)

After quitting his dream job at Massachusetts Institute of Technology due to antisemitism on campus, Mauricio Karchmer is fitting in at his new job at Yeshiva University.

The computer scientist has, in his first two days at Yeshiva, already mentored students, taught courses in multiple domains of expertise, and helped both university leadership and the broader community understand the dynamics on college campuses outside of YU, Noam Wasserman, dean of Yeshiva's Sy Syms School of Business, told JNS.

Wasserman said Karchmer is already brainstorming with department chairs at the school about a course he is designing for the fall semester, which will bring together his expertise in financial engineering and computer science.

The professor also held a fireside chat at Yeshiva with Rabbi Ari Berman, the university president, about antisemitism on campus.

Karchmer observed that the stakes are much bigger than just the war with Hamas, because the Palestinians are a pawn and Israel is a proxy, Wasserman said.

Karchmer announced his move to Yeshiva, where he is a visiting guest faculty member, on LinkedIn. He said he was honored to be part of a deeply grounded institution with leaders who lead by living up to their values.

Also on LinkedIn, Berman wrote that it was a privilege to welcome Karchmer to the faculty. As a top-tier professor in his field and a leader who lives his values with integrity and authenticity, he is a role model to us all, Berman said.

In an article for The Free Press, Karchmer noted that MIT drew comparisons between Israel and Hamas. The head of his department and its diversity, equity and inclusion office sent out a message riddled with equivocations, without mentioning the barbarity of Hamas's attack, stating only that we are deeply horrified by the violence against civilians and wish to express our deep concern for all those involved, Karchmer wrote.

I was shocked that my institution, led by people who are meant to see the world rationally, could not simply condemn a brutal terrorist act, he added.

Wasserman told JNS he was very impressed with Karchmer's mix of humility and desire to learn, combined with steadfast adherence to his values, even when that adherence meant leaving his dream job at MIT once those values were threatened.


Read the original post:

Computer science professor leaves MIT 'dream job' for Yeshiva due to Jew-hatred - JNS.org


Debunking The 4 Biggest Myths About Artificial Intelligence in Retail – BizTech Magazine

ChatGPT is a good example of this kind of bias. Recent research shows that ChatGPT leans liberal, having picked up political biases from its training data. A separate study also revealed that ChatGPT showed gender bias in recommendation letters it wrote for hypothetical male and female job candidates. Though 67 percent of recruiters say that AI has improved the hiring process, another recent study found that AI-enabled recruitment, while not without its upsides, results in discriminatory hiring practices based on gender, race, color, and personality traits, according to an article in Humanities and Social Sciences Communications.

These issues reflect the biases of the humans behind the inputs that AI leverages. As Michele Goetz, vice president and principal analyst at Forrester, told BizTech, Youre addressing where human error comes in, because the training data you use could have human biases.


When it comes to understanding consumer preferences, AI is like a meteorologist making a weather forecast. They both use data to make predictions as accurately as possible based on pattern recognition and other factors. This leads to predictions that are often (but not always) accurate. But while historical data points might not change, the variables of the present can, and these unexpected changes can lead to inaccuracies.

It's also worth noting that consumer behavior isn't limited to consumer preferences but is instead the result of a wide range of intangible factors. And while AI is especially helpful in assisting with decision-making when the factors involved are beyond human comprehension, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision making: the ethical, moral and other human considerations that guide the course of business, life and society at large, notes the Harvard Business Review.


Certainly, the size of a retailer will influence what kind of AI technologies it can implement. But just because a small or midsized business can't adopt AI technology at the same scale as a larger retailer, that doesn't mean it shouldn't adopt AI at all. There is a wide range of accessible and scalable AI solutions, including cloud-based ML tools, that retailers can tailor to their unique needs and leverage to better achieve their business objectives in a competitive market.

Not sure how to get started with AI? Major technology brands such as CDW, IBM, Google, Microsoft and NVIDIA have a wide range of solutions to meet your needs. Don't hesitate to reach out and take advantage. No matter how big or small the business, AI adoption doesn't have to be a solo journey.

Read this article:
Debunking The 4 Biggest Myths About Artificial Intelligence in Retail - BizTech Magazine


Spring 2024 Bethe Lecture bridges physics and computer science | Cornell Chronicle – Cornell Chronicle

Artificial intelligence applications perform amazing feats (winning at chess, writing college admission essays, passing bar exams), but the complexity of these systems is so large that it rivals that of nature, with all the challenges that come with understanding nature.

An approach to a better understanding of this computer science puzzle is emerging from an unexpected direction: physics. Lenka Zdeborová, professor of physics and computer science at École Polytechnique Fédérale de Lausanne in Switzerland, is using methods from theoretical physics to peer inside AI algorithms and answer fundamental questions about how they work and what they can and cannot do.

Zdeborová will visit campus in March to deliver the spring 2024 Bethe Lecture, "Bridging Physics and Computer Science: Understanding Hard Problems," on Wednesday, March 13, at 7:30 p.m. in Schwartz Auditorium, Rockefeller Hall.

Zdeborová, who enjoys erasing boundaries between theoretical physics, mathematics and computer science, will explore how principles from statistical physics provide insights into challenging computational problems. Through this interdisciplinary lens, she has uncovered phase transitions that delineate the complexity of tasks, distinguishing between those easily tackled by computers and those posing significant challenges.

Read the full story on the College of Arts and Sciences website.

See original here:

Spring 2024 Bethe Lecture bridges physics and computer science | Cornell Chronicle - Cornell Chronicle
