
Steps to perform when your machine learning model overfits in training – Analytics India Magazine

Overfitting is a basic problem in supervised machine learning in which the model performs well on seen (training) data but poorly on unseen data. Overfitting occurs as a result of noise, a small training set, and the complexity of the algorithm. In this article, we will discuss different strategies for overcoming overfitting at the training stage.

Let's start with an overview of overfitting in machine learning models.

A model is overfitting when it memorises the specific details of the training data and fails to generalise. It is a statistical error caused by poor statistical judgement. Because the model is too closely tied to the data set, it becomes biased toward that particular data set. Overfitting limits the model's relevance to its own data set and renders it irrelevant to other data sets.

Definition according to statistics

Given a hypothesis space, a hypothesis is said to overfit the training data if there exists some alternative hypothesis that has a larger error over the training examples but a smaller error over the entire distribution of instances.


Detecting overfitting is almost impossible until you test the model on unseen data. During training, two errors are tracked: the training error and the validation error. When the training error keeps decreasing while the validation error decreases for a period and then starts to increase, the model is overfitting.
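As a minimal illustration of this diagnostic, the sketch below (assuming scikit-learn and a synthetic dataset, neither taken from the article) lets a decision tree grow deeper and prints training versus validation error; the widening gap is the overfitting signature described above.

```python
# A minimal sketch of spotting overfitting by comparing training and
# validation error as model complexity grows. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in [1, 2, 4, 8, 16, None]:
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    train_err = 1 - model.score(X_train, y_train)
    val_err = 1 - model.score(X_val, y_val)
    # Overfitting shows up when train_err keeps falling but val_err starts rising.
    print(f"max_depth={depth}: train error={train_err:.3f}, validation error={val_err:.3f}")
```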

Let's understand the mitigation strategies for this statistical problem.

There are different stages in a machine learning project at which different techniques can be applied to mitigate overfitting.

High-dimensional data lead to model overfitting because the number of observations is much smaller than the number of features, which results in an under-determined problem with no unique solution.

Ways to mitigate
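One commonly cited remedy for the many-features, few-observations problem is dimensionality reduction. Below is a minimal sketch, assuming scikit-learn and synthetic data rather than anything prescribed by the article.

```python
# A hedged sketch: reducing a wide feature matrix with PCA before fitting
# a model, so the number of features no longer dwarfs the number of
# observations. Assumes scikit-learn; the dataset is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))          # 100 observations, 500 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels depend on only a few features

model = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```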

During data wrangling, one can face the problem of outliers in the data. Outliers increase the variance in the dataset, and a model that fits itself to these outliers produces output with high variance and low bias, so the bias-variance tradeoff is disturbed.

Ways to mitigate

Depending on the circumstances, outliers either require particular attention or should be ignored entirely. If the data set contains a significant number of outliers, it is critical to use a modelling approach that is robust to outliers or to filter the outliers out.
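A common way to filter outliers before training is the interquartile-range rule. The sketch below assumes pandas; the 1.5 x IQR cut-off is a convention, not something the article prescribes.

```python
# A minimal sketch of IQR-based outlier filtering, assuming pandas.
# Rows falling outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are dropped.
import pandas as pd

df = pd.DataFrame({"feature": [1.0, 1.2, 0.9, 1.1, 15.0, 1.05, 0.95]})

q1 = df["feature"].quantile(0.25)
q3 = df["feature"].quantile(0.75)
iqr = q3 - q1
mask = df["feature"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

clean_df = df[mask]   # the value 15.0 is removed as an outlier
print(clean_df)
```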

Cross-validation is a resampling technique used to assess machine learning models on a small sample of data. Cross-validation is primarily used in applied machine learning to estimate a machine learning model's skill on unseen data. That is, to use a small sample to assess how the model will perform in general when used to generate predictions on data that was not utilised during the model's training.

Evaluation Procedure using K-fold cross-validation

The above describes the K-fold process; when k is 5, it is known as 5-fold cross-validation.
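A minimal 5-fold cross-validation sketch, assuming scikit-learn and synthetic data:

```python
# 5-fold cross-validation: the model is trained on 4 folds and validated
# on the held-out fold, rotating until every fold has served as validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print("fold accuracies:", scores)
print("mean accuracy:", scores.mean())
```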

This method (early stopping) is used to address the point at which further training stops paying off: because the model begins to learn noise, the accuracy of the algorithm stops improving beyond a certain point or even worsens.

As illustrated in the accompanying figure, the horizontal axis is the epoch and the vertical axis is the error; the green line represents the training error and the red line the validation error. If the model continues to learn past a certain point, the validation error will rise while the training error keeps falling. So the goal is to pinpoint the precise epoch at which to discontinue training, giving an ideal fit between under-fitting and overfitting.

Way to achieve the ideal fit

Compute the accuracy on a held-out validation set after each epoch and stop training when that accuracy stops improving; the validation set is also used to choose a good set of hyper-parameter values, and the test set is reserved for the final accuracy evaluation. Compared with using the test data directly to determine hyper-parameter values, this method gives a more reliable estimate of generality. As training of an iterative algorithm proceeds, bias is reduced while variance increases; early stopping halts training near the best trade-off between the two.
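As one hedged example of this recipe, scikit-learn's MLPClassifier exposes an early_stopping flag that holds out a validation fraction internally and stops once the validation score stops improving; the article does not name a specific framework, and the dataset below is synthetic.

```python
# A sketch of early stopping with scikit-learn's MLPClassifier: a validation
# fraction is held out internally and training stops once the validation
# score fails to improve for n_iter_no_change consecutive epochs.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(64,),
    early_stopping=True,       # monitor a held-out validation split
    validation_fraction=0.1,   # 10% of the training data used for validation
    n_iter_no_change=10,       # patience, in epochs
    max_iter=500,
    random_state=0,
)
model.fit(X, y)
print("epochs actually run:", model.n_iter_)
```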

Noise reduction is therefore one natural research direction for inhibiting overfitting. Based on this idea, pruning is recommended to reduce the size of final classifiers in relational learning, particularly in decision tree learning. Pruning minimises classification complexity by removing less useful or irrelevant data, thereby preventing overfitting and increasing classification accuracy. There are two types of pruning: pre-pruning, which halts tree growth early, and post-pruning, which removes branches from a fully grown tree.
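As a hedged example of post-pruning, scikit-learn implements cost-complexity pruning for decision trees; the sketch below sweeps the pruning strength and reports tree size and validation accuracy on synthetic data.

```python
# Post-pruning a decision tree with cost-complexity pruning (ccp_alpha).
# Larger alpha removes more branches, trading training fit for generalisation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
for alpha in path.ccp_alphas[:: max(1, len(path.ccp_alphas) // 5)]:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_train, y_train)
    print(f"alpha={alpha:.4f}  leaves={tree.get_n_leaves()}  "
          f"val accuracy={tree.score(X_val, y_val):.3f}")
```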

In many circumstances, the amount and quality of the training data can have a considerable impact on machine learning performance, particularly in supervised learning. The model requires enough data to learn and adjust its parameters, and the number of samples needed grows with the number of parameters.

In other words, an extended dataset can significantly enhance prediction accuracy, particularly in complex models. Existing data can be changed to produce new data. In summary, there are four basic techniques for increasing the training set.
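As one hedged illustration of producing new data from existing data, the sketch below augments image-like arrays with horizontal flips and Gaussian noise; these are common augmentation techniques, not necessarily the four the article has in mind.

```python
# A hedged sketch of simple data augmentation for image-like arrays:
# horizontal flips and additive Gaussian noise expand the training set.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))            # stand-in for training images
flipped = images[:, :, ::-1]                  # horizontal flip
noisy = np.clip(images + rng.normal(0, 0.05, images.shape), 0.0, 1.0)

augmented = np.concatenate([images, flipped, noisy], axis=0)
print("original:", images.shape, "augmented:", augmented.shape)
```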

When creating a predictive model, feature selection is the process of minimising the number of input variables. It is preferable to limit the number of input variables to lower the computational cost of modelling and, in some situations, to increase the model's performance.

The following are some prominent feature selection strategies in machine learning:
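Two widely used strategies, univariate selection and recursive feature elimination, are sketched below, assuming scikit-learn and synthetic data.

```python
# Feature selection sketches: univariate selection keeps the k features with
# the strongest statistical relationship to the target; recursive feature
# elimination (RFE) drops the weakest features as judged by a fitted model.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=30, n_informative=5, random_state=0)

X_kbest = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

print("SelectKBest output shape:", X_kbest.shape)
print("RFE-selected feature indices:", [i for i, keep in enumerate(rfe.support_) if keep])
```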

Regularisation is a strategy for preventing our network from learning an overly complicated model and hence overfitting. The model grows more sophisticated as the number of features rises.

An overfitting model takes all characteristics into account, even if some of them have a negligible influence on the final result. Worse, some of them are simply noise that has no bearing on the output. There are two types of strategies to restrict these cases:

In other words, the impact of such ineffective characteristics must be restricted. However, it is uncertain which characteristics are unnecessary, so all of them are shrunk by penalising the model's cost function. To do this, a penalty term called a regulariser is added to the cost function. There are three popular regularisation techniques.

Instead of discarding less valuable features, this approach assigns lower weights to them, so it can retain as much information as possible. Large weights can only be assigned to attributes that improve the baseline cost function significantly.
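A minimal sketch, assuming scikit-learn, of how an L2 penalty (ridge) shrinks the weights of weak features while an L1 penalty (lasso) can drive them exactly to zero:

```python
# Ridge (L2) shrinks coefficients of uninformative features toward zero;
# Lasso (L1) tends to set them exactly to zero. Synthetic regression data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

print("ridge coefficients:", np.round(ridge.coef_, 2))
print("lasso coefficients:", np.round(lasso.coef_, 2))
```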

Hyperparameters are selection or configuration points that allow a machine learning model to be tailored to a given task or dataset. Optimising them is known as hyperparameter tuning. These settings cannot be learnt directly from the standard training procedure.

They are generally set before the start of the training procedure. These parameters represent crucial model aspects, such as the model's complexity or how quickly it should learn. Models can contain a large number of hyperparameters, and determining the optimal combination of values can be treated as a search problem.

GridSearchCV and RandomizedSearchCV are the two most effective Hyperparameter tuning algorithms.

GridSearchCV

In the GridSearchCV technique, a search space is defined as a grid of hyperparameter values, and each point in the grid is evaluated.

GridSearchCV has the disadvantage of going through all intermediate combinations of hyperparameters, which makes grid search computationally highly costly.
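A minimal GridSearchCV sketch, assuming scikit-learn and a synthetic dataset; the parameter grid is illustrative only:

```python
# Exhaustive grid search: every combination in param_grid is evaluated
# with cross-validation, which is thorough but computationally costly.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best CV score:", round(search.best_score_, 3))
```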

Random Search CV

The Random Search CV technique defines a search space as a bounded domain of hyperparameter values that are randomly sampled. This method eliminates needless calculation.
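A matching RandomizedSearchCV sketch that samples a fixed number of candidates from continuous distributions instead of enumerating a grid (the distributions shown are illustrative assumptions):

```python
# Randomised search: n_iter candidate settings are sampled from the
# distributions below, trading exhaustiveness for much less computation.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

param_distributions = {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)}
search = RandomizedSearchCV(SVC(), param_distributions, n_iter=20, cv=5, random_state=0)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best CV score:", round(search.best_score_, 3))
```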


Overfitting is a general problem in supervised machine learning that cannot be avoided entirely. It occurs as a result of either the limitations of the training data, which might be restricted in size or contain a large amount of noise, or the limitations of algorithms that are too sophisticated and require an excessive number of parameters. With this article, we have understood the concept of overfitting in machine learning and the ways it can be mitigated at different stages of a machine learning project.


Assistant Professor in Artificial Intelligence / Data Analytics / Machine Learning job with THE HONG KONG POLYTECHNIC UNIVERSITY | 292449 – Times…

The Hong Kong Polytechnic University is a government-funded tertiary institution in Hong Kong. It offers programmes at various levels including Doctorate, Master's and Bachelor's degrees. It has a full-time academic staff strength of around 1,200. The total annual consolidated expenditure budget of the University is in excess of HK$7.6 billion.

DEPARTMENT OF ELECTRONIC AND INFORMATION ENGINEERING

Assistant Professor in Artificial Intelligence/Data Analytics/Machine Learning(Ref. 22050510)

Since its inception in 1974, the Department of Electronic and Information Engineering (EIE) has enjoyed a distinguished history of providing outstanding engineering education and applicable research in Hong Kong.

We are seeking applications from strong candidates in the areas listed above. This will complement the existing strengths in communications, wireless networks and IoT, information security, nonlinear circuits and systems, multimedia signal processing, artificial intelligence, photonic systems, and optoelectronic systems and devices. Apart from performing teaching duties at undergraduate and postgraduate levels in the related area, candidates are also expected to teach outside their main research areas.

Leading scholars regularly visit the Department for research collaboration and to enhance the Department's research profile. The Department has extensive resources to support a wide variety of teaching activities and research interests. Please visit the website at http://www.eie.polyu.edu.hk/ for more information about the Department. EIE is a constituent department of the Faculty of Engineering. Information on the Faculty is available at http://www.polyu.edu.hk/feng/.

Duties

The appointee will be required to:

(a) undertake teaching duties at undergraduate and postgraduate levels in the area of Artificial Intelligence, Data Analytics or Machine Learning and also outside his/her main research areas;

(b) attract funding to support both fundamental and industrial research;

(c) conduct research that leads to publications in top-tier refereed journals;

(d) supervise student projects and theses;

(e) engage in scholarly research/consultancy in his/her area of expertise;

(f) contribute to departmental services and activities; and

(g) perform any other duties as assigned by the Head of Unit or his/her delegates.

Qualifications

Applicants should:

(a) have a doctoral degree in a relevant discipline;

(b) have the relevant postdoctoral research experience and a publication record commensurate with the level of appointment;

(c) demonstrate potential to establish significant externally funded research programmes;

(d) be able to demonstrate evidence of effective classroom teaching skills; and

(e) have excellent communication skills and the ability to use English as the medium of instruction.

Remuneration and Conditions of Service

A highly competitive remuneration package will be offered. Initial appointment will be on a fixed-term gratuity-bearing contract. Re-engagement thereafter is subject to mutual agreement. For general information on terms and conditions for appointment of academic staff in the University, please visit the website at https://www.polyu.edu.hk/hro/docdrive/careers/doc/Prof.pdf. Applicants should state their current and expected salary in the application.

Application

Please send a completed application form by post, nominate three referees from different institutions/organisations by providing their names, addresses and relationship with the applicants, to Human Resources Office, 13/F, Li Ka Shing Tower, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong or via email to hrstaff@polyu.edu.hk. Application forms can be downloaded from https://www.polyu.edu.hk/hro/careers/guidelines_and_forms/forms. If a separate curriculum vitae is to be provided, please still complete the application form, which will help speed up the recruitment process. Consideration of applications will commence on 19 May 2022 until the position is filled. The University's Personal Information Collection Statement for recruitment can be found at https://www.polyu.edu.hk/hro/careers/guidelines_and_forms/pics_for_recruitment.

PolyU is an equal opportunity employer committed to diversity and inclusivity. All qualified applicants will receive consideration for employment without regard to gender, ethnicity, nationality, family status or physical or mental disabilities.


IBM promises a 4,000+ qubit quantum computer by 2025: Here’s what it means – ZDNet

Two years after unveiling its quantum roadmap, IBM is keeping pace with its goals, the company said Tuesday -- and it has new plans to deliver a 4,000+ qubit quantum computer by 2025. That progress will move quantum computing beyond the experimentation phase by 2025, IBM CEO Arvind Krishna said this week to reporters.

For some simple use cases, organizations should be able to deploy quantum computers "in the 2023 to 2025 time frame," Krishna said ahead of the annual IBM Think conference in Boston. That means, he explained, that electric vehicle makers could use quantum computers to analyze materials like lithium hydride to develop better batteries, or to analyze alloys for lighter but stronger vehicles. Other companies, meanwhile, could use quantum for simple optimization use cases, like optimizing search engines.

"As we begin to get to 4,000 qubits, a lot of these problems become within reach of quantum computers," Krishna said.

Meanwhile, the CEO said more complex quantum computing problems could be solved a few years later. Pharmaceutical companies, for instance, could see advantages by 2025 or 2030, he said.

"If you think about pharmaceutical drugs... that's probably going to be a little bit later," he said, adding that IBM is in "deep discussion with a few of those biotech companies."

"Covid vaccines have taught many of them that computation, as applied to medicine, can make things happen a lot quicker," Krishna continued. "They've all woken up to what computation can achieve. You can imagine some of them might be thinking a bit further to say, 'What can we do with quantum?'"

Back in 2020, IBM said it would deliver a 1,121-qubit device in 2023, as well as components and cooling systems. The company also released images of a 6-foot wide and 12-foot high cooling system being built to house a 1,121-qubit processor called IBM Quantum Condor. According to IBM, the goal is to build a million-qubit quantum system. The company said it views the 1,000-qubit mark as a tipping point to overcome the hurdles limiting the commercialization of quantum systems.

While building Condor, IBM last year announced Eagle, a 127-qubit processor. Later this year, IBM expects to unveil its 433-qubit processor called Osprey.

"To get to 4,000 [qubits], there's quite a few problems we have to solve," Krishna said. "How do you begin to scale these systems? How do you communicate amongst them? How do you get the software to scale and work from a cloud into these computers? These are all the problems we believe we have a line of sight to... so we have high confidence in our 2025 timeline."

In addition to working on physical quantum computers, IBM in 2023 will continue to improve development with Qiskit Runtime, its open source software that allows users to interact with quantum computers. It's also building workflows right into the cloud, bringing a serverless approach to IBM's core quantum software stack.

Krishna said IBM is likely to be "open to all of the approaches" to delivering quantum computing, including selling it as a machine, delivering it as a managed service to customers, or delivering it as an on-premise service.


Revolutionary New Qubit Platform Could Transform Quantum Computing – SciTechDaily

An illustration of the qubit platform made of a single electron on solid neon. Researchers froze neon gas into a solid at very low temperatures, sprayed electrons from a light bulb onto the solid, and trapped a single electron there to create a qubit. Credit: Courtesy of Dafei Jin/Argonne National Laboratory

The digital device you are using to view this article is no doubt using the bit, which can either be 0 or 1, as its basic unit of information. However, scientists around the world are racing to develop a new kind of computer based on the use of quantum bits, or qubits, which can simultaneously be 0 and 1 and could one day solve complex problems beyond any classical supercomputers.

A research team led by scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory, in close collaboration with FAMU-FSU College of Engineering Associate Professor of Mechanical Engineering Wei Guo, has announced the creation of a new qubit platform that shows great promise to be developed into future quantum computers. Their work is published in the journal Nature.

"Quantum computers could be a revolutionary tool for performing calculations that are practically impossible for classical computers, but there is still work to do to make them a reality," said Guo, a paper co-author. "With this research, we think we have a breakthrough that goes a long way toward making qubits that help realize this technology's potential."

The team created its qubit by freezing neon gas into a solid at very low temperatures, spraying electrons from a light bulb onto the solid, and trapping a single electron there.

FAMU-FSU College of Engineering Associate Professor of Mechanical Engineering Wei Guo. Credit: Florida State University

While there are many choices of qubit types, the team chose the simplest one: a single electron. Heating up a simple light filament, such as you might find in a child's toy, can easily shoot out a boundless supply of electrons.

One important quality for qubits is their ability to remain in a simultaneous 0 and 1 state for a long time, known as the coherence time. That time is limited, and the limit is determined by the way qubits interact with their environment. Defects in the qubit system can significantly reduce the coherence time.

For that reason, the team chose to trap an electron on an ultrapure solid neon surface in a vacuum. Neon is one of only six inert elements, meaning it does not react with other elements.

"Because of this inertness, solid neon can serve as the cleanest possible solid in a vacuum to host and protect any qubits from being disrupted," said Dafei Jin, an Argonne scientist and the principal investigator of the project.

By using a chip-scale superconducting resonator, like a miniature microwave oven, the team was able to manipulate the trapped electrons, allowing them to read and store information from the qubit and thus making it useful in future quantum computers.

Previous research used liquid helium as the medium for holding electrons. That material was easy to make free of defects, but vibrations of the liquid's free surface could easily disturb the electron state and hence compromise the performance of the qubit.

Solid neon offers a material with few defects that doesn't vibrate like liquid helium. After building their platform, the team performed real-time qubit operations using microwave photons on a trapped electron and characterized its quantum properties. These tests demonstrated that solid neon provided a robust environment for the electron with very low electric noise to disturb it. Most importantly, the qubit attained coherence times in the quantum state competitive with other state-of-the-art qubits.

The simplicity of the qubit platform should also lend itself to simple, low-cost manufacturing, Jin said.

The promise of quantum computing lies in the ability of this next-generation technology to calculate certain problems much faster than classical computers. Researchers aim to combine long coherence times with the ability of multiple qubits to link together, known as entanglement. Quantum computers thereby could find the answers to problems that would take a classical computer many years to resolve.

Consider a problem where researchers want to find the lowest energy configuration of a protein made of many amino acids. These amino acids can fold in trillions of ways that no classical computer has the memory to handle. With quantum computing, one can use entangled qubits to create a superposition of all folding configurations providing the ability to check all possible answers at the same time and solve the problem more efficiently.

"Researchers would just need to do one calculation, instead of trying trillions of possible configurations," Guo said.

For more on this research, see New Qubit Breakthrough Could Revolutionize Quantum Computing.

Reference: "Single electrons on solid neon as a solid-state qubit platform" by Xianjing Zhou, Gerwin Koolstra, Xufeng Zhang, Ge Yang, Xu Han, Brennan Dizdar, Xinhao Li, Ralu Divan, Wei Guo, Kater W. Murch, David I. Schuster and Dafei Jin, 4 May 2022, Nature. DOI: 10.1038/s41586-022-04539-x

The team published its findings in a Nature article titled Single electrons on solid neon as a solid-state qubit platform. In addition to Jin, Argonne contributors include first author Xianjing Zhou, Xufeng Zhang, Xu Han, Xinhao Li, and Ralu Divan. Contributors from the University of Chicago were David Schuster and Brennan Dizdar. Other co-authors were Kater Murch of Washington University in St. Louis, Gerwin Koolstra of Lawrence Berkeley National Laboratory, and Ge Yang of Massachusetts Institute of Technology.

Funding for the Argonne research primarily came from the DOE Office of Basic Energy Sciences, Argonnes Laboratory Directed Research and Development program and the Julian Schwinger Foundation for Physics Research. Guo is supported by the National Science Foundation and the National High Magnetic Field Laboratory.


Quantum Computing Needs a Balance of Order and Disorder – Technology Networks

Research conducted within the Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) has analysed cutting-edge device structures of quantum computers to demonstrate that some of them are indeed operating dangerously close to a threshold of chaotic meltdown. The challenge is to walk a thin line between too much and too little disorder to safeguard device operation. The study "Transmon platform for quantum computing challenged by chaotic fluctuations" has been published today in Nature Communications.

In the race for what may become a key future technology, tech giants like IBM and Google are investing enormous resources into the development of quantum computing hardware. However, current platforms are not yet ready for practical applications. There remain multiple challenges, among them the control of device imperfections (disorder).

It's an old stability precaution: when large groups of people cross bridges, they need to avoid marching in step to prevent the formation of resonances destabilizing the construction. Perhaps counterintuitively, the superconducting transmon qubit processor (a technologically advanced platform for quantum computing favoured by IBM, Google, and other consortia) relies on the same principle: intentionally introduced disorder blocks the formation of resonant chaotic fluctuations, thus becoming an essential part of the production of multi-qubit processors.

To understand this seemingly paradoxical point, one should think of a transmon qubit as a kind of pendulum. Qubits interlinked to form a computing structure define a system of coupled pendulums a system that, like classical pendulums, can easily be excited to uncontrollably large oscillations with disastrous consequences. In the quantum world, such uncontrollable oscillations lead to the destruction of quantum information; the computer becomes unusable. Intentionally introduced local detunings of single pendulums keep such phenomena at bay.

"The transmon chip not only tolerates but actually requires effectively random qubit-to-qubit device imperfections," explained Christoph Berke, final-year doctoral student in the group of Simon Trebst at the University of Cologne and first author of the paper. "In our study, we ask just how reliable the 'stability by randomness' principle is in practice. By applying state-of-the-art diagnostics of the theory of disordered systems, we were able to find that at least some of the industrially pursued system architectures are dangerously close to instability."

From the point of view of fundamental quantum physics, a transmon processor is a many-body quantum system with quantized energy levels. State-of-the-art numerical tools allow one to compute these discrete levels as a function of relevant system parameters, to obtain patterns superficially resembling a tangle of cooked spaghetti. A careful analysis of such structures for realistically modelled Google and IBM chips was one out of several diagnostic tools applied in the paper to map out a stability diagram for transmon quantum computing.

"When we compared the Google to the IBM chips, we found that in the latter case qubit states may be coupled to a degree that controlled gate operations may be compromised," said Simon Trebst, head of the Computational Condensed Matter Physics group at the University of Cologne. "In order to secure controlled gate operations, one thus needs to strike the subtle balance between stabilizing qubit integrity and enabling inter-qubit coupling. In the parlance of pasta preparation, one needs to prepare the quantum computer processor to perfection, keeping the energy states al dente and avoiding their tangling by overcooking."

The study of disorder in transmon hardware was performed as part of the Cluster of Excellence ML4Q in a collaborative work among the research groups of Simon Trebst and Alexander Altland at the University of Cologne and the group of David DiVincenzo at RWTH Aachen University and Forschungszentrum Jülich. "This collaborative project is quite unique," says Alexander Altland from the Institute for Theoretical Physics in Cologne. "Our complementary knowledge of transmon hardware, numerical simulation of complex many-body systems, and quantum chaos was the perfect prerequisite to understand how quantum information with disorder can be protected. It also indicates how insights obtained for small reference systems can be transferred to application-relevant design scales."

David DiVincenzo, founding director of the JARA-Institute for Quantum Information at RWTH Aachen University, draws the following conclusion: "Our study demonstrates how important it is for hardware developers to combine device modelling with state-of-the-art quantum randomness methodology and to integrate chaos diagnostics as a routine part of qubit processor design in the superconducting platform."

Reference: Berke C, Varvelis E, Trebst S, Altland A, DiVincenzo DP. Transmon platform for quantum computing challenged by chaotic fluctuations. Nat Comm. 2022;13:2495. doi: 10.1038/s41467-022-29940-y

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.


Are these the best quantum computing stocks to watch? – IG UK

One of the highest profile players in the quantum computing space is Alphabet's Google. Google announced in 2019 that it had attained quantum supremacy. In other words, its quantum processor, Sycamore, had successfully performed its first-ever function beyond the capabilities of classical computers.

However, this was soon questioned by IBM, which claimed that the same problem could be solved by a standard computer in a matter of days, compared to Sycamore's mere minutes.

Then, in the second half of 2020, a smaller version of Sycamore reached another milestone: performing its first quantum chemistry reaction.

In May 2021, Google opened its new Quantum AI Campus in Santa Barbara, California, along with a new goal: to build the world's first useful, error-corrected quantum computer by 2029.

Google has even, to a certain extent, opened this effort up to public collaboration when it announced its Quantum Computing Service in December 2021. This allows approved customers the opportunity to send their own computing programs to Google to be run on its quantum computing hardware at the lab in Santa Barbara.

With this kind of computing power, Google is hoping to solve problems humanity hasn't been able to for centuries, including developing better medicines and tackling world hunger and climate crises. However, this is a long way off for now.

The only recent news regarding Google and quantum computing has been speculative. For example, there have been rumours that Google Inc. may or may not take Sandbox, its secretive quantum department unrelated to its quantum AI campus, public. However, nothing concrete has been confirmed and it could be years before any further tangible quantum milestones are reached.


Artificial intelligence tapped to fight Western wildfires – Portland Press Herald – Press Herald

DENVER With wildfires becoming bigger and more destructive as the West dries out and heats up, agencies and officials tasked with preventing and battling the blazes could soon have a new tool to add to their arsenal of prescribed burns, pick axes, chain saws and aircraft.

The high-tech help could come by way of an area not normally associated with fighting wildfires: artificial intelligence. And space.

Lockheed Martin Space, based in Jefferson County, is tapping decades of experience managing satellites, exploring space and providing information for the U.S. military to offer more accurate data quicker to ground crews. They are talking to the U.S. Forest Service, university researchers and a Colorado state agency about how their technology could help.

By generating more timely information about on-the-ground conditions and running computer programs to process massive amounts of data, Lockheed Martin representatives say they can map fire perimeters in minutes rather than the hours it can take now. They say the artificial intelligence, or AI, and machine learning the company has applied to military use can enhance predictions about a fire's direction and speed.

"The scenario that wildland fire operators and commanders work in is very similar to that of the organizations and folks who defend our homeland and allies. It's a dynamic environment across multiple activities and responsibilities," said Dan Lordan, senior manager for AI integration at Lockheed Martin's Artificial Intelligence Center.

Lockheed Martin aims to use its technology developed over years in other areas to reduce the time it takes to gather information and make decisions about wildfires, said Rich Carter, business development director for Lockheed Martin Space's Mission Solutions.

"The quicker you can react, hopefully then you can contain the fire faster and protect people's properties and lives," Carter said.

The concept of a regular fire season has all but vanished as drought and warmer temperatures make Western lands ripe for ignition. At the end of December, the Marshall fire burned 991 homes and killed two people in Boulder County. The Denver area just experienced its third driest-ever April with only 0.06 of an inch of moisture, according to the National Weather Service.

Colorado had more fire-weather alerts this April than in any other April in the past 15 years. Crews have quickly contained wind-driven fires that forced evacuations along the Front Range and on the Eastern Plains. But six families in Monte Vista lost their homes in April when a fire burned part of the southern Colorado town.

Since 2014, the Colorado Division of Fire Prevention and Control has flown planes equipped with infrared and color sensors to detect wildfires and provide the most up-to-date information possible to crews on the ground. The onboard equipment is integrated with the Colorado Wildfire Information System, a database that provides images and details to local fire managers.

"Last year we found almost 200 new fires that nobody knew anything about," said Bruce Dikken, unit chief for the agency's multi-mission aircraft program. "I don't know if any of those 200 fires would have become big fires. I know they didn't become big fires because we found them."

When the two Pilatus PC-12 airplanes began flying in 2014, Colorado was the only state with such a program conveying the information in near real time, Dikken said. Lockheed Martin representatives have spent time in the air on the planes recently to see if its AI can speed up the process.

"We don't find every single fire that we fly over and it can certainly be faster if we could employ some kind of technology that might, for instance, automatically draw the fire perimeter," Dikken said. "Right now, it's very much a manual process."

Something like the 2020 Cameron Peak fire, which at 208,663 acres is Colorado's largest wildfire, could take hours to map, Dikken said.

And often the people on the planes are tracking several fires at the same time. Dikken said the faster they can collect and process the data on a fire's perimeter, the faster they can move to the next fire. "If it takes a couple of hours to map a fire, what I drew at the beginning may be a little bit different now," he said.

Lordan said Lockheed Martin engineers who have flown with the state crews, using the video and images gathered on the flights, have been able to produce fire maps in as little as 15 minutes.

The company has talked to the state about possibly carrying an additional computer that could help crunch all that information and transmit the map of the fire while still in flight to crews on the ground, Dikken said. The agency is waiting to hear the results of Lockheed Martins experiences aboard the aircraft and how the AI might help the state, he added.

Actionable intelligence

The company is also talking to researchers at the U.S. Forest Service Missoula Fire Sciences Laboratory in Montana. Mark Finney, a research forester, said it's early in discussions with Lockheed Martin.

"They have a strong interest in applying their skills and capabilities to the wildland fire problem, and I think that would be welcome," Finney said.

The lab in Missoula has been involved in fire research since 1960 and developed most of the fire-management tools used for operations and planning, Finney said. "We're pretty well situated to understand where new things and capabilities might be of use in the future and some of these things certainly might be."

However, Lockheed Martin is focused on technology, and "that's not really been where the most effective use of our efforts would be," Finney said.

"Prevention and mitigation and preemptive kind of management activities are where the great opportunities are to change the trajectory we're on," Finney said. "Improving reactive management is unlikely to yield huge benefits because the underlying source of the problem is the fuel structure across large landscapes as well as climate change."

Logging and prescribed burns, or fires started under controlled conditions, are some of the management practices used to get rid of fuel sources or create a more diverse landscape. But those methods have sometimes met resistance, Finney said.

As bad as the Cameron Peak fire was, Finney said the prescribed burns the Arapaho and Roosevelt National Forests did through the years blunted the blaze's intensity and changed the flames' movement in spots.

"Unfortunately, they hadn't had time to finish their planned work," Finney said.

Lordan said the value of artificial intelligence, whether in preventing fires or responding to a fire, is producing accurate and timely information for fire managers, what he called actionable intelligence.

One example, Lordan said, is information gathered and managed by federal agencies on the types and conditions of vegetation across the country. He said updates are done every two to three years. Lockheed Martin uses data from satellites managed by the European Space Agency that update the information about every five days.

Lockheed is working with Nvidia, a California software company, to produce a digital simulation of a wildfire based on an area's topography, condition of the vegetation, wind and weather to help forecast where and how it will burn. After the fact, the companies used the information about the Cameron Peak fire, plugging in the more timely satellite data on fuel conditions, and generated a video simulation that Lordan said was similar to the actual fire's behavior and movement.

While appreciating the help technology provides, both Dikken with the state of Colorado and Finney with the Forest Service said there will always be a need for ground-truthing by people.

"Applying AI to fighting wildfires isn't about taking people out of the loop," Lockheed Martin spokesman Chip Eschenfelder said. "Somebody will always be in the loop, but people currently in the loop are besieged by so much data they can't sort through it fast enough. That's where this is coming from."


Adoption of AI/ML: How artificial intelligence is scaling up the education industry – The Financial Express

With the advancement of technology, artificial intelligence and machine learning (AI/ML) are on the verge of becoming an integral part of every industry, and education is no exception. With AI enabled, learning can be customised for students. "In the last few years, due to the emergence of machine learning, data has been treated as a prime knowledge resource and it is valued. Simultaneously, the tech-based industry has upped the demand for AI/ML rapidly, therefore more students are taking up the course due to good career opportunities," Rajesh Khanna, professor and president, NIIT University, said.

Besides courses, it has been observed that such technology is being leveraged by ed-tech platforms as a tool to enhance business strategy. From career counselling to exam proctoring, ed-techs have utilised AI/ML to improve accuracy and productivity. "There are multiple options when it comes to career counselling and opportunities. AI/ML can provide a good fit option for students when the right algorithm is taken into consideration," Rohan Pasari, CEO, Cialfo, said.

Further, AI/ML is used to conduct various tests online, so much so that it is now believed to put a seal on the authenticity of the process, as it gives the ability to remotely invigilate the test. An AI algorithm is run on photographs taken at intervals to analyse the accuracy and authenticity of the examination. "In CY21, Mettle conducted 20 million assessments across the globe on the online platform, out of which 16 million were remotely invigilated," Siddhartha Gupta, CEO, Mercer Mettle, a tech-based exam assessment platform, said.

According to Rackspace's AI/ML Annual Research Report 2022, AI/ML is considered one of the two most important strategic technologies, along with cybersecurity. The report shows that up to 72% of respondents have noted AI/ML as part of their business strategy, IT strategy or both. "Initially, the kind of industries which would have benefited from AI/ML were the financial market based companies. But the time has come that heavy machinery is now opening up to AI and ML to figure out and address the problems in a more distinct and accurate manner," Khanna added.

Although the penetration of tech-based education can upskill students, enabling these technologies is associated with various challenges and risk factors. The ministry report on school education 2020-21 revealed that post-pandemic, the dropout rate of students increased to 8.9% from 2.6%, the main reason being the closure of schools and irregular online classes. In places like rural India, accessibility has always been a point of contention, which has also resulted in a digital divide. "Whenever a new technology is enabled, the magnification of inequality also takes place. Places where devices and connectivity are not strongly available will definitely suffer. But the gap has to be filled by non-governmental organisations (NGOs) and government intervention by providing ways to resolve the issues," Khanna said.



Alcatel-Lucent Enterprise enhances its Asset Tracking solution with Artificial Intelligence capabilities and push-button alerts – Macau Business

The solution enables instant location of Bluetooth Low Energy (BLE) tags attached to individuals and critical equipment, supplying real-time and historic contact tracing with increased precision and additional new features.

PARIS, FRANCE News Direct 10 May 2022 Alcatel-Lucent Enterprise, a leading provider of network, communications and cloud solutions tailored to customers' industries, is placing AI (Artificial Intelligence) and ML (Machine Learning) at the heart of its technology development. ALE's enhanced OmniAccess Stellar Asset Tracking solution now offers new customisable push-button alerts and an AI/ML powered real-time location algorithm for environments that require improved accuracy compared with standard tools.

Designed to quickly locate assets or individuals, use analytics to optimise workflows, and simplify the ability to provide contact tracing, Alcatel-Lucent Enterprise Asset Tracking is set to deliver an enriched user experience with finer location precision thanks to its AI and Machine Learning capabilities.

Further enhancements of the solution include equipping BLE tags with a new alert button to notify users of activity at the touch of a button, or to send automated notifications from an indoor geofenced area, immediately sharing vital information in real time.

This solution holds powerful potential for the healthcare industry, for use cases such as calling medical staff for assistance, locating and assessing the availability of critical equipment, and improving safety of patients and staff.

The alert button function is also fully programmable for use case flexibility and enables configuration for button press request action, with real-time location, extending its value beyond the healthcare sector to be used to enhance campus security for staff and students in schools or enable security personnel to call for assistance in a variety of indoor environments.

Asset tracking users can also receive alerts via a range of media, making sure information is delivered to the right person, or group, at the right time, through the most convenient channel.

Notifications are sent instantly to the Alcatel-Lucent OmniVista Cirrus Asset Manager and distributed via Android push notification to the OmniAccess Stellar Asset Tracking app, web push to desktop or mobile device, email, SMS, Rainbow and other third-party systems, such as IQ Messenger. This message server includes additional notification media such as Alcatel-Lucent desktop, DECT and WLAN phones, nurse call systems, etc.

Daniel Faurlin, Business Line Manager, Network Business Division at Alcatel-Lucent Enterprise, comments:

"Our OmniAccess Stellar Asset Tracking solution has proved an essential tool for our customers and they can now track, locate and monitor the usage patterns of their assets with even greater accuracy and efficiency. Although contact tracing and asset tracking came to light most prominently during the health crisis, its ability to improve performance across numerous industries extends beyond the turbulence of the pandemic and can be harnessed to bring operations into the digital age."

As ALE continues to enrich its offer under the traditional CAPEX model, it has also expanded to a new hybrid Network as a Service offering, combining both CAPEX & OPEX options.

In line with customer requirements, ALE plans to add asset tracking and contact tracing capabilities to its Network-as-a-Service offer. The company also provides a pay-as-you-grow model for businesses looking to ramp up their digital transformation with a manageable predictable monthly fee and the opportunity to benefit from the latest technology updates with a reduced initial investment.

"Our aim is always to make accessing high-performance and data-rich solutions as easy as possible for our customers. As we continue to innovate and enhance our solutions, so too will we develop new models to make digital transformation universally accessible with options for every business and industry," adds Nolwenn Simon, Product Line Manager Network Value added solutions, Alcatel-Lucent Enterprise.

Alcatel-Lucent Enterprise delivers the customised technology experiences enterprises need to make everything connect.

ALE provides digital-age networking, communications and cloud solutions with services tailored to ensure customers success, with flexible business models in the cloud, on premises, and hybrid. All solutions have built-in security and limited environmental impact.

Over 100 years of innovation have made Alcatel-Lucent Enterprise a trusted advisor to more than a million customers all over the world.

With headquarters in France and 3,400 business partners worldwide, Alcatel-Lucent Enterprise achieves an effective global reach with a local focus.

al-enterprise.com | LinkedIn | Twitter | Facebook | Instagram

#AlcatelLucentEnterprise

The issuer is solely responsible for the content of this announcement.


Adoption of Artificial Intelligence in Indian Armys C4ISR: Here is what the Chief said – The Financial Express

One of the major lessons learnt from the ongoing Ukraine-Russia war is that the multi-domain battle space is increasingly influenced by technology, including swarms of drones, missiles, unmanned ground vehicles and more. All of these are driven by artificial intelligence or computer algorithms, which are used in war zones not only to process huge quantities of information but also to make decisions.

"Artificial intelligence is definitely being leveraged for enhancing the current C4ISR capabilities. The National Task Force had identified the 12 AI domains and the Indian Army has since undertaken projects both in-house as well as with the industry, especially deep tech start-ups," the Indian Army Chief Gen Manoj Pande told Financial Express Online.

In response to a question, he said that to enhance C4ISR capabilities, the Indian Army is looking at critical use cases for aerial threats from drones/UAVs, drone imagery analysis, integrated situational awareness for an integrated decision support system/COP, and analysis of OSINT & SM platforms.

To effectively build capabilities for C4ISR, there is a need to integrate and build capacities in emerging domains of IoT, 5G and BDA.

Why?

Because meshing these emerging domains will enable the military to effectively link the sensor to the decision maker to the shooter.

AI engines in various facets of C4ISR

They range from sensors to analysis and decision support systems, and are currently under development both standalone and as part of platforms or systems.

For instance, in the sensor domain (swarm drone platforms, surveillance system inputs or autonomous platforms), AI is enabling remote target detection as well as classification.

ISR analysis: According to Gen Pande, AI engines are being trained for interpretation, change and anomaly detection and even intrusion detection. Similarly, in domains of autonomous lethal weapons, decision support systems or predictive maintenance, a serious effort is afoot to leverage AI.

He added, "We realise that to build an effective C4ISR grid, there is a need to get our data strategy right. Towards this, we have promulgated a Data Governance policy. Work is going on towards building a structured data management framework. Meanwhile, we need to churn out data for our AI engines through improvised on-the-fly techniques."

While evolving qualitative requirements (QRs) for any system, the use of AI is deliberated for its ability to enhance operational or logistic effectiveness. The Indian Army has therefore also established an AI Centre of Excellence (CoE) at MCTE, Mhow, where skill development for soldiers and AI development are being undertaken simultaneously.
