
The biggest technology breakthroughs of 2023 – Livescience.com

The technology world moves quickly, which can make it hard to see the bigger picture. But there are three areas that, for me, have stood out in 2023.

Artificial intelligence (AI) has been massively hyped. But this year, a lot of the hype seems justified: 2023 was a breakout year for AI. This was the year when ChatGPT, the poster child for generative AI, was packaged into something that can fit onto your smartphone or laptop. Gemini, Google's answer to OpenAI, began powering services you use on a day-to-day basis. This sort of integration into everyday services and devices was unthinkable five years ago.

This technology has its fair share of downsides, however, as well as unintended consequences that we're only now beginning to realize. For instance, people can no longer tell the difference between real and artificially generated faces, which may make deepfakes harder to detect.

And we found out that like humans, ChatGPT can be dishonest when put under pressure. That's especially worrisome because the program was built to be honest.

How ChatGPT learned to lie and cheat in the pursuit of money

Is it just me, or is quantum computing one of those technologies that always seems "five or 10 years away"? But there's no denying the field is making meaningful advancements every few months.

One of the most impressive ones came in December when IBM launched its System Two quantum computer. The launch coincided with the release of a 133-qubit quantum chip, dubbed "Heron," which experts are way more excited about than the 1,000-qubit chip it released at the same time. Why? Because Heron is less noisy than its larger cousin and thus will prove to be a foundational technology for future chips.

IBM's new Heron chip inches us closer to a quantum reality

I remain unconvinced by the "metaverse", but Meta's insistence on pushing us into its digital world (where, until recently, we didn't have legs) touches on a much wider trend. Mixed reality hasn't quite hit the mainstream but it's getting closer.

Apple threw its hat into the ring with its Vision Pro headset, which lets us interact with apps and services using gestures and varying perspectives rather than through a screen.

Augmented reality (AR) has seen real strides, too, and is an area I'm convinced can make a meaningful difference in the future. Smart glasses are getting more stylish as well; just look at Meta's collaboration with Ray-Ban. Ultimately, as the tech matures, these advancements will only serve to further blur the lines between what's real and what belongs in cyberspace, for better or worse.

Here's how sonar-enhanced "smart" glasses could protect privacy in this burgeoning mixed reality.

It's also worth mentioning the leaps we've seen in robotics, both big and small, as well as electrical engineering breakthroughs that could give us Star Wars-style laser weapons and the technology to build 6G systems. Regardless of the area, Live Science will be at the forefront covering the biggest technology breakthroughs that matter next year and beyond.

View post:
The biggest technology breakthroughs of 2023 - Livescience.com

Read More..

Future-Tech Trends to Watch in 2024 – Video – CNET

Speaker 1: From quantum computers to brain implants, there are a lot of companies pushing forward in areas that have previously been the stuff of science fiction. Here are the future-tech trends we'll be keeping an eye on in 2024. With computing at the heart of so much tech, it's no surprise we're kicking off with a technology that could revolutionize the very act of computing itself: quantum computers. IBM recently revealed its Quantum System Two. At 22 feet wide and 12 feet tall, these modular computing units can be linked [00:00:30] together to amplify their power. When technology like this becomes mature, extremely complicated problems, like those at the forefront of medicine, ecology, economics, and more, that are beyond modern computers' ability to solve could be made solvable by quantum computing. That's why the race is on between players like IBM, Google, Microsoft, and China to create a viable quantum system.

Speaker 1: Quantum computing components are very sensitive and need to be isolated from outside forces that might throw [00:01:00] them off. That's why most of the hardware you see in these quantum setups is dedicated to keeping the system extremely cold, near absolute zero, to preserve the integrity of the system, which is sensitive to things like heat energy. The road to quantum computers is long and full of challenges, but with its transformative potential, we'll definitely be keeping an eye on all the big players in 2024. Another trend to watch is a diverse array of new electric vehicles covering land, air, and sea. [00:01:30] We've seen development of personal eVTOLs like the Jetson One, electric trucks like those developed by Tesla and Eide, electric boats like the Arc One, which I got to test drive this year, and electric scooters like the Honda Motocompacto. In 2024, I'll be looking forward to hopefully test-driving the Aptera solar car, which I got to ride in last year, more details on Zapata's recently announced air scooter, and a new boat in development from Arc, to name a few.

Speaker 1: Last, but certainly [00:02:00] not least, we're watching the brain-computer interface space. Our team visited Synchron headquarters this year to get a demo of their Stentrode, a device that can be inserted via catheter, therefore bypassing the need for open brain surgery. Synchron has implanted Stentrodes in several patients, who have used the device to navigate their phones, computers, and beyond using only their thoughts. Elon Musk's Neuralink hasn't shied away from open brain surgery, instead developing surgical robots to install its devices. [00:02:30] Neuralink recently announced recruitment for its clinical trials in September of 2023. Blackrock Neurotech, another leading company in this space, is preparing its MoveAgain system for a commercial launch as a medical device. It also announced a product it's calling Neuralace, an ultra-thin, flexible electrode that it claims could capture much more data than current brain-computer interface technology. As always, thanks so much for watching. I'm your host, Jesse Orrall. See you next time.

See original here:
Future-Tech Trends to Watch in 2024 - Video - CNET

Read More..

7 cybersecurity predictions to look out for in 2024 – TechRadar

It's that time of the year again, so while we wait for the final tick of the clock, let's look back over the past 365 days in the world of cybersecurity and predict what's coming next.

Throughout 2023 we saw the use of VPN services remain high as internet restrictions increased across the world, new privacy threats loomed, and governments enforced VPN censorship. The same goes for local and national-scale internet shutdowns, with Iran being the biggest perpetrator in the past 12 months.

It's not possible to talk about this year's cybersecurity landscape without mentioning AI. The boom of ChatGPT and similar tools has presented new challenges for online privacy, scams, and disinformation, but it has certainly opened up new possibilities within the security software industry as well. At the same time, the race to bring encryption protections up to the post-quantum world has never been so fierce.

So, with these past events in mind, let's dive into our top 7 cybersecurity predictions to look out for in 2024.

As mentioned, in 2023, everyday people have increasingly turned to VPNs to access censored sites and apps, enjoy better online privacy, or simply boost their overall internet performance.

Short for virtual private network, a VPN is security software that encrypts internet connections and spoofs IP addresses. As a result, VPNs are an incredibly versatile tool, and they've never been more commercially accessible.

Experts expect this trend to consolidate during 2024, as censorship and privacy threats are on the rise. On this topic, Head of Product at Private Internet Access (PIA) Himmat Bains told me: "With the increase of online scams and governments becoming more and more interested in people's data and what they do online, I think now more than ever before VPNs are incredibly useful for customers to protect their most important asset: their own digital privacy."

We already mentioned how generative AI shook the security industry this year, presenting it with a series of new threats to internet privacy and security.

Hackers have been using ChatGPT to write more effective malware, for example. Data-scraping practices behind these Large Language Models (LLMs) have also been worrying privacy experts. Online disinformation, deepfakes, and online scams are becoming more sophisticated, too, all thanks to AI tools.

Again, experts foresee this worsening throughout 2024, especially considering that we still don't have proper AI regulations in place.

Andrew Whaley, the Senior Technical Director at Norwegian security firm Promon, said: "The emergence of advanced AI-assisted attacks, including deep fakes for social engineering and bypassing ID controls, can be anticipated. This raises the threat of AI being exploited for disinformation campaigns, with potentially major consequences for the upcoming US election."

As the name might suggest, passwordless authentication refers to the act of signing into a service without using a password. Instead, sign-in can be done with certificates, security tokens, one-time passwords (OTPs), or biometrics.
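One of those mechanisms can be made concrete: a time-based one-time password (the TOTP scheme standardized in RFC 6238) is just an HMAC of a shared secret and the current 30-second time window. A minimal sketch using only Python's standard library (the six-digit length and 30-second step are common defaults, not requirements):

```python
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP keyed on the current time window."""
    return hotp(secret, int(time.time()) // step, digits)
```

An authenticator app and the server both run this computation; a login succeeds when the codes match, so no long-lived password ever crosses the wire.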

With data breaches on the rise, the industry, including tech giant Microsoft, has increasingly been moving in this more secure direction over the past few years. Experts now expect a consolidation of the passwordless market in 2024.

Bassam Al-Khalidi, co-founder and co-CEO of passwordless solutions firm Axiad, said: "Next year, we'll start to see mergers between passwordless and credential management companies, which will create a new category in the authentication space: think passwordless plus. This movement will be similar to the consolidation we saw a few years back between identity management and access management companies, which resulted in the identity and access management (IAM) industry."

If, on the one hand, AI has brought huge problems for people's privacy and security online, on the other, these powerful tools have huge potential for doing good. That's why cybersecurity experts and software engineers will undoubtedly begin to harness their power more and more in 2024.

In August, NordVPN launched a new initiative aiming to do exactly this. NordLabs wants to provide a platform for engineers and developers to test new ideas and approaches to ever-changing online security and privacy issues. A month later, the team launched Sonar, an AI-enabled tool to fight back against increasingly sophisticated phishing attacks.

"New emerging technologies raise challenges for cybersecurity, privacy, and internet freedom, but at the same time, they bring new opportunities. NordLabs will allow us to have additional flexibility when it comes to the development of experimental tools and services," said Vykintas Maknickas, the head of product strategy at Nord Security.

We are sure the new year will bring even more of these innovative AI-powered solutions.

For policymakers, the year has been characterized by governments worldwide trying to regulate new technologies and the internet at large.

The long-awaited Online Safety Bill became law, despite heated debates. Similar proposed legislation, introducing stricter age verification rules and more power to check on people's communications in an attempt to protect children online, is also on the table elsewhere. So, we expect that we'll see new policies in 2024.

The race for a comprehensive AI Act has been fierce, too. The UK AI Summit ended with a world-first signed agreement among the UK, the US, China, and 25 more countries to develop safe and responsible AI software. The EU also managed to agree on the backbone of the future law, which is likely to become the go-to model for the West.

When it comes to data protection and privacy laws, the US took positive steps in Colorado and Virginia, finally enforcing privacy laws, but the ADPPA is still stalled at the time of writing. What's certain is that organizations will need to adapt their internal practices to keep up with an ever-changing environment.

Once again, internet shutdowns surged across the world in 2023. VPN provider Surfshark counted 42 instances affecting over 4 billion people in the first half of the year.

At the time, researchers pointed out that there was a 31% reduction in new internet restrictions compared to the same period in 2022. However, the decrease (from 61 in the first half of 2022 to 42 in the same period of 2023) primarily resulted from the drop in cases across Jammu and Kashmir (from 35 to only 2). Excluding this region, global restrictions actually increased by 54% compared to 2022, suggesting that digital freedoms across the world "may have worsened."
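The percentages follow directly from the Surfshark figures quoted above; a quick back-of-the-envelope check, using only numbers stated in the report:

```python
# Figures quoted above: new internet restrictions in the first half of each year.
h1_2022, h1_2023 = 61, 42
jk_2022, jk_2023 = 35, 2          # cases across Jammu and Kashmir

overall_change = (h1_2023 - h1_2022) / h1_2022
print(f"overall change: {overall_change:.0%}")            # about a 31% reduction

# Excluding Jammu and Kashmir, new restrictions actually rose.
rest_2022 = h1_2022 - jk_2022     # 26
rest_2023 = h1_2023 - jk_2023     # 40
excl_change = (rest_2023 - rest_2022) / rest_2022
print(f"excluding Jammu and Kashmir: {excl_change:+.0%}")  # about a 54% increase
```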

While it's not possible to say for sure, the data collected from 2015 onwards indicate that a spike in internet and social media shutdowns is, sadly, a very likely scenario we'll need to cope with next year.

Despite quantum computers being a few years away from becoming commonplace, the threat they pose to current encryption models is looming. That's because, in 2023, hackers began to perform attacks known as "harvest now, decrypt later."

It's in this context that providers have been racing to implement quantum-resistant cryptography in their services. The list so far includes the encrypted messaging app Signal, secure email provider Tuta (previously Tutanota), and some VPN services, including ExpressVPN and PureVPN.

Again, we expect this trend to consolidate throughout 2024.


Original post:
7 cybersecurity predictions to look out for in 2024 - TechRadar

Read More..

A smarter society, rise of the robots and security worries — Internet of Things predictions for 2024 – BetaNews

With ever increasing numbers of smart devices in our homes and workplaces, the Internet of Things has become an established facet of everyday life.

But like the rest of the technology industry the IoT isn't standing still. Here are some expert views on the opportunities and risks it's likely to present in 2024.

Eric Purcell, senior vice president of global partner sales at Cradlepoint, thinks the IoT will finally bring the 'smart' society to life -- from cities, to malls, to businesses. "From powering smart infrastructure to traffic management to smart parking, IoT devices throughout cities are actively creating seamless experiences and empowering the cities of the future. In 2024, we'll see an increase in industries that leverage IoT devices to bolster connectivity opportunities to increase efficiency, bolster productivity, and meet the need for consumer and customer experiences. As such, we'll begin to see the inklings of a smart society as IoT-enabled establishments from shopping malls to public transportation to modern businesses take flight."

Felix Zhang, founder and CEO of Pudu Robotics, thinks the IoT will be a key part of a new robotic era. "If 2023 is the year of Gen AI, 2024 will be the year of the robot. As autonomous technology becomes more advanced and the integration of Gen AI makes robots more intelligent, we can expect to see robots in even more applications and places than just restaurants, warehouses, and factories. We are only years away (and in some cases months) from seeing robots in stores that can greet shoppers with personalized recommendations, clean and traverse large venues like casinos and outdoor spaces like amphitheaters, carry medicine in hospitals, and even monitor the elderly in senior living facilities. As robots gain more IoT-related controls, we anticipate architecture will follow, enabling robots to use elevators, control lights and other smart home devices, and literally open new doors."

However, Kevin Kumpf, chief OT/ICS security strategist at Cyolo, thinks this could be a double-edged sword:

In the coming year, industrial sectors will experience rising threats to OT and ICS security due to the increasing number of Industrial Internet of Things (IIoT) devices. IIoT devices have historically enabled a wide range of advancements in smart factories, making them more efficient, safe and intelligent. For example, AI/ML-driven technologies can be used to automate factory lighting, monitor vital signs and performance metrics and enhance overall worker safety. AI-intelligent heavy machinery and recently deployed factory robot dogs can also assist in manufacturing processes and ensure the safety of workers in the field.

However, the accelerated integration of IIoT devices will also make organizations significantly more vulnerable to cyber threats. Smart factories generate lots of critical data, and this vast amount of information will become increasingly difficult to analyze and secure effectively, which can hinder its optimization and place organizations at risk of cyberattacks. This upcoming year and beyond, we'll see a growing demand for OT security experts, as there is currently a skills gap in this area which organizations will seek to fill, especially as vulnerable smart technologies continue to be integrated within these environments.

Yaniv Vardi, CEO at Claroty, thinks generative AI will help handle data from IoT devices. "Generative AI will enhance the resilience of cyber-physical systems against AI-armed threat actors. With the rapid increase of IoT devices, there's an abundance of data, and generative AI will help harness this data for better security and operational insights. It will automate workflows and add better visibility into the attack surface which will in turn empower CPS defenders to anticipate malicious attacks."

Mike Nelson, vice president of digital trust at DigiCert, says devices will become more tamper-resistant. "As the world grows increasingly mobile and dynamic, device security is becoming more important than ever. With individual identity now frequently tied to smartphones and other devices, the root of identity must be specialized per device and per individual -- all protected under the umbrella of trust. We predict that more and more devices will be secured with identity and operational checks to confirm authenticity, enabling individuals to interact with devices that support everyday activity with the confidence that the devices are tamper-resistant and their information is secure. Increased levels of IoT trust will also open up more opportunities for particularly sensitive use cases, such as electric vehicle chargers and medical devices."

Ellen Boehm, SVP, IoT strategy and operations at Keyfactor, thinks cryptography will be part of this. "Similar to how AI has accelerated marketing content, AI will help developers iterate faster on designs and innovate features that might not have been possible through standard methods. The challenge with using any AI engine always comes back to proving the origin, authenticity, and record of how code has changed over time. This is where the new security vulnerabilities could be introduced into IoT products, if AI-based code development leverages an unknown source."

Rajeev Gupta, co-founder and chief product officer at cyber insurance company Cowbell, says, "The increasing connectivity of devices due to the Internet of Things (IoT) will likely create new vulnerabilities, making cybersecurity measures even more critical. As a result, there may be a growing demand for insurance coverage related to IoT security breaches."

Tom Gorup, VP of security services at Edgio, thinks the IoT will drive more DDoS attacks:

DDoS attacks have been a thorn in the side of businesses for years, and it seems that they will not be letting up anytime soon. In fact, based on current trends and emerging technologies, DDoS attacks are on track to become even more frequent and larger in scale by the year 2024.

One of the reasons for this is the increasing availability of massive resources for cybercriminals to launch these attacks. Attackers are more often compromising web servers to run massive Layer 7 DDoS attacks, giving them more powerful compute capabilities to increase the intensity of their exploit attempts.

In addition, with the proliferation of Internet of Things (IoT) devices, more and more devices are becoming connected to the internet, which can be exploited by attackers to create massive IoT botnets for DDoS attacks. According to a recent report, the number of IoT devices is expected to reach 38.5 billion by 2025, providing cybercriminals with even more ammunition to launch DDoS attacks.

Seth Blank, CTO at Valimail, expects the IoT to come under attack as other channels become more secure. "With advancements in email security, particularly through stringent authentication requirements, there will be a shift in the threat landscape. As email becomes more secure and less susceptible to attacks, attackers will pivot to other, less secure communication channels, such as SMS, phone calls, and IoT communications. This shift will reflect the adaptive nature of cyber-criminals, who continually seek out the weakest points in the security infrastructure, and highlight the ongoing challenge of maintaining a comprehensive security posture that evolves in response to the changing tactics of cyber attackers."

Debbie Gordon, founder and CEO of Cloud Range, echoes this view. "There will be a continued expansion of attack surfaces driven by Internet of Things (IoT) devices and a lack of security standards. As more devices become connected to the internet, entry points for cyber threats will become more present. The absence of uniform security standards for these devices will create more vulnerabilities and pose a risk to personal security."

Shankar Somasundaram, CEO at Asimily, says, "Healthcare organizations increasingly depend on vast fleets of internet-connected devices for patient care and outcomes. However, these devices come with thousands of new reported security vulnerabilities each month: an unparalleled challenge that no cybersecurity budget could surmount. In 2024, I think we'll see more healthcare organizations approaching this cybersecurity challenge by adopting risk-first strategies, and utilizing IoT device visibility to prioritize the 5-10 percent of vulnerabilities that represent true immediate risk considering their use cases, network configurations, and common cyberattacker practices. For healthcare organizations with limited budgets, this approach will optimize resources, and results."

Image credit: Jirsak / Shutterstock

Original post:
A smarter society, rise of the robots and security worries -- Internet of Things predictions for 2024 - BetaNews

Read More..

Using Interpretable Machine Learning to Develop Trading Algorithms – DataDrivenInvestor


One problem with many powerful machine learning algorithms is their uninterpretable nature. Algorithms such as neural networks and their many varieties take numbers in and spit numbers out while their inner workings, especially for sufficiently large networks, are impossible to understand. Because of this, it's difficult to determine exactly what the algorithms have learned. This non-interpretability loses key information about the structure of the data, such as variable importance and variable interactions.

However, other machine learning (ML) algorithms don't suffer these drawbacks. For example, decision trees, linear regression, and generalized linear models provide interpretable models with still-powerful predictive capabilities (albeit typically less powerful than more complex models). This post will use a handful of technical indicators as input vectors for this type of ML algorithm to predict buy and sell signals determined by asset returns. The trained models will then be analyzed to determine the importance of the input variables, leading to an understanding of the trading decisions.
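As a rough sketch of that approach, the snippet below fits a shallow decision tree on a synthetic stand-in for the indicator dataset (scikit-learn assumed; the feature names, labels, and data are made up for illustration) and reads off the learned variable importances:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the indicator dataset: three features, of which only
# the first two actually drive the (made-up) buy/sell label.
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # 1 = buy, 0 = sell

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The interpretability payoff: a relative importance score per input variable.
for name, importance in zip(["sma_10", "ema_10", "noise"], tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

Because the tree is only a few levels deep, the splits themselves can also be printed with `sklearn.tree.export_text` to see the exact indicator thresholds behind each buy/sell decision.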

For demonstration, the indicators used as input to the ML models will be those readily available from FMP's data API; a list of these indicators is below. If replicating, other indicators can easily be added to the dataset and integrated into the model to allow more complex trading decisions.

An n-period simple moving average (SMA) is an arithmetic moving average calculated using the n most recent data points.

FMP Endpoint:

https://financialmodelingprep.com/api/v3/technical_indicator/5min/AAPL?type=sma&period=10

The exponential moving average (EMA) is similar to the SMA but smooths the raw data by applying higher weights to more recent data points:

EMA_t = V_t * (S / (1 + n)) + EMA_{t-1} * (1 - S / (1 + n))

where S is a smoothing factor, typically 2, and V_t is the value of the dataset at the current time.
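The two definitions above translate directly into a few lines of Python (a plain-list sketch; seeding the EMA with the first n-period SMA is a common convention, not the only one):

```python
def sma(values, n):
    """n-period simple moving average: plain mean of the n most recent points."""
    return [sum(values[i - n + 1:i + 1]) / n for i in range(n - 1, len(values))]

def ema(values, n, s=2):
    """n-period exponential moving average with smoothing factor s (typically 2)."""
    k = s / (1 + n)              # weight k = S / (1 + n) applied to the newest value
    out = [sum(values[:n]) / n]  # seed with the first n-period SMA
    for v in values[n:]:
        out.append(v * k + out[-1] * (1 - k))
    return out
```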

Continued here:
Using Interpretable Machine Learning to Develop Trading Algorithms - DataDrivenInvestor

Read More..

Using Machine Learning and AI in Oncology – Targeted Oncology

James Zou, PhD, assistant professor of biomedical data science at Stanford University, discusses machine learning and the different ways oncologists are utilizing it for the management, treatment, and diagnosis of cancer.

Machine learning is being applied in both early- and late-stage disease, and aids clinicians in providing the best treatment plans and options for their patients with cancer. In this video, Zou further discusses some of the specific methods the algorithm is trained to look at.

Transcription:

0:09 | Machine learning and artificial intelligence are seeing a lot of applications in oncology. For example, in diagnosis, clinicians are often working with different kinds of imaging data; it could be mammography images or CT scans. Machine learning and AI algorithms can be very helpful in helping clinicians analyze those kinds of images to identify or segment relevant regions.

0:39 | There are different stages where machine learning is being applied. They go all the way from early stages in diagnosis to later stages in terms of treatment planning and treatment recommendations. [On the] diagnosis side, we are seeing a lot of these computer vision algorithms, which are a type of AI or machine learning model trained to really understand and analyze different images. For example, there are now algorithms that look at histopathology images and slides, and then try to diagnose and predict patient outcomes based on those histology images.

1:18 | There are also algorithms trained to look at mammography images and try to detect tumors and lesions from these mammography images, as well as algorithms on other diagnosis and treatment-planning fronts. People also develop machine learning models that look at, for example, the mutation profiles of patients, from their somatic mutations, and then try to predict based on these mutation profiles whether immunotherapy or some other treatment is likely to be a good option for that particular patient.

Read the original:
Using Machine Learning and AI in Oncology - Targeted Oncology

Read More..

Machine learning methods to protect banks from risks of complex investment products – Tech Xplore

This article has been reviewed according to ScienceX's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

by KeAi Communications Co.

close

Artificial intelligence (AI) is frequently touted as a silver bullet to solve complex modeling problems. Among its many applications, it has been investigated as a tool to manage risks of complex investment products, so-called derivative contracts, in the investment banking area. Despite the multiple positive reports in this area, concerns have been raised about their practical applicability.

In a new study published in The Journal of Finance and Data Science, a team of researchers from Switzerland and the U.S. explored whether reinforcement learning (RL) agents can be trained to hedge derivative contracts.

"It should come as no surprise that if you train an AI on simulated market data, it will work well on markets that are reflective of the simulation, and the data consumption of many AI systems is outrageous," explains Loris Cannelli, first author of the study and a researcher at IDSIA in Switzerland.

To overcome the lack of training data, researchers tend to assume an accurate market simulator to train their AI agents. However, setting up such a simulator leads to a classical financial engineering problem: choosing a model to simulate from and calibrating it, which makes the AI-based approach much like the standard Monte Carlo methods in use for decades.

"Such an AI can also be hardly considered model-free: this would apply only if enough market data was available for training, and this is rarely the case in realistic derivative markets," says Cannelli.

The study, a collaboration between IDSIA and the investment bank UBS, was based on so-called Deep Contextual Bandits, which are well known in RL for their data efficiency and robustness. Motivated by the operational realities of real-world investment firms, the approach incorporates end-of-day reporting requirements and is characterized by significantly lower training-data requirements than conventional models, and by adaptability to changing markets.

"In practice, it's the availability of data and operational realities, such as requirements to report end-of-day risk figures, that are the main drivers that dictate the real work at the bank, instead of ideal agent training," clarifies senior author Oleg Szehr, who, prior to his appointment at IDSIA, was a staff member at several investment banks. "One of the strengths of the newly developed model is that it conceptually resembles business operations at an investment firm and thus is applicable from a practical perspective."

Although the new method is simple, rigorous assessment of model performance demonstrated that it outperforms benchmark systems in terms of efficiency, adaptability and accuracy under realistic conditions. "As is often the case in real life, less is more; the same applies to risk management too," concludes Cannelli.
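As a toy illustration of the bandit idea behind the study (not the authors' Deep Contextual Bandit model; the arms, rewards, and hedge ratios below are invented), an epsilon-greedy k-armed agent can learn which candidate hedge ratio gives the smallest average hedging error:

```python
import random

class EpsilonGreedyBandit:
    """Toy k-armed agent: exploit the best-looking arm, explore with prob. eps."""
    def __init__(self, k, eps=0.1, seed=0):
        self.k, self.eps = k, eps
        self.counts = [0] * k
        self.values = [0.0] * k
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.eps:
            return self.rng.randrange(self.k)          # explore
        return max(range(self.k), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running mean of the rewards observed for this arm
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Arms stand in for candidate hedge ratios 0.0, 0.25, ..., 1.0; the (invented)
# reward is the negative hedging error, which is smallest near a ratio of 0.6.
agent = EpsilonGreedyBandit(k=5)
for _ in range(2000):
    arm = agent.select()
    reward = -abs(arm / 4 - 0.6) + agent.rng.gauss(0, 0.1)
    agent.update(arm, reward)
print("preferred hedge ratio:", agent.values.index(max(agent.values)) / 4)
```

The study's agents additionally condition on market context and score arms with deep networks; this sketch keeps only the explore/exploit core.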

More information: Loris Cannelli et al, Hedging using reinforcement learning: Contextual k-armed bandit versus Q-learning, The Journal of Finance and Data Science (2023). DOI: 10.1016/j.jfds.2023.100101

Provided by KeAi Communications Co.

Read this article:
Machine learning methods to protect banks from risks of complex investment products - Tech Xplore

Read More..

Utilizing a novel high-resolution malaria dataset for climate-informed predictions with a deep learning transformer … – Nature.com


Original post:
Utilizing a novel high-resolution malaria dataset for climate-informed predictions with a deep learning transformer ... - Nature.com

Read More..

How AI, including ChatGPT, is Revolutionizing Healthcare in 2024 – TechiExpert.com

In 2023, AI tools such as ChatGPT became a big deal in healthcare. The World Health Organization believes AI can transform the field, and now, in 2024, Canadian experts are figuring out how big a part of healthcare it will become.

AI is making healthcare more personal. Roxana Sultan from the Vector Institute in Toronto says AI will soon look at a wide range of information about a patient, not just X-rays: notes from doctors, lab results, medications and genetics. This helps not only with finding out what is wrong but also with making a tailored plan for each person.

AI is also making clinical trials faster. These trials test new medicines. Sue Paish from DIGITAL says AI can look at billions of data points in a second, which means we can find out quickly whether new medicines are safe and work well.

But using AI in healthcare needs good data. If the data is poor, the answers from AI won't be good either. So it is important to use data from trusted sources.

AI is also helping people take care of their own health. Some wear AI-powered devices that monitor how they are doing, which is especially useful for people with heart issues. In remote places, AI is used to check wounds through cellphones: it sends the information to doctors, who can then help patients without ever meeting them.

But when we use AI in healthcare, we need to be careful. Dr. Theresa Tam says we need rules to keep patient privacy safe, which means using AI in the right way.

In 2024, healthcare will be more personal and better for patients. But we also need to be careful and use AI in a way that is fair as well as safe for everyone.

Read more here:
How AI, including ChatGPT, is Revolutionizing Healthcare in 2024 - TechiExpert.com

Read More..

Making The Most Out Of Machine Learning – World Cement

Ali Hasan R., ThroughPut Inc., outlines three ways cement manufacturers can use artificial intelligence and machine learning to boost output and efficiency.

Cement manufacturers today face many complex challenges. Amid technological disruption and an intensely competitive landscape, their world has become more regulated and less predictable. As part of an industry highly connected to the environment, cement manufacturers are under rising pressure to lower their carbon impact and meet increasingly complicated environmental regulations.

It is clear that manufacturers in this space need a solution: something to help them make progress on their goals and sift through the noise to find the most effective actions to take. Are tools such as artificial intelligence and machine learning the answer?

Looking at manufacturing in general, the evidence shows that AI-driven tools can help solve problems, cut both costs and waste, and squeeze out efficiencies in difficult times. According to a 2022 report by McKinsey & Company, 42% of organisations that adopted AI in 2021 decreased their manufacturing costs, and 61% saw an increase in revenues.

What about cement manufacturers specifically? AI and Machine Learning (ML) technology can help them get through the specific challenges the industry is facing.

Consider supply chain disruptions. The cement industry involves the large-scale transportation of materials; as such, delays and disruptions (like those seen since the pandemic and its associated crises began) can wreak havoc on production schedules. ML and AI bring higher levels of data capture and automation to the process, making supply chains more transparent and adaptable. Manufacturers can see ahead, spot problems in the pipeline, and reduce costly downtime.
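Spotting problems in the pipeline often comes down to simple statistical monitoring of captured data. As a hedged sketch of the idea (the lead-time figures and threshold below are invented for illustration, not from any vendor's product), a shipment whose lead time sits far outside the historical distribution can be flagged before it derails the production schedule:

```python
import statistics

# Hypothetical example: flag a supply chain disruption when a shipment's
# lead time is an outlier relative to historical lead times (in days).
history = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7, 5.0, 5.3]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_disruption(lead_time_days, threshold=3.0):
    """Flag lead times more than `threshold` standard deviations above normal."""
    z = (lead_time_days - mean) / stdev
    return z > threshold

print(is_disruption(5.2))   # typical shipment -> False
print(is_disruption(9.0))   # far outside the norm -> True
```

In practice the same logic runs continuously over live logistics feeds, giving planners early warning rather than a post-mortem.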

Improving supply chains is just one example of how AI and ML tools can streamline production; environmental regulations are another. The cement industry is a major producer of carbon dioxide emissions, but new technologies can show cement manufacturers where they can perform carbon capture and where, precisely, they can reduce their carbon footprint and comply with regulations.

However, manufacturers might hesitate as they approach adopting complex technologies. The perceived costs of implementing new AI and ML technologies, at a time when the costs of manufacturing are already perilously high, can turn companies away from technology that could help them. Risk aversion has historically made sense in such a capital-intensive industry, and using new tools at scale could require significant investments in equipment, infrastructure, and training before their benefits appear.

The key for cement manufacturers will be to do their due diligence and only select technologies that make the most sense for their operations.


Read the original here:
Making The Most Out Of Machine Learning - World Cement

Read More..