
AI’s Illusion of Rapid Progress – Walter Bradley Center for Natural and Artificial Intelligence

The media loves to report on everything Elon Musk says, particularly when it is one of his very optimistic forecasts. Two weeks ago he said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years."

In 2019, he predicted there would be a million robo-taxis by 2020, and in 2016, he said about Mars, "If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025."

On the other hand, the media places less emphasis on negative news, such as announcements that Amazon would abandon its cashier-less technology, called "Just Walk Out," because it wasn't working properly. Introduced three years ago, the tech purportedly enabled shoppers to pick up meat, dairy, fruit and vegetables and walk straight out without queueing, as if by magic. That magic, which Amazon dubbed "Just Walk Out" technology, was said to be autonomously powered by AI.

Unfortunately, it wasn't. Instead, the checkout-free magic was happening in part due to a network of cameras that were overseen by over 1,000 people in India who would verify what people took off the shelves. Their tasks included "manually reviewing transactions and labeling images from videos."

Why is this announcement more important than Musk's prediction? Because so many of the predictions by tech bros such as Elon Musk are based on the illusion that many AI systems are already working properly, when they are still only 95% there, with the remaining 5% dependent on workers in the background. The obvious example is self-driving vehicles, which are always a few years away, even as many vehicles are controlled by remote workers.

But self-driving vehicles and cashier-less technology are just the tip of the iceberg. A Gizmodo article listed about 10 examples of AI technology that seemed like it was working, but just wasn't.

A company named Presto Voice sold its drive-thru automation services, purportedly powered by AI, to Carl's Jr., Chili's, and Del Taco, but in reality, Filipino offsite workers are required to help with over 70% of Presto's orders.

Facebook released a virtual assistant named M in 2015 that purportedly enabled AI to book your movie tickets, tell you the weather, or even order you food from a local restaurant. But it was mostly human operators who were doing the work.

There was an impressive Gemini demo in December 2023 that showed how Gemini's AI could allegedly interpret video, image, and audio inputs in real time. That video turned out to be sped up and edited, with humans feeding Gemini long text and image prompts to produce its answers. Today's Gemini can barely even respond to controversial questions, let alone do the backflips it performed in that demo.

Amazon has long offered a crowdsourcing service called Mechanical Turk, and one notable customer was Expensify. In 2017, Expensify claimed that you could take a picture of a receipt and the app would automatically verify that it was an expense compliant with your employer's rules and file it in the appropriate location. In reality, a team of "secure technicians," who were often Amazon Mechanical Turk workers, filed the expense on your behalf.

Twitter offered a virtual assistant in 2016 that had access to your calendar and could correspond with you over email. In reality, humans posing as AI responded to emails, scheduled meetings on calendars, and even ordered food for people.

Google claims that AI is scanning your Gmail inbox for information to personalize ads, but in reality, humans are doing the work, and are seeing your private information.

In the last three cases, real humans were viewing private information such as credit card numbers, full names, addresses, food orders, and more.

Then there are the hallucinations that keep cropping up in the output from large language models. Many experts claim that the lowest hallucination rates among tracked AI models are around 3 to 5%, and that they aren't fixable because they stem from the LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts.

Every time you hear one of the tech bros talking about the future, keep in mind that they think large language models and self-driving vehicles already work almost perfectly. They have already filed away those cases as successfully done, and they are thinking about what's next.

For instance, Garry Tan, the president and CEO of startup accelerator Y Combinator, claimed that Amazon's cashier-less technology was "ruined by a professional managerial class that decided to use fake AI":

"Honestly it makes me sad to see a Big Tech firm ruined by a professional managerial class that decided to use fake AI, deliver a terrible product, and poison an entire market (autonomous checkout) when an earnest Computer Vision-driven approach could have reached profitability."

The president of Y Combinator should have known that humans were needed to make Amazon's technology work, as they are for many other AI systems. Y Combinator is one of America's most respected venture capital firms: it has funded around 4,000 startups, and Sam Altman, currently CEO of OpenAI, was its president between 2014 and 2019. For Garry Tan to claim that Amazon could have succeeded if it had used real tech, after many other companies have failed doing the same thing, suggests he is either misinformed or lying.

So the next time you hear that AGI is imminent or that jobs will soon be gone, remember that most of these optimistic predictions assume that Amazon's cashier-less technology, self-driving vehicles, and many other systems already work, when they are only 95 percent there, and the last five percent is the hardest.

In reality, those systems won't be done for years, because the last few percentage points of work usually take as long as the first 95%. So what the media should be asking the tech bros is how long it will take before those systems go from 95% done autonomously to 99.99% or higher. Similarly, what companies should be asking the consultants is when that 95% will become 99.99%, because the rapid progress is an illusion.
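
To see why the gap between 95% and 99.99% matters so much in practice, here is a back-of-the-envelope calculation. The numbers are purely illustrative (they are not Amazon's figures): if each item in a 20-item checkout is recognized correctly with some per-item accuracy, the chance that the whole transaction needs no human correction falls off quickly.

# Back-of-the-envelope: why "95% there" is not good enough in production.
# The per-item accuracies and the 20-item basket are made-up, illustrative numbers.
for per_item_accuracy in (0.95, 0.99, 0.9999):
    error_free_basket = per_item_accuracy ** 20
    print(f"{per_item_accuracy:.2%} per item -> "
          f"{error_free_basket:.1%} of 20-item baskets fully correct")

At 95% per item, only about a third of baskets come out fully correct, which is roughly where the background workers have to step in.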

Too many people are extrapolating from systems that are purportedly automated, even though they aren't yet working properly. Any extrapolation should therefore attempt to understand when those systems will become fully automated, not just when new forms of automated systems will begin to be used. Understanding what's going on in the background is important for understanding what the future will be in the foreground.

Read the original here:

AI's Illusion of Rapid Progress - Walter Bradley Center for Natural and Artificial Intelligence


Will AI help or hinder trust in science? – CSIRO

By Jon Whittle | 23 April 2024 | 6 min read

In the past year, generative artificial intelligence tools such as ChatGPT, Gemini, and OpenAI's video generation tool Sora have captured the public's imagination.

All that is needed to start experimenting with AI is an internet connection and a web browser. You can interact with AI like you would with a human assistant: by talking to it, writing to it, showing it images or videos, or all of the above.

While this capability marks entirely new terrain for the general public, scientists have used AI as a tool for many years.

But with greater public knowledge of AI will come greater public scrutiny of how it's being used by scientists.

AI is already revolutionising science: six percent of all scientific work leverages AI, not just in computer science, but in chemistry, physics, psychology and environmental science.

Nature, one of the world's most prestigious scientific journals, included ChatGPT on its 2023 Nature's 10 list of the world's most influential and, until then, exclusively human scientists.

The use of AI in science is twofold.

At one level, AI can make scientists more productive.

When Google DeepMind released an AI-generated dataset of more than 380,000 novel material compounds, Lawrence Berkeley Lab used AI to run compound synthesis experiments at a scale orders of magnitude larger than what could be accomplished by humans.

But AI has even greater potential: to enable scientists to make discoveries that otherwise would not be possible at all.

It was an AI algorithm that for the first time found signal patterns in brain-activity data that pointed to the onset of epileptic seizures, a feat that not even the most experienced human neurologist can repeat.

Early success stories of the use of AI in science have led some to imagine a future in which scientists will collaborate with AI scientific assistants as part of their daily work.

That future is already here. CSIRO researchers are experimenting with AI science agents and have developed robots that can follow spoken language instructions to carry out scientific tasks during fieldwork.

While modern AI systems are impressively powerful, especially so-called artificial general intelligence tools such as ChatGPT and Gemini, they also have drawbacks.

Generative AI systems are susceptible to hallucinations, where they make up facts.

Or they can be biased. Google's Gemini depicting America's Founding Fathers as a diverse group is an interesting case of over-correcting for bias.

There is a very real danger of AI fabricating results, and this has already happened. It's relatively easy to get a generative AI tool to cite publications that don't exist.

Furthermore, many AI systems cannot explain why they produce the output they produce.

This is not always a problem. If AI generates a new hypothesis that is then tested by the usual scientific methods, there is no harm done.

However, for some applications a lack of explanation can be a problem.

Replication of results is a basic tenet in science, but if the steps that AI took to reach a conclusion remain opaque, replication and validation become difficult, if not impossible.

And that could harm peoples trust in the science produced.

A distinction should be made here between general and narrow AI.

Narrow AI is AI trained to carry out a specific task.

Narrow AI has already made great strides. Google DeepMind's AlphaFold model has revolutionised how scientists predict protein structures.

But there are many other, less well publicised successes too, such as AI being used at CSIRO to discover new galaxies in the night sky, IBM Research developing AI that rediscovered Kepler's third law of planetary motion, or Samsung AI building AI that was able to reproduce Nobel prize-winning scientific breakthroughs.

When it comes to narrow AI applied to science, trust remains high.

AI systems, especially those based on machine learning methods, rarely achieve 100 percent accuracy on a given task. (In fact, machine learning systems outperform humans on some tasks, and humans outperform AI systems on many tasks. Humans using AI systems generally outperform humans working alone, and they also outperform AI working alone. There is a large scientific evidence base for this fact, including this study.)

AI working alongside an expert scientist, who confirms and interprets the results, is a perfectly legitimate way of working, and is widely seen as yielding better performance than human scientists or AI systems working alone.

On the other hand, general AI systems are trained to carry out a wide range of tasks, not specific to any domain or use case.

ChatGPT, for example, can create a Shakespearian sonnet, suggest a recipe for dinner, summarise a body of academic literature, or generate a scientific hypothesis.

When it comes to general AI, the problems of hallucinations and bias are most acute and widespread. That doesn't mean general AI isn't useful for scientists, but it needs to be used with care.

This means scientists must understand and assess the risks of using AI in a specific scenario and weigh them against the risks of not doing so.

Scientists are now routinely using general AI systems to help write papers, assist reviews of academic literature, and even prepare experimental plans.

One danger when it comes to these scientific assistants could arise if the human scientist takes the outputs for granted.

Well-trained, diligent scientists will not do this, of course. But many scientists out there are just trying to survive in a tough publish-or-perish industry. Scientific fraud is already increasing, even without AI.

AI could lead to new levels of scientific misconduct, either through deliberate misuse of the technology or through sheer ignorance, as scientists don't realise that AI is making things up.

Both narrow and general AI have great potential to advance scientific discovery.

A typical scientific workflow conceptually consists of three phases: understanding what problem to focus on, carrying out experiments related to that problem and exploiting the results as impact in the real world.

AI can help in all three of these phases.

There is a big caveat, however. Current AI tools are not suitable to be used naively out-of-the-box for serious scientific work.

Only if researchers responsibly design, build, and use the next generation of AI tools in support of the scientific method will the public's trust in both AI and science be gained and maintained.

Getting this right is worth it: the possibilities of using AI to transform science are endless.

Google DeepMind's iconic founder Demis Hassabis famously said: "Building ever more capable and general AI, safely and responsibly, demands that we solve some of the hardest scientific and engineering challenges of our time."

The reverse conclusion is true as well: solving the hardest scientific challenges of our time demands building ever more capable, safe and responsible general AI.

Australian scientists are working on it.

This article was originally published by 360info under a Creative Commons license. Read the original article.

Professor Jon Whittle is Director of CSIRO's Data61, Australia's national centre for R&D in data science and digital technologies. He is co-author of the book Responsible AI: Best Practices for Creating Trustworthy AI Systems.

Dr Stefan Harrer is Program Director of AI for Science at CSIRO's Data61, leading a global innovation, research and commercialisation programme aiming to accelerate scientific discovery through the use of AI. He is the author of the Lancet article "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine."

Stefan Harrer is an inventor on several granted US and international patents that relate to using AI for science.

See the original post:

Will AI help or hinder trust in science? - CSIRO


Can Oil, Gas Companies Use Generative AI to Help Hire People? – Rigzone News

Artificial Intelligence (AI) will definitely help oil and gas companies hire people.

That's what Louisiana-based OneSource Professional Search believes, according to Dave Mount, the company's president.

"Our search firm is already implementing AI to augment our traditional recruiting/headhunting practices to more efficiently source a higher number of candidates, along with managing the extra activity related to sourcing and qualifying a larger amount of candidates/talent pool," Mount revealed to Rigzone.

"We're integrating AI as we speak and it's definitely helping in covering more ground and allowing us to access a larger talent pool, although it's a learning process to help the quality of the sourcing/screening match the increased quantity of qualified candidates," he added.

Gladney Darroh, an energy search specialist with 47 years of experience who developed and coaches the interview methodology Winning the Offer, which earned him the ranking of #1 technical and professional recruiter in Houston for 17 consecutive years by HAAPC, told Rigzone that oil and gas companies will use generative AI to help hire people, and so will everyone else.

"Generative AI is a historic leap in technology, and oil and gas companies have used technology for years to hire people," the Founding Partner and President of the Houston, Texas-based Piper-Morgan Associates Personnel Consultants said.

"It is typically a time-intensive exercise to develop an initial pool of qualified candidates, determine which will consider a job change, which will consider a job change for this opportunity, who is really gettable, who meets the expectations of the hiring company in terms of what he/she brings to the table now, and if she/he possesses the talent to become a long-term asset," Darroh added.

"Deep learning models can be trained on keyword content searches for anything and everything: education, training, skillset, general and specific experience, all quantitative data," Darroh continued.

"Once AI is trained this way and applied to searches, AI will generate in seconds what an in-house or outside recruiter might generate over days or weeks," he went on to state.

Darroh also noted that AI is developing inference, the ability to draw conclusions from data, which provides the qualitative insight that helps determine a candidate's long-term potential for promotion and leadership roles.

"For companies who are racing against their competitors to identify and hire the right talent, whether an entry-level or an experienced hire, they will all adopt AI to help hire people," Darroh concluded.
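
To make the keyword-driven screening Darroh describes concrete, here is a minimal sketch of ranking resumes by how many of a job's keywords they contain. It is a hypothetical, simplified example written for illustration, not any recruiter's or vendor's actual system; the keyword list, resumes, and scoring rule are all assumptions.

import re

# Hypothetical keyword profile for an oil and gas role (illustrative only).
JOB_KEYWORDS = {
    "petroleum", "drilling", "reservoir", "completions",
    "offshore", "hse", "project management",
}

def tokenize(text: str) -> set:
    """Lowercase the text and collect single words plus two-word phrases."""
    words = re.findall(r"[a-z]+", text.lower())
    bigrams = {" ".join(pair) for pair in zip(words, words[1:])}
    return set(words) | bigrams

def score_resume(resume_text: str) -> float:
    """Fraction of the job's keywords that appear in the resume."""
    return len(JOB_KEYWORDS & tokenize(resume_text)) / len(JOB_KEYWORDS)

# Rank a small, made-up candidate pool by keyword coverage.
resumes = {
    "candidate_a": "Reservoir engineer with offshore drilling and HSE experience.",
    "candidate_b": "Software developer with project management experience.",
}
for name in sorted(resumes, key=lambda n: score_resume(resumes[n]), reverse=True):
    print(name, round(score_resume(resumes[name]), 2))

A real system would go far beyond keyword overlap (embeddings, structured work-history parsing, and the inference Darroh mentions), but the basic shape, turning resume text into features and scoring candidates against a role profile, is the same.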

Earlier this year, Enverus Chief Innovation Officer, Colin Westmoreland, revealed to Rigzone that the company believes generative AI will shape oil and gas decision making in 2024 and into the future.

"Generative AI will reduce the time to value significantly by providing rapid analysis and insights, leveraging vast amounts of curated data," he said.

Westmoreland also told Rigzone that generative AI is expected to become commonplace among oil and gas companies over the next few years.

Back in January, Trygve Randen, the Senior Vice President of Digital Products and Solutions at SLB, outlined to Rigzone that generative AI will continue to gain traction in the oil and gas industry this year.

In an article published on its website in January 2023, which was updated in April 2024, McKinsey & Company noted that generative AI describes algorithms, such as ChatGPT, that can be used to create new content, including audio, code, images, text, simulations, and videos.

OpenAI, which describes itself as an A.I. research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT on November 30, 2022.

In April 2023, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.

To contact the author, email andreas.exarheas@rigzone.com

Read the original here:

Can Oil, Gas Companies Use Generative AI to Help Hire People? - Rigzone News


Beyond AI doomerism: Navigating hype vs. reality in AI risk – TechTarget

With attention-grabbing headlines about the possible end of the world at the hands of an artificial superintelligence, it's easy to get caught up in the AI doomerism hype and imagine a future where AI systems wreak havoc on humankind.

Discourse surrounding any unprecedented moment in history -- the rapid growth of AI included -- is inevitably complex, characterized by competing beliefs and ideologies. Over the past year and a half, concerns have bubbled up regarding both the short- and long-term risks of AI, sparking debate over which issues should be prioritized.

Although considering the risks AI poses and the technology's future trajectory is worthwhile, discussions of AI can also veer into sensationalism. This hype-driven engagement detracts from productive conversation about how to develop and maintain AI responsibly -- because, like it or not, AI seems to be here to stay.

"We're all pursuing the same thing, which is that we want AI to be used for good and we want it to benefit people," said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University.

As AI has gained prominence, so has the conversation surrounding its risks. Concerns range from immediate ethical and societal harms to long-term, more hypothetical risks, including whether AI could pose an existential threat to humanity. Those focused on the latter, a field known as AI safety, see AI as both an avenue for innovation and a source of possibly devastating risks.

Spencer Kaplan, an anthropologist and doctoral candidate at Yale University, studies the AI community and its discourse around AI development and risk. During his time in the San Francisco Bay Area AI safety scene, he's found that many experts are both excited and worried about the possibilities of AI.

"One of the key points of agreement is that generative AI is both a source of incredible promise and incredible peril," Kaplan said.

One major long-term concern about AI is existential risk, often abbreviated as x-risk, the fear that AI could someday cause the mass destruction of humans. An AI system with unprecedented and superhuman levels of intelligence, often referred to as artificial general intelligence (AGI), is considered a prerequisite for this type of destruction. Some AI safety researchers postulate that AGI with intelligence indistinguishable from or superior to that of humans would have the power to wipe out humankind. Opinions in the AI safety scene on the likelihood of such a hostile takeover event vary widely; some consider it highly probable, while others only acknowledge it as a possibility, Kaplan said.

Some circles hold a prevailing belief that long-term risks are the most concerning, regardless of their likelihood -- a concept influenced by tenets of effective altruism (EA), a philosophical and social movement that first gained prominence in Oxford, U.K., and the Bay Area in the late 2000s. Effective altruists' stated aim is to identify the most impactful, cost-effective ways to help others using quantifiable evidence and reasoning.

In the context of AI, advocates of EA and AI safety have coalesced around a shared emphasis on high-impact global issues. In particular, both groups are influenced by longtermism, the belief that focusing on the long-term future is an ethical priority and, consequently, that potential existential risks are most deserving of attention. The prevalence of this perspective, in turn, has meant prioritizing research and strategies that aim to mitigate existential risk from AI.

Fears about extinction-level risk from AI might seem widespread; a group of industry leaders publicly said as much in 2023. A few years prior, in 2021, a subgroup of OpenAI developers split off to form their own safety-focused AI lab, Anthropic, motivated by a belief in the long-term risks of AI and AGI. More recently, Geoffrey Hinton, sometimes referred to as the godfather of AI, left Google, citing fears about the power of AI.

"There is a lot of sincere belief in this," said Jesse McCrosky, a data scientist and principal researcher for open source research and investigations at Mozilla. "There's a lot of true believers among this community."

As conversation around the long-term risks of AI intensifies, the term AI doomerism has emerged to refer to a particularly extreme subset of those concerned about existential risk and AGI -- often dismissively, sometimes as a self-descriptor. Among the most outspoken is Eliezer Yudkowsky, who has publicly expressed his belief in the likelihood of AGI and the downfall of humanity due to a hostile superhuman intelligence.

However, the term is more often used as a pejorative than as a self-label. "I have never heard of anyone in AI safety or in AI safety with longtermist concerns call themselves a doomer," Kaplan said.

Although those in AI safety typically see the most pressing AI problems as future risks, others -- often called AI ethicists -- say the most pressing problems of AI are happening right now.

"Typically, AI ethics is more social justice-oriented and looking at the impact on already marginalized communities, whereas AI safety is more the science fiction scenarios and concerns," McCrosky said.

For years, individuals have raised serious concerns about the immediate implications of AI technology. AI tools and systems have already been linked to racial bias, political manipulation and harmful deepfakes, among other notable problems. Given AI's wide range of applications -- in hiring, facial recognition and policing, to name just a few -- its magnification of biases and opportunity for misuse can have disastrous effects.

"There's already unsafe AI right now," said Chirag Shah, professor in the Information School at the University of Washington and founding co-director of the center for Responsibility in AI Systems and Experiences. "There are some actual important issues to address right now, including issues of bias, fairness, transparency and accountability."

As Emily Bender, a computational linguist and professor at the University of Washington, has argued, conversations that overlook these types of AI risks are both dangerous and privileged, as they fail to account for AI's existing disproportionate effect on marginalized communities. Focusing solely on hypothetical future risk means missing the important issues of the present.

"[AI doomerism] can be a distraction from the harms that we already see," McCrosky said. "It puts a different framing on the risk and maybe makes it easier to sweep other things under the carpet."

Rumman Chowdhury, co-founder of the nonprofit Humane Intelligence, has long focused on tech transparency and ethics, including in AI systems. In a 2023 Rolling Stone article, she commented that the demographics of doomer and x-risk communities skew white, male and wealthy -- and thus tend not to include victims of structural inequality.

"For these individuals, they think that the biggest problems in the world are can AI set off a nuclear weapon?" Chowdhury told Rolling Stone.

McCrosky recently conducted a study on racial bias in multimodal LLMs. When he asked the model to determine whether a person was trustworthy based solely on facial images, he found that racial bias often influenced its decision-making process. Such biases are deeply concerning and have serious implications, especially when considered in the context of AI applications, such as military and defense.

"We've already seen significant harm from AI," McCrosky said. "These are real harms that we should be caring a whole lot more about."

In addition to fearing that discussions of existential risk overshadow current AI-related harms, many researchers also question the scientific foundation for concerns about superintelligence. If there's little basis for the idea that AGI could develop in the first place, they worry about the effect such sensational language could have.

"We jump to [the idea of] AI coming to destroy us, but we're not thinking enough about how that happens," Shah said.

McCrosky shared this skepticism regarding the existential threat from AI. The plateau currently reached by generative AI isn't indicative of the AGI that longtermists worry about, he said, and the path towards AGI remains unclear.

Transformers, the models underlying today's generative AI, were a revolutionary concept when Google published the seminal paper "Attention Is All You Need" in 2017. Since then, AI labs have used transformer-based architectures to build the LLMs that power generative AI tools, like OpenAI's chatbot, ChatGPT.

Over time, LLMs have become capable of handling increasingly large context windows, meaning that the AI system can process greater amounts of input at once. But larger context windows come with higher computational costs, and technical issues, like hallucinations, have remained a problem even for highly powerful models. Consequently, scientists are now contending with the possibility that advancing to the next frontier in AI may require a completely new architecture.
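
The computational cost mentioned above follows from how self-attention works: in a standard transformer layer, every token is compared with every other token, so the work grows roughly quadratically with the length of the context window. The snippet below is only a back-of-the-envelope illustration of that scaling, not a measurement of any particular model.

# Rough illustration: quadratic growth of attention comparisons with context length.
def attention_comparisons(context_length: int) -> int:
    """Approximate token-to-token comparisons in one standard attention layer."""
    return context_length * context_length

for tokens in (1_000, 8_000, 128_000):
    ratio = attention_comparisons(tokens) / attention_comparisons(1_000)
    print(f"{tokens:>7}-token context -> about {ratio:,.0f}x the work of a 1,000-token context")

This scaling pressure is one reason researchers are looking at alternatives to the standard transformer architecture rather than simply stretching context windows further.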

"[Researchers] are kind of hitting a wall when it comes to transformer-based architecture," Kaplan said. "What happens if they don't find this new architecture? Then, suddenly, AGI becomes further and further off -- and then what does that do to AI safety?"

Given the uncertainty around whether AGI can be developed in the first place, it's worth asking who stands to benefit from AI doomerism talk. When AI developers advocate for investing more time, money and attention into AI due to possible AGI risks, a self-interested motive may also be at play.

"The narrative comes largely from people that are building these systems and are very excited about these systems," McCrosky said. While he noted that AI safety concerns are typically genuine, he also pointed out that such rhetoric "becomes very self-serving, in that we should put all our philanthropic resources towards making sure we do AI right, which is obviously the thing that they want to do anyway."

Despite the range of beliefs and motivations, one thing is evident: The dangers associated with AI feel incredibly tangible to those who are concerned about them.

A future with extensive integration of AI technologies is increasingly easy to imagine, and it's understandable why some genuinely believe these developments could lead to serious dangers. Moreover, people are already affected by AI every day in unintended ways, from harmless but frustrating outcomes to dangerous and disenfranchising ones.

To foster productive conversation amid this complexity, experts are emphasizing the importance of education and engagement. When public awareness of AI outpaces understanding, a knowledge gap can emerge, said Reggie Townsend, vice president of data ethics at SAS and member of the National AI Advisory Committee.

"Unfortunately, all too often, people fill the gap between awareness and understanding with fear," Townsend said.

One strategy for filling that gap is education, which Shah sees as the best way to build a solid foundation for those entering the AI risk conversation. "The solution really is education," he said. "People need to really understand and learn about this and then make decisions and join the real discourse, as opposed to hype or fear." That way, sensational discourse, like AI doomerism, doesn't eclipse other AI concerns and capabilities.

Technologists have a responsibility to ensure that overall societal understanding of AI improves, Townsend said. Hopefully, better AI literacy results in more responsible discourse and engagement with AI.

Townsend emphasized the importance of meeting people where they are. "Oftentimes, this conversation gets way too far ahead of where people actually are in terms of their willingness to accept and their ability to understand," he said.

Lastly, polarization impedes progress. Those focused on current concerns and those worried about long-term risk are more connected than they might realize, Green said. Seeing these perspectives as contradictory or in a zero-sum way is counterproductive.

"Both of their projects are looking at really important social impacts of technology," he said. "All that time spent infighting is time that could be spent actually solving the problems that they want to solve."

In the wake of recent and rapid AI advancements, harms are being addressed on multiple fronts. Various groups and individuals are working to train AI more ethically, pushing for better governance to prevent misuse and considering the impact of intelligent systems on people's livelihoods, among other endeavors. Seeing these efforts as inherently contradictory -- or rejecting others' concerns out of hand -- runs counter to a shared goal that everyone can hopefully agree on: If we're going to build and use powerful AI, we need to get it right.

Olivia Wisbey is associate site editor for TechTarget Enterprise AI. She graduated from Colgate University with Bachelor of Arts degrees in English literature and political science, where she served as a peer writing consultant at the university's Writing and Speaking Center.

Lev Craig contributed reporting and research to this story.

See the original post:

Beyond AI doomerism: Navigating hype vs. reality in AI risk - TechTarget


Regulation must be put in place today for the superhuman AI of tomorrow – Verdict

Companies such as OpenAI and Meta have publicly committed to achieving AGI and are working towards this goal. Credit: TY Lim / Shutterstock.

With its potential, artificial intelligence (AI) is perhaps the most hotly discussed technology theme in the world. Since the launch of ChatGPT in November 2022, a day has not gone by in which AI did not capture the headlines. And it is only the beginning. GlobalData forecasts that the global AI market will grow drastically at a compound annual growth rate (CAGR) of 39% between 2023 and 2030.

According to GlobalData, we are only in the very early stages of AI. But even now, AI can do a lot. It can engage in high-quality conversations and some people have even reportedly got married to AI bots. Such incredible capabilities at this early stage suggest how advanced the technology will get. Scarily, at one point, AI could go on to become more intelligent than the most gifted minds in the world. Researchers call this stage of development artificial superintelligence (ASI).

So far, many influential businesspeople and experts have made guesses and expressed their opinions on ASI. In April 2024, Elon Musk argued that AI smarter than humans will be here as soon as the end of next year. This is a drastic change from his previous forecast in which he predicted that ASI would exist by 2029.

However, according to GlobalData, this is unlikely. GlobalData notes that researchers theorise that we first must achieve artificial general intelligence (AGI) before reaching ASI. At this stage, machines will have consciousness and be able to do anything people can do.

Although companies such as OpenAI and Meta have publicly committed to achieving AGI and are working towards this goal, it looks like it will be years before we see human-like AI machines around us that can do and think exactly as humans do. As a result, GlobalData expects that AGI will not be achieved for at least another 35 years. Considered the holy grail of AI, AGI remains completely theoretical for now, despite the hype.

And considering that ASI is the step after AGI, it is also likely decades away.


This level of advancement brings to mind science fiction movies and literature, in which AI takes over the world. Notably, Elon Musk has commented on this possibility before, as he argued there is a slim but not zero chance that AI will kill all of humanity.

In September 2023, headlines announced that tech executives like Bill Gates, Elon Musk, and Mark Zuckerberg met with lawmakers to discuss the dangers of uncontrolled AI and superintelligence behind closed doors. Evidently, not everyone is excited about ASI.

Even today's "good enough" AI, with its limited capabilities, is concerning tech leaders and world governments. AI-enhanced issues such as misinformation have already caused considerable trouble. Noticing the current and future threats of AI, governments, key influencers, and organizations have taken action. For instance, in March 2024, the UN General Assembly adopted the first-ever UN resolution on AI to ensure that the technology is used safely and reliably.

In the end, it looks like it may still be some time before ASI exists. Nevertheless, although ASI has the potential to revolutionize how humans and machines interact, steps must be taken today to minimize any potential threats. Perhaps, to ensure ASI remains safe, we must turn to fiction. Maybe the world needs a set of rules like Isaac Asimov's Three Laws of Robotics, which were followed by robots in many of his stories and prevented the machines from harming humans.

Read the rest here:

Regulation must be put in place today for the superhuman AI of tomorrow - Verdict


Pico Launches Machine Learning and AI Capabilities in Corvil Analytics 10.0 Software Release – Yahoo Finance


NEW YORK, April 23, 2024 (GLOBE NEWSWIRE) -- Pico, a leading global provider of mission-critical technology services, software, data and analytics for the financial markets community, today announced the general availability of its Corvil Analytics 10.0 software release. This latest version leverages groundbreaking internal research in machine learning (ML) and artificial intelligence (AI) to enable proactive notification and natural language descriptions, correlating performance-impacting events, unusual events and extreme events that influence trading outcomes and infrastructure performance.

Corvil Analytics is widely deployed across the financial services community, delivering crucial performance and actionable business insights for trading infrastructure and IT operations and teams focused on trade reconciliation and regulatory compliance. Additionally, data scientists and quantitative analysts are equipped with advanced tools for in-depth data analysis and operational support. A single Corvil Analytics appliance deployed in these environments scales to provide up to 7.5 million data points every day. Corvil Analytics 10.0 applies our research into ML / AI techniques relevant to the data patterns, volume of analytics and the performance challenges in the financial services sector. The resulting innovation is a new capability in 10.0 that automatically identifies, correlates and narrates the underlying cause of the most significant business-impacting events.

"The Corvil Analytics 10.0 release represents a significant milestone in our continuous research and innovation on the platform. Years of research in ML/AI techniques by our data science team has delivered the capability to automatically detect business-impacting events in trading infrastructure and corporate IT infrastructure," said Ken Jinks, Managing Director, Product Management at Pico. "And proactively detecting these events in real time is only the first step. Highlighting other correlated events and describing the events in natural language enables all users to quickly understand and communicate root cause analysis to all stakeholders, resulting in decisive corrective action."

Corvil Analytics 10.0 is a major software release that also introduces:

Enhanced User Experience: Smarter analytics tooltips, flexible comparison of business time periods, and a new customer portal that offers access to software updates, documentation, support tickets, webinars, and knowledge base articles.

Reduced Cost of Ownership: Enhanced configuration capabilities enable easy Corvil setup, higher accuracy, and lower cost of ownership.

Advanced Timestamp Options: Corvil now supports start-of-frame timestamps, addressing the needs of trading applications where specific timing is critical for real-time performance analytics.


Corvil Analytics is trusted by the world's largest banks, exchanges, electronic market makers, quantitative hedge funds, data service providers and brokers. With a twenty-plus-year legacy, the Corvil Analytics platform continues to improve its ability to extract and correlate technology and transaction performance intelligence from dynamic network environments. This release of Corvil Analytics 10.0 continues our investment in the platform, focusing on the unique patterns of data in the financial services markets to intelligently identify events of interest, improve the user experience, and lower the cost of ownership/configuration in these complex environments.

The Corvil Analytics 10.0 release will be available to download and deploy starting May 1st via the Pico Client Portal.

Register now to learn more about Corvil Analytics 10.0 in an upcoming webinar hosted by Pico on May 2, 2024 at 10:00am EDT | 3:00pm BST.

About Pico

Pico is a leading global provider of technology services for the financial markets community. Pico's technology and services power mission-critical systems for global banks, exchanges, electronic trading firms, quantitative hedge funds, and financial technology service providers. Pico provides a best-in-class portfolio of innovative, transparent, low-latency markets solutions coupled with an agile and expert service delivery model. Instant access to financial markets is provided via PicoNet, a globally comprehensive network platform instrumented natively with Corvil to generate analytics and telemetry. Clients choose Pico when they want the freedom to move fast and create an operational edge in the fast-paced world of financial markets.

To learn more about Pico, please visit https://www.pico.net

Contact info: Pico Press Office pr@pico.net

See the rest here:
Pico Launches Machine Learning and AI Capabilities in Corvil Analytics 10.0 Software Release - Yahoo Finance


The best free AI courses (and whether AI ‘micro-degrees’ and certificates are worth it) – ZDNet

Generative AI is an astonishing technology that is not only here to stay but will impact all sectors of work and business, and it has already made unprecedented inroads into our daily lives.

We all have a lot to learn about it. Spewing out a few prompts to ChatGPT may be easy, but before you can turn all these new capabilities into productive tools, you need to grow your skills. Fortunately, there are a wide range of classes that can help.

Also: Want to work in AI? How to pivot your career in 5 steps

Many companies and schools will try to sell you on their AI education programs. But as I'll show in the following compendium of great resources, you can learn a ton about AI and even get some certifications -- all for free.

Speaking personally, I have to say that this has been really cool. I don't normally get a lot of time to hang out and watch stuff. But because I've been going hands-on with AI for you all here on ZDNET, I've had the excuse (er, opportunity) to watch a bunch of these videos. Sitting there on the couch, cup of coffee in one hand, doggo on the lap, and able to legitimately claim, "I'm working."

Also: The best AI image generators: Tested and reviewed

I have taken at least one class from each of the providers below, and they've all been pretty good. Obviously, some teachers are more compelling than others, but it's been a very helpful process. When working on AI projects for ZDNET, I've also sometimes gone back and taken other classes to shore up my knowledge and understanding.

So, I recommend you take a quick spin through my short reviews, possibly dig deeper into the linked-to articles, and bookmark all of these, because they're valuable resources. Let's get started.

LinkedIn Learning is one of the oldest online learning platforms, established in 1995 as Lynda.com. The company offers an enormous library of courses on a broad range of topics. There is a monthly fee, but many companies and schools have accounts for all their employees and students.

Also: Two powerful LinkedIn Premium features that make the subscription worth it

LinkedIn Learning (and Lynda.com, which is what it started as) has probably been the one online education site I've used more than any other. I've used it regularly since at least the end of the 1990s. For years, I paid for a membership. Then I got a membership as an alum of my grad school, which is how I use it now. So many courses on so many topics, it's a great go-to learning resource.

I took two classes on LinkedIn Learning. Here's my testimonial on one of them.

I also took the two-hour Machine Learning with Python: Foundations course, which had a great instructor -- Prof. Frederick Nwanganga -- who was previously unknown to me. I have to hand it to LinkedIn. They choose people who know how to teach.

I learned a lot in this course, especially about how to collect and prepare data for machine learning. I also was able to stretch my Python programming knowledge, specifically about how a machine learning model can be built in Python. In just two hours, I felt like I got a friendly and comprehensive brain dump.

You can read more here: How LinkedIn's free AI course made me a better Python developer.

Since there are so many AI courses, you're bound to find a helpful series. To get you started, I've picked three that might open some doors:

It's worth checking with your employer, agency, or school to see if you qualify for a free membership. Otherwise, you can pay by month or year (the by-year option is about half price).

Amazon puts the demand in infrastructure on demand. Rather than building out their own infrastructure, many companies now rely on Amazon to provide scalable cloud infrastructure on demand. Nearly every aspect of IT technology is available for rent from Amazon's wide range of web services. This also includes a fairly large spectrum of individual AI services from computer vision to human-sounding speech to Bedrock, which "makes LLMs from Amazon and leading AI startups available through an API."

Amazon also offers a wide range of training courses for all these services. Some of them are available for free, while others are available via a paid subscription. Here are three of the free courses you can try out:

In addition to classes located on Amazon's sites, they also have quite a few classes on YouTube. I spent a fun and interesting weekend gobbling up the Generative AI Foundations series, which is an entire playlist of cool stuff to learn about AI.

If you're using or even just considering AWS-based services, these courses are well worth your time.

IBM, of course, is IBM. It led the AI pack for years with its Watson offerings. Its generative AI solution is called Watsonx. It focuses on enabling businesses to deploy and manage both traditional machine learning and generative AI, tailored to their unique needs.

Also: Have 10 hours? IBM will train you in AI fundamentals - for free

The company's SkillsBuild Learning classes offer a lot, providing basic training for a few key IT job descriptions -- including cybersecurity specialist, data analyst, user experience designer, and more.

Right now, there's only one free AI credential, but it's one that excited a lot of our readers. That's the AI Fundamentals learning credential, which offers six courses. You need to be logged in to follow the link. But registration is easy and free. When you're done, you get an official credential, which you can list on LinkedIn. After I took the course, I did just that -- and, of course, I documented it for you:

See: How to add a new credential to your LinkedIn profile, and why you should

My favorite was the AI Ethics class, which is an hour and 45 minutes. Through real-world examples you'll learn about AI ethics, how they are implemented, and why AI ethics are so important in building trustworthy AI systems.

DeepLearning is an education-focused company specializing in AI training. The company is constantly adding new courses that provide training, mostly for developers, in many different facets of AI technology. It partnered with OpenAI (the makers of ChatGPT) to create a number of pretty great courses.

I took the ChatGPT Prompt Engineering for Developers course below, which was my first detailed introduction to the ChatGPT API. If you're at all interested in how coders can use LLMs like ChatGPT, this course is worth your time. Just the interspersing of traditional code with detailed prompts that look more like comments than commands can help you get your head around integrating these two very different styles of coding.
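
To show what that interspersed style looks like, here is a minimal sketch of calling a chat model from Python with the prompt written out like a long, comment-style instruction. This is my own illustrative snippet, not course material: it assumes the official OpenAI Python SDK is installed, an OPENAI_API_KEY is set in the environment, and the model name is just a placeholder.

from openai import OpenAI  # assumes the official OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt reads more like a comment than a command: it spells out the task,
# the output format, and the constraints in plain language.
prompt = (
    "Summarize the customer review below in one sentence, then list any product "
    "defects it mentions as a JSON array of strings.\n\n"
    "Review: The blender works, but the lid cracked after two uses and it leaks."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the output as repeatable as possible
)

print(response.choices[0].message.content)

The interesting part is how much of the "program" lives in the prompt text rather than in the surrounding Python, which is exactly the mixed style the course walks you through.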

Read more: I took this free AI course for developers in one weekend and highly recommend it

Three courses I recommend you check out are:

With AI such a hot growth area, I never cease to be amazed at the vast quantity of high-value courseware available for free. Definitely bookmark DeepLearning and keep checking back as it adds more courses.

Udemy is a courseware aggregator that publishes courses produced by individual trainers. That makes course style and quality a little inconsistent, but the rating system does help the more outstanding trainers rise to the top. Udemy has a free trial, which is why it's on this list.

Read more: I'm a ChatGPT pro but this quick course taught me new tricks, and you can take it for free

I spent some time in Steve Ballinger's Complete ChatGPT Course For Work 2023 (Ethically)! and found it quite helpful. Clocking in at a little over two hours, it helps you understand how to balance ChatGPT with your work processes, while keeping in mind the ethics and issues that arise from using AI at work.

Udemy also sells a $20/mo all-you-can-eat plan, as well as its courses individually. I honestly can't see why anyone would buy the courses individually, since most of them cost more for one course than the entire library does on a subscription.

Also: I'm taking AI image courses for free on Udemy with this little trick - and you can too

Here are three courses you might want to check out:

One of the more interesting aspects of Udemy is that you may find courses on very niche applications of AI, which might not suit vendors offering a more limited selection of mainstream courses. If you have a unique application need, don't hesitate to spend some extra time searching for just the right course.

Google's Grow With Google program offers a fairly wide range of certificate programs, which are normally run through Coursera. Earning one of those certificates often requires paying a subscription fee. But we're specifically interested in one Grow With Google program, which is aimed at teachers, and does not involve any fees.

The Generative AI for Educators class, developed in concert with MIT's Responsible AI for Social Empowerment and Education, is a 2-hour program designed to help teachers learn about generative AI, and how to use it in the classroom.

Also: Google and MIT launch a free generative AI course for teachers

Generative AI is a big challenge in education because it can provide amazing support for students and teachers and, unfortunately, provide an easy way out for students to cheat on their assignments. So a course that can help teachers come up to speed on all the issues can be very powerful.

The course provides a professional development certificate on completion, and this one is free.

I've been working with AI for a very long time. I conducted one of the first-ever academic studies of AI ethics as a thesis project way back in the day. I created and launched an expert system development environment before the first link was connected on the World Wide Web. I did some of the first research of AI on RISC-based computing architectures (the chips in your phone) when RISC processors were the size of refrigerators.

Also: Six skills you need to become an AI prompt engineer

When it comes to the courses and programs I'm spotlighting here, there's no way I could take all of them. But I have taken at least one course from each vendor, in order to test them out and report back to you. And, given my long background in the world of AI, this is a topic that has fascinated and enthralled me for most of my academic and professional career.

With all that, I will say that the absolute high point was when I could get an AI to talk like a pirate.

Let's be clear: A micro-degree is not a degree. It's a set of courses with a marketing name attached. Degrees are granted by accredited academic institutions, accredited by regional accrediting bodies. I'm not saying you won't learn anything in those programs. But they're not degrees and they may cost more than just-as-good courses that don't have a fancy marketing name attached.

Certificates do have value, but how much depends on your prospective employer's perspective. A certificate says you completed some course of study successfully. That might be something of value to you, as well. You can set a goal to learn a topic, and if you get a credential, you can be fairly confident you achieved some learning. Accredited degrees, by contrast, are an assurance that you not only learned the material, but did so according to some level of standard and rigor common to other accredited institutions.

Also: How to write better ChatGPT prompts in 5 steps

My advice: If you can get a certificate, and the price for getting it doesn't overly stretch your budget, go ahead and get it. It still is a resume point. But don't fork over bucks on the scale of a college tuition for some promise that you'll get qualified for a job faster and easier than, you know, going to college.

See original here:
The best free AI courses (and whether AI 'micro-degrees' and certificates are worth it) - ZDNet


Machine learning approach predicts heart failure outcome risk – HealthITAnalytics.com

April 22, 2024 - Researchers from the University of Virginia (UVA) have developed a machine learning tool designed to assess and predict adverse outcome risks for patients with advanced heart failure with reduced ejection fraction (HFrEF), according to a recent study published in the American Heart Journal.

The research team indicated that risk models for HFrEF exist, but few are capable of addressing the challenge of missing data or incorporating invasive hemodynamic data, limiting their ability to provide personalized risk assessments for heart failure patients.

"Heart failure is a progressive condition that affects not only quality of life but quantity as well," explained Sula Mazimba, MD, an associate professor of medicine at UVA and cardiologist at UVA Health, in the news release. "All heart failure patients are not the same. Each patient is on a spectrum along the continuum of risk of suffering adverse outcomes. Identifying the degree of risk for each patient promises to help clinicians tailor therapies to improve outcomes."

Outcomes like weakness, fatigue, swollen extremities and death are of particular concern for heart failure patients, and the risk model is designed to stratify the risk of these events.

The tool was built using anonymized data pulled from thousands of patients enrolled in heart failure clinical trials funded by the National Institutes of Health (NIH) National Heart, Lung and Blood Institute (NHLBI).

Patients in the training and validation cohorts were categorized into five risk groups based on left ventricular assist device (LVAD) implantation or transplantation, rehospitalization within six months of follow-up and death, if applicable.

To make the model robust in the presence of missing data, the researchers trained it to predict patients' risk categories using either invasive hemodynamics alone or a feature set incorporating noninvasive hemodynamics data.

Prediction accuracy for each category was determined separately using area under the curve (AUC).
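For readers curious what that per-category evaluation looks like in practice, here is a minimal, hypothetical Python sketch; the random-forest classifier, feature matrix, and five-category labels are placeholders and not the UVA team's actual model or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # placeholder hemodynamic + clinical features
y = rng.integers(0, 5, size=1000)  # placeholder five-category risk labels (0..4)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

proba = model.predict_proba(X_val)  # per-class membership probabilities
for idx, k in enumerate(model.classes_):
    # One-vs-rest AUC: category k versus all other risk categories
    auc_k = roc_auc_score((y_val == k).astype(int), proba[:, idx])
    print(f"Risk category {k}: one-vs-rest AUC = {auc_k:.3f}")
```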

Overall, the model achieved high performance across all five categories. The AUCs ranged from 0.896 +/- 0.074 to 0.969 +/- 0.081 for the invasive hemodynamics feature set and 0.858 +/- 0.067 to 0.997 +/- 0.070 for the set incorporating all features.

The research team underscored that the inclusion of hemodynamic data significantly aided the model's performance.

"This model presents a breakthrough because it ingests complex sets of data and can make decisions even among missing and conflicting factors," said Josephine Lamp, a doctoral researcher in the UVA School of Engineering's Department of Computer Science. "It is really exciting because the model intelligently presents and summarizes risk factors, reducing decision burden so clinicians can quickly make treatment decisions."

The researchers have made their tool freely available online for researchers and clinicians in the hopes of driving personalized heart failure care.

In pursuit of personalized and precision medicine, other institutions are also turning to machine learning.

Last week, a research team from Clemson University shared how a deep learning tool can help researchers better understand how gene-regulatory network (GRN) interactions impact individual drug response.

GRNs map the interactions between genes, proteins and other elements. These insights are crucial for exploring how genetic variations influence a patient's phenotypes, such as drug response. However, many genetic variants linked to disease are in areas of DNA that don't directly code for proteins, creating a challenge for those investigating the role of these variants in individual health.

The deep learning-based Lifelong Neural Network for Gene Regulation (LINGER) tool helps address this by using single-cell multiome data to predict how GRNs work, which can shed light on disease drivers and drug efficacy.
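As a purely illustrative aside (this is not LINGER's code, and the regulator and gene names below are invented), a gene-regulatory network is commonly represented as a directed, weighted graph with edges running from regulators to target genes:

```python
import networkx as nx

# Edges run from regulators (e.g., transcription factors) to target genes;
# weights stand in for inferred regulatory strength (sign = activation/repression).
grn = nx.DiGraph()
grn.add_edge("TF_A", "gene_X", weight=0.8)
grn.add_edge("TF_A", "gene_Y", weight=-0.4)
grn.add_edge("TF_B", "gene_X", weight=0.3)

# List the inferred regulators of gene_X and their effect sizes
for regulator, target, attrs in grn.in_edges("gene_X", data=True):
    print(f"{regulator} -> {target}: weight {attrs['weight']}")
```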

Read the original post:
Machine learning approach predicts heart failure outcome risk - HealthITAnalytics.com

Read More..

Microsoft wants to bolster the manufacturing process of future Surface devices with AI and machine learning – Windows Central


Microsoft is seemingly placing all its bets on generative AI. As you might have noticed, the tech giant has ramped up its efforts in the category and virtually integrated the technology across most of its products and services.

Now, the company has shared a detailed blog post highlighting how its Microsoft Surface and Azure teams used Azure's high-performance computing (HPC) technology to overhaul the product design process for Surface devices while saving time and cost.

According to Microsoft's Principal Engineer and structural designer, Prasad Raghavendra, the company integrated Abaqus, "a Finite Element Analysis (FEA) software," into Azure HPC in 2016. Abaqus helped the company solve many issues and fully transition "product-level structural simulations for Surface Pro 4 and the original Surface laptop to Azure HPC from on-premises servers."

Raghavendra says the availability of Azure HPC for structural simulations with Abaqus has completely changed the product design process for Surface devices: design concepts created in digital computer-aided design (CAD) systems are translated into FEA models.

This made it easier for analysts to use FEA models to run numerous tests under different reliability conditions in a virtual environment, rather than physically going through the entire process step by step. Consequently, the team ran hundreds of simulations to determine the feasibility of proposed design ideas and solutions, which made it easier to narrow down candidate designs that were then turned into prototypes for further scrutiny.

Reliability and customer satisfaction remain top priorities for the Microsoft Surface team. Going forward, Microsoft intends to continue using digital prototypes (FEA models) for simulation runs on Azure HPC clusters, and it seeks to leverage machine learning and AI in the manufacturing and development of future Surface devices.

Microsoft unveiled its new lineup of business-focused Surface devices in March, including the Surface Pro 10 and Surface Laptop 6. The entries will ship with Intel Core Ultra chips, new NPUs, and display upgrades. The company is potentially leaning toward AI PCs featuring a dedicated Copilot button.


That aside, Microsoft's Windows and Surface engineering department has a new boss. When Panos Panay left the company and later joined Amazon, his role was split in two: Pavan Davuluri took over the Surface wing, while Mikhail Parakhin handled everything Windows-related.

However, normalcy seems to have been restored at the company. Pavan Davuluri is now in charge of both Windows and Surface engineering. Microsoft also started selling replacement parts for Surface PCs, including screens, kickstands, batteries, SSDs, and more, directly from the Microsoft Store. This strategy is designed to improve the repairability of Surface devices.

Continued here:
Microsoft wants to bolster the manufacturing process of future Surface devices with AI and machine learning - Windows Central

Read More..

Machine-learning prediction of a novel diagnostic model using mitochondria-related genes for patients with bladder … – Nature.com

The diagnosis of BC represents a pivotal medical challenge, encompassing the application of various methods29. Presently, diagnostic approaches for BC include clinical symptom analysis, urine testing, imaging examinations, and tissue biopsies30,31. Nonetheless, these methods exhibit limitations in terms of early detection, accuracy, and invasiveness. While clinical symptom analysis and urine testing can capture potential BC symptoms and cellular information, their specificity and sensitivity need improvement to mitigate the risk of misdiagnosis or missed diagnosis. Although imaging techniques offer insights into tumor location and size, their efficacy in detecting early lesions remains constrained, often demanding prolonged time and considerable costs. Conversely, tissue biopsies, the "gold standard" for diagnosing BC, entail invasive procedures that cause patient discomfort and carry risks of complications. Furthermore, a reliable non-invasive method for early BC screening is lacking32,33. Hence, a pressing need arises to research and develop innovative technologies and methods, such as the integration of machine learning with transcriptome sequencing. This integration holds the promise to enhance the accuracy and early detection rate of BC diagnosis, ultimately offering improved medical services to patients. Overall, while the field of BC diagnosis confronts several challenges, it concurrently provides an opportunity to explore inventive diagnostic strategies and methodologies. Thus, the identification of novel, sensitive biomarkers is very important for the clinical prognosis of BC patients.

The critical role of mitochondria within cells goes beyond energy production, encompassing various biological processes, including cell survival, apoptosis, and signal transduction. Consequently, mitochondria may play a central role in tumor development, including BC. Several studies suggest a potential link between mitochondrial dysfunction and BC34,35. Tumor tissues from BC patients might exhibit abnormalities in mitochondrial function, including mitochondrial DNA mutations, alterations in mitochondrial membrane potential, and increased oxidative stress. These alterations have the potential to impair mitochondrial energy production and disrupt apoptotic pathways, thereby promoting the survival and proliferation of cancer cells. Moreover, BC progression is intricately connected to changes in metabolic pathways, which may also be associated with mitochondrial dysfunction. Some research suggests that tumor cells tend to favor glycolysis for energy production over oxidative phosphorylation. This shift in metabolic pathways, known as the "Warburg effect," could be influenced by changes in mitochondrial function6,36. In this study, we analyzed the GSE13507 dataset and identified 752 DE-MRGs in BC patients. Through functional correlation analysis of the 752 DE-MRGs, we revealed their potential roles in the progression of BC. The analysis results indicated that these DE-MRGs were primarily involved in biological processes related to pattern specification, cell fate commitment, and transcription regulator complexes, which are closely associated with cell development and gene regulation. Additionally, KEGG pathway analysis uncovered associations between these genes and neurodegenerative diseases (such as Huntington's disease, Parkinson's disease, Alzheimer's disease), cellular energy metabolism (oxidative phosphorylation), as well as metabolic pathways (such as valine, leucine, and isoleucine degradation, and the citrate cycle). Furthermore, the DO analysis indicated a correlation between these DE-MRGs and diseases such as muscular disorders, myopathy, muscle tissue diseases, and inherited metabolic disorders. In conclusion, the 752 DE-MRGs may participate in diverse biological processes and pathways during the progression of BC. These processes encompass cell development, gene regulation, energy metabolism, and neurodegenerative diseases. These findings suggested the intricate involvement of these genes in BC development, potentially influencing tumor growth, progression, metabolic anomalies, and associations with other diseases.
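As a minimal sketch of how differentially expressed genes are commonly flagged (an assumed workflow for illustration, not the authors' published pipeline; the expression matrices and cutoffs below are placeholders):

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes = 1000
expr_tumor = rng.normal(size=(n_genes, 60))    # placeholder log-expression, tumor samples
expr_normal = rng.normal(size=(n_genes, 30))   # placeholder log-expression, normal samples

# Per-gene two-sample t-test between tumor and normal groups
pvals = ttest_ind(expr_tumor, expr_normal, axis=1).pvalue
log2fc = expr_tumor.mean(axis=1) - expr_normal.mean(axis=1)  # difference of log2 means

# Benjamini-Hochberg FDR correction; thresholds here are assumptions, not the paper's
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
de_genes = np.where(reject & (np.abs(log2fc) > 1))[0]
print(f"{de_genes.size} candidate DE-MRGs at FDR < 0.05 and |log2FC| > 1")
```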

Machine learning combined with transcriptomic data offers several advantages in the screening of tumor biomarkers compared to traditional methods37. First, machine learning can handle high-dimensional transcriptomic data by extracting essential features to accurately identify gene expression patterns relevant to tumors. Second, machine learning can capture intricate nonlinear relationships and interactions among genes, unveiling molecular mechanisms underlying tumor development, which traditional methods may overlook. Moreover, machine learning enables personalized biomarker selection, tailoring diagnostic and treatment plans based on patients' transcriptomic data, thus enhancing precision38,39. In the realm of large-scale data analysis, machine learning's efficient processing capabilities make it better equipped to uncover crucial information hidden within extensive datasets, providing timely decision support. Simultaneously, machine learning techniques rapidly generate predictive models, expediting decision-making processes with increased efficiency compared to traditional methods. Furthermore, machine learning can uncover novel biological insights, offering clues to new mechanisms of tumor development and guiding further research and therapeutic strategies40,41. Overall, the amalgamation of machine learning and transcriptomic data in tumor biomarker screening offers advantages by delivering more accurate, comprehensive, and personalized information, thereby revolutionizing tumor diagnosis and treatment. In this study, we performed LASSO and SVM-RFE analyses and identified four critical diagnostic genes: GLRX2, NMT1, OXSM and TRAF3IP3. Then, we used the above four genes and developed a novel diagnostic model. Its diagnostic value was further confirmed in the GSE13507, GSE3167 and GSE37816 datasets. For BC, these findings hold significant clinical implications and potential application value. Firstly, the identification of these four diagnostic genes suggests their potential pivotal role in early detection and confirmation of BC. Secondly, the development of a novel diagnostic model holds the promise of providing a more precise and reliable means of diagnosing BC, thus aiding healthcare professionals in better assessing disease progression and treatment strategies. Furthermore, our findings offered substantial support for the investigation of the molecular mechanisms underpinning BC. It has the potential to uncover the latent mechanistic roles of these diagnostic genes in the progression of BC. In summary, this research paved the way for new approaches to early detection and diagnosis of BC, providing valuable insights for the advancement of precision medicine and personalized treatment.
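To make the feature-selection step concrete, here is a hedged Python sketch of LASSO plus SVM-RFE applied to a placeholder expression matrix; the data, hyperparameters, and gene names are invented and do not reproduce the authors' pipeline:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
genes = [f"gene_{i}" for i in range(200)]                    # hypothetical gene names
X = StandardScaler().fit_transform(rng.normal(size=(120, len(genes))))
y = rng.integers(0, 2, size=120)                             # placeholder tumor/normal labels

# LASSO-style selection: L1-penalized logistic regression keeps genes with nonzero weights
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
lasso_hits = {g for g, c in zip(genes, lasso.coef_.ravel()) if c != 0}

# SVM-RFE: recursively drop the weakest features using a linear-kernel SVM
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)
svm_hits = {g for g, keep in zip(genes, rfe.support_) if keep}

# Candidate diagnostic genes: the intersection of the two selections
print(sorted(lasso_hits & svm_hits))
```

Taking the intersection of the two selections mirrors the common practice of keeping only the genes that both methods agree on before building a downstream diagnostic model.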

GLRX2 is a protein closely associated with mitochondrial function and redox balance. It belongs to the glutaredoxin family of proteins, whose members exhibit redox activity within cells, aiding in the maintenance of cellular redox states and thereby sustaining normal biological functions42,43. GLRX2 is primarily localized within mitochondria, allowing it to play a crucial role in regulating mitochondrial redox balance and other mitochondrial functions. Its structural features enable it to switch between oxidized and reduced forms, participating in redox reactions. As a member of the mitochondrial glutaredoxin family, GLRX2 is involved in ensuring proper protein folding, redox state, and related biological functions within mitochondria, contributing to the maintenance of normal mitochondrial functions, including energy production processes such as ATP synthesis44,45. To date, the potential function of GLRX2 in BC has rarely been reported. In this study, we found that GLRX2 was highly expressed in BC specimens. The low GLRX2 expression group exhibited an activation trend in several biological processes and diseases, including asthma, drug metabolism (via the cytochrome P450 pathway), IgA production in the intestinal immune network, xenobiotic metabolism (via the cytochrome P450 pathway), systemic lupus erythematosus, and viral myocarditis. These findings suggested a potential association between low GLRX2 expression and the aberrant activation of these biological processes, as well as the development of multiple diseases. However, further research was required to confirm specific mechanisms and interrelationships. These discoveries contributed to a deeper understanding of GLRX2's roles in biology and disease development. In addition, we found that GLRX2 levels were positively associated with activated NK cells and plasma cells, suggesting that GLRX2 might play a role in boosting NK cell activity and contributing to immune responses. The link between GLRX2 and plasma cells also hinted at its potential involvement in regulating immune reactions and inflammation. These findings could point towards GLRX2 as a potential biomarker for monitoring immune system activity and response. Further research was needed to fully comprehend the mechanisms underlying these associations.

TRAF3IP3 is a gene that encodes a protein which plays a significant role in various cellular functions, including signal transduction, apoptosis (programmed cell death), and inflammation46,47. AIP1 typically interacts with proteins like TRAF3 (Tumor Necrosis Factor Receptor-Associated Factor 3) and RIP1 (Receptor-Interacting Protein 1), participating in the regulation of multiple signaling pathways. Among these, TRAF3 is a signaling molecule that plays a critical role in immune responses mediated by Toll-like receptors, RIG-I-like receptors, and other receptors. AIP1's interaction with TRAF3 may play an important role in regulating these immune signaling pathways48,49. Furthermore, AIP1 is believed to have a significant role in the pathway of apoptosis. Apoptosis is a form of programmed cell death that cells regulate to maintain the normal development and function of tissues and organs. AIP1 may influence intracellular signal transduction and impact the regulation of apoptotic pathways. In recent years, several studies have reported the potential function of TRAF3IP3 in several types of tumors. For instance, Lin et al. reported that high TRAF3IP3 levels in glioma are linked to poorer survival, possibly due to its role in promoting glioma growth through ERK signaling. TRAF3IP3 might serve as a prognostic biomarker for glioma50. However, the function of TRAF3IP3 in BC has not been investigated. In this study, we observed that TRAF3IP3 expression was distinctly decreased in BC specimens, suggesting that it may act as a tumor suppressor in BC. Moreover, we found that TRAF3IP3 may play a role in regulating immune responses, antigen processing and presentation, cell adhesion, and chemokine signaling. These findings indicated that TRAF3IP3 could have significant functions in modulating immunity and cellular communication during the development of BC.

NMT1 encodes a protein that plays a crucial role in cellular processes involving protein modification and signal transmission51,52. Belonging to the acyltransferase enzyme family, the protein produced by NMT1 is primarily responsible for attaching myristic acid molecules to amino acid residues of other proteins, a process known as N-myristoylation. This common cellular protein modification, N-myristoylation, affects protein localization, interactions, and function. Specifically, NMT1 catalyzes the N-myristoylation reaction, linking myristic acid molecules to amino acid residues of target proteins. This modification can impact various cellular processes, including signal transduction, apoptosis, and protein-protein interactions53,54. NMT1's role in these processes is likely associated with regulating the function, stability, and localization of specific proteins. Previously, several studies have reported that NMT1 serves as a tumor promoter in several tumors. For instance, Deng et al. showed that blocking N-myristoyltransferase at the genetic level inhibited breast cancer cell proliferation, migration, and invasion through the stress-activated c-Jun N-terminal kinase pathway55. In BC, elevated NMT1 expression was found to be inversely correlated with overall survival, indicating that NMT1 overexpression is associated with a poor prognosis. Moreover, increased levels of NMT1 were observed to facilitate cancer progression while simultaneously inhibiting autophagy both in vitro and in vivo56. Based on our findings, a comprehensive analysis suggested that NMT1 may have a multifaceted role in BC. Elevated NMT1 expression could be linked to interactions involving the extracellular matrix and neuroactive ligand receptor pathways, implying a potential involvement of NMT1 in tumor cell interactions with the extracellular matrix and neuro-pathways. Conversely, reduced NMT1 expression may relate to metabolic pathways (ascorbate and aldarate metabolism, starch and sucrose metabolism) and the TGF-beta signaling pathway, indicating that NMT1 might influence tumor cell metabolism and growth regulation. In this study, we also found that NMT1 was highly expressed in BC specimens and its knockdown suppressed the proliferation of BC cells, which was consistent with previous findings.

However, there were several limitations in this study. Firstly, the GEO datasets were the primary resources for our clinical data, and the majority of the patients in these datasets are White, Black, or Latinx. Our results should not be generalized to patients of other races without further investigation. The current research was motivated by the statistical analysis of previously collected data; nevertheless, an optimum threshold must be established before the findings can be applied clinically. Secondly, more experiments are needed to determine the role of these essential diagnostic genes and their protein expression levels in the etiology and development of BC.

See the rest here:
Machine-learning prediction of a novel diagnostic model using mitochondria-related genes for patients with bladder ... - Nature.com

Read More..