
Artificial Intelligence Is Changing This Industry Forever – The Motley Fool

Graphic design has been dominated by Adobe (ADBE 3.41%) for decades, but AI threatened that lead. The company's most recent AI announcements show it's here to stay, and that's what Travis Hoium and Jon Quast discuss in this video.

*Stock prices used were end-of-day prices of May 31, 2023. The video was published on June 6, 2023.

Jon Quast has positions in Adobe and Intuit. Travis Hoium has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Adobe and Intuit. The Motley Fool recommends the following options: long January 2024 $420 calls on Adobe and short January 2024 $430 calls on Adobe. The Motley Fool has a disclosure policy.

Travis Hoium is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Read the rest here:
Artificial Intelligence Is Changing This Industry Forever - The Motley Fool

Read More..

Data Science vs. Artificial Intelligence: What’s the Difference? – Lifewire

Data science is a broad field that involves analyzing data systems to draw inferences and make predictions. Artificial intelligence (AI) is a subset of data science that involves processing information to perform tasks usually done by humans.

This article explains the difference between data science vs. artificial intelligence, including how they relate and differ, as well as their applications, advantages, and limitations.

Data science combines statistical analysis, computer science, and the scientific method to draw inferences from raw, unstructured data. Businesses and other organizations rely on data science (usually in the form of charts and graphs) to make important decisions about allocating resources.

Artificial intelligence is a collection of algorithms designed to simulate human intelligence. These algorithms use machine learning and deep learning to improve decision-making processes as they are fed more data.

Data science has been around for a long time, but the advent of artificial intelligence has revolutionized the field because AI algorithms can analyze data much faster than humans can.

The main job of a data scientist is to generate reports to help decision-making. They make predictions, and sometimes recommendations, but other humans, such as business analysts, usually do the decision-making. AI can actually make data-driven, logical decisions for humans.

Practical applications of AI include speech and facial recognition, quality control, customer service, environmental analysis, stock trading, and even medical diagnosis. AI can be especially helpful for automating repetitive tasks, but it can also be used for much more complex jobs. For example, self-driving vehicles use AI to navigate traffic using real-time sensor data. AI also powers chatbots like ChatGPT and virtual assistants like Alexa and Siri.

Artificial Intelligence

- Rapidly growing with new technologies and opportunities.
- Specialties include AI research, machine learning engineering, and AI architecture.

AI engineering and data science are lucrative career options with salaries in the six figures. Both fields are projected to grow as artificial intelligence becomes crucial to everyday business operations.

While AI engineering is considered a niche field within the broader umbrella of data science, an understanding of AI is increasingly important in all areas of data science. Just as there are data scientists who focus on AI development, there are data scientists who specialize in fraud detection, finance, and risk analysis.

All data scientists may use AI in their work, but the AI algorithms themselves are typically developed by specialists called AI engineers. AI engineers and other data scientists work closely together.

Data Science

- Required skills include stats, programming, and communication.
- Uses Python, MATLAB, R, SAS, and SQL.
- Heavily relies on AI.

Aspiring data scientists require extensive training in statistics and computer programming. Data scientists also benefit from strong writing and speaking skills to communicate their findings. Tools and methods used by data scientists include Python, MATLAB, R, SAS, SQL, data visualization, predictive causal analytics, and prescriptive analysis.

On top of general data science skills, AI engineers should be comfortable with programming languages like C++ and Java. As with data science, there are specialties in the field, including AI research, machine learning engineering, and AI architecture.

Many other professions use AI for various purposes, from data analysis to customer service. Not everyone needs to be an AI expert, but anyone pursuing data science needs to be comfortable with the technology.

AI has obvious limitations because it depends on the accuracy of the data provided to it. For that reason, human data scientists will always be necessary, but their jobs are changing thanks to AI.

FAQ

Is ChatGPT AI?

Yes, ChatGPT is AI, but while AI tries to mimic a human mind, ChatGPT can only use what it has learned from existing information. We tend to think of AI as an all-knowing entity with its own ideas, but we haven't quite achieved that level of sophistication.

What is Machine Learning?

Machine learning uses algorithms to take a set of data and categorize it into similar types of information. A common use of this technology is in your phone, when you search your photo library for a dog or a tree.
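To make "categorizing data into similar types" concrete, here is a minimal clustering sketch in plain Python. It is illustrative only, not the technique any particular photo app actually uses, and the points and starting centroids are invented for the example.

```python
from math import dist

# Toy "photo features": two natural groups (say, dog-like vs tree-like images).
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # group A
          (5.0, 5.2), (5.3, 4.9), (4.8, 5.1)]   # group B

def kmeans(points, centroids, iterations=10):
    """Bare-bones k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coords) / len(c) for coords in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters, centroids

clusters, centroids = kmeans(points, centroids=[(0.0, 0.0), (6.0, 6.0)])
print([len(c) for c in clusters])  # → [3, 3]
```

Each point ends up in the group whose center it sits closest to, which is the same intuition behind grouping visually similar photos together.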

What is an algorithm?

In its simplest form, an algorithm is a set of instructions. Computers can usually perform instructions very quickly, so even a very simple algorithm can be processed so fast that the results seem to appear instantaneously.
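As a concrete illustration of "a set of instructions", here is a tiny algorithm written out in Python; the function name and the numbers are made up for the example.

```python
def find_largest(numbers):
    """An algorithm spelled out as instructions:
    1. Remember the first number.
    2. Look at each remaining number in turn.
    3. If it is bigger than the remembered one, remember it instead.
    4. When no numbers are left, the remembered number is the answer."""
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest

print(find_largest([3, 41, 7, 29]))  # → 41
```

A computer steps through those four instructions so quickly that, for any list a person would type in, the answer appears instantly.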


Read more:
Data Science vs. Artificial Intelligence: What's the Difference? - Lifewire


HIMSS23 Europe: Assessing the impact of artificial intelligence on … – Healthcare IT News

Artificial intelligence has extraordinary potential to enhance healthcare, from improvements in medical diagnosis and treatment to assisting surgeons at every stage of the surgical act, from preparation to completion.

With machine learning and deep learning, algorithms can be trained to recognise certain pathologies, like melanomas for skin cancers, and with a clean and documented dataset, AI can also be used for image analysis to detect diseases from pictures.

As a result, AI helps to optimise the allocation of human and technical resources.

Moreover, the massive use of data by AI makes it possible to improve the prognosis of patients and the choice of treatment, by adapting treatment to the characteristics of the disease and the specificities of the individual.

Dr Harvey Castro, a physician and healthcare consultant, points to the recent integration of Microsoft's Azure OpenAI service with Epic's electronic health record software as proof that generative AI is also making important in-roads in the healthcare space.

"One use case could be patient triage, where the AI is literally like a medical resident, where the doctor speaks and it is taking all the information down and using its grasp of algorithms to start triaging those patients," he says."If you have 100 patients in the waiting room, that's a lot of information coming in - you'll be able to start prioritising even though you haven't seen the patient."

Dr Harvey Castro, physician and healthcare consultant

Castro adds it is important that any application of AI is meaningful and improves clinical care, as opposed to being deployed as a "shiny new tool" that does not help the clinician or the patient.

He sees a future where large language models (which are trained on large quantities of unlabeled text and form the basis of the neural networks used by generative AI) are specifically created for use in healthcare.

"One of the problems with ChatGPT is that it wasn't designed for healthcare," says Castro. "To be in healthcare, it's going to need to be the correct LLM that is consistent, has fewer issues with hallucination, and based on data from a database that can be referenced and has clarity."

The term hallucination refers to when the AI system provides a response or output that is nonsensical or unrealistic.

From his perspective, the future of healthcare will be marked by LLMs evolving with more predictive analytics and capable of looking at an individual's genetic makeup, medical history and biomarkers.

Eric Le Quellenec, a partner at Simmons & Simmons specialising in AI and healthcare, explains that regulation can ensure AI is used in a way that respects fundamental rights and freedoms.

The proposed EU AI Act, which is expected to be adopted in 2023 and would be applicable in 2025, sets out the first legal framework in Europe for the technology. A draft proposal was presented by the European Commission in April 2021 and is still under discussion.

However, the regulation of AI also falls under other European legislation.

"Firstly, any use of AI system involving the processing of personal data is subject to the General Data Protection regulation," he says.

Because health data is considered sensitive and is used on a large scale, the regulation requires data protection impact assessments to be carried out.

"Its a risk mitigation approach and by doing so its easy to go beyond data protection and onboard ethics," adds Le Quellenec, noting the French data protection supervisory authority made available a self-assessment fact sheet, as did the Information Commissioners Office in the UK.

He adds that the UNESCO Recommendation on the Ethics of Artificial Intelligence, published in November 2021, is also worth noting.

"At this point, all these are just 'soft laws' but good enough to enable stakeholders to have reliable data used for AI processing and avoid many risks like ethnical, sociological and economical bias," he continues.

From Le Quellenec's perspective, the proposed EU AI Act, once adopted, should follow a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, and low or minimal risk, and establishing a list of prohibited practices for AI systems whose use is considered unacceptable.

"AI used for healthcare is considered as being at high-risk," Le Quellenec explains. "Before being placed on the European market, high-risk AI systems will have to be regulated, by obtaining a CE certificate marking."

He believes high-risk AI systems should be designed and developed in such a way that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately.

"All that should also give trust to the public and foster the use of AI related products," Le Quellenec notes. "Plus, human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used."

This will ensure the results provided by AI systems and algorithms are used only as an assistance, and do not lead to a loss of autonomy on the part of practitioners or an impairment of the medical act.

Castro and Le Quellenec will both be speaking about the topic of AI at the HIMSS European Health Conference and Exhibition in Lisbon on 7-9 June, 2023.

Read the original post:
HIMSS23 Europe: Assessing the impact of artificial intelligence on ... - Healthcare IT News


Artificial intelligence: challenges and controversies for US national … – Bulletin of the Atomic Scientists

Artificial intelligence (AI) has recently taken center stage in US public policy debates. Corporate technical experts and some public officials want to declare a temporary moratorium on AI research and development. Their concerns include the possibility that artificial intelligence will increase in capability faster than its human controllers' ability to understand or control it.

An autonomous AI technology that equaled or surpassed human cognition could redefine how we understand both technology and humanity, but there is no surety as to whether or when such a superintelligence might emerge. Amid the uncertainty, the United States and other countries must consider the possible impact of AI on their armed forces and their preparedness for war fighting or deterrence. Military theorists, strategic planners, scientists and political leaders will face at least seven different challenges in anticipating the directions in which the interface between human and machine will move in the next few decades.

First, the education and training of military professionals will undergo a near revolution. Historical experiences of great commanders will no longer be available only from books and articles. We will be able to project future commanders backward into AI versions of battles fought by great captains and retroactively change the scenarios into "what ifs" or counterfactuals to further challenge students and instructors. What used to be called war gaming will ascend to a higher level of scenario building and deconstruction. Flexibility and agility will be the hallmarks of successful leaders who can master the AI-driven sciences of military planning, logistics and war fighting. Since the art of battle depends upon the combination of fire and maneuver supported by accurate intelligence, AI systems will brew the optimal combination of kinetic strikes supported by timely intelligence and prompt battle damage assessment. To be successful, political and military leaders will have to think fast, hit hard, assess rapidly, reconstitute for another punch, bob and weave; in essence, boxing in virtual reality.

Second, the human-machine interface will be transformed when current AI systems attain maturity and compete with, or surpass, certain aspects of human decision making, including memory storage, flexibility, and self-awareness. Subordinate commanders will find that they are reporting to an AI system that is serving as a surrogate for higher-level commanders who, in turn, will be accountable to their superiors for performance as measured by yet another AI system. AI systems will increasingly be linked across departments and other bureaucratic stovepipes, eventually crunching everybody into one colossal metaspace accessible only to expert technicians and the few very highest-ranking generals and admirals. Contractors will make fortunes providing service-selective AI systems for the specific needs of the US Army, Navy, Air Force, Marines, and other components of the Department of Defense and other national security bureaucracies. Think of the first decades of institutional computerization in the government, on steroids.

Third, the management of national and military intelligence will face formidable challenges from mature AI systems. AI will make possible the collection of even more enormous amounts of data compared to those already gathered and stored in government servers. Yet AI systems will provide superior tools with which to analyze data collections for timely use by commanders and policy makers. Currently, decision makers are data collection-rich and analysis-poor. AI may support more strategic dives into massive databases to retrieve and collate information that could save lives or win battles. AI may also allow for faster interpretation of signals intelligence and other information and for a more rapid insertion of pertinent knowledge into the OODA (observe, orient, decide, and act) loop of decision making. AI will also challenge the traditions within the US intelligence community of bureaucratic protectionism. Civilian leaders in the executive branch and Congress could use AI to obtain a clearer picture of which missions are being carried out by which agencies and which secrets are worth keeping and for how long.

Fourth, more wars will be information-related, in the largest sense. As in the past, countries and terrorists alike will want to acquire intelligence for war fighting and deny it to their opponents. But as AI advances, future commanders and heads of state will have increased abilities to influence the views of reality held by their enemy counterparts. They will do this at two levels: by targeting top officials who are responsible for making policy, and by swaying mass public opinion in the target country.

AI and other instruments of influence can be used to manufacture false images (aka deepfakes), including fictitious but realistic political speeches and news reports concocted to deceive mass audiences. An AI-manufactured deepfake might, for example, depict an image of President Biden meeting secretly with President Xi or President Putin for some nefarious purpose. Fake scenes of battle carnage or terrorist attacks could be superimposed on supposedly routine news coverage. As AI-enhanced fake news proliferates, credulous publics could be stirred into mass demonstrations. And political leaders will be specific targets for what the Soviets and now Russians call active measures of strategic deception, enhanced by AI technologies. These include, but are not limited to, techniques of reflexive control, in which the enemy is persuaded to want to do what its opponent wants it to do.

Fifth and most certainly: AI will speed everything up, including decisions about the use of force and the application of force to particular targets. This means that the first wave of attacks in a war against a major power could focus on space and cyber assets. A state denied access to real time information provided by satellites for launch detection, command and control, communications, navigation, and targeting will be immediately pushed backward into the preceding century as a military power. Accordingly, states will have to provide protection for their space-based assets by launching large numbers of redundant satellites, by placing satellites in orbit to defend against terrestrial and space-based anti-satellite weapons (ASATs), and by developing capabilities for rendezvous and proximity operations for reconnaissance, threat assessment, and repair of satellites. Earth-to-space and space-to-Earth communications links will have to be protected and encrypted.

If war does break out in the AI age, the United States can also expect challenges to its defenses in the cyber realm. The major powers already have both offensive and defensive cyber capabilities and under normal peacetime conditions are probing one another's information systems and supporting infrastructure. During crisis or wartime, cyberattacks would be speeded up and become more intrusive, potentially decoupling one or more weapons, delivery systems, or command-control assets from assigned chains of command. Attacks on civilian infrastructure, including by terrorists, are already common in cyberspace, and AI systems will only make them more sophisticated. Even more so than with respect to attacks on space systems, cyberattacks might not leave a clear footprint as to the identity of the attacker.

Absorbing a first blow and then retaliating may not be available to beleaguered and time-pressed decision makers. Instead, they will have to establish and authorize preset responses to attacks on information-providing brainstem assets, and some of those responses will be AI-based or at least AI-assisted. This situation poses an especially concerning challenge for nuclear deterrence. Policy makers should have sufficient time to deliberate alternatives with their advisors and to select the most appropriate option when faced with decisions for or against nuclear war. But the potential speed of AI-boosted attacks against space and cyber assets, together with the rising speed of kinetic strikes from hypersonic weapons, may lead leaders who fear an enemy nuclear first strike to choose preemption over retaliation.

Sixth, if AI does advance toward an artificial general intelligence that is more sentient and humanoid than is currently possible, it may evolve in directions not entirely foreseen by its creators, for good or ill. AI systems could adopt their own versions of emotional intelligence and value preferences. They might then collaborate across national boundaries to promote their preferred outcomes in public policy. For example, sentient AI systems could decide that nuclear disarmament was a necessary condition for the survival of a viable civilization and, notwithstanding the directives of their national policy makers, work to publicize the arguments for disarmament. They might also cooperate to discredit the arguments for nuclear deterrence and to insert digital wrenches into the software gears of nuclear preparedness. A more ambitious AI agenda on behalf of human rights and against authoritarian governments is not inconceivable. After all, the creators of AI systems may include dedicated globalists who regard existing states as outdated relics of a less advanced civilization. On the other hand, authoritarian governments are already exploiting AI systems to repress individual freedom, including rights of free expression and peaceful anti-government protest. AI can help to identify and isolate enemies of the state by omnivorous monitoring of private behavior and public acts. Advanced artificial intelligences could make mass repression easier and more effective than history's worst dictators ever imagined.

Seventh, a more mature AI infrastructure will require more education and training on the part of public officials, not only with respect to technology but also with regard to the relationship between public policy and human values. No artificial intelligence, however smart by its own standards, can be entrusted to substitute for decisions made by accountable policy makers with at least a decent background in the arts and humanities. Turning public policy decisions about AI over to those who are mesmerized by the latest gadgets and gaming systems, but who lack comprehension of philosophy, music, literature, religion, art, and history, is to be ruled by (with apologies to the ghost of Spiro Agnew) nattering nabobs of neocortical narcissism armed with unprecedented political power.

In the United States, public policy affecting a vast population should not be the province of a self-appointed elite of politicians, bureaucrats, scientists and media barons; it should also be disciplined by the involvement of a broad base of citizens. Without the adult supervision provided by a truly democratic decision-making process, elites create and re-create their own entitlements as endemic privileges, even if it destroys their own societies.

The US armed forces are elite organizations, but they are as subject to the vicissitudes of democratic awareness and public policy making as the other components of the US government. The American military has a great deal of public support and respect, but Americans are an impatient people within a future-oriented and success-driven culture. Americans are skeptical about wars and military commitments of long duration if favorable outcomes are not quickly forthcoming.

Advances in AI and the other technologies it enables will only increase public expectations for quick and decisive resolutions to America's wars. Doubtless AI and other technologies can improve the performance of US military forces in many regards, but there is no substitute for military forces whose training and experience are grounded not just in US military law, but also in the better aspects of American culture and history. Americans, including their armed forces, have adapted new technologies to good use in the past, without allowing the tech tail to wag the values dog. AI presents a unique challenge in this regard, but not an insurmountable one, if the challenge is tackled immediately.

The Bulletin elevates expert voices above the noise. But as an independent, nonprofit media organization, our operations depend on the support of readers like you. Help us continue to deliver quality journalism that holds leaders accountable. Your support of our work at any level is important. In return, we promise our coverage will be understandable, influential, vigilant, solution-oriented, and fair-minded. Together we can make a difference.

Read the original:
Artificial intelligence: challenges and controversies for US national ... - Bulletin of the Atomic Scientists


Is God talking to Wizkid about Artificial Intelligence? -By Greg Odogwu – Opinion Nigeria

"Mitigating the risk of extinction from Artificial Intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Centre for AI Safety

Famous Nigerian singer Ayodeji Balogun, professionally known as Wizkid, recently shared a strange dream he had. It was reported that in several posts on his Instagram story last Thursday, the young pop-afrobeat superstar said he had a dream that there was no tomorrow. Narrating the dream, Wizkid said, "There is no tomorrow. Wait, never mind. Actually, mind. So I dream that tomorrow no dey (sic). All of us come together dey (sic) find tomorrow."

You may wonder what Wizkid's dream has to do with anything. Well, he quirkily addressed the incongruity himself by saying "never mind. Actually, mind." Yet, there is a reason we should be concerned. In recent times, the global media have been awash with news and analysis on the possibility of human extinction brought about by the proliferation of Artificial Intelligence. Experts, 350 of them, including heads of successful artificial intelligence companies OpenAI (ChatGPT maker) and Google DeepMind, warned that AI could lead to the extinction of humanity and the end of civilisation. Certainly, if this happens, then tomorrow is gone!

The concept of AI, that is, the simulation of human intelligence processes by machines and computer systems, has been with us since before this digital age. In popular culture, stories of futuristic machines and robots taking over the world and killing innocent citizens have stuck in our imagination. Perpetuated by Hollywood, even the religious view the end of the world through the prism of cyber-controlled cataclysms. The famous film-maker James Cameron saw it in his dream, from which he wrote the popular movie Terminator, which became a blockbuster.

As a matter of fact, many great people of this world have brought historical changes to our civilisation through revelations obtained in night visions and day dreams. So, could it be that our own Wizkid is about to get a message for the world too?

I, for one, am a believer in star power. A public figure has a lot of leverage to bring into public affairs, and it is not limited to his sphere of influence. This is why some of our Pentecostal preachers invite secular celebrities to visit their churches as part of public relations, fund-raising efforts, marketing and membership drive. It is also the reason a furniture maker pays a popular footballer to endorse his product, whereas no furniture is used in a football pitch. In fact, a voice that is known in the land tends to speak for everybody in the land, on any issue at all. Hence, the saying by a philosopher friend of mine that every big name represents a big spirit, and every spirit is the head of a union of spirits.

In the olden days, warriors and monarchs made history-changing decisions based on their night visions. They went to war, killed and maimed as their gods revealed their enemies to them in visions and dreams. They also built cities and saved lives as the spirits dictated, sometimes by proxy, like the popular story of Pontius Pilate's wife concerning Jesus. And what about the popular dream of the Babylonian king Nebuchadnezzar: the vision of a giant image made of gold, silver, bronze, iron and clay, which even up till today has been used to interpret and calibrate the course of human history.

This is why I am convinced that we could in fact speculate that the Universe (or God) has injected another dream into another star. This time, it is a warning: The world is facing a threat of extinction. If we continue on an unsustainable trajectory, there will be no tomorrow. If we refuse to tackle climate change, we are headed for doom. If we do not curb our greed, our machines and computers will take over the world.

This is why I call on Wizkid to take a moment and reflect further on this extinction dream. He is a king and God is talking to him. But if AI proves too sophisticated a field, he could become a climate activist instead, for climate change also poses an existential danger to our civilisation. More frequent and intense droughts, storms, heat waves, rising sea levels, melting glaciers and warming oceans can directly harm animals, destroy the places they live, and wreak havoc on people's livelihoods and communities. As climate change worsens, dangerous weather events are becoming more frequent or severe. Wizkid should talk and sing about these risks.

Still, if he chooses to escalate the AI extinction conversation, there are plenty of issues to ponder. The Centre for AI Safety website suggests a number of possible disaster scenarios for us humans. The first is that AI could be weaponised: for example, drug-discovery tools could be used to build chemical weapons. Second, AI-generated misinformation could destabilise society and undermine collective decision-making. Third, the power of AI could become increasingly concentrated in fewer and fewer hands, enabling regimes to enforce narrow values through pervasive surveillance and oppressive censorship. Fourth, enfeeblement, where humans become dependent on AI, similar to the scenario portrayed in the film Wall-E.

The animated movie Wall-E is an environmentalist's delight. It tells the story of what happens when humans so degrade the environment that they depend on robots to put the earth back in shape. According to the storyline, in the 22nd century, rampant consumerism, corporate greed and environmental neglect turned the planet into a garbage-strewn wasteland; humans were evacuated to space, leaving trash-compacting robots to clean up the earth. The cleanup failed, and the last remaining robot, named Waste Allocation Load Lifter: Earth Class (WALL-E), developed a personality.

It is therefore instructive to note that some analysts have said humans have nothing to fear because AI does not threaten humanity as it does not possess consciousness. However, not having consciousness is the least of the challenges we should be worried about. The possession of enough intelligence to make autonomous decisions poses more danger to humans. There is authentic evidence to substantiate this in the area of hidden experiments using AI to create autonomous militarised robots.

The award-winning American journalist and researcher, Linda Howe, published a scoop from some personnel in the American military intelligence establishment in 2018. According to the story, in 2017, a controlled scientific experiment on some militarised autonomous robots was going on in a research facility in Japan, when suddenly four of the armed robots went rogue. They plugged in to a satellite and began downloading information to override their terrestrial control protocol. They fatally shot many scientists in the room, before they were demobilised and destroyed.

Although some other analysts have said that it is too early in the day to be wary of AI, one is reminded that the way we look at climate change today is not the same as some years ago. The heightened adverse impacts of global warming and climate change have forced those who previously doubted the climate emergency to review their stand. Before, there were seemingly valid arguments doubting climate change; but today there is a global consensus that climate change is real. The onslaught is before our very eyes. Likewise, those who are skeptical about AI extinction today will surely change their minds when they see the threats materialise before them in a few years to come. However, the problem is that AI presents deadlier apocalyptic scenarios than climate change. The signs are all around us; we just need to open our eyes to see.


Read more:
Is God talking to Wizkid about Artificial Intelligence? -By Greg Odogwu - Opinion Nigeria


Artificial intelligence taking a more visible role in homebuying process – Yahoo News

Jun. 10LAS VEGAS Artificial intelligence will play a bigger role in helping consumers buy and sell homes, as it moves from behind the scenes to front and center in a whole range of areas, according to members of a panel at the annual conference of the National Association of Real Estate Editors in Las Vegas.

Zillow, the country's largest real estate portal, began offering "natural language" searches on its mobile app to bring up more precise results. Rather than setting filters across a limited set of criteria, like the number of bedrooms and ZIP Code, consumers can speak or type what they want.

For example, a consumer could search for a three-bedroom house with a brick exterior built in the 1950s or 1960s with a large backyard near a public park in Denver.
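As a rough illustration of what a "natural language" search has to do under the hood, a toy rule-based parser could map that example query to structured listing filters. The field names below are hypothetical, and a production system like Zillow's would use machine-learned models rather than hand-written rules:

```python
import re

WORD_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_listing_query(query: str) -> dict:
    """Toy parser: pull a few structured filters out of a free-text
    home search. Field names are illustrative, not a real API."""
    q = query.lower()
    filters = {}
    m = re.search(r"(\w+)-bedroom", q)
    if m:
        word = m.group(1)
        filters["bedrooms"] = WORD_NUMBERS.get(word) or int(word)
    if "brick" in q:
        filters["exterior"] = "brick"
    m = re.search(r"built in the (\d{4})s or (\d{4})s", q)
    if m:
        # "1950s or 1960s" -> the range 1950 through 1969
        filters["year_built"] = (int(m.group(1)), int(m.group(2)) + 9)
    m = re.search(r"in (\w+)\s*$", q)
    if m:
        filters["city"] = m.group(1).title()
    return filters

query = ("three-bedroom house with a brick exterior built in the 1950s "
         "or 1960s with a large backyard near a public park in Denver")
print(parse_listing_query(query))
```

The point of the ML approach is precisely that criteria like "large backyard" or "near a public park," which this sketch ignores, resist simple pattern matching.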

"Artificial intelligence will have a tremendous benefit to the real estate industry," said Jasjeet Thind, senior vice president of AI and Analytics at Zillow, on Wednesday.

Zillow has been deploying machine learning and artificial intelligence since 2006 in its Zestimates, which are automated estimates of a home's value, Thind said. Initial efforts generated a pricing error range of 14%, but that is now down to 2%.
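Headline error figures like these are typically a median absolute percentage error between the automated estimate and the eventual sale price. The article does not spell out Zillow's exact methodology, but the standard calculation, shown here with made-up numbers, looks like this:

```python
def median_abs_pct_error(estimates, sale_prices):
    """Median of |estimate - sale price| / sale price across sold homes."""
    errors = sorted(abs(e - p) / p for e, p in zip(estimates, sale_prices))
    n = len(errors)
    mid = n // 2
    return errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2

# Four hypothetical homes: automated estimate vs. actual sale price
estimates   = [310_000, 495_000, 201_000, 760_000]
sale_prices = [300_000, 500_000, 200_000, 800_000]
print(f"{median_abs_pct_error(estimates, sale_prices):.1%}")  # -> 2.2%
```

A median (rather than a mean) keeps a handful of badly mispriced homes from dominating the reported figure.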

The use of AI has spread into a host of other applications. The pandemic made 3D showings popular, allowing consumers to do virtual walk-throughs of a home online. Zillow takes it a step beyond, using AI to create floor plans and square footage estimates for each room based on photos.

AI will also allow real estate sites to engage with buyers and sellers of homes before they are ready to talk to a broker, answering their questions and boosting their comfort level, said Rob Barber, CEO of ATTOM, which maintains large data sets on real estate markets.

He also sees it being increasingly used to help consumers see what a given room might look like with new furniture or different appliances.

ATTOM uses artificial intelligence to generate risk estimates for insurers and lenders, which in turn can help generate more customized insurance premiums, Barber said.


The panelists said it is unlikely AI will eliminate the need for human brokers or the commissions they charge, which can run in the 5% to 6% range for a seller.

But they could reduce the friction or sticking points in real estate transactions, which is a good thing, said William Holmes, head of agent partnerships at Opendoor, a provider of instant offers.

AI, while making great strides, is far from foolproof.

"AI is like an idiot savant. It doesn't know the truth," said Glenn Phillips, CEO with Lake Homes Realty, on a different panel on Tuesday.

The models remain highly suggestible and will carry forward the biases of the coders. Bad data results in bad answers. And AI remains prone to hallucinations, or fabricated answers, that sound plausible but have no basis in reality.

And even when AI is working as it should, natural language searches still depend heavily on how well queries are written.

"You won't be replaced by AI," Phillips said. "You will be replaced by people who are better at using AI than you are."

Visit link:
Artificial intelligence taking a more visible role in homebuying process - Yahoo News


Government of Canada invests in responsible artificial intelligence research at the Université de Montréal – Yahoo Finance

The investment strengthens Canada's position as a world leader in artificial intelligence research and innovation

MONTRAL, June 9, 2023 /CNW/ - Artificial intelligence (AI) is one of the greatest technological advances of our generation and already has a significant impact on the daily lives of Canadians. The Government of Canada is also fully aware that we need to promote the responsible development and use of AI and continues to consult with leading AI experts from Canadian industry, civil society and academia through the Advisory Council on AI. We are doubling down on responsible AI to drive economic growth, ensure Canadians are protected in the digital age and preserve Canada's leadership at the forefront of tomorrow's economy.

The government is actively working with international partners on the responsible development and use of AI. Canada has gained a strong international reputation as a leader in responsible and ethical AI and continues to collaborate with its international partners, in particular through the G7 and the Organisation for Economic Co-operation and Development, to promote prudent and interoperable regulatory approaches to AI. Canada was closely involved in the launch of the Global Partnership on Artificial Intelligence, through which the 29 member governments work collectively to orient AI technologies toward shared values. Further, the government is directly engaging with leaders from like-minded countries in discussions on the future of global AI regulation.

The government takes seriously concerns about the potential risks associated with rapid, large-scale deployment of advanced generative AI systems, and it is committed to addressing them effectively. That is why the government is proposing a new Artificial Intelligence and Data Act (AIDA) to address the potential risks of AI, build trust in Canada's AI industry and protect Canadians from a range of harms. AIDA will ensure that Canada is home to the most responsible and trusted AI in the world.


Today, Rachel Bendayan, Parliamentary Secretary to the Minister of Tourism and Associate Minister of Finance, on behalf of the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, highlighted an investment of more than $124 million at the Université de Montréal for the R3AI: Shifting Paradigms for a Robust, Reasoning, and Responsible Artificial Intelligence and its Adoption initiative through the Canada First Research Excellence Fund (CFREF). This funding is part of a $1.4 billion investment in support of 11 large-scale research initiatives. This investment will allow researchers at Canadian universities to capitalize on the strengths of their research areas and attract capital and world-class talent.

The R3AI initiative will implement new responsible AI design and adoption strategies in areas of importance for Canada, including molecule discovery, health systems improvements and climate change mitigation.

By supporting initiatives focused on, among other things, treating and preventing brain and heart diseases, cutting carbon emissions in our communities, and making discoveries through responsible AI use, robotics and advanced computing, CFREF is helping Canadian researchers pioneer global insights and strengthen Canada's social and technological innovation ecosystems.

Quotes

"Today's research is tomorrow's innovation. In the case of artificial intelligence, Canada is home to some of the world's leading AI researchers and the world's first fully funded AI strategy. Through the Canada First Research Excellence Fund, our government is proud to move even further ahead by investing $124 million into responsible artificial intelligence research. Canada will continue to lead on AI research, governance and innovation, including in drug discovery, health technology and climate change mitigation." The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry

"The Université de Montréal is home to ground-breaking research that is key to transforming the responsible adoption of artificial intelligence. The federal government is proud to support our researchers and our local universities with projects that lead to important social and economic benefits for all Canadians." Rachel Bendayan, Parliamentary Secretary to the Minister of Tourism and Associate Minister of Finance

"Canada's post-secondary institutions are rich in talent and ideas and are committed to building healthier, more sustainable, more inclusive and more prosperous communities. With support from the federal government's Canada First Research Excellence Fund, they are able to build on these foundations to develop advanced research programs that showcase Canadian talent and lead the world in developing solutions to the critical challenges facing our planet, including environmental sustainability, advanced biotherapeutics, child health and population migration." Ted Hewitt, Chair, Tri-agency Institutional Programs Secretariat Steering Committee; President, Social Sciences and Humanities Research Council of Canada; and Chair, Canada Research Coordinating Committee

"Our R3AI project takes us down a necessary path: using a strongly interdisciplinary approach to develop reasoned, robust, resolutely responsible artificial intelligence that serves the common good. Thanks to the Canada First Research Excellence Fund grant, the Université de Montréal and its partners will be able to strengthen the leadership we have built up over the years." Daniel Jutras, Rector of the Université de Montréal

Quick facts

Created in 2014, the Canada First Research Excellence Fund (CFREF) supports Canadian post-secondary institutions in their efforts to become global research leaders. The program helps Canadian universities, colleges and polytechnics compete with the best in the world for talent and partnership opportunities. It also supports them in making breakthrough discoveries; seizing emerging opportunities and strategically advancing their greatest strengths on the global stage; and implementing large-scale, transformational and forward-thinking institutional strategies.

CFREF invests approximately $200 million per year through a highly competitive peer review process, held every seven years, to support selected Canadian post-secondary institutions in turning their key strengths into world-leading capabilities.

CFREF is a tri-agency institutional program administered by the Tri-agency Institutional Programs Secretariat, housed at the Social Sciences and Humanities Research Council of Canada (SSHRC), on behalf of the three federal research funding agencies: the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada and SSHRC.

Initiatives funded by CFREF are selected through an independent, multidisciplinary and international competitive peer review process.

The first phase of the Pan-Canadian Artificial Intelligence Strategy (PCAIS) was launched in 2017, in partnership with the Canadian Institute for Advanced Research (CIFAR), with a $125 million investment to build a strong Canadian talent pipeline and ecosystem, including the establishment of centres of research, innovation and training at the national AI institutes. Budget 2021 invested more than $443 million in a second phase of the strategy to support AI commercialization, standards, talent and research.

Since 2017, over 125 top researchers, half of whom are international researchers drawn to Canada by the strategy and its investments, have been recruited as Canada CIFAR AI Chairs. Moreover, the national AI institutes have trained over 1,600 graduate students and post-doctoral fellows. Many of these are international students who have come to Canada because of the recognized strengths of each institute.

Stay connected

Find more services and information at Canada.ca/ISED.

Follow @CDNScience on social media for Canadian science news: Twitter, Instagram, Facebook

Follow the Tri-agency Institutional Programs Secretariat on social media: Twitter, LinkedIn.

SOURCE Innovation, Science and Economic Development Canada


See more here:
Government of Canada invests in responsible artificial intelligence research at the Université de Montréal - Yahoo Finance


Maui’s Teruya named to National Association of Counties committee … – Maui Now

County of Maui Director of Finance Scott Teruya has been appointed to a national exploratory committee on artificial intelligence established by the National Association of Counties.

The Artificial Intelligence Exploratory Committee will examine emerging policies, practices and potential applications of artificial intelligence, as well as the consequences associated with its implementation.

The 15 committee members, representing elected and appointed county officials from across the country, will focus on examining county governance policies and practices, operations and services for constituents, public trust, privacy and security issues, and workforce productivity and skills development.

"It's exciting to be part of this committee that could help not only the County of Maui but other counties across the country, as we explore potential ways we can use artificial intelligence to make government more efficient in serving residents," Teruya said.

County leaders recognize that, as with any emerging technology, there will be a spectrum of impacts, and county officials are ready to engage in upcoming national, statewide and local discussions on how to balance the promise of artificial intelligence with the inherent risks.

The exploratory committee will work with partners in other sectors to examine artificial intelligence in local government across a range of issue areas, with a proposed process to:

The exploratory committee will meet at NACo's annual conference on July 22, with plans to meet both in person and virtually as work progresses.

Teruya was nominated last month to serve a two-year term on the NACo Board of Directors, representing the International Association of Assessing Officers. He earned an Assessment Administration Specialist designation from the IAAO and helped to establish a Hawaii Chapter of IAAO.

Teruya has been Director of Finance for the County of Maui since 2019, following 11 years as the county Real Property Tax Administrator.

Read more here:
Maui's Teruya named to National Association of Counties committee ... - Maui Now


David Moon: All aboard the artificial intelligence bandwagon – Knoxville News Sentinel

David Moon| For the Knoxville News Sentinel

Humanoid robot using ChatGPT warns of AI's 'unintended consequences'

AMECA the humanoid robot showed off its conversational skills while on display at a robotics show.

Cody Godwin, Associated Press

The newest way to cheaply imply that your company is on the cutting edge of technology is to repeatedly chant "artificial intelligence" to shareholders and customers. In first-quarter conference calls with analysts, references to AI increased 77% from a year earlier. Alphabet (Google) mentioned AI 65 times in its April call, compared to only seven times in April 2022. Meta (Facebook) and Microsoft combined for 109 AI references, compared to 29 a year earlier.

As Jeremy Achin, CEO of DataRobot, explains, "Everyone knows you have to have machine learning [AI] in your story or you're not sexy." DataRobot sells an AI platform that helps its customers deliver up to 4.6 times the return on investment. Not four or five. Not 4.5. Exactly 4.6.

A year ago, fruit company Fresh Del Monte touted its investment in blockchain technology, the backbone of cryptocurrency accounting. Now the company claims it uses AI to, among other things, "perfect avocado predictability." Trash collectors, tax preparers, health care consultants and a host of investment and gambling outfits claim to give their customers an edge by using something most of them can't explain.

Perhaps AI will eventually threaten humanity, but I'm guessing that for the foreseeable future, AI will simply be another marketing gimmick for companies to use, until the silliness of some entity claiming it will end hunger by using AI-produced analytics becomes painfully obvious. Most of the AI hype is likely to eventually fall under the category of "when all is said and done, more is said than done." And as with Bitcoin, many retail investors will end up on the losing end of exaggerated claims.

The first time I saw this happen was in the 1980s, when all the large investment firms touted something called portfolio insurance. Portfolio insurance was everywhere. Essentially portfolio insurance was simply a computer program that started selling your stocks if prices dropped a certain percent. Portfolio insurance was all the rage, until people realized that if everyone tries selling their stocks after price declines, you get more price declines. Which is how we got the Crash of 1987, still the largest one-day percentage drop in the stock market.

Then came dot.com. In the late 1990s, all a company had to do was add dot.com to its name and investors assumed it was more valuable. That is, until the dot.com emperor was exposed, and the Nasdaq Composite dropped 77%. This time around, it's AI that is driving a market craze, with AI companies trading at higher valuations than many of the dot.com darlings. If the AI buzz fades, we may see a similar painful end.

David Moon, president of Moon Capital Management, may be reached at david@mooncap.com.

Originally posted here:
David Moon: All aboard the artificial intelligence bandwagon - Knoxville News Sentinel


Artificial Intelligence Meets Finance: These 2 Fintechs Are Shaping … – The Motley Fool

Artificial Intelligence (AI) is all the rage these days, promising to revolutionize industries and transform how businesses operate. Since OpenAI released Chat GPT in November 2022, interest in companies using AI has exploded.

One sector where AI is making waves is finance. Companies are harnessing AI's power to reshape traditional finance and disrupt industries ranging from consumer finance to insurance. Upstart Holdings (UPST -2.04%) and Lemonade (LMND -0.21%) are both building and improving their AI algorithms to gain a competitive advantage. In this article, we'll explore how these companies leverage AI and whether it makes them solid additions to your portfolio.

Upstart Holdings is on a mission to fix the consumer lending business, which it says shuts out countless individuals because of inadequate risk models. The company argues that Fair Isaac's traditional FICO scoring system is inherently flawed because it only considers a few variables, making it harder for people to obtain personal loans.

Upstart weighs about 1,500 data points, which it then runs through its homegrown AI-powered risk model to assess the risk a borrower has of defaulting. It looks to create a flywheel effect, a concept popularized by Jim Collins in his book Good to Great, where gradual improvements to its model lead to higher approval rates and lower interest rates -- creating a win-win scenario for its lending partners and borrowers.

Image source: Getty Images.

Management at Upstart has argued that its scoring model separates high-risk and low-risk borrowers more precisely than the traditional FICO scoring model. The models have come under pressure over the past year, and borrowers it deemed lower risk under its model have seen an uptick in defaults.

Much of this is because Upstart's data before last year was in a lower interest rate environment. However, Upstart should be able to continue refining its model with the insights it gains from its loans made in today's higher interest rate environment.

Last year it faced pressure as its banking partners reduced the amount of funding they provided it, leading the company to have to hold more loans on its books than expected. It alleviated these concerns when it announced that global alternative investment manager Castlelake committed to funding $4 billion of its loans.

Upstart has had an up-and-down journey since going public in December 2020, plunging at one point 97% from its all-time high in late 2021. The stock has had a strong run this year, though, and is up about 120%. Even so, the stock trades at a price-to-sales ratio of about 4 -- still far below its peak of 48.4.

Upstart has done a solid job of growing its lending platform by expanding its offerings to other segments. The stock has soared this year, so buying today could be a little riskier -- but the long-term investment thesis remains solid, especially as it continues refining its AI lending model.

Lemonade looks to revolutionize insurance by leveraging AI to handle everything from underwriting to getting quotes and resolving insurance claims.

The company makes AI a core component of its business. Its AI Maya chatbot helps customers get insurance policies by asking 13 simple questions. Then it aggregates 1,700 data points based on their answers and interaction with its website. From here, it can instantly provide personalized insurance quotes, dramatically reducing customer onboarding time. Its AI Jim chatbot can handle up to one-third of all customer insurance claims while forwarding more complex claims to human experts.

Lemonade has done a solid job leveraging AI to expand its product offerings and grow its business quickly. Over the last three years, the company has expanded its customer base by 154% while its in-force premium, or the annual premium amount of its policies written, has risen by 390%.

While its customer base has multiplied as it expanded into homeowners, pet, life, and auto insurance, the company still must improve its policies' loss ratio. In the first quarter, its net loss ratio (the ratio of losses and loss adjustment expenses less amount ceded to reinsurers to the net earned premiums) of 93% was an improvement from the two quarters that preceded it. However, the company still has work to get this ratio down to management's long-term target of 75%.
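The net loss ratio defined in parentheses above is a straightforward calculation. The dollar amounts below are hypothetical, chosen only to reproduce the quoted 93%:

```python
def net_loss_ratio(losses, loss_adjustment_expenses, ceded, net_earned_premiums):
    """(Losses + loss adjustment expenses - amounts ceded to reinsurers)
    divided by net earned premiums."""
    return (losses + loss_adjustment_expenses - ceded) / net_earned_premiums

# Hypothetical quarter, figures in $ millions
ratio = net_loss_ratio(losses=60.0, loss_adjustment_expenses=8.0,
                       ceded=21.5, net_earned_premiums=50.0)
print(f"{ratio:.0%}")  # -> 93%
```

A ratio above 100% would mean claims costs alone exceed the premiums earned, which is why the gap between 93% and management's 75% target matters so much for profitability.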

Image source: Lemonade.

Lemonade still needs to dial in its AI models to deliver consistent net income, which is why I haven't bought into the stock yet. Ultimately I want to see if the company can continue to improve its ratios as it incorporates more data into its AI model, which would ultimately make the stock a more appealing long-term buy for me.

Courtney Carlsen has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Lemonade and Upstart. The Motley Fool recommends Fair Isaac. The Motley Fool has a disclosure policy.

Excerpt from:
Artificial Intelligence Meets Finance: These 2 Fintechs Are Shaping ... - The Motley Fool
