
'We've discovered the secret of immortality. The bad news is it's not for us': why the godfather of AI fears for humanity – The Guardian


Geoffrey Hinton recently quit Google, warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it?

The first thing Geoffrey Hinton says when we start talking, and the last thing he repeats before I turn off my recorder, is that he left Google, his employer of the past decade, on good terms. "I have no objection to what Google has done or is doing, but obviously the media would love to spin me as a disgruntled Google employee. It's not like that."

It's an important clarification to make, because it's easy to conclude the opposite. After all, when most people calmly describe their former employer as being one of a small group of companies charting a course that is alarmingly likely to wipe out humanity itself, they do so with a sense of opprobrium. But to listen to Hinton, we're about to sleepwalk towards an existential threat to civilisation without anyone involved acting maliciously at all.

Known as one of the three "godfathers of AI", in 2018 Hinton won the ACM Turing award (the Nobel prize of computer science) for his work on deep learning. A cognitive psychologist and computer scientist by training, he wasn't motivated by a desire to radically improve technology: instead, it was to understand more about ourselves.

"For the last 50 years, I've been trying to make computer models that can learn stuff a bit like the way the brain learns it, in order to understand better how the brain is learning things," he tells me when we meet in his sister's house in north London, where he is staying (he usually resides in Canada). Looming slightly over me (he prefers to talk standing up, he says), the tone is uncannily reminiscent of a university tutorial, as the 75-year-old former professor explains his research history, and how it has inescapably led him to the conclusion that we may be doomed.

In trying to model how the human brain works, Hinton found himself one of the leaders in the field of neural networking, an approach to building computer systems that can learn from data and experience. Until recently, neural nets were a curiosity, requiring vast computer power to perform simple tasks worse than other approaches. But in the last decade, as the availability of processing power and vast datasets has exploded, the approach Hinton pioneered has ended up at the centre of a technological revolution.
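The idea of "learning from data" that underpins Hinton's field can be made concrete with a toy example. The following is my own minimal sketch, not code from Hinton's research: a single artificial "neuron" that nudges its weights to reduce its error on each example, here learning the textbook toy task of logical AND.

```python
import random

def train_neuron(examples, epochs=200, lr=0.1):
    """Train one artificial neuron with the classic perceptron rule."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]  # start with random weights
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1.0 if w[0] * x1 + w[1] * x2 + b > 0 else 0.0
            err = target - out      # how wrong was the guess?
            w[0] += lr * err * x1   # nudge each weight in the direction
            w[1] += lr * err * x2   # that reduces the error
            b += lr * err
    return w, b

# Toy dataset: the logical AND function (linearly separable, so the
# perceptron rule is guaranteed to converge on it).
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND_DATA)
for (x1, x2), target in AND_DATA:
    out = 1.0 if w[0] * x1 + w[1] * x2 + b > 0 else 0.0
    print((x1, x2), "->", int(out))
```

Modern networks stack billions of such units and use more sophisticated learning rules, but the principle is the same: adjust parameters from data rather than program behaviour by hand.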

"In trying to think about how the brain could implement the algorithm behind all these models, I decided that maybe it can't, and maybe these big models are actually much better than the brain," he says.

A biological intelligence such as ours, he says, has advantages. It runs at low power, "just 30 watts, even when you're thinking", and every brain is a bit different. That means we learn by mimicking others. But that approach is very inefficient in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it's trivial to share information between multiple copies. "You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we've discovered the secret of immortality. The bad news is, it's not for us."
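The asymmetry Hinton describes can be sketched in a few lines of toy code (my own illustration; the class and numbers are invented): a digital model's "knowledge" is just its parameters, so whatever one copy learns can be transferred to every other copy exactly, by copying bytes, at no retraining cost.

```python
import copy

class TinyModel:
    """Hypothetical stand-in for a neural network: all it 'knows' is
    a dictionary of numeric parameters."""
    def __init__(self):
        self.weights = {"skill": 0.0}

    def learn(self, key, value):
        # Stand-in for an expensive training run.
        self.weights[key] = value

model_a = TinyModel()
model_b = TinyModel()

model_a.learn("skill", 0.73)                      # only A pays the training cost
model_b.weights = copy.deepcopy(model_a.weights)  # B acquires it instantly

print(model_b.weights["skill"])  # B now "knows" exactly what A learned
```

No biological brain can do this: a human can only transfer knowledge slowly and lossily, by teaching. That, in Hinton's telling, is the "immortality" digital minds have and we do not.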

Once he accepted that we were building intelligences with the potential to outthink humanity, the more alarming conclusions followed. "I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don't think that any more. And I don't know any examples of more intelligent things being controlled by less intelligent things. At least, not since Biden got elected."

"You need to imagine something more intelligent than us by the same difference that we're more intelligent than a frog. And it's going to learn from the web, it's going to have read every single book that's ever been written on how to manipulate people, and also seen it in practice."

He now thinks the crunch time will come in the next five to 20 years, he says. "But I wouldn't rule out a year or two. And I still wouldn't rule out 100 years. It's just that my confidence that this wasn't coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better."

There's still hope, of sorts, that AI's potential could prove to be overstated. "I've got huge uncertainty at present. It is possible that large language models," the technology that underpins systems such as ChatGPT, "having consumed all the documents on the web, won't be able to go much further unless they can get access to all our private data as well. I don't want to rule things like that out. I think people who are confident in this situation are crazy." Nonetheless, he says, the right way to think about the odds of disaster is closer to a simple coin toss than we might like.

This development, he argues, is an unavoidable consequence of technology under capitalism. "It's not that Google's been bad. In fact, Google is the leader in this research; the core technical breakthroughs that underlie this wave came from Google, and it decided not to release them directly to the public. Google was worried about all the things we worry about. It has a good reputation and doesn't want to mess it up. And I think that was a fair, responsible decision. But the problem is, in a capitalist system, if your competitor then does do that, there's nothing you can do but do the same."

He decided to quit his job at Google, he has said, for three reasons. One was simply his age: at 75, he's "not as good at the technical stuff as I used to be, and it's very annoying not being as good as you used to be. So I decided it was time to retire from doing real work." But rather than remain in a nicely remunerated ceremonial position, he felt it was important to cut ties entirely, because "if you're employed by a company, there's inevitable self-censorship. If I'm employed by Google, I need to keep thinking, 'How is this going to impact Google's business?' And the other reason is that there's actually a lot of good things I'd like to say about Google, and they're more credible if I'm not at Google."

Since going public about his fears, Hinton has come under fire for not following some of his colleagues in quitting earlier. In 2020, Timnit Gebru, the technical co-lead of Google's ethical AI team, was fired by the company after a dispute over a research paper spiralled into a wide-ranging clash over the company's diversity and inclusion policies. A letter signed by more than 1,200 Google staffers opposed the firing, saying it "heralds danger for people working for ethical and just AI" across Google.

But there is a split within the AI field over which risks are more pressing. "We are in a time of great uncertainty," Hinton says, "and it might well be that it would be best not to talk about the existential risks at all, so as not to distract from these other things [such as issues of AI ethics and justice]. But then, what if, because we didn't talk about it, it happens?" Simply focusing on the short-term use of AI, to solve the ethical and justice issues present in the technology today, won't necessarily improve humanity's chances of survival at large, he says.

Not that he knows what will. "I'm not a policy guy. I'm just someone who's suddenly become aware that there's a danger of something really bad happening. I want all the best brains who know about AI (not just philosophers, politicians and policy wonks, but people who actually understand the details of what's happening) to think hard about these issues. And many of them are, but I think it's something we need to focus on."

Since he first spoke out on Monday, he's been turning down requests from the world's media at a rate of one every two minutes (he agreed to meet with the Guardian, he said, because he has been a reader for the past 60 years, since he switched from the Daily Worker in the 60s). "I have three people who currently want to talk to me: Bernie Sanders, Chuck Schumer and Elon Musk. Oh, and the White House. I'm putting them all off until I have a bit more time. I thought when I retired I'd have plenty of time to myself."

Throughout our conversation, his lightly jovial tone of voice is somewhat at odds with the message of doom and destruction he's delivering. I ask him if he has any reason for hope. "Quite often, people seem to come out of situations that appeared hopeless, and be OK. Like nuclear weapons: the cold war with these powerful weapons seemed like a very bad situation. Another example would be the year 2000 problem. It was nothing like this existential risk, but the fact that people saw it ahead of time and made a big fuss about it meant that people overreacted, which was a lot better than under-reacting."

"The reason it was never a problem is because people actually sorted it out before it happened."


Will A.I. Become the New McKinsey? – The New Yorker

When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it's become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey (a consulting firm that works with ninety per cent of the Fortune 100) and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to turbocharge sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as "capital's willing executioners": if you want something done but don't want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don't want to be blamed for doing what's necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it's just doing what "the algorithm" says, even though it was the company that commissioned the algorithm in the first place.

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term "A.I." If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as "capital's willing executioners"? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people's lives worse? Suppose you've built a semi-autonomous A.I. that's entirely obedient to humans, one that repeatedly checks to make sure it hasn't misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That's the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey's solutions will increase shareholder value more than your firm's solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I'm not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I'm talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I'm not criticizing the idea of selling things; I'm criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I'm criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.

As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn't really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?

Some might say that it's not the job of A.I. to oppose capitalism. That may be true, but it's not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I'd say it's hard to argue that A.I. is a neutral technology, let alone a beneficial one.

Many people think that A.I. will create more unemployment, and bring up universal basic income, or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I've become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don't, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.

You may remember that, in the run-up to the 2016 election, the actress Susan Sarandon (who was a fervent supporter of Bernie Sanders) said that voting for Donald Trump would be better than voting for Hillary Clinton because it would bring about the revolution more quickly. I don't know how deeply Sarandon had thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I'm pretty sure he had given a lot of thought to the matter. He argued that Trump's election would be such a shock to the system that it would bring about change.

What Žižek advocated for is an example of an idea in political philosophy known as accelerationism. There are a lot of different versions of accelerationism, but the common thread uniting left-wing accelerationists is the notion that the only way to make things better is to make things worse. Accelerationism says that it's futile to try to oppose or reform capitalism; instead, we have to exacerbate capitalism's worst tendencies until the entire system breaks down. The only way to move beyond capitalism is to stomp on the gas pedal of neoliberalism until the engine explodes.

I suppose this is one way to bring about a better world, but, if it's the approach that the A.I. industry is adopting, I want to make sure everyone is clear about what they're working toward. By building A.I. to do jobs previously performed by people, A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in. Intentionally or not, this is very similar to voting for Trump with the goal of bringing about a better world. And the rise of Trump illustrates the risks of pursuing accelerationism as a strategy: things can get very bad, and stay very bad for a long time, before they get better. In fact, you have no idea of how long it will take for things to get better; all you can be sure of is that there will be significant pain and suffering in the short and medium term.

I'm not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It's A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

People who criticize new technologies are sometimes called Luddites, but it's helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners' profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine's owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners' attention. The fact that the word "Luddite" is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.

Whenever anyone accuses anyone else of being a Luddite, it's worth asking: is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people's lives? Or are they just trying to increase the private accumulation of capital?

Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn't include better lives for people who work? What is the point of greater efficiency, if the money being saved isn't going anywhere except into shareholders' bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology (and those include uses that benefit shareholders over workers) without being described as opponents of technology.


Artificial Intelligence in judiciary: CJI DY Chandrachud speaks on possibilities of AI, role of judges in such cases | Mint – Mint

DY Chandrachud, the Chief Justice of India, has called on judges to embrace technology for the benefit of litigants, stating that litigants should not be burdened because judges are uneasy with technology.

Speaking at the National Conference on Digitisation held in Odisha, the CJI implored High Courts to continue using technology for hybrid hearings, pointing out that such facilities are not meant for use only during the COVID-19 pandemic.

CJI Chandrachud stated that in a judgement he was editing the previous night, he mentioned that lawyers should not be overburdened because judges are not comfortable with technology. He added that the solution to this is straightforward: judges need to retrain themselves.

He also touched upon his recent correspondence with Chief Justices to allow lawyers to appear virtually, adding that some High Courts have disbanded video conference systems despite having the infrastructure in place.

According to CJI Chandrachud, they have received numerous PILs from lawyers in India stating that hybrid hearings have been discontinued. Therefore, he requested the Chief Justices to refrain from dismantling the infrastructure.

CJI Chandrachud also inaugurated a neutral citation system and spoke about his vision to create paperless and virtual courts over the cloud. However, he also flagged recent incidents resulting from the live-streaming of proceedings.

CJI Chandrachud mentioned certain video clips of a judge in the Patna High Court questioning an Indian Administrative Service (IAS) officer over their inappropriate attire in court. Although such clips are amusing, he said, live-streamed footage needs to be regulated, as far more significant proceedings take place in the courtroom.

He explained that social media's connection with live streaming presents a new challenge, requiring a centralised cloud infrastructure for live streaming, as well as new court hardware.

The CJI reiterated that Artificial Intelligence (AI) tools would be useful, but judges' discretion would still be necessary, particularly in areas such as sentencing policy.

"We do not think we want to cede our discretion, which we exercise on sound judicial lines in terms of sentencing policy. At the same time, AI is replete with possibilities and it is possible for the Supreme Court to have record of 10,000 or 15,000 pages? How do you expect a judge to digest documents of 15,000 pages, which comes with a statutory appeal?" Bar and Bench quoted him as saying..

"We do not think we want to cede our discretion, which we exercise on sound judicial lines in terms of sentencing policy. At the same time, AI is replete with possibilities and it is possible for the Supreme Court to have record of 10,000 or 15,000 pages? How do you expect a judge to digest documents of 15,000 pages, which comes with a statutory appeal?" Bar and Bench quoted him as saying..

The top court recently launched a new version of its e-filing portal for crowd testing, engaging with lawyers and clerks to raise awareness and provide training. The CJI emphasised that the top court exists for the entire country and called for the centralisation of cloud infrastructure for live streaming in order to address new challenges posed by social media.


How artificial intelligence could fundamentally change certain types of work – CBS News

New York City – Since he started using artificial intelligence, copywriter Guillermo Rubio estimates his productivity has increased by as much as 20%.

"It just makes certain things go a bit faster, like research or brainstorming ideas," Rubio told CBS News. "It's really useful for coming up with those things. Not necessarily writing them, but just generating the ideas when you're stuck."

That innovation also means change. A report released by Goldman Sachs in March found that AI services could automate as many as 300 million full-time jobs worldwide. Many are calling it a new age in the way we work.

"It's very powerful," said Daniel Keum, an assistant professor of management at Columbia Business School. "AI is able to actually outperform us in learning and adapting. So that we have not seen before in any technologies."

Keum believes the impact of AI will stretch across industries. The issue has already taken center stage in Hollywood, where Writers Guild of America members went on strike this week for the first time in 16 years. Among the demands from the more than 11,000 WGA writers to the studios is a ban on the use of AI to create feature and television scripts.

"These more very physical and labor-intensive jobs won't be replaced," Keum said. "But I think ... thinking, analytical, creative skills, these things are actually most exposed to AI at the moment."

The spike in the popularity of AI has raised alarm among some in the tech world, who say that there are ethical issues that still need to be fleshed out. In March, a group of about 1,000 tech leaders, including Elon Musk and Steve Wozniak, signed a letter calling for a pause on AI development because they believe it poses "profound risks to society and humanity."

"ChatGPT came on the scene in November, and it's been like a wildfire ever since," said Margaret Lilani, vice president of talent solutions at the job search site Upwork.

"You have to be smart about it and really look at it as this opportunity," Lilani added. "It is not an 'or' between ChatGPT and humans. It's an 'and.' And when you combine those two together and really harness that potential of utilizing technology to increase your productivity, and really showcase your creativity, it's going to take you that much further."

That is a mindset that Rubio has embraced, saying it's not just about adapting in order to survive.

"Survive and even thrive, I would say," Rubio said.

Nancy Chen is a CBS News correspondent, reporting across all broadcasts and platforms.


Vice President Harris To Meet With CEOs About Artificial Intelligence … – Black Enterprise

Vice President Kamala Harris will meet with the CEOs of leading technology companies to discuss the future of artificial intelligence and its possible risks.

As the Biden administration prepares to roll out a set of initiatives to ensure the rapidly evolving technology improves lives without jeopardizing people's rights and safety, Harris will discuss the risks they see in current AI development with the leaders of Alphabet, Anthropic, Microsoft, and OpenAI.

The administration is also preparing to invest up to $140 million to establish seven new AI research institutes, according to The Associated Press.

The announcement comes as stories about the dangers of artificial intelligence mount. Earlier this week, the New York Times profiled Geoffrey Hinton, a renowned researcher and the "godfather of AI", who expressed fears about how quickly the technology is advancing and how few regulations exist to keep track of developments.

"I think if you take the existential risk seriously, as I now do (I used to think it was way off, but I now think it's serious and fairly close), it might be quite sensible to just stop developing these things any further," said Hinton, who left Google to speak more freely about his fears. "But I think it's completely naive to think that would happen."

According to Hinton, tech companies have instead joined the race to create AI technology that will continue to advance past the knowledge available to control it.

President Joe Biden noted that AI can help to address disease and climate change but could also harm national security and disrupt the economy in destabilizing ways, according to the Associated Press. Because AI can generate human-like writing and fake images, the ethical and societal ramifications concern many.

The government leaders' message to AI companies is that they have a role to play in reducing the risks, and that they can work with the government to do so.


With Artificial Intelligence and Leadership, There is a ‘Learning Curve’ – GovExec.com


Read more:
With Artificial Intelligence and Leadership, There is a 'Learning Curve' - GovExec.com

Read More..

Snoop Dogg addresses risks of artificial intelligence: ‘Sh– what the f—‘ – Fox News

American rapper Snoop Dogg expressed confusion about recent developments in artificial intelligence, comparing the technology to movies he saw as a child.

At the Milken Institute Global Conference in Beverly Hills this week, Snoop, whose given name is Calvin Broadus, turned his focus to artificial intelligence while discussing a strike of the Writers Guild of America. The writers strike is, in part, about the potential for artificial intelligence to take writing jobs.

"I got a motherf---ing AI right now that they did made for me," Snoop said. "This n----- could talk to me. I'm like, man, this thing can hold a real conversation? Like real for real? Like it's blowing my mind because I watched movies on this as a kid years ago."


Snoop Dogg discussed artificial intelligence at the Milken Institute 2023 Global Conference (Milken Institute)

Snoop also referenced recent warnings about artificial intelligence from Geoffrey Hinton, who recently quit his job at Google so he could discuss the harms of AI.

"And I heard the dude, the old dude that created AI, saying, 'This is not safe, 'cause the AIs got their own minds, and these mother---ers gonna start doing their own s---.' I'm like, are we in a f---ing movie right now, or what? The f--, man?"


Hinton, often referred to as the "Godfather of AI," told the New York Times he believes bad actors will use artificial intelligence platforms, the very ones his research helped create, for nefarious purposes.

Snoop Dogg compared artificial intelligence to movies he saw as a child. (Photo by Jerod Harris/Getty Images)

Snoop Dogg questioned the safety of artificial intelligence at the Milken Institute 2023 Global Conference. (Photo by Jerod Harris/Getty Images)

And while Snoop highlighted potential concerns about artificial intelligence, he also questioned whether he should invest in the technology.

"So do I need to invest in AI so I can have one with me? Or like, do y'all know? S---, what the f---? I'm lost, I don't know," Snoop continued, drawing laughter from the audience.


The release of ChatGPT last year has sparked both excitement and concern among experts, who believe the technology will revolutionize business and human interactions.


Thousands of tech leaders and experts, including Elon Musk, signed an open letter in March that called on artificial intelligence labs to pause research on systems more powerful than GPT-4, OpenAI's most advanced AI system. The letter argued that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

See the rest here:
Snoop Dogg addresses risks of artificial intelligence: 'Sh-- what the f---' - Fox News

Read More..

Artificial intelligence helping detect early signs of breast cancer in some US hospitals – FOX 5 Atlanta


October raises awareness for Breast Cancer and LiveNOW from FOX talks with a doctor about the advances in treatments and importance of early detection.

BOCA RATON, Fla. - Some doctors believe artificial intelligence is saving lives after a major advancement in breast cancer screenings. In some cases, AI is detecting early signs of the disease years before the tumor would be visible on a traditional scan.

The Christine E. Lynn Women's Health and Wellness Institute at the Boca Raton Regional Hospital found a 23% increase in cancer cases since implementing AI during breast cancer screenings.

Dr. Kathy Schilling, the medical director at the institute, told Fox News Digital the practice has nine dedicated breast radiologists who are all fellowship trained, so the increase in early detections was surprising.

"All we do is read breast imaging studies, and so I thought, you know, we were probably pretty good at what we were doing, but this study really comes in and shows us that even the dedicated and committed breast radiologists can do better utilizing artificial intelligence," Schilling said.


"ProFound AI," created by iCad, is designed to flag problem areas on mammograms. The program studied millions of breast cancer scans and, over time, learned to circle lesions and estimate the cancer risk.

"If you realize that 90% of the cases are benign and have no findings, you know, you just become fatigued. You get mesmerized by scrolling through the images. The AI helps us to refocus and find those little tiny cancers that we're looking for," Schilling said.

Medical personnel use a mammogram to examine a woman's breast for breast cancer. (Photo by Michael Hanschke/picture alliance via Getty Images)

ProFound AI became the first technology of its kind to be FDA cleared in December 2018. The Christine E. Lynn Women's Health and Wellness Institute adopted the groundbreaking technology during the COVID-19 pandemic, and the hospital now boasts one of the earliest studies on AI's impact on cancer.

"What I think we're going to be finding is that we're finding cancers when they're three to six millimeters in size, and finding the invasive lobular cancers which are very difficult for us to find, because they don't form masses in the breast," Schilling said.

Schilling also stated that over the past two years, the institute has offered less severe therapies to patients diagnosed with breast cancer because the cells are so small.

"We are doing smaller lumpectomies, fewer mastectomies, less chemotherapy, less radiation therapy," she continued. "I think we're entering into a whole new era in breast care."


Schilling also believes AI's early detection capabilities may have helped save Luz Torres' life after a routine mammogram on April 1 revealed a small cancerous tumor. Torres said she had no symptoms or inclination that something could be wrong.

"I have very dense breast tissue, so I always have a mammography and an ultrasound. The recommendation of that visit was the breast biopsy, so I had that done within a week's time, and then I got a phone call that the pathology was breast cancer," Torres said in an emotional interview. "It was an early detection. I come every year, I'm on track with my mammography, so it's very small tumor."


Torres was diagnosed with stage 1 breast cancer in early April and recently completed surgery. Fortunately, she is expected to make a full recovery after early detection.

"It looks good. Because it was called early stage 1, I won't need chemotherapy so very happy about that," said Torres, who described the institute as "amazing."


Dr. Ko Un Park, a surgical oncologist at OSU's Comprehensive Cancer Center, discusses the signs of inflammatory breast cancer, treatment, and other things to know about the rare, yet deadly form of the disease.

"The desire to improve the technology for the patients to find this breast cancer in patients early when it's treatable, and the prognosis ends up being great. I'm fortunate enough to be one of those patients. It's a blessing," she concluded.

Several companies have released AI products with the ability to flag abnormalities during cancer screenings. Doctors are also using AI to detect brain cancer, lung cancer and prostate cancer.

Find more updates on this story at FOXNews.com.

More:
Artificial intelligence helping detect early signs of breast cancer in some US hospitals - FOX 5 Atlanta

Read More..

What is The Role of Artificial Intelligence in Healthcare? – ReadWrite

AI is becoming more and more popular. As we employ it in our daily lives, large hospitals are using it in their daily operations. Artificial intelligence in the healthcare sector is seeing many innovations that make it easily accessible to both patients and doctors.

AI in healthcare can easily gather all of the information produced during the diagnostic process. When combined with technologies like big data analytics, telehealth services, and remote monitoring, this data can be used to better understand the diseases in question and develop more effective treatment plans.

Additionally, patients' lives will be made easier. Doctors will have access to a single digital record for each patient, so people can receive medical advice while relaxing at home. Doctors with access to the records can modify them without difficulty, and the security of people's personal data will also be provided for.

Artificial intelligence in healthcare has more advantages than conventional methods. Let's take a look at a few of them:

Doctors are prone to making mistakes because of their demanding jobs; they must always use extreme caution and pay close attention to each patient's needs. Such errors can harm patients' recovery and occasionally prove fatal. AI aids doctors by finishing some of the laborious jobs, such as data organization and inspection, so they can work more efficiently without getting exhausted.

There are many occasions when a patient needs surgery or medication immediately, and in such situations AI technology can prove life-saving. Whereas doctors must review a patient's prior data manually, AI can quickly assess the records and suggest immediate action. This takes less time and makes decision-making more effective.

Virtual health assistants are a great asset for both patients and doctors. For doctors, they can analyze data and provide recommendations. They can also assist patients by advising them on diets, sending their health information to their doctors, and reminding them to take their medications.

AI now handles numerous duties that doctors once had to perform, which saves time and makes treating patients more efficient. AI can provide the results of tests such as MRIs, CT scans, and ultrasounds, shortening turnaround and delivering results right away. Patients no longer have to wait weeks for test results.

Along with automating diagnosis, artificial intelligence (AI) in healthcare can also help with disease prevention. It can project the spread of diseases at the macro level and assess the likelihood that an individual will transmit a condition. This can encourage improved health outcomes and assist healthcare professionals with duties like planning and logistics.
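The macro-level projection described above can be sketched with a classic compartmental model. The following is a minimal, purely illustrative example (a standard SIR simulation with invented transmission and recovery rates, not any real public-health system's method):

```python
# Minimal SIR epidemic sketch, illustrative only: the kind of
# macro-level disease-spread projection described above.
# beta (transmission rate) and gamma (recovery rate) are invented values.

def simulate_sir(s, i, r, beta=0.3, gamma=0.1, days=160):
    """Advance a simple SIR model one day at a time.

    s, i, r are the susceptible, infected, and recovered fractions
    of a population; each step moves people s -> i -> r.
    """
    history = []
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir(s=0.99, i=0.01, r=0.0)
peak_infected = max(i for _, i, _ in history)
print(f"peak infected fraction: {peak_infected:.2f}")
```

With these made-up rates the infection peaks at roughly a third of the population before burning out; real forecasting systems layer far more data onto the same basic idea.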

With the help of AI in healthcare, you won't have to rush to the hospital to show doctors your medical records, which helps cut costs. Personal assistants powered by artificial intelligence can advise patients on health-related issues, reducing the expense of visiting a clinic or hospital, and can even connect patients directly with doctors for guidance.

AI has also been incorporated into wearable medical devices, improving patient care. Devices such as Fitbits employ AI to analyze data and inform consumers and their healthcare providers about potential health concerns. Technology-enabled self-assessment of one's health reduces the load placed on experts and helps avoid unnecessary hospitalizations or readmissions. Healthcare apps, developed by professional healthcare app development services, can also connect with these devices to give doctors better access to patients' data.

You might occasionally ponder whether robots will ever replace nurses. The answer raises a number of difficult questions. In fact, robots can improve nurses' working lives.

The nursing tasks that robots are designed to perform include taking vital signs, assisting with ambulation, giving medication, and following infection control procedures. As these capabilities become reality and robots gradually integrate into healthcare environments, the traditional role of nurses may change.

According to research, nurses spend eight to sixteen percent of their time performing tasks that aren't genuinely nursing-related and could be handled by another team member. Robots can assist nurses by following them around, leaving nurses more time to devote to caring for their patients.

If we look for a robot that could compete on a human level, Sophia is the best example of how far technology has come. Sophia is a well-known social robot designed to act as a companion to senior citizens, and it illustrates the potential technology has to make robots function in more human-like ways.

Do you wonder about the future of artificial intelligence in the healthcare sector? AI is revolutionizing all aspects of human interaction and information consumption, including the way we buy goods and services.

Artificial intelligence in healthcare is altering how clinicians practice medicine, how patients are treated, and how the pharmaceutical sector functions. Let's look at a few trends that will shape AI's future.

As a result, we may conclude that artificial intelligence is making every effort to advance the healthcare industry. AI in healthcare is supporting physicians, nurses, and even patients, and promoting quicker recovery. AI's more developed and reliable diagnoses will help patients lower costs, and AI gives doctors more time to focus on understanding and caring for patients rather than on administrative tasks.

Simply put, AI's goal is to increase the efficiency with which computers can handle challenging healthcare issues. As a result, we will see further improvement in the healthcare industry in the coming years thanks to artificial intelligence.

Hello, I'm Srushti Tete, an Enthusiastic Digital Marketer. I am a Digital Marketing Executive and Strategic Partner at Futurionic with over 3+ years of experience in this field. I am passionate about leveraging the right strategic partnerships and software to scale digital growth.

Read more:
What is The Role of Artificial Intelligence in Healthcare? - ReadWrite

Read More..

Artificial Intelligence is here friend, foe or both? – Citrus County Chronicle

A whole new thing to worry about has just arrived. It joins a list of existential concerns for the future, along with global warming, the wobbling of democracy, the relationship with China, the national debt, the supply chain crisis, and the wreckage in the schools.

Artificial intelligence, known as AI, has had pride of place on the worry list for several weeks. Its arrival was trumpeted for a long time, including by the government and by techies across the board. But it took ChatGPT, an AI chatbot developed by OpenAI, for the hair on the back of the national neck to rise.

Now we know the race into the unknown is speeding up. The tech biggies, like Google and Facebook, are trying to catch the lead claimed by Microsoft. They are rushing headlong into a science the experts say they only partially understand. They really don't know how these complex systems work; maybe like a book that the author cannot read after having written it.


Incalculable acres of newsprint and untold decibels of broadcasting have been raising the alarm ever since a ChatGPT test told a New York Times reporter that it was in love with him and he should leave his wife. Guffaws all around, but also fear and doubt about the future.

Will this Frankenstein creature turn on us? Maybe it loves just one person, hates the rest of us, and plans to do something about it.

In an interview on the PBS television program White House Chronicle, John Savage, An Wang professor emeritus of computer science at Brown University, told me there was a danger of over-reliance on decisions made using AI, and hence of mistakes.

For example, he said, some Stanford students partly covered a stop sign with black and white pieces of tape. AI misread the sign as signaling it was OK to travel 45 miles an hour. Similarly, Savage said the slightest calibration error in a medical operation using artificial intelligence could result in a fatality.
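The taped-over stop sign is an instance of what researchers call an adversarial example: a small, targeted change to an input that flips a model's decision. A toy sketch with invented numbers (a two-class linear classifier, not the actual Stanford experiment or a real vision model) shows the mechanism:

```python
# Toy adversarial-example sketch (illustrative only): a small,
# targeted change to an input flips a simple classifier's decision,
# the same failure mode as tape on a stop sign. All numbers are invented.

def classify(features, weights, bias):
    """Linear classifier: positive score means 'stop'."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "stop" if score > 0 else "speed limit 45"

weights = [2.0, -1.0, 0.5]
bias = 0.1
sign = [0.6, 0.2, 0.4]  # features of a clean stop sign
assert classify(sign, weights, bias) == "stop"

# Nudge each feature by at most 0.4 in the direction that lowers the
# score (the gradient direction for this linear model), analogous to
# small pieces of tape on the sign.
taped = [f - 0.4 * (1 if w > 0 else -1) for f, w in zip(sign, weights)]
print(classify(taped, weights, bias))  # the decision flips
```

Real attacks do the same thing in a much higher-dimensional input space: each pixel moves only slightly, but every move is chosen to push the score across the decision boundary.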

Savage believes AI needs to be regulated and that any information generated by AI needs verification. As a journalist, it is the latter that alarms.

Already, AI is writing fake music almost undetectably. There is a real possibility that it can write legal briefs. So why not usurp journalism for ulterior purposes and put stiffs like me out of work?

AI images can already be made to speak and look like the humans they are aping. How will you recognize a deep fake from the real thing? Probably, you wont.

Currently, we are struggling with what is fact and where is the truth. There is so much disinformation, so speedily dispersed that some journalists are in a state of shell shock, particularly in Eastern Europe, where legitimate writers and broadcasters are assaulted daily with disinformation from Russia.

"How can we tell what is true?" a reporter in Vilnius, Lithuania, asked me during an Association of European Journalists meeting as the Russian disinformation campaign was revving up before the Russian invasion of Ukraine.

Well, that is going to get a lot harder. "You need to know the provenance of information and images before they are published," Brown University's Savage said.

But how? In a newsroom on deadline, we have to trust the information we have. One wonders to what extent malicious users of the new technology will infiltrate research materials or, later, the content of encyclopedias. Or, are the tools of verification themselves trustworthy?

Obviously, there will be upsides to thinking-machines scouring the internet for information on which to make decisions. I think of handling nuclear waste; disarming old weapons; simulating the battlefield; incorporating historical knowledge; and seeking new products and materials. Medical research will accelerate, one assumes.

However, privacy may be a thing of the past; it almost certainly will be.

Just consider that attractive person you saw at the supermarket but were unsure what would happen if you initiated a conversation. Snap a picture on your camera, and in no time AI will tell you who the stranger is, whether the person might want to know you and, if that should be your interest, whether the person is married, in a relationship or just waiting to meet someone like you. Or whether he or she is a spy for a hostile government.

AI might save us from ourselves. But we should ask how badly we need saving and be prepared to ignore the answer. Damn it, we are human.

Llewellyn King is executive producer and host of White House Chronicle on PBS. His email is llewellynking1@gmail.com and you can follow him on Twitter @LlewellynKing2. He wrote this for InsideSources.com.

Read this article:
Artificial Intelligence is here friend, foe or both? - Citrus County Chronicle

Read More..