Category Archives: Artificial Intelligence
What Nvidia Stock Investors Should Know About the Latest Artificial Intelligence (AI) Chip Announcement – The Motley Fool
Nvidia continues to dominate the AI market.
In today's video, I discuss recent updates affecting Nvidia (NVDA -2.68%) and other semiconductor companies. Check out the short video to learn more, consider subscribing, and click the special offer link below.
*Stock prices used were the after-market prices of April 11, 2024. The video was published on April 11, 2024.
Jose Najarro has positions in Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends BlackBerry and Marvell Technology. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.
View original post here:
What Nvidia Stock Investors Should Know About the Latest Artificial Intelligence (AI) Chip Announcement - The Motley Fool
The Hidden Peril of Over-Reliance on Artificial Intelligence – yTech
Summary: As artificial intelligence (AI) becomes more ingrained in everyday life, it's important to consider the ramifications of allowing AI to make decisions on our behalf. While the technology promises numerous advantages, especially in data-intensive fields, the potential for AI to erode our decision-making abilities shouldn't be overlooked.
The steady infiltration of artificial intelligence into various aspects of our lives has raised a multitude of concerns, from privacy violations to biased algorithms. However, one significant danger that might be overshadowed is that AI could weaken our capabilities in making thoughtful and disciplined decisions.
Thoughtful decision-making traditionally comprises a few vital steps that promote understanding and exploring multiple options while considering their trade-offs. This process usually culminates with a well-informed choice, resisting the urge to swiftly conclude without adequate reflection. Regrettably, the convenience of AI shortcuts this regimen, providing solutions devoid of transparent reasoning and stifling our crucial cognitive exercises.
Harnessing the potential of AI without becoming subservient to it involves recognizing the technology's limitations. AI-generated advice, while helpful in certain contexts, does not remove the need for personal scrutiny and critical thinking. Societal over-reliance on AI threatens to entrench existing biases and group conformity, neglecting the individual's analytical growth.
Going forward, a balanced approach is essential. While AI can revolutionize sectors such as healthcare and cybersecurity, it's vital to retain the human element in everyday decision-making. Embracing AI's benefits while ensuring that society does not forfeit the fundamental human capacity for thoughtful choice may help prevent an over-dependence that could ultimately render humans less autonomous and wise. Maintaining this delicate balance honors the intrinsic human privilege and duty of discernment and choice, fostering personal and societal advancement.
Artificial Intelligence: The Implications of Outsourcing Decisions
As the prominence of artificial intelligence (AI) in modern life continues to escalate, we are beginning to face the complex consequences of integrating this advanced technology into our daily routines. The AI industry's spread reaches far and wide, touching upon areas like healthcare, where it assists in diagnosing diseases; finance, where it aids in fraud detection; and transportation, through the advent of self-driving vehicles. These applications are just the tip of the iceberg, revealing the immense potential that AI possesses to transform our world.
However, as we marvel at AI's capabilities, we must also ponder the market forecasts that predict its growth. Experts anticipate that the AI sector will witness exponential growth in the coming years. Industries worldwide are slated to spend billions on AI technologies, with healthcare, automotive, and finance sectors leading the charge. The global AI market size is expected to reach new heights, suggesting a future where AI applications become even more omnipresent.
But this anticipated boon comes with a caveat: the issues surrounding the AI industry and its products. As AI's decision-making capabilities surpass human speed and accuracy, we must confront the ethical and practical challenges it creates. These range from job displacement in the workforce to significant privacy and security concerns. Furthermore, the potential for ingrained biases within AI algorithms poses a severe risk, potentially amplifying existing societal inequalities.
Ethically, there is a burgeoning debate on how much control should be relinquished to AI. Reliance on machine-generated choices could lead to atrophy in human cognitive abilities, specifically in critical thinking and problem-solving skills. As humans become mere supervisors of AI-driven processes, the fear of eroding our decision-making faculties looms large.
To approach these challenges effectively, broad discussions across various sectors are essential. Stakeholders, including technologists, ethicists, policymakers, and the public, must work together to establish guidelines that balance AI advancements with human oversight. Creating transparent AI systems that can explain their reasoning is crucial for fostering trust and understanding.
In fostering this balance, we are reminded of the importance of nurturing human competencies that AI cannot replicate: empathy, moral reasoning, and deep contextual understanding. It is in these uniquely human traits that our dominion over artificial intelligence must be maintained.
To explore the latest AI news and insights, you can visit IBM Watson and DeepMind, leaders in the AI space.
Maintaining a measured perspective is key to leveraging AI effectively while safeguarding our inherent ability to make thoughtful decisions. The challenge lies in embracing the convenience of AI without becoming dependent on it, thereby preserving the crucial human element in an increasingly automated world.
Leokadia Gogulska is an emerging figure in the field of environmental technology, known for her groundbreaking work in developing sustainable urban infrastructure solutions. Her research focuses on integrating green technologies in urban planning, aiming to reduce environmental impact while enhancing livability in cities. Gogulska's innovative approaches to renewable energy usage, waste management, and eco-friendly transportation systems have garnered attention for their practicality and effectiveness. Her contributions are increasingly influential in shaping policies and practices towards more sustainable and resilient urban environments.
View post:
The Hidden Peril of Over-Reliance on Artificial Intelligence - yTech
1 Stealthy Artificial Intelligence (AI) Stock That Could Be Huge – sharewise
There is something special about seeing events live, whether you like sports, concerts, or shows. I recently visited T-Mobile Arena in Las Vegas to watch the Vegas Golden Knights take on the Detroit Red Wings, and there was something different about getting into the arena. I'll explain below.
A large part of any event is security. Sadly, mass shootings have skyrocketed over the past 20 years. New York City has also seen a surge in violence on the subways, so it is turning to this company to help keep people safe.
Whatever our personal experience, we are all at least indirectly affected by unfortunate incidents when we go through tight security and long lines. However, traditional metal detector security has problems -- problems that Evolv Technologies (NASDAQ: EVLV) is trying to solve using artificial intelligence (AI) technology.
Continue reading
Source: Fool.com
See the original post:
1 Stealthy Artificial Intelligence (AI) Stock That Could Be Huge - sharewise
Artificial intelligence in NY’s courts? Panel will study benefits and potential risks. – Gothamist
The New York state court system has established an advisory panel to study the potential benefits and risks of how artificial intelligence is utilized in court.
The Advisory Committee on Artificial Intelligence and the Courts will be made up of judges, court administrators, attorneys, academics and other experts from around New York. They will examine the use of AI tools in the courts by judges, court staff, attorneys and litigants, and identify how AI could be used to improve the administration of justice while minimizing risks. The group will also be charged with developing appropriate guardrails to ensure AI is used safely.
Chief Administrative Judge Joseph A. Zayas, who is tasked with overseeing the day-to-day operation of the statewide court system, announced the formation of the committee in a press release on Thursday.
"While these are incredibly exciting times, with AI showing tremendous promise for transforming court operations, improving court experiences for all users, and greatly expanding access to justice, we have to move cautiously in considering the adoption of the use of AI tools," Zayas said in a statement. "The New York State Courts must aspire to the effective, responsible, and impartial use of AI, taking every step possible to guard against bias and the lack of human input, and to ensure that all security and privacy concerns are protected."
Initially, the 39-member panel will focus on studying and then recommending AI training, determining how to ensure AI use is equitable and assessing the ethical implications of using AI tools, according to the press release.
Among those appointed to the panel are NYU Law School Director and professor Jason Schultz, who will serve as one of three co-chairs, and Manhattan District Attorney Alvin Bragg.
The creation of the committee comes as elected officials in the state and across the country grapple with how to handle the growing use of AI. Earlier this year, Gov. Kathy Hochul announced her commitment to putting New York State at the cutting edge of AI research, which included a proposal to create a consortium to build an AI computing center in upstate New York.
More:
Artificial intelligence in NY's courts? Panel will study benefits and potential risks. - Gothamist
AI makes retinal imaging 100 times faster, compared to manual method – National Institutes of Health (NIH) (.gov)
News Release
Wednesday, April 10, 2024
NIH scientists use artificial intelligence called P-GAN to improve next-generation imaging of cells in the back of the eye.
Researchers at the National Institutes of Health applied artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye. They report that with AI, imaging is 100 times faster and improves image contrast 3.5-fold. The advance, they say, will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.
"Artificial intelligence helps overcome a key limitation of imaging cells in the retina, which is time," said Johnny Tam, Ph.D., who leads the Clinical and Translational Imaging Section at NIH's National Eye Institute.
Tam is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics.
"Adaptive optics takes OCT-based imaging to the next level," said Tam. "It's like moving from a balcony seat to a front-row seat to image the retina. With AO, we can reveal 3D retinal structures at cellular-scale resolution, enabling us to zoom in on very early signs of disease."
While adding AO to OCT provides a much better view of cells, processing AO-OCT images after they've been captured takes much longer than OCT without AO.
Tam's latest work targets the retinal pigment epithelium (RPE), a layer of tissue behind the light-sensing retina that supports the metabolically active retinal neurons, including the photoreceptors. The retina lines the back of the eye and captures, processes, and converts the light that enters the front of the eye into signals that it then transmits through the optic nerve to the brain. Scientists are interested in the RPE because many diseases of the retina occur when the RPE breaks down.
Imaging RPE cells with AO-OCT comes with new challenges, including a phenomenon called speckle. Speckle interferes with AO-OCT the way clouds interfere with aerial photography. At any given moment, parts of the image may be obscured. Managing speckle is somewhat similar to managing cloud cover. Researchers repeatedly image cells over a long period of time. As time passes, the speckle shifts, which allows different parts of the cells to become visible. The scientists then undertake the laborious and time-consuming task of piecing together many images to create an image of the RPE cells that's speckle-free.
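To make the manual baseline concrete, here is a toy Python sketch of speckle suppression by frame averaging. The synthetic scene, the unit-mean gamma speckle model, the assumption of perfectly registered frames, and the choice of 120 captures are illustrative assumptions only, not the NIH team's actual processing pipeline.

```python
# Toy sketch of the manual baseline: average many repeated captures so that the
# shifting speckle cancels out and the underlying cells emerge. Assumptions: a
# synthetic scene, multiplicative speckle with unit mean, and perfectly
# registered frames -- a simplification of real AO-OCT processing.
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 0.3)
truth[16:48, 16:48] = 1.0          # stand-in for a patch of RPE cells

def speckled_capture(clean):
    """Simulate one capture: fully developed speckle (exponential intensity, mean 1)."""
    return clean * rng.gamma(shape=1.0, scale=1.0, size=clean.shape)

single = speckled_capture(truth)
averaged = np.mean([speckled_capture(truth) for _ in range(120)], axis=0)

# Error against the ground truth drops sharply once ~120 frames are averaged.
print("1 frame RMSE:   ", np.sqrt(np.mean((single - truth) ** 2)))
print("120 frames RMSE:", np.sqrt(np.mean((averaged - truth) ** 2)))
```

Averaging N independent speckle realizations cuts the noise standard deviation by roughly the square root of N, which is why the manual method needs so many captures and so much acquisition time.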
Tam and his team developed a novel AI-based method called parallel discriminator generative adversarial network (P-GAN), a deep learning algorithm. By feeding the P-GAN network nearly 6,000 manually analyzed AO-OCT-acquired images of human RPE, each paired with its corresponding speckled original, the team trained the network to identify and recover speckle-obscured cellular features.
When tested on new images, P-GAN successfully de-speckled the RPE images, recovering cellular details. With one image capture, it generated results comparable to the manual method, which required the acquisition and averaging of 120 images. On a variety of objective performance metrics that assess things like cell shape and structure, P-GAN outperformed other AI techniques. Vineeta Das, Ph.D., a postdoctoral fellow in the Clinical and Translational Imaging Section at NEI, estimates that P-GAN reduced imaging acquisition and processing time by about 100-fold. P-GAN also yielded greater contrast, about 3.5 times greater than before.
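For readers who want a feel for how such a network is trained, below is a minimal, hypothetical sketch of a paired image-to-image GAN in PyTorch: a generator maps a single speckled capture toward its manually averaged counterpart while a patch discriminator pushes the output to look realistic. The tiny architectures, loss weighting, and random stand-in tensors are assumptions for illustration; this is not the actual P-GAN design, which uses parallel discriminators and the paired AO-OCT dataset described above.

```python
# Minimal paired image-to-image GAN sketch (not the actual P-GAN architecture).
# Generator: single speckled capture -> despeckled estimate.
# Discriminator: PatchGAN-style critic scoring patches as "averaged" or "generated".
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),   # one logit per local patch
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, g_opt, d_opt, speckled, clean, adv_weight=0.01):
    """One optimization step on a batch of (speckled, manually averaged) pairs."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: real (averaged) patches vs. generated (despeckled) patches.
    d_opt.zero_grad()
    fake = gen(speckled).detach()
    real_pred, fake_pred = disc(clean), disc(fake)
    d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
             bce(fake_pred, torch.zeros_like(fake_pred))
    d_loss.backward()
    d_opt.step()

    # Generator: stay close to the averaged target and fool the discriminator.
    g_opt.zero_grad()
    fake = gen(speckled)
    fake_pred = disc(fake)
    g_loss = l1(fake, clean) + adv_weight * bce(fake_pred, torch.ones_like(fake_pred))
    g_loss.backward()
    g_opt.step()
    return float(d_loss), float(g_loss)

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
    # Random stand-ins for one batch of paired 64x64 patches.
    speckled, clean = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    print(train_step(gen, disc, g_opt, d_opt, speckled, clean))
```

Once training is finished, only the generator is needed at inference time, which is why a single capture can stand in for the averaged stack.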
By integrating AI with AO-OCT, Tam believes that a major obstacle for routine clinical imaging using AO-OCT has been overcome, especially for diseases that affect the RPE, which has traditionally been difficult to image.
"Our results suggest that AI can fundamentally change how images are captured," said Tam. "Our P-GAN artificial intelligence will make AO imaging more accessible for routine clinical applications and for studies aimed at understanding the structure, function, and pathophysiology of blinding retinal diseases. Thinking about AI as a part of the overall imaging system, as opposed to a tool that is only applied after images have been captured, is a paradigm shift for the field of AI."
More news from the NEI Clinical and Translational Imaging Section.
This press release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process; each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge of fundamental basic research. To learn more about basic research, visit https://www.nih.gov/news-events/basic-research-digital-media-kit.
NEI leads the federal government's efforts to eliminate vision loss and improve quality of life through vision research, driving innovation, fostering collaboration, expanding the vision workforce, and educating the public and key stakeholders. NEI supports basic and clinical science programs to develop sight-saving treatments and to broaden opportunities for people with vision impairment. For more information, visit https://www.nei.nih.gov.
About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.
NIH...Turning Discovery Into Health
Vineeta Das, Furu Zhang, Andrew Bower, et al. Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical coherence tomography. Communications Medicine. April 10, 2024, https://doi.org/10.1038/s43856-024-00483-1.
###
Report: Sam Altman Seeks $1 Billion to Fund AI Hardware Device – PYMNTS.com
How much will it cost to build Sam Altman's planned artificial intelligence-powered personal device?
At least $1 billion, according to a report by The Information, which listed that amount as what Altman, CEO of OpenAI, and his partner, Jony Ive, Apple's former design guru, are seeking from investors.
While the precise nature of the device is unclear, it will not resemble a smartphone, the report said. The effort was first reported last fall, but the latest story indicated that discussions with Thrive Capital and venture capital group Emerson Collective are proceeding to move the company forward.
Writing about the effort last year, PYMNTS likened the project to the way Apple's business model has always revolved around a close integration between hardware and software.
"This approach allowed Apple to control both the device and the operating system, ensuring a consistent and user-friendly experience, as well as facilitating the rise of subscription-based pricing models for apps and content, allowing users to pay on a recurring basis for access to premium services or content," the October report said. "Similar pricing and operating model evolutions are already taking place in the AI ecosystem."
The steep cost of AI, fueled mainly by the computing power AI models need, is a reality that businesses need to face to stay competitive. Estimates from analysts showed that Microsoft's Bing AI chatbot, powered by OpenAI, requires at least $4 billion of infrastructure to perform its tasks.
"Managing these costs could lead to the development of new business models, or the transformation of existing ones, as businesses look to pass on both costs and cost-savings to end-users through dynamic pricing strategies," PYMNTS wrote.
In a separate report, PYMNTS noted that the AI landscape is growing more crowded and competitive, making it tougher for companies to get their own AI products into the hands of consumers.
Building an AI hardware device may give OpenAI an edge, the September report said. The firm currently relies on Apple and Android phones to run its apps, and browsers housed in a variety of computer manufacturers' casings to run its software.
Continued here:
Report: Sam Altman Seeks $1 Billion to Fund AI Hardware Device - PYMNTS.com
How Artificial Intelligence Is Fueling Incel Communities – The Daily Beast
In late January 2024, X was flooded with graphic, deepfaked images of Taylor Swift. While celebrities have long been the victims of photo leaks and cyber-attacks, this time it was different because these were generated using artificial intelligence.
The images were quickly reported by the "Shake It Off" singer's fanbase and taken down after being live on the poster's profile for less than a day. However, it was enough time for them to go viral despite the platform having policies against non-consensual nudity. A report from disinformation research firm Graphika later found that the images had been created on 4chan, where users encouraged each other to generate sexually charged deepfakes of famous female celebrities in an attempt to skirt content policies surrounding nudity.
Unfortunately, Swift's experience isn't a one-off. Marvel actress Xochitl Gomez, who was only 17 years old at the time of reporting, said on the podcast The Squeeze that she struggled to get deepfakes of her taken down from X, and shared the mental impact that had on her. Gomez and Swift are just two of the countless women who've recently become victims of deepfakes depicting them in sexual ways.
"People have always used media to try and defame people; that hasn't changed. What's changed is how accessible it's now gotten," Siwei Lyu, professor of Computer Science at the University of Buffalo, told The Daily Beast.
Late last year, AI image generation platform CivitAI became popular for its Bounties feature, which encouraged users to create deepfakes in exchange for virtual rewards. Almost all the bounties created were of women, according to reporting from 404 Media. Some included women who weren't celebrities or public figures either, but rather private citizens.
Experts expect it to only get worse, especially as more and more incel communities online use these technologies. Henry Ajder, an AI and deepfake adviser and expert, told The Daily Beast that this has been a growing problem for years now and CivitAI is an example of a platform heavily linked to that kind of evolution.
He said that CivitAI has become a hotbed for not just artistically created content, but also content that's erotic. "It's a specific place to find specific knowledge and people have started using it for pornographic content."
Ajder also describes the technology on the platform as agnostic, or dual use, saying that once it's there it can be used in any way, while others are explicitly designed for creating pornographic content without consent. The tools have only grown more popular within incel culture via platforms like Reddit and 4chan.
"There's such a low threshold," Hera Husain, founder of Chayn, a nonprofit supporting victims of gender-based violence and trauma, told The Daily Beast. "It's an easy-to-access method which allows people to fulfill the darkest fantasies they may have. [...] They may feel it is victimless, but it has huge consequences for those people."
It's not just deepfakes that have penetrated incel culture, either. There's even research suggesting that AI girlfriends will make incels even more dangerous. With this tech allowing them to form and control their perceptions of a so-called ideal woman, there's a danger that they may push those perceptions onto real women. When they find themselves unable to do so, or when a woman seems unattainable, as in the case of Swift or Gomez, incels begin deepfake campaigns. At least then, incels can make these women do what they like.
"Governments are simply trying to play catch-up; the technology has gone faster than their ability to regulate," Belinda Barnet, senior lecturer in media at Swinburne University, told The Daily Beast.
This gets even more dangerous in global contexts. Patriarchal norms in different nations often further endanger women who become victims of such campaigns. In many more conservative countries, even a deepfake of a woman can be enough for her family to ostracize her or, in extreme cases, use violence against her. For example, in late 2023, an 18-year-old was killed by her father over an image of her with a man, which police suspect was doctored.
"It doesn't matter that the image is fake. The fact that their image is associated with such a depiction is enough for society to ostracize them. It's not so much about people believing the images are real as it is about pure spite. It's a different kind of trauma to revenge porn," Ajder explained.
With AI generation becoming more accessible, the barrier to entry is also lower for incels around the world who may have struggled with language barriers. In South Asia, where Husain focuses much of her work, it also becomes harder to counter incel radicalization, both socially and on a policy level. "They don't have as strong a counter to the radicalization they're seeing in the incel community," she explained.
Lyu says that policies regarding free speech and tech access vary across the world, so there can be different impacts. "In the U.S., using AI generation tools to create content... is freedom of speech, but people can take advantage of that as well. Drawing that line becomes very hard. Whereas in China, there's very strong limitations on the use of this technology, so that is possible but prevents positive uses of the same line of technology."
Incel culture existed long before AI generation tools became popular. Now that theyre mainstream, these communities will be quick to adopt them to further cause harm and trauma. The issue is sure to get worse before it gets better.
"In terms of incel culture, this is another weapon in their twisted arsenal to abuse women, perpetuate stereotypes, and further make visceral the twisted ideas they have about women," Ajder said.
Read the original post:
How Artificial Intelligence Is Fueling Incel Communities - The Daily Beast
Video: Where Bitcoin and Artificial Intelligence Meet – Bloomberg
The halving, a preordained event in the code of Bitcoin that happens every four years, is upon us again. Once it occurs, perhaps as soon as this month, the reward every miner receives for mining the digital asset is immediately cut in half.
"There will be a day when miners come to work and they mine roughly half the number of Bitcoin they mined the day before," says Tyler Page, chief executive of Cipher Mining Technologies Inc. "The halving is a natural phenomenon in Bitcoin that disciplines the entire market and forces it to become more efficient." As it turns out, each time it's happened in the past, Bitcoin prices eventually hit a new record. Still, the event comes as some miners are looking for a hedge, specifically by branching out into artificial intelligence. In the mini-documentary "Where Bitcoin and AI Meet," Bloomberg Originals explains how the two hottest technologies of the 21st century are coming together.
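As a rough illustration of the mechanism, here is a short Python sketch of the subsidy schedule: the block reward starts at 50 BTC and halves every 210,000 blocks, which works out to roughly every four years. The real protocol computes the value in integer satoshis, so treat this float version as a simplification.

```python
# Simplified sketch of Bitcoin's block-subsidy schedule: the mining reward starts
# at 50 BTC and is cut in half every 210,000 blocks (roughly every four years).
# Bitcoin Core computes this in integer satoshis; floats are used here for brevity.
def block_subsidy(height: int) -> float:
    """Return the block reward in BTC at a given block height."""
    halvings = height // 210_000
    if halvings >= 64:              # after 64 halvings the subsidy reaches zero
        return 0.0
    return 50.0 / (2 ** halvings)

if __name__ == "__main__":
    # Heights just before and after the April 2024 halving at block 840,000.
    for height in (839_999, 840_000):
        print(height, block_subsidy(height))    # prints 6.25, then 3.125
```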
Read more:
Video: Where Bitcoin and Artificial Intelligence Meet - Bloomberg
The potential for artificial intelligence to transform healthcare: perspectives from international health leaders – Margolis Center for Health Policy
Artificial intelligence (AI) has the potential to transform care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care. AI will be critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. There is also universal concern about the ability to monitor health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change. The Future of Health (FOH), an international community of senior health care leaders, collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise around this topic. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers across the globe that FOH members identified as important for fully realizing AI's potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.
Experts discuss misinformation, artificial intelligence, grassroots solutions at panel – The Brown Daily Herald
Misinformation experts discussed social media, algorithms and artificial intelligence at a Tuesday panel hosted by The Information Futures Lab.
Titled "Everything We Know (And Don't Know) About Tackling Rumors and Conspiracies," the panel was moderated by Claire Wardle, a co-director of the IFL and a professor of the practice of health services, policy and practice.
Despite its societal impact, research on media misinformation remains a young field, according to Stefanie Friedhoff, another co-director of the IFL and an associate professor of the practice of health services, policy and practice.
Having worked as a senior policy advisor on the White House COVID-19 Response Team, she later contributed to a literature review on pandemic misinformation interventions, a topic she discussed at the panel.
"We're significantly understudying this," Friedhoff said, citing a lack of longitudinal research on non-American and video-based misinformation. "We don't have a lot of useful evidence to apply in the field, and we need to work on that."
Evelyn Pérez-Verdía, founder of We Are Más, a strategic consulting firm, spoke about her work to combat misinformation at the panel. She aims to empower Spanish-speaking diasporas in South Florida through community-based trust-building. Recently, she has worked with the IFL as a fellow to conduct a survey of information needs in Florida.
According to Pérez-Verdía, non-English-speaking and immigrant communities are prone to misinformation because of language and cultural barriers. When people are offered accessible resources, she argues, communities become empowered and less susceptible to misinformation. "People are hungry for information," she said.
Abbie Richards, another panelist and senior video producer at Media Matters for America, a watchdog journalism organization, identified social media algorithms as an exacerbating factor. In a video shown during the panel, Richards highlighted the proliferation of misleading or inaccurate content on platforms like TikTok. As a video producer, she looks to distill research and discourse on this topic for audiences who wouldn't necessarily read research papers, she said.
She researched AI-generated content on social media, which is often designed to take advantage of the various platforms' monetization policies. "There's a monetization aspect behind this content," Richards elaborated.
"Algorithms are designed to show (users) what they want to see and what they'll engage with," she said. "When viewers feel disempowered, it makes it really easy to gravitate towards misinformation."
When discussing AI-generated misinformation that is designed to be entertaining, Friedhoff noted that "only some of us have the luxury to laugh at misinformation."
"But from the perspective of somebody behind the paywall, who doesn't necessarily speak English, factual information becomes increasingly difficult to access," she added. She describes this as "misinformation inequities," which all speakers acknowledged existed in their projects.
In an interview with The Herald, Friedhoff and Wardle emphasized how the online information ecosystem connects different types of misinformation. Vaccine skepticism, Wardle said, is a slippery slope towards climate change denial: "We have to understand as researchers and practitioners that we can't think in silos."
Many of the speakers agreed that misinformation spreads in part because people tend to prioritize relationships, both in real life and parasocial, over fact. "There's nothing more powerful than someone you trust and close to you," Pérez-Verdía said.
Richards said emotional literacy is the backbone to navigating both AI and misinformation. This includes teaching people how to recognize (confirmation bias) within themselves and understanding common misinformation techniques.
When asked to offer potential solutions, the speakers offered a range of responses. Richards suggested a marketing campaign for federal agencies to facilitate increased governmental literacy that allows all citizens to understand how the government functions. Pérez-Verdía also identified diverse and culturally conscientious government messaging as key, while Friedhoff recommended creating community conversations to explore perspectives rather than further polarizing them.
Audience member Benjy Renton, a research associate at the School of Public Health, was inspired by community-based approaches like Pérez-Verdía's work: "It was great to see the diverse range of perspectives on misinformation."
The speakers told The Herald that they found each other's perspectives enlightening. "I'm somebody that people feel like they can go to because I've spent years talking about (misinformation)," Richards said in an interview with The Herald after the event. "But the idea of how you measure (trust) is fully beyond me."
Pérez-Verdía ended the discussion by reiterating that the fight against misinformation is founded on teamwork: "When you look at all of these pieces, the women here, a collaboration where we all have our individual gifts, that's exactly what needs to be done on a larger spectrum."
Megan is a Senior Staff Writer covering community and activism in Providence. Born and raised in Hong Kong, she spends her free time drinking coffee and wishing she was Meg Ryan in a Nora Ephron movie.
The rest is here:
Experts discuss misinformation, artificial intelligence, grassroots solutions at panel - The Brown Daily Herald