
AI that reads brain scans shows promise for finding Alzheimer’s genes – Nature.com

A type of MRI scan (artificially coloured) shows the brain of a person with Alzheimer's disease. Credit: Mark and Mary Stevens Neuroimaging and Informatics Institute/Science Photo Library

Washington DC

Researchers have sifted through genomes from thousands of individuals in an effort to identify genes linked to Alzheimer's disease. But these scientists have faced a serious obstacle: it's hard to know for certain which of those people have Alzheimer's. There's no foolproof blood test for the disease, and dementia, a key symptom of Alzheimer's, is also caused by other disorders. Early-stage Alzheimer's might cause no symptoms at all.

Now, researchers have developed artificial intelligence (AI)-based approaches that could help. One algorithm efficiently sorts through large numbers of brain images and picks out those that include characteristics of Alzheimer's. A second machine-learning method identifies important structural features of the brain, an effort that could eventually help scientists to spot new signs of Alzheimer's in brain scans.


The goal is to use people's brain images as visual biomarkers of Alzheimer's. Applying the method to large databases that also include medical information and genetic data, such as the UK Biobank, could allow scientists to pinpoint genes that contribute to the disease. In turn, this work could aid the creation of treatments and of models that predict who's at risk of developing the disease.

"Combining genomics, brain imaging and AI is allowing researchers to find brain measures that are tightly linked to a genomic driver," says Paul Thompson, a neuroscientist at the University of Southern California in Los Angeles, who is spearheading efforts to develop these algorithms.

Thompson and others described the new AI techniques on 4 November at the annual conference of the American Society of Human Genetics in Washington DC.

Thousands of people have had both their genomes sequenced and their brains scanned in the past two decades as part of efforts to build massive research databases. But the rate at which this torrent of information is being produced is outpacing researchers' ability to analyse and interpret it.

"We're very data-rich these days compared with how things were 5-10 years ago, and that's where AI [and machine learning] approaches can excel," says Alison Goate, a geneticist at the Icahn School of Medicine at Mount Sinai in New York City.

In 2020, Thompson launched AI4AD, a consortium of researchers across the United States that aims to develop AI tools to analyse and integrate genetic, imaging and cognitive data relating to Alzheimer's disease. As part of this project, researchers created an AI model trained on tens of thousands of magnetic resonance imaging (MRI) brain scans. These images had previously been reviewed by physicians, who picked out scans that showed evidence of Alzheimer's. From the images, the AI tool learned what the brains of people with and without Alzheimer's look like.

In one trial, reported in a preprint [1] that has not yet been peer reviewed, the AI classifier detected Alzheimer's in brain scans with an accuracy of more than 90%. The consortium has also used a similar approach to create a classifier that can accurately sort scans into separate categories according to specific pathological changes in the brain that are associated with cognitive decline and dementia [2].
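
For readers who want a concrete picture of what such a classifier involves, here is a minimal sketch in Python (PyTorch) of the general pattern: a small 3D convolutional network that maps a preprocessed MRI volume to an Alzheimer's-versus-control label and is fitted to physician-reviewed labels. The architecture, tensor shapes and names are illustrative assumptions, not the AI4AD consortium's actual model.

```python
# Minimal sketch (not the AI4AD code): a small 3D CNN that labels a
# preprocessed MRI volume as Alzheimer's vs. control. Shapes and names
# are illustrative assumptions.
import torch
import torch.nn as nn

class MRIClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # collapse to one feature vector per scan
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                     # x: (batch, 1, depth, height, width)
        return self.classifier(self.features(x).flatten(1))

model = MRIClassifier()
scans = torch.randn(4, 1, 64, 64, 64)         # stand-in for preprocessed MRI volumes
labels = torch.tensor([0, 1, 0, 1])           # 0 = control, 1 = Alzheimer's (physician-reviewed)
loss = nn.CrossEntropyLoss()(model(scans), labels)
loss.backward()                               # gradient for one supervised training step
```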

Degui Zhi, a data scientist at the University of Texas Health Science Center at Houston, and his colleagues have taken a different approach. Whereas Thompson and his team focused the AI model on areas of the brain that are known to be linked to Alzheimer's, Zhi wanted the tool to learn for itself the structural features of the brain that can help to diagnose the disease.


The researchers' AI tool reviewed thousands of brain scans and chose the features that most reliably differentiated one person's brain from another's [3]. Zhi says that this minimizes the likelihood of human bias influencing the algorithm. Now, Zhi's team is using the algorithm to identify the traits that best distinguish between brain scans of people with and without Alzheimer's.

Thompson and Zhi acknowledge that the AI models are only as good as the data on which they're trained. There is a lack of racial and geographical diversity in individuals who have had their brains scanned and genomes sequenced, especially in databases such as the UK Biobank, so the findings from this AI-guided research might not be applicable to everyone. Furthermore, Goate says it will be crucial to show that the AI models' performance can be replicated in other databases, and that they show consistent results.

Rudolph Tanzi, a neurogeneticist at Massachusetts General Hospital in Boston, says that these biomarkers could one day become part of a set of risk scores for the disease that also integrate blood-based biomarkers and genetics. "When all of these bits of data are combined, risk scores can become exponentially more sensitive, which will hopefully allow people to seek early treatment before the disease progresses," he adds.

"Alzheimer's is just the beginning," Thompson says. If this approach works, it could also be applied to other diseases that have a physical presentation on brain imaging, he says.

View post:

AI that reads brain scans shows promise for finding Alzheimer's genes - Nature.com

Read More..

ACR keynote: AI in medicine ‘could be exciting or it could be terrifying’ – Healio

November 12, 2023


SAN DIEGO – If wielded appropriately, artificial intelligence could transform health care for the better by improving diagnoses and allowing physicians to provide more individualized care, said the keynote speaker at ACR Convergence 2023.

He also said it could be terrifying.

"In medicine in particular, the excitement and anxiety about AI have accelerated in the last year," Avi Goldfarb, PhD, Rotman Chair in Artificial Intelligence and Healthcare and professor of marketing at the Rotman School of Management at the University of Toronto, told attendees.

Goldfarb noted that the launch of ChatGPT and other such large-language models has been heralded by some as having the ability to change the course of history – a development which some may find terrifying or fascinating, he said.

The fascinating part pertains to whether AI will provide revolutionary improvements in patient care. The terrifying part, meanwhile, relates to the question of whether it will replace the human element in medicine.

"The optimistic view is that we are on the verge of the Jetsons or the robots from Star Wars," Goldfarb said. "They can do everything we do and they listen to us."

However, the flip side of this hypothetical is the Terminator or The Matrix, "where the robots are just as functional but do not listen to humans," he added.

For Goldfarb, the reality, at the moment, is that AI and machine learning are simply helping to make predictions. In medicine, this means diagnosis.

"Prediction has gotten better, faster and cheaper," he said.

However, Goldfarb assured attendees that humans will always be necessary in the predictive component of patient diagnosis.

"You take data about a patient's symptoms and fill in the missing information about the cause of those symptoms," he said.

The larger concern, for Goldfarb, is that of all the industries that are using AI technology to make advances, health care is "way at the bottom," he said. "When you look at AI applications in health care, it is not looking good."

Goldfarb encouraged attendees to embrace the technology to improve the lives of patients.

"If we have machines that can do diagnosis as well or better than 90% of medical professionals, then we can think about a transformation of the medical industry," he said.

According to Goldfarb, allowing nurses, advanced practice practitioners and other providers to oversee the gathering of symptom data and then input that data into AI or machine learning programs could save rheumatologists time, and allow them to offer more personalized care in the clinic. Moreover, these types of technology could be used by health care professionals anywhere, allowing them to provide care to rural and underserved populations.

Such a transformation may be necessary given the workforce shortage that is coming in rheumatology, Goldfarb added.

"This is a very exciting vision of what could happen in a particular industry," he said. "It could be exciting or it could be terrifying."


Goldfarb A. Keynote address. Presented at: ACR Convergence 2023; Nov. 10-15, 2023; San Diego.

Disclosures: Goldfarb reported no relevant financial disclosures.


See the original post:

ACR keynote: AI in medicine 'could be exciting or it could be terrifying' - Healio

Read More..

Sign up for the beta version of Col-Lin AI today! – BlackEngineer.com

Sign up for the beta version of Col-Lin AI today! It is the world's most extensive digital library on the STEM City USA Metaverse Platform powered by artificial intelligence (AI).

The digital library has a wealth of information on women and minorities in science, technology, engineering, and math (STEM).

For more than 40 years, Career Communications Group (CCG) has published magazines, audio, seminars, videos, and conference information. Their flagship publications include US Black Engineer, Hispanic Engineer, and Women of Color.

The name Col-Lin AI is a tribute to two BEYA honorees, Collin Paris and Linda Gooden, whose contributions have inspired the development of this innovative endeavor.

Curated and developed by CCG, the Col-Lin library is designed to become the world's largest digital repository, focusing on minorities in science and engineering. The library provides a platform for users to connect, collaborate, and exchange knowledge. Sign up here to explore the rich heritage of minority engineers.

Career Communications Group is launching a new AI library in December. In anticipation of its release, the media company has launched a beta version of its digital library called Col-Lin, powered by artificial intelligence.

It will offer a comprehensive database with information on Black people, women, and other underrepresented groups in STEM. The library will present volumes of magazine issues, seminar notes, and nominations.

The beta library includes exciting features that allow users to immerse themselves in a virtual study realm. The voice search feature, which is in development, will enable users to locate materials with a vocal command.

A digital assistant will guide users through the library, ensuring they find what they need and understand it effectively.

Interactive lessons, high-quality video lectures, and three-dimensional experiences will make learning more engaging and enjoyable.

The Col-Lin library will also host virtual classes that mimic the dynamics of a physical classroom, encouraging connection and engagement. Gamifying content will make learning more interactive and enjoyable.

The Col-Lin library provides a platform for users to connect, collaborate, and exchange knowledge. It sheds light on the contributions of minority engineers, serving as a testament to the vast potential and value that diversity brings to the engineering sector.

The library also offers exclusive material from the extensive archives of CCG, which is well-known for promoting minorities in engineering and science.

In promoting diversity, CCG recognizes that representation matters. The library seeks to inspire future engineers by making these narratives easily accessible.

The library is an invaluable resource for aspiring engineers, educators, and industry professionals. It offers rich, adaptable resources for creating lesson plans that celebrate diversity and inspire students to consider careers in STEM.

By incorporating the groundbreaking accomplishments of minority engineers, the library can provide fresh perspectives and innovative solutions to contemporary engineering challenges.

Interested parties are urged to sign up here to secure their place. Early registration will provide an exclusive opportunity to experience the library's wealth of information and be among the first to explore the rich heritage of minority engineers.

Read more:

Sign up for the beta version of Col-Lin AI today! - BlackEngineer.com

Read More..

How Should Developers Respond to AI? – The New Stack

The tech-oriented podcasts at Changelog released a special edition last month focusing on how developers should respond to the arrival of AI. Jerod Santo, the producer of their Practical AI podcast, had moderated a panel at the All Things Open conference on "AI's Impact on Developers." And several times the panel had wondered whether some of the issues presented by AI require a collective response from the larger developer community.

The panelists exploring AI's impact on developers included Quick and Emily Freeman.

Speaking about recent high-profile strikes in other industries, Quick had lauded the general principle and "the power of community, and people being able to come together as a community to stand up for what they think they deserve. I don't know that we're, like, here right now, but I think it's just an example of what people that come together with a common goal can do for an entire industry."

And then his thoughts took an interesting turn. "And maybe we get to a point where we unionize against AI."

"I don't know, that's maybe not. But the power of those connections, I think, can lead to being able to really make positive influence wherever we end up."

"Unionize against AI. You heard it here first," moderator Santo said wryly, then moved on to another topic. (When Freeman warned about prompts that trigger hallucinations of non-existent solutions, quipping that generative AI is "on drugs," Santo joked the audience was hearing lots of "breaking news" on this panel.)

As the discussion moved to other areas, it reminded the audience that the issue is not just the arrival of powerful, code-capable AI. The real question is how the developer community will respond to the range of issues raised, from code licensing to the need for responsible guidelines for AI-developing companies. Beyond preserving their careers by adapting to the new technology, developers could help guide the arrival of tools alleviating their own pain points. They could preserve that fundamental satisfaction of helping others, while tackling increasingly more complex problems.

But as developers find themselves adapting to the arrival of AI, the first question is whether they'll have to mount a collective response.

Unionizing against AI wasn't a specific goal, Quick clarified in an email interview with The New Stack. He'd meant it as an example of the level of just how much influence can come from a united community. "My main thought is around the power that comes with a group of people that are working together." Quick noted what happened when the United Auto Workers went on strike. "We are seeing big changes happening because the people decided collectively they needed more money, benefits, etc. I can only begin to guess at what an AI-related scenario would be, but maybe in the future, it takes people coming together to push for change on regulation, laws, limitations, etc."

Even this remains a concept more than any tangible movement, Quick stressed in his email. "Honestly, I don't have much more specific actions or goals right now. We're just so early on that all we can do is guess." But there is another scenario where Quick thinks community action would be necessary to push for change: the hot-button issue of who owns the code. AI has famously been trained by ingesting code from public repositories, and during the panel discussion Quick worried developers might be tempted to abandon open source licenses.

He acknowledged to the audience that there are obviously much larger issues and that they can seem a little overwhelming. But he also believes "there's some evolution that needs to happen, and in a lot of areas – legally, morally, ethically, open sourcedly. There has to be things that catch up, and give some sort of guidelines to this stuff that we have going on." Quick later argued it will follow the trajectory of other advancements that humanity has made, including the need for acknowledging "that there's probably a point where we need to have limitations."

Although he quickly added, "What that means and what that looks like, I don't know."

But soon the discussion got down to specifics. Santo noted there are already ways that a robots.txt file can be updated by individual users to block specific AI agents from crawling their site. Quick suggested flagging GitHub repositories in the same way as a reasonable intermediary step, though he later admitted that it'd be hard to prove where AI-generated code had taken its training data.
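
For site owners wondering what that looks like in practice, here is a minimal robots.txt sketch. The user-agent names shown (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl) are ones those crawlers have documented; note that a robots.txt entry is only a request, honored by crawlers that choose to respect it.

```
# robots.txt – ask documented AI training crawlers to stay out
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Ordinary crawlers remain unrestricted
User-agent: *
Disallow:
```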

But Freeman returned to the role of communities in addressing companies with a profit-only mentality – both developers and users. "To some degree, between our work and also where we spend our money, we have to tell the market that that is not acceptable."

"So I don't want to live in a world where we're trying to hide from crawlers. I want to live in a world where we have decided on standards and guidelines that lead toward responsible use of that information, so that we all have some compromise around how we're proceeding with this."

At one point Freeman seemed to suggest a cautious choosing-your-battles strategy, telling the audience to "make demands where you can." But one area where she sees that as essential? Calling for responsible development of AI – again meaning guidelines and standards. "We are in the place where it is truly our responsibility to push for this, and push against the sort of market forces that would say, 'We're moving forward quickly with a profit-based approach to this – a profit-first approach.'"

It's a topic she returned to throughout the panel, emphasizing the importance of developers recognizing "our own power and influence on pushing toward a holistic and appropriate approach to responsible AI."

The panel kept returning to the needs of the community. Freeman also agreed with Quick that AI's impact on developers will someday include tools designed to relieve their least-favorite chores, like debugging strange code, though it may take a while to get there. "But I think truly, I keep coming back to this – we have ownership and responsibility over this. And we can kind of determine what this actually looks like in usage."

The biggest surprise came when Santo asked if they were bearish or bullish about the long-term impact of AI on developers. Santo admitted that he was long-term positive, and both his panelists took the same view.

Quick characterized his attitude as "a very super-positive thing," with a goal of easing people's fears about AI replacing their jobs. And Freeman also said with a laugh that she was bullish on AI "because it's happening, right? Like, this is happening. We have to kind of make it our own and lean into it, rather than try and fight it, in my opinion."

Freeman's advice for today's developers? Learn as much as you can, whether it's about designing prompts or understanding the models that you're using, and recognizing the strengths and the limitations and being ready to adapt and change as we move forward. Just as developers have in the past, it's time to grow with a new technology.

And on the plus side, Freeman anticipates a ton of new AI tools being created as venture capitalists fund investment in the AI ecosystem.

Toward the end, Santo asked a provocative question: since detail-oriented programmers take pride in their meticulous carefulness, is AI stealing some of our joy? And Emily Freeman responded: "I think you have a point." Maybe we humans glory in our ability to spot errors quickly, and that pattern recognition is something that makes us really powerful.

But a moment later Freeman conceded that "I think that's the joy for some people – it's not the joy for others." Freeman described her own joy as building tools that matter to people. "I think the spark of joy is going to be different for all of us." But Freeman emphasized that joy and personal growth are important to humans, and will remain so in the future.

And this led back to the larger theme of taking control of how AI arrives in the developer world. "We set the standards here. This is not happening to us. It is happening with us. It is happening by us." Freeman urged developers to take ownership of that – to identify which areas they want to hand off to AI, and the areas where they want developers to remain, growing and evolving with the newly-arrived tools.

"So instead of coding up yet another CREATE/READ/UPDATE/DELETE service for the thousandth time, I want to solve the really complex problems." The challenge of solving new problems at scale is interesting, Freeman argues. "And I think it's that kind of problem-solving and looking higher up in the stack, and having that holistic view, that will empower us along the way."

In our email interview, we asked Quick if he'd gotten any reactions to the panel. His response? "I think we got an overwhelming response of 'this is something I should be paying attention to.'"

See the original post:

How Should Developers Respond to AI? - The New Stack

Read More..

A New Era of Galaxy AI is Coming – Here's a Glimpse – Samsung Global Newsroom

Internet at your fingertips. A camera in your pocket. And then, artificial intelligence improving your everyday life.

Samsung Galaxy helped democratize internet access and turned cameras into a communication tool we can't live without. Even as all eyes are on AI, some of its best benefits haven't come to mobile technology yet. Galaxy is about to change that.

No company can harness AI's potential like Galaxy. Why? Because Galaxy puts the power of openness in the palm of your hand. Designed to empower everyone, everywhere, Galaxy AI is universal intelligence on your phone as you've never seen it before. In all the places it matters most – from barrier-free communication, to simplified productivity, to unconstrained creativity – we're unleashing new possibilities.

Galaxy AI is a comprehensive mobile AI experience, powered by both on-device AI developed at Samsung and cloud-based AI enabled by our open collaborations with like-minded industry leaders. It will transform your everyday mobile experience with the peace of mind you count on from Galaxy security and privacy.

For a tiny glimpse of one benefit Galaxy AI will enable, look no further than the very thing our phones were originally created to do: calling each other. AI Live Translate Call will soon give users with the latest Galaxy AI phone a personal translator whenever they need it. Because it's integrated into the native call feature, the hassle of having to use third-party apps is gone. Audio and text translations will appear in real-time as you speak, making calling someone who speaks another language about as simple as turning on closed captions when you stream a show. Because it's on-device Galaxy AI, you can trust that no matter the scenario, private conversations never leave your phone.

Coming early next year, Galaxy AI will bring us closer to a world where some of the most common barriers to social connection dissolve, and communication is easier and more productive than ever.

That's AI changing the world and your life for the better.

"Mobile technology has an incredible power to enable connection, productivity, creativity and more for people around the world, but until now, we haven't seen mobile AI ignite that in truly meaningful ways," said Wonjoon Choi, EVP and Head of R&D, Mobile eXperience Business. "Galaxy AI is our most comprehensive intelligence offering to date, and it will change how we think about our phones forever."

Breakthrough experiences that empower real connection and open up new possibilities right from your phone. That's Galaxy's promise.

This is just a glimpse of what's to come. Life opens up with Galaxy, so get ready for a new era of mobile AI.

See the original post:

A New Era of Galaxy AI is Coming Here's a Glimpse - Samsung Global Newsroom

Read More..

Travel Trends 2024 Report (Part 1): Authenticity And The Rise In A.I. – Forbes

In this series of articles, I talk to a wide range of travel experts, insiders and luxury brands to find out more about the future of travel for next year and beyond. Today, I look at two growing trends: the search for authentic travel and how technology can elevate travel experiences.

TREND 1: DEEP AND MEANINGFUL TRAVEL

Connecting with local communities, learning about local cultures and harnessing authentic experiences are all key for travelers.

In Intrepid Travel's recent A Sustainable Future for Travel Report – which was published in collaboration with foresight agency The Future Laboratory – People-Positive travel is pegged as the successor to sustainable travel. Going forward, regenerative travel will focus on travel being "social-led instead of product-led," says the report, with people-positive travel focusing on forging deeper human connections, as well as considering the environmental and social impact. Inclusivity will also be key, with travel companies like Intrepid focusing on social change through connection. One area has seen Intrepid investing in recruiting more female leaders in locations such as Morocco and India, doubling the number of female leaders in recent years, and introducing 100 new Indigenous-focused trips in 2023. Focusing on local communities will also be key, helping prevent "tourism leakage," when money flows out of the destinations.

"One of the problems with tourism at the moment is that it is the opposite of regenerative," explains Darrell Wade, co-founder and chairman of Intrepid Travel. "It's extractive – and this cannot continue for much longer."

Meanwhile, Hilton's 2024 Trends Report finds the makeup of the modern traveller is evolving. It identifies an emerging theme of travelers prioritising experiences (85%), with many looking forward to exploring the unknown (81%), trying the local cuisine (64%) and learning about local customs and traditions (48%) when on holiday. And people are saving the pennies to make it happen, with more than half (52%) reducing spending in other areas to prioritise travel. Hilton's global trends report found over half (56%) of people in Britain plan to spend more on travel in 2024 than in 2023. Some are going further to immerse themselves into local traditions, with 25% looking for locally-sourced food while away.

A Sri Lankan cookery class, with Cartology.

(Credit: Cartology)

Cartology Travel, a bespoke, luxury travel agency, describes its focus as curating unique experiences around the world by working with select local partners to ensure memorable stays. The company agrees that the search for authentic travel experiences has been a growing trend over the past few years. "Clients are keen to meet and connect with local people and delve into new experiences," says co-founder Justin Huxter. "This could be learning more about conservation alongside researchers in South Africa or a cooking class in Sri Lanka, in the home of a villager, which focuses on traditional recipes handed down through the generations."

"Clients are asking for more 'unique' experiences when they travel," says Audley Travel. The specialist in tailor-made travel says its North America specialists are responding with suggestions of experiences such as guided kayak and camping trips to see whales and wolves; or exploring lesser travelled regions, such as Saskatchewan and the Yukon. This trend is also being recognised by Audley's partners in destinations. While arranging tailormade trips has always been at the heart of Audley's operations, the country specialists report that an increasing number of partners are also customising the excursions and experiences that they offer for individual clients.

Original Travel is also reporting a similar trend for travellers wanting to deep-dive into a destination. In fact, it has created 10 new itineraries to satisfy the demand from what they are calling "waterculturalists": divers who want to deepen their understanding of the seas and the life and cultures they support. Just as horticulturalists are students of the land and plantlife, waterculturalists want to better understand the seas they explore, educate themselves on issues which need addressing and get involved in ways to protect and cultivate the waters and the cultures which depend on them – and it's becoming a vital part of many more dive trips than ever before. The new and innovative projects include: building reef highways in Fiji; planting coral in the Philippines; restoring coral in French Polynesia; diving with Bubu fishermen in Indonesia and observing local traditional fishing practices from a unique perspective underwater; and joining a safari dive in Tanzania, where you will be equipped with an earpiece to listen to the dive guide as they take you on a tour of coral gardens.

TREND 2: AI AND TRAVEL

How advances in technology and artificial intelligence can aid the hospitality sector and secure the future of the planet.

Timbers Resorts, a boutique hospitality developer and operator, says that AI can play an important role in hospitality. CMO Heidi Nowak says: "In the luxury travel sector we stress there is no substitute for personalized service when it comes to providing a truly meaningful experience for guests. However, AI can and does play an effective role both for guests and staff. We discovered that the optimal approach in utilizing this technology is for tasks that require less of a human touch, such as expediting requests for fresh linens or addressing various housekeeping requests."

Timbers Resorts, Kaua'i, Hawaii.

(Credit: SHAWN O'CONNOR)

She continues: "Through AI, guests can share their needs via text which can be sent from anywhere and at any time throughout their stay. So if there is a request for fresh glassware or towels, they can make that ask while they're working on their swing on the golf course or spending a day on the slopes and can expect the delivery to occur before their return. We strongly believe technology could never replace the personal connection creating lasting memories with every guest's stay. We recommend AI for routine tasks where efficiency is the top priority."

Wearable Carbon Tracker.

(Credit: Intrepid Travel)

Intrepid Travel's aforementioned A Sustainable Future for Travel Report also touches upon artificial intelligence. The report says: "A genuine concern for the state of the planet and its people manifests as holding the travel industry accountable. These are Travel Transformers, who not only want to mitigate any further harm but plan to drive positive change through travel. They won't let the travel industry get away with greenwashing, and they want tangible results. By 2040, it will be unusual to see members of Generation Alpha without a carbon footprint tracker on their smartphones. Every Uber ride, plane journey and trip to the supermarket will be logged in their devices, noting their carbon footprint in real time."

The report also forecasts how technology can aid regenerative travel: "Tracking travel metrics in real time will create an era of live traceability and accountability within the travel industry. 2040's travelers will hold themselves accountable, leaning into technology to measure and optimize their behaviours in line with environmental values and targets. By 2028, the global travel technology market is predicted to reach 11.2bn, up from 7.3bn in 2022. This booming category will give Travel Transformers and other cohorts the means to log their daily emissions and track their travel metrics in real time to help them reduce their footprints." Noteworthy strides have already been made in shaping this landscape. Ariel, a sustainability platform, is recognised for its accuracy in gauging carbon footprints and subsequently offsetting emissions for individuals and businesses. Other platforms, such as Klima, Earth Hero and Joro, calculate travel and everyday footprints, aiding people to achieve decarbonisation goals.

In 2020, Intrepid Travel adopted science-based targets, which set out a path to reduce emissions in accordance with the Paris Climate Agreement. The World Economic Forum's Mission Possible Platform aims to achieve net-zero carbon emissions by mid-century from a group of traditionally hard-to-abate sectors, including aviation.

Writing and discovering the world are second nature to me. I have been a journalist for over 30 years, based in London, UK. I started off as an editorial assistant at Marie Claire Magazine and moved on to write 'The London Pages' within the magazine. My next role was as features editor for In Britain, the magazine for the British Tourist Authority. After my post-graduate qualification in Print Journalism at the London School of Printing, I joined British Airways' High Life Magazine. During my 11 years on this title, I could be frequently found in an airport en route to places such as Costa Rica, Montenegro or Mexico. For the past 14+ years, I have been a freelance travel journalist, writing for a wide range of titles, such as the Mail on Sunday newspaper, Conde Nast Traveller, Wanderlust Magazine, City AM Newspaper, Tatler's Travel Guide, Harper's Bazaar, Country Life and The Jewish Chronicle, plus many more print and online outlets. I'm interested in new places, interesting people, cool design and, above all, authenticity. My expertise is 'luxury intelligence': interesting openings that have a story to tell and new destinations. I've lived in Barbados and Venice, Italy, and continue to travel widely, often with my family in tow. I'll never tire of discovering the world around me. I'm always looking for the new, so tweet me @angelinavillacl, follow me on Instagram (@angelinavillaclarke) and read my blog: https://angelinascasa.com

Read the original here:

Travel Trends 2024 Report (Part 1): Authenticity And The Rise In A.I. - Forbes

Read More..

Chamath Palihapitiya says there's a reasonable case to make that the job of VC doesn't exist in a world of AI-powered two-person startups – Fortune

If you accept the argument that today's artificial intelligence boom will lead to dramatic productivity gains, it follows that smaller companies will be able to accomplish things that only larger ones could in the past.

In a world like that, venture capitalists might need to change their approach to funding startups. So believes billionaire investor Chamath Palihapitiya, a former Facebook executive and the CEO of Silicon Valley VC firm Social Capital.

"It seems pretty reasonable and logical that AI productivity gains will lead to tens or hundreds of millions of startups made up of only one or two people," he said on a Friday episode of the All-In Podcast.

"There's a lot of sort of financial engineering that kind of goes away in that world," he said. "I think the job of the venture capitalist changes really profoundly. I think there's a reasonable case to make that it doesn't exist."

Palihapitiya became the face of the SPAC boom-and-bust a few years ago due to his involvement with special purpose acquisition companies. Also known as blank check companies, SPACs are shell corporations listed on a stock exchange that acquire a private company, thereby making it public while skipping the rigors of the IPO process.

At one point, Palihapitiya suggested that he might become his generation's version of Berkshire Hathaway chairman Warren Buffett. "I do want to have a Berkshire-like instrument that is all things, you know, not to sound egotistical, but all things Chamath, all things Social Capital," he said in early 2021.

Buffett's right-hand man at Berkshire, Charlie Munger, recently expressed his disdain for venture capitalists. "You don't want to make money by screwing your investors, and that's what a lot of venture capitalists do," the 99-year-old said on the Acquired podcast, adding, "To hell with them!"

Palihapitiya suggested that VCs might be replaced at some level by an automated system of capital against objectives: "you want to be making many, many, many small $100,000 [or] $500,000 bets."

Once a tiny-team startup gets to a certain level, "it can go and get the $100 and $200 million checks," he said, adding, "I don't know how else all of this gets supported financially."

Many Silicon Valley leaders expect AI will lead to some types of jobs going away, but that overall it will result in greater productivity and more jobs. Among them is Jensen Huang, the billionaire CEO of Nvidia, which makes the chips that are in hot demand from companies racing to launch AI services.

"My sense is that it's likely to generate jobs," he recently told the Acquired podcast. "The first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people, because they want to expand into more areas."

He added, "humans have a lot of ideas."

Read the rest here:

Chamath Palihapitiya says there's a reasonable case to make that the job of VC doesn't exist in a world of AI-powered two-person startups - Fortune

Read More..

Big risks: Obama and tech experts address harms of AI to marginalized communities – NBC News

CHICAGO – More must be done to curb AI's potential for harm or the further marginalization of people of color, a panel of experts weighing the ever-widening reach of AI warned last week.

The warning came during a panel discussion here at the Obama Foundation's Democracy Forum, a yearly event for thought leaders to exchange ideas on how to create a more equitable society. This year's forum was focused on the advances and challenges of AI.

During a panel titled "Weighing AI and Human Progress," Alondra Nelson, a professor of social science at the Institute for Advanced Study, said AI tools can be incorrect and even perpetuate discrimination.

"There's already evidence that the tools sometimes discriminate and sort of amplify and exacerbate bias in life – big problems that we're already trying to grapple with in society," Nelson said.

A 2021 paper published by AI researchers revealed how large language models can reinforce racism and other forms of oppression. People in positions of privilege tend to be overrepresented in training data for language models, which incorporates encoded biases like racism, misogyny and ableism.

Furthermore, just in the last year multiple Black people have said they were misidentified by facial recognition technology, which is based on AI, leading to unfair criminalization. In Georgia, 28-year-old Randall Reid said he was falsely arrested and jailed in 2022 after Louisiana authorities used facial recognition technology to secure an arrest warrant linking him to three men involved in theft. Noticeable physical differences, including a mole on his face, prompted a Jefferson Parish sheriff to rescind the warrant.

Porcha Woodruff sued the city of Detroit for a false arrest in February. Her lawsuit accuses authorities of using an unreliable facial recognition match in a photo lineup linking her to a carjacking and robbery. Woodruff, who was eight months pregnant at the time, was charged and released on a $100,000 personal bond. The case was later dropped for insufficient evidence, according to the lawsuit.

In polls, Black people have already expressed skepticism over the technology. In April the Pew Research Center found that 20% of Black adults who see racial bias and unfair treatment in hiring as an issue said they think AI would make it worse, compared to about 1 in 10 white, Asian and Latino adults.

Former President Barack Obama, in the forum's keynote address, said he was encouraged by the Biden administration's recently signed executive order on AI, which established broad federal oversight and investment in the technology and which Obama provided advice on, but acknowledged that there are some "big risks" associated with it.

During the panel, Hany Farid, a professor at the University of California, Berkeley, said that predictive AI in hiring, in the criminal legal system and even in banking can sometimes perpetuate human biases.

"That predictive AI is based on historical data," Farid said. "So, if your historical data is biased, which it is – against people of color, against women, against the LGBTQ community – well, guess what? Your AI is going to be biased. So, when we push these systems without fully understanding them, all we are doing is repeating history."
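
Farid's point can be made concrete with a few lines of code. The sketch below uses purely synthetic, illustrative data (not any real hiring dataset): a model fitted to historical decisions that penalized one group goes on to score equally qualified candidates from that group lower.

```python
# Minimal sketch: a predictive model trained on biased historical decisions
# reproduces that bias. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                   # identically distributed in both groups
# Historical "hired" labels: same skill threshold, but group 1 was penalized.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

for g in (0, 1):
    p = model.predict_proba([[g, 0.0]])[0, 1]  # equally qualified candidate (skill = 0)
    print(f"group {g}: predicted hire probability = {p:.2f}")
# The model gives a markedly lower probability to group 1 at identical skill,
# i.e. it has learned the historical bias.
```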

Over the past two years, Nelson has been working within the White House Office of Science and Technology Policy, focusing on the equitable innovation of AI to include many people and voices, she said. Under the Biden administration, her team developed a Blueprint for an AI Bill of Rights, a guide to protect people from the threats of automated systems and includes insights from journalists, policymakers, researchers and other experts.

More conversations are happening about AI around the globe, Nelson said, which is really important, and she hopes that society will seize the opportunity.

"Even if you're not an expert in mathematics, you can have an opinion about this very powerful tool that's going to accomplish a quite significant social transformation," Nelson said. "We have choices to make as a society about what we want our future to look like, and how we want these tools to be used in that future – and it really is going to fall to all of us and all of you to do that work."


Claretta Bellamy is a fellow for NBC News.

Read the original here:

Big risks: Obama and tech experts address harms of AI to marginalized communities - NBC News

Read More..

Here’s what we know about generative AI’s impact on white-collar work – Financial Times


Read the original:

Here's what we know about generative AI's impact on white-collar work - Financial Times

Read More..

AI robotics' 'GPT moment' is near – TechCrunch

Image Credits: Robust.ai

It's no secret that foundation models have transformed AI in the digital world. Large language models (LLMs) like ChatGPT, LLaMA, and Bard revolutionized AI for language. While OpenAI's GPT models aren't the only large language models available, they have achieved the most mainstream recognition for taking text and image inputs and delivering human-like responses – even with some tasks requiring complex problem-solving and advanced reasoning.

ChatGPT's viral and widespread adoption has largely shaped how society understands this new moment for artificial intelligence.

The next advancement that will define AI for generations is robotics. Building AI-powered robots that can learn how to interact with the physical world will enhance all forms of repetitive work in sectors ranging from logistics, transportation, and manufacturing to retail, agriculture, and even healthcare. It will also unlock as many efficiencies in the physical world as weve seen in the digital world over the past few decades.

While there is a unique set of problems to solve within robotics compared to language, there are similarities across the core foundational concepts. And some of the brightest minds in AI have made significant progress in building the GPT for robotics.

To understand how to build the GPT for robotics, first look at the core pillars that have enabled the success of LLMs such as GPT.

GPT is an AI model trained on a vast, diverse dataset. Engineers previously collected data and trained specific AI for a specific problem. Then they would need to collect new data to solve another. Another problem? New data yet again. Now, with a foundation model approach, the exact opposite is happening.

Instead of building niche AIs for every use case, one can be universally used. And that one very general model is more successful than every specialized model: the AI in a foundation model performs better even on any single specific task, because it can leverage learnings from other tasks and generalize to new tasks better, having learned additional skills from performing well across a diverse set of tasks.

To have a generalized AI, you first need access to a vast amount of diverse data. OpenAI obtained the real-world data needed to train the GPT models reasonably efficiently. GPT has trained on data collected from the entire internet with a large and diverse dataset, including books, news articles, social media posts, code, and more.

It's not just the size of the dataset that matters; curating high-quality, high-value data also plays a huge role. The GPT models have achieved unprecedented performance because their high-quality datasets are informed predominantly by the tasks users care about and the most helpful answers.

OpenAI employs reinforcement learning from human feedback (RLHF) to align the model's response with human preference (e.g., what's considered beneficial to a user). There needs to be more than pure supervised learning (SL) because SL can only approach a problem with a clear pattern or set of examples. LLMs require the AI to achieve a goal without a unique, correct answer. Enter RLHF.

RLHF allows the algorithm to move toward a goal through trial and error while a human acknowledges correct answers (high reward) or rejects incorrect ones (low reward). The AI finds the reward function that best explains the human preference and then uses RL to learn how to get there. ChatGPT can deliver responses that mirror or exceed human-level capabilities by learning from human feedback.
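
To make that description concrete, here is a minimal, hypothetical sketch of the preference-learning step at the heart of RLHF: a small reward model is trained so that responses a human preferred score higher than responses the human rejected, and the learned reward is what the RL stage then optimizes. The embedding inputs, shapes and names are illustrative assumptions, not OpenAI's implementation.

```python
# Minimal sketch of RLHF's reward-model step: preferred responses should
# score higher than rejected ones. Inputs are stand-in embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))

preferred = torch.randn(8, 768)   # embeddings of responses a human preferred
rejected = torch.randn(8, 768)    # embeddings of responses the human rejected

# Pairwise (Bradley-Terry style) loss: push preferred rewards above rejected ones.
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
loss.backward()                   # the learned reward then guides RL fine-tuning of the policy
```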

The same core technology that allows GPT to see, think, and even speak also enables machines to see, think, and act. Robots powered by a foundation model can understand their physical surroundings, make informed decisions, and adapt their actions to changing circumstances.

The GPT for robotics is being built the same way as GPT was – laying the groundwork for a revolution that will, yet again, redefine AI as we know it.

By taking a foundation model approach, you can also build one AI that works across multiple tasks in the physical world. A few years ago, experts advised making a specialized AI for robots that pick and pack grocery items. And that's different from a model that can sort various electrical parts, which is different from the model unloading pallets from a truck.

This paradigm shift to a foundation model enables the AI to better respond to edge-case scenarios that frequently exist in unstructured real-world environments and might otherwise stump models with narrower training. Building one generalized AI for all of these scenarios is more successful. It's by training on everything that you get the human-level autonomy we've been missing from the previous generations of robots.

Teaching a robot to learn what actions lead to success and what leads to failure is extremely difficult. It requires extensive high-quality data based on real-world physical interactions. Single lab settings or video examples are not reliable or robust enough sources (e.g., YouTube videos fail to translate the details of the physical interaction, and academic datasets tend to be limited in scope).

Unlike AI for language or image processing, no preexisting dataset represents how robots should interact with the physical world. Thus, the large, high-quality dataset becomes a more complex challenge to solve in robotics, and deploying a fleet of robots in production is the only way to build a diverse dataset.

Similar to answering text questions with human-level capability, robotic control and manipulation require an agent to seek progress toward a goal that has no single, unique, correct answer (e.g., "What's a successful way to pick up this red onion?"). Once again, more than pure supervised learning is required.

You need a robot running deep reinforcement learning (deep RL) to succeed in robotics. This autonomous, self-learning approach combines RL with deep neural networks to unlock higher levels of performance – the AI will automatically adapt its learning strategies and continue to fine-tune its skills as it experiences new scenarios.
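
As a rough illustration of that loop, the sketch below runs a REINFORCE-style policy-gradient update: a small policy network samples an action for a toy "grasp" state, receives a reward, and is nudged toward actions that earned reward. The state, action set and reward function are invented stand-ins for real robot feedback, not a production training setup.

```python
# Minimal sketch of deep RL's core loop (REINFORCE-style policy gradient)
# on a toy grasping task. Everything here is an illustrative stand-in.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(200):
    state = torch.randn(6)                        # e.g. gripper pose + object position
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                        # one of 4 discrete grasp motions
    reward = 1.0 if action.item() == 2 else 0.0   # toy reward: motion 2 "succeeds"
    loss = -dist.log_prob(action) * reward        # reinforce actions that earned reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```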

In the past few years, some of the world's brightest AI and robotics experts laid the technical and commercial groundwork for a robotic foundation model revolution that will redefine the future of artificial intelligence.

While these AI models have been built similarly to GPT, achieving human-level autonomy in the physical world is a different scientific challenge for the two reasons outlined above: there is no preexisting dataset of physical interactions to train on, and success often has no single correct answer, which calls for deep RL.

The growth trajectory of robotic foundation models is accelerating at a very rapid pace. Robotic applications, particularly within tasks that require precise object manipulation, are already being applied in real-world production environments, and we'll see an exponential number of commercially viable robotic applications deployed at scale in 2024.


Originally posted here:

AI robotics' 'GPT moment' is near - TechCrunch

Read More..