
Councilwoman talks about the future of artificial intelligence in NYC – Spectrum News NY1

As city schools embrace the use of emerging artificial intelligence technology, local lawmakers are looking to understand the benefits and the risks involved.

The City Council held a hearing Wednesday to get some answers from education officials on how AI tools are being used in schools, as well as how third-party vendors are vetted.

Education officials testified that AI tools are important to students' education and career training.

City Councilwoman Jennifer Gutiérrez, chair of the council's committee on technology, joined NY1 political anchor Errol Louis on "Inside City Hall" to discuss more.

"We want to ensure that our schools, obviously, are completely connected and that they have access, that our young people have access. That they can start thinking about careers in this industry as early as first, second and third grade," said Gutiérrez.

Her district includes the Brooklyn neighborhoods Williamsburg and Bushwick, as well as Ridgewood in Queens.


FTC to Host Virtual Roundtable on AI and Content Creation – Federal Trade Commission News

The Federal Trade Commission staff will be hosting a virtual roundtable discussion on October 4, 2023, to better understand the impact of the use of generative artificial intelligence on music, filmmaking, and other creative fields.

FTC staff are seeking to better understand how the development and deployment of AI tools that can generate text, images, and audio (often referred to as generative artificial intelligence) may impact open and fair competition or enable unlawful business practices across markets, including in creative industries. The listening session will focus on different issues posed by generative AI, including concerns raised by musicians, actors, and other content creators about the use of AI to create entertainment and other content.

FTC Chair Lina M. Khan will provide opening remarks to kick off the event and will then hear from representatives from a variety of creative fields. They will explore the ways emerging AI tools are reshaping each of the participants' respective industries and how they are responding to these changes. The listening session, which is being led by the FTC's Office of Technology, is part of the agency's efforts to keep up with the latest developments in emerging technologies such as AI.

The event will begin at 3 p.m. ET and be webcast on the FTC's website at FTC.gov. Additional information, including a list of panelists, will be posted in the coming days to the event page.

The lead staffer on this matter is Madeleine Varner from the FTC's Office of Technology.


World must pass ‘AI stress test’, UK Deputy PM says, announcing … – UN News

Mr. Dowden said the so-called AI Safety Summit, set for November, will aim to preempt the risks posed by frontier AI and explore how it can be used for the public good.

"AI is the biggest transformation the world has known," he emphasized, noting that it is going to change everything we do, the way we live, relations between nations, and it is going to change the United Nations, fundamentally.

"Our task as governments is to understand it, grasp it, and seek to govern it, and we must do so at great speed," he stressed.

Mr. Dowden drew parallels between the work of inventors Thomas Edison (the lightbulb) and Tim Berners-Lee (the World Wide Web) and the potential of artificial intelligence today.

They could not surely have envisaged, respectively, the illumination of the New York skyline at night or the wonders of the modern internet, but they suspected the transformative power of their inventions.

He emphasized that frontier AI has the potential not just to similarly transform our lives, but to reimagine our understanding of science, from decoding the smallest particles to the farthest reaches of the universe.

One of the main concerns highlighted by the Deputy Prime Minister is the unprecedented speed at which AI is evolving, with the pace having far-reaching implications, both in terms of the opportunities it presents and the risks it poses.

On the positive side, AI models currently under development could play a pivotal role in addressing some of the world's most pressing challenges: clean energy, climate action, food production, or detecting diseases and pandemics.

"In fact, every single challenge discussed at this year's General Assembly and more could be improved or even solved by AI," he stated.

However, amidst the promise of AI, Mr. Dowden also sounded a cautionary note, underscoring the dangers of misuse, citing examples such as hacking, cyberattacks, deepfakes and the potential loss of control over AI systems.

"Indeed, many argue this technology is like no other, in the sense that its creators themselves don't know how it works; the principal risk will therefore come from misuse, misadventure, or misalignment with human objectives," he added.

"There is no future in which this technology does not develop at an extraordinary pace," he said, and while companies were doing their best to set up guardrails, "the starting gun has been fired on a globally competitive race" in which individual companies as well as countries will strive to push the boundaries as far and fast as possible.

Against this backdrop, the AI Safety Summit will focus on addressing extreme risks associated with frontier AI, the Deputy Prime Minister said.

The summit aims to bring together experts, policymakers and stakeholders to explore strategies for mitigating these risks while harnessing the positive potential of AI for public good.

"We cannot afford to become trapped in debates about whether AI is a tool for good or a tool for ill; it will be a tool for both. We must prepare for both and insure against the latter," he urged.



Smarter AI Assistants Could Make It Harder to Stay Human – WIRED

Researchers and futurists have been talking for decades about the day when intelligent software agents will act as personal assistants, tutors, and advisers. Apple produced its famous Knowledge Navigator video in 1987. I seem to remember attending an MIT Media Lab event in the 1990s about software agents, where the moderator appeared as a butler, in a bowler hat. With the advent of generative AI, that gauzy vision of software as aide-de-camp has suddenly come into focus. WIRED's Will Knight provided an overview this week of what's available now and what's imminent.

I'm concerned about how this will change us, and our relations with others, over the longer term. Many of our interactions with others will be mediated by bots acting in our stead. Robot assistants are different from human helpers: They don't take breaks, they can instantly access all the world's knowledge, and they won't require paying a living wage. The more we use them, the more tempting it will become to turn over tasks we once reserved for ourselves.

Right now the AI assistants on offer are still unrefined. We're not yet at the point where autonomous bots will routinely take over activities where screw-ups can't be tolerated, like booking flights, making doctors' appointments, and managing financial portfolios. But that will change, because it can. We seem destined to live our lives like long-haul airline pilots: after setting a course, we can lean back in the cockpit as AI steers the plane, switching to manual mode when necessary. The fear is that, eventually, it might be the agents who decide where the plane is going in the first place.

Doomerism aside, all of us will have to deal with someone else's supersmart and possibly manipulative agents. We'll turn over control of our own daily activities and everyday choices, from shopping lists to appointment calendars, to our own AI assistants, who will also interact with the agents of our family, friends, and enemies. As they gain independence, our automated helpers may end up making decisions or deals on our behalf that aren't good at all.

For an upbeat view of this future, I consult Mustafa Suleyman. A cofounder of AI startup DeepMind, now the heart of Google's AI development, he is currently the CEO of Inflection.ai, a company developing chatbots. Suleyman has also recently taken residency on The New York Times bestseller list for his book The Coming Wave, which suggests how humans can confront the existential perils of AI. Overall, he's an optimist and of course has a rosy outlook about software agents. He describes the bot his company makes, Pi, as a personal chief of staff that provides not only wisdom but empathetic encouragement and kindness.

"Today Pi is not able to book you restaurants or arrange a car or, you know, buy things for you," Suleyman says. "But in the future, it will have your contractual and legal proxy, which means that you've granted permissions to enter into contracts on your behalf, and spend real money and bind you to material agreements in the real world." Also on the road map: Pi will make phone calls on its owner's behalf and negotiate with customer service agents.

That seems fair, because right now, too many of those service agents are already bots, and (maybe by design?) not open to reasonable arguments that their corporate employers screw over their own customers. Inevitably, we'll be launching our AIs into negotiations with other AIs in all areas of life. Suleyman acknowledges that we don't want those bots to get too cozy with each other or interact in ways not open to human inspection. "We actually want AI-to-AI communication to be limited to plain English," says Suleyman. "That way, we can audit it."


A view from DC: White House preps a ‘bridge’ to AI regulation – International Association of Privacy Professionals

U.S. President Joe Biden is expected to issue a new executive order on artificial intelligence in the coming weeks. Its content and scope have become the subject of much speculation in Washington.

For many months, multiple workstreams at the White House have touched on AI policy priorities that closely track the broader policy conversation in D.C. Concordantly, a large part of the energy of this work is taken up by issues beyond the scope of operational AI governance. As in Congress, the Biden administration is concerned about issues of competitiveness: finding ways to secure America's lead on technological development and national security.

That said, the message that responsible development of AI tools is key to an American style of innovation has increasingly been heard echoing across the policy conversation. But whether, and how, this will be reflected in the forthcoming executive order are still open questions.

Some official hints about the scope of the executive order have started to emerge. At a Chamber of Commerce event this week, U.S. Deputy National Security Advisor Anne Neuberger described the order as "incredibly comprehensive." According to NextGov's reporting of the event, Neuberger went on to say it is "a bridge to regulation because it pushes the boundaries and is only within the boundaries of what's permissible by law." MLex reported that Neuberger's remarks focused on the importance of solving the challenge of watermarking for AI-produced content to help track provenance.

At the DEF CON hacker convention in August, White House Office of Science and Technology Policy Director Arati Prabhakar told reporters that setting federal government policies on AI has become an urgent priority for the administration. "It's not just the normal process accelerated; it's just a completely different process," she said.

The executive order may also find avenues to build on the voluntary commitments the White House received from leading AI companies, which will also be one of the proposals the U.S. brings to the table at the upcoming U.K. AI safety summit, according to MLex's reporting of Neuberger's remarks.

Importantly, the administration seems mindful of the global geopolitical context of AI development and seems intent to share the mic with allies who are also advancing new AI principles. In announcing the most recent set of voluntary commitments, the White House said they were developed in consultation with 20 other listed governments. Further, the administration acknowledged the commitments "complement Japan's leadership of the G-7 Hiroshima Process, the United Kingdom's Summit on AI Safety, and India's leadership as Chair of the Global Partnership on AI."

Meanwhile, federal agencies are continuing to enact the policies mandated by President Biden's prior order on AI. Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, directed agencies to implement principle-based policies on the deployment of AI systems. The Department of Homeland Security this month announced new policies from its AI Task Force, including one on the "acquisition and use of artificial intelligence and machine learning by Department of Homeland Security components," which directly responds to Executive Order 13960.

The other new DHS policy, Directive 026-11, goes further, covering the "use of face recognition and face capture technologies" by the agency. Some highlights of the policy:

At the same time, DHS Secretary Alejandro Mayorkas appointed Chief Information Officer Eric Hysen to serve concurrently as DHS's first chief AI officer. According to the press release, "Hysen will promote AI innovation and safety within the Department, along with advising Secretary Mayorkas and Department leadership on AI issues." The Senate Committee on Homeland Security and Governmental Affairs recently approved a bill that would create a chief AI officer role at each federal agency.

Like DHS, agencies across the federal government seem to be reflecting on how AI will impact their missions, whether by executive order or legislative powers. Director Prabhakar's comments to reporters at DEF CON also mentioned how encouraged she was by the work of federal agencies: "They know it's serious, they know what the potential is, and so their departments and agencies are really stepping up," she said.

The homepage of the official website for the National AI Strategy now provides direct links to AI landing pages from 27 agencies and departments ranging from the Department of Defense's Chief Digital and Artificial Intelligence Office, AI.mil, to the Department of Education's Office of Educational Technology, which recently published a report on AI and the Future of Teaching and Learning.

Another major development is expected in the form of updated guidance from the Office of Management and Budget, arguably the most influential agency when it comes to coordinating and implementing cross-government policy initiatives.

Although earlier White House announcements said the OMB draft guidance would be released "this summer," it has still not become available for public comment. Once implemented, new OMB policies will update its 2020 memorandum to the heads of all executive department agencies to "inform the development of regulatory and non-regulatory approaches regarding technologies and industrial sectors that are empowered or enabled by AI and consider ways to reduce barriers to the development and adoption of AI technologies."

Advocates from across the spectrum have engaged on the forthcoming executive order. Many joined the Leadership Conference on Civil and Human Rights in calling on the Administration to make its AI Bill of Rights binding policy across the federal government. Others, like the U.S. Chamber of Commerce, argued for making targeted adjustments to immigration policies to attract and retain top AI talent in the U.S.

Even if the details are still a mystery, it is clear the order will include a wide-ranging set of policies that extend to the limits of the executive branch's legal powers. At least when it's not in a state of shutdown, the federal government will continue to act on AI.


Please send feedback, updates and draft EO text to cobun@iapp.org.


Generative AI: How will it affect future jobs and workflows? – McKinsey

As companies struggle to understand the implications and applications of generative AI (gen AI), one thing seems clear: AI and its future iterations are not going anywhere. On this episode of The McKinsey Podcast, senior partner Kweilin Ellingrud and partner Saurabh Sanghvi share findings from McKinsey's latest report on gen AI with editorial director Roberta Fusaro and explain why companies must pivot to embrace the technology itself and the deep and lasting changes it may create. Individual and organizational adaptability will be critical.

Also on this episode: the advent of gen AI has many people thinking about the future of their jobs. Joanne Lipman, author of Next! The Power of Reinvention in Life and Work (Mariner Books, March 2023), shares a four-step process for figuring out where you might go next and how you'll get there, in our Author Talks series.

This transcript is edited for clarity and length.

The McKinsey Podcast is cohosted by Roberta Fusaro and Lucia Rahilly.

Roberta Fusaro: Before we dive into the findings of McKinsey Global Institute's report on gen AI and the future of work in America, I want to ask, why this report and why now?

Kweilin Ellingrud: Two influences have changed things. One is that we're emerging from three years of COVID, where there has been so much turmoil and change in the workforce. And two is generative AI, which burst onto the scene about six months ago.

Together, they've changed the nature of work and jobs and inspired us to ask, "What's different now?" and "What can we expect in the future?"

Saurabh Sanghvi: I would add that the labor market looks different from what we've typically seen. We think that some of that has to do with the pandemic, and some of that has to do with workers' needs and preferences changing.

We're seeing all kinds of unprecedented technical change, even beyond generative AI. If we go a step further, we're seeing a record level of federal investment in infrastructure and efforts for reaching net zero. It's this confluence of factors that we really wanted to cover with this new piece around the future of work and generative AI.

Roberta Fusaro: Are pandemic-era labor shortages here to stay?

Kweilin Ellingrud: Right now we have two job openings for every person who's unemployed. So we're quite out of balance compared with previous years. We do think that a tight labor market will persist. I don't think it will be quite as tight as it was maybe a year ago, as we were emerging from the pandemic.

Some of the long-term trends we saw accelerate during the pandemic will persist, including people looking for greater flexibility in work, greater control over their career evolution, and greater meaning and connectivity in their jobs and workplaces.

Roberta Fusaro: Which occupations or which occupational categories have been most affected during the pandemic era? Which are growing, and which are declining?

Kweilin Ellingrud: There are a number of growing occupations. Those would be a lot of health care jobs, STEM jobs, and transportation and delivery jobs.

And then there are a lot of shrinking occupations. I think 80 percent of the occupational transitions between now and 2030 are in four occupational categories: customer service, food service, production or manufacturing, and office support. Those four occupations are going to need a lot of reskilling, upskilling, and support to encourage those workers to gain the skills so that they can re-pot in other occupations that are growing in our economy.

Saurabh Sanghvi: To add, we're starting to see a steady rebound in occupations like builders, educators, and the creative economy. Some of that has been from federal infrastructure. When we start talking about builders and construction, a lot of that has been from major infrastructure builds that have been motivating and incentivizing that work.

But the other one that I'd want to underscore is the educator one, where the pandemic really put a structural hit on how we thought about education. But now we're going back to the long-term historical trend of the fact that we need a lot more educators at all levels.

Roberta Fusaro: But then when you add automation and the specter of gen AI to the mix, what effect do we project that'll have on the labor market?

Kweilin Ellingrud: The impact of gen AI alone could automate almost 10 percent of tasks in the US economy. That affects all spectrums of jobs. It is much more concentrated on lower-wage jobs, which are those earning less than $38,000. In fact, if you're in one of those jobs, you are 14 times more likely to lose your job or need to transition to another occupation than those with wages in the higher range, above $58,000, for example.


But it also does affect the jobs on the higher end of the wage range. Writers, creatives, lawyers, consultants: everybody is going to need to work differently, because parts of our jobs will be affected by gen AI. For some, it will be a more fundamental elimination of the job. For others, it will be more a remaking of how we spend our time.

Roberta Fusaro: It is quite a significant number, and sort of frightening. But also, I know there are a lot of great opportunities associated with gen AI. When we were talking about the impact on lower-wage workers, in particular, what are the mechanisms by which we can ensure that there is a way to move up the ladder?

Kweilin Ellingrud: Twelve million occupational transitions are likely going to need to happen between now and 2030, with 80 percent of those in the four occupations that I mentioned: customer service, food service, production, and office support.

How do we make sure that workers in those jobs can reskill and upskill? A lot of that depends on individual companies doing that at scale, not in the hundreds but in the thousands of workers. I think public-private partnerships between the federal government and educational institutions could help to train and build the skills of our workforce.


Then, as employers shift to more skills-based hiring, looking for the skills that they need as opposed to credentials, that will help as well. The silver lining of all of this is that we will have more jobs in the future than we do today, given demographic trends, consumption trends, and GDP growth. On average, those jobs will be higher-paying jobs, but they will require higher levels of education.

Roberta Fusaro: I like the positive outlook. I think it's what we all need. I am curious, though, how do we ensure that the future of work doesn't exclude traditionally disadvantaged groups?

Kweilin Ellingrud: It's a great question because it is a danger, and if left unmanaged, it will affect lower-wage workers more. It will affect people of color more. It will affect women more. For instance, women are about 50 percent more likely to be in one of those occupations that needs to transition, compared with men.

How we make sure that we support this need is to target reskilling and upskilling. There are certain programs that can help identify if you are in a customer service and sales role. They can help to identify the career paths that could build on those skills to help you move up the ladder and maybe reskill and upskill in a targeted way to find that next job.

Federal programs could help as well. For example, on the infrastructure investments, we saw that contracts would be awarded to companies that provide childcare, for instance. That will also help make the sector more inclusive for working mothers, as construction has at least historically been quite male and quite White. But with all the infrastructure investment, a lot of the jobs created there can now be more equal in terms of where the job growth goes.

Saurabh Sanghvi: We really need intentional focus. As we think about the solution space, if you look in the higher-ed sector, for example, the ability to work with historically Black colleges and universities is a great way to really help Black learners, for instance, get into these high-wage, high-growth pathways and some of the occupations that were highlighting.

Similarly, when we think about some of the workforce development programs, there's a lot of investment that's going into new regions that haven't really had vibrant innovation hubs. Investments like that can really help create some of these opportunities in the geographies that typically haven't had all of the job growth previously.

Roberta Fusaro: How should employers think about the data in this report in terms of changing the way they're thinking about talent development and so forth?

Saurabh Sanghvi: There's a huge role that employers can play in this. First, really thinking about how to hire for skills instead of looking at someone's degree or the prerequisite of the job they previously had.

The second thing is that there's a real opportunity to think about pipelines and pathways. Everybody is learning about these new technologies. Everybody needs to upskill. Companies can really think about opportunities to hire from within instead of externally for an open role. They can find pathways within the company that help someone move out of a role that is in less demand and into a role that has more demand. That would probably be far more cost-efficient, because you already have the human capital.

Finally, even if we think a little beyond generative AI, what we're seeing is that workers are demanding a new style and way of working. So, the other thing that's going to be really important is, can employers really design new ways of working and working models that can be more flexible, that can really take advantage of in-person moments as well as remote moments? And can that be done in a way that actually becomes more inclusive and attractive to women workers and adult workers who are also maybe taking a college course?

Roberta Fusaro: The report notes the role of the federal government in investing in infrastructure renewal and initiatives to achieve net zero climate goals. What impact could this have on the labor market?

Saurabh Sanghvi: We think net zero is going to be a tremendous opportunity for job growth. But similar to some of the things that we've talked about regarding automation and AI, it will require quite a bit of transition. Ultimately, what we will see is some labor demand declines in jobs that relate to the oil and gas industry and labor demand increases in what we would call greener jobs around renewable energy, like solar and wind, and the entire infrastructure around that.


The key thing that's going to need to happen is, how do we help workers transition from categories that are potentially declining to those greener jobs? And how do we think about upskilling in those greener jobs? The infrastructure story is a little simpler to understand. A lot of the investments the federal government has been making are in many ways record-level. They're going toward construction, building, and repairing roads, bridges, and other things like that. So the big area that this is going to directly impact is the construction sector.

One of the big opportunities there is that it's a sector that has typically been quite male-dominated. So there are significant opportunities to think about, how can we expand those opportunities to women? Similarly, it's also a sector that isn't very diverse if we look at it from a race and ethnicity standpoint. How can we expand opportunities there for people of color as well?

Kweilin Ellingrud: On the net-zero side, we see a net increase of about 700,000 jobs. But it's actually displacing about 3.5 million jobs that either directly or indirectly would be eliminated.

Then there are gains in renewable energy and other areas of over four million jobs. But that's a lot of both displacement and net new job creation. So there is a lot of disruption in the workplace, in the overall job economy, for that net creation of 700,000 jobs. But, hopefully, that will be a huge growth driver in the future.

Roberta Fusaro: I'm already thinking that, as a female podcast host and editor, I should start thinking about a shift into the construction sector or some of these other opportunities that you're citing, because it's really compelling data.

Saurabh Sanghvi: There could be many new green jobs, so there's a huge opportunity in the country to really start marketing the fact that there's an entire sector of jobs that students and adult workers could be getting into, jobs that could really go about saving the planet.

If you start thinking about the next generation of students who want to be more mission-driven and care about the purpose of their jobs, there's a tremendous opportunity to connect those two things.

Kweilin Ellingrud: As we look over time, about 9 or 10 percent of the jobs every decade are net new occupations that didn't exist before. That could be in advanced analytics, renewable energy, as we were just describing, or being a social media influencer.

How do we think about these new jobs and the skills that they will need? Those new jobs, at least historically, have typically been more male-dominated than female-dominated. But how do we build the skills that we'll need in the future, both for the jobs we know we will need and for the net new occupations that we haven't even imagined yet?

Roberta Fusaro: Are there any other forces cited in the report that we haven't raised that could affect the labor market in the next two to five years?

Saurabh Sanghvi: Some of the other factors that could impact the labor market are, one, we have an aging workforce. That will have an impact on everything as it relates to retirement and the fact that we've seen quit rates at an all-time high over the past two years. We'll see an impact on demand for healthcare in the US as well.

Another finding, probably my favorite of this report, is that over the past three years we've seen 50 percent more occupational transitions than in the previous three years. This is really positive news in my mind, because we're seeing the occupational transitions result in workers disproportionately moving into higher-wage roles.

If we could continue that trend over the next decade and through 2030, then we could be in a world where some of the disruptions that we're talking about end up being really positive and help workers get into higher-wage roles and opportunities for more fulfilling careers. And for the first time ever, we have real, hard evidence that it's happening, and at a scale that's faster than what we've previously seen.

Kweilin Ellingrud: One other element in terms of our workforce is that immigration has been quite low for an extended period. When you combine the higher quit rates with early retirements due to health and other concerns, and then lower immigration, all of these things exacerbate the talent shortage we were describing earlier, and the result is a very challenging job market for employers, more so than for employees.

Roberta Fusaro: For both of you, Kweilin and Saurabh, what was the most surprising finding from this research?

Kweilin Ellingrud: The finding that shocked me most was that people in lower-wage jobs, below $38,000 a year, were 14 times more likely to need to transition occupations than those in the highest-wage jobs. I thought that it would be more egalitarian in the impact of automation and generative AI. I knew it would have a bit of a disproportionate effect. But 14 times was really quite stunning to me.

Roberta Fusaro: Definitely. Saurabh, any surprising finding on your end?

Saurabh Sanghvi: As Kweilin mentioned earlier, our overall automation number in the midpoint scenario went from 21 percent of activities being able to be automated to 29 percent of activities being able to be automated. But one of the really surprising parts that's not been covered is the number before gen AI was 21 percent.

That 21 percent is having an impact on our findings more than some of the recent increase from generative AI. When we think about other technologies like robotics and kiosks coming into the fast-food sector, a lot of those technologies are having more of an impact on a lot of these sectors. But we also need to be thinking about the implications of all the other technologies too.

Roberta Fusaro: If there were one message or sort of one silver lining that you could share with our listeners, what would that be?

Kweilin Ellingrud: Looking at the upside, looking at the increasing number of jobs, higher wages over time, yes, there are a lot of occupational switches that we'll need to transition through, a lot of upskilling and reskilling at scale. But the GDP growth, the upside of more jobs at higher wages, gives me a lot of comfort that there's a better future as we get through that tumultuous and challenging transition period.

Saurabh Sanghvi: All too often, as we talk about AI and generative AI, we jump to the conclusion of job loss. One of the findings from our report is that it's much more a story of augmentation. To highlight that, if we take the role of teachers, for example, they are some of the most overworked workers in the country today.


If you look at all the activities that teachers are working on, there are a number of things that they're doing that are not student-facing, that are just administrative. The huge potential for this technology is: how can we help augment professions and help free up time so that it can then be repurposed? In the case of a teacher, it would allow them to spend more time directly with students to help improve student outcomes. That's just one example, but it has a lot of analogs in other professions.

Lucia Rahilly: If gen AI frees up lots of your time, would you think about a life overhaul? Here's Joanne Lipman, author of Next! The Power of Reinvention in Life and Work.

Joanne Lipman: These four steps, or four Ss, I call them the reinvention road map: search, struggle, stop, solution.

The first step, the search, is fascinating because this is when you are collecting information and experiences. This is the stuff that is going to take you to your transition, your reinvention, but you don't know it at the time. For career people, it could be a side hustle, a random interest, something you like to read about, or a hobby.

The second step is the struggle. The struggle is where you have disconnected, or you're starting to disconnect, from that previous identity, but you have not figured out where you are going. When we tell these reinvention stories about people who've had these amazing transformations, we tend to skip over this part, but this is where all the important work gets done. The struggle often doesn't end until you hit a stop.

That's the third step: the stop. The stop might be something that you initiate. For example, I quit my job, right? But it may also be something that is imposed on you. It could be that you lose your job, or it could be a trauma, like a divorce or an illness in the family, a pandemic. Whatever it is, it stops you in your tracks, and only then are you really able to synthesize all of these experiences.

It all coalesces into what leads you to your solution, which is the final reinvention step. Even the people who had the most extreme reinventions don't see it as a reinvention. People just see it as sort of an extreme version of themselves.

James Patterson is the biggest-selling novelist of all time, the most financially successful. I first met him more than 30 years ago. As a young Wall Street Journal reporter, I covered the advertising business. He was working at J. Walter Thompson and ran the Burger King account. I show up early in the morning and he's, like, "Oh, I've been up for hours already, because what I really want to do is be a novelist." I'm thinking to myself, good thing the guy has a day job.

He went through all four of those steps. The search was when he was writing and trying to find his voice while he was still an ad executive. The struggle was when he started publishing books and they began getting better. He had been brought up to have and keep a job. (And, you know, writing is not a career.) So he was very nervous. He wasn't sure if he was good enough, but then his stop came. He'd already published almost ten books at this point.

He can tell you exactly the moment the stop happened. He was coming back from the beach and was stuck in standstill traffic on the New Jersey Turnpike. He was watching cars on the other side going back to the beach, and he said, "I am on the wrong side of the road. My job is to get to the other side of the road, where I get to go to the beach on a Sunday." He went back and subsequently quit to become a full-time novelist, which turned out to be a very good idea.

Generally, these moments are unique to every one of us. You get married or lose your job, or you move. There are a couple of strategies for success, but there are a few things that you want to avoid. One of the bigger mistakes I was surprised to find was people quitting too soon.

The right way to fail is to do this iteratively. But there are two really important myths. The Cinderella myth is the idea that transformation is abrupt and instant. We tell these stories, and they sound amazing, but we don't ever talk about the in-between struggle. In that struggle, there's nothing wrong with you; not only that, you're actually moving forward, and this is an incredibly important part of the transformation process.

The second myth that I think is incredibly damaging is this idea that you have to have an absolute plan of where you want to go. That is very good advice if you really know you want to be an oral surgeon. But for so many of the people I interviewed, it was the reverse. They didn't know where they were going. They had no idea. They didn't have that end goal in mind, but they were open to exploring and to letting what a future might look like seep in.

Read more:

Generative AI: How will it affect future jobs and workflows? - McKinsey


Seaver College to Host Human-Centered AI Conference – Pepperdine University Newsroom

On Tuesday, September 26, Seaver College will host the Human-Centered AI conference, which brings together experts to discuss how artificial intelligence (AI) is affecting the fields of robotics, neuroscience, and ethics.

"The world around us is changing rapidly," says Fabien Scalzo, associate professor of computer science and director of the Keck Data Science Institute. "Students should be aware of how this powerful technology works. But they also need to understand the necessity of doing what is right and wielding these powerful tools for the betterment of society."

Operating with these goals in mind, Scalzo has organized a diverse set of AI researchers and practitioners to speak at the conference. The keynote address will be delivered by Charles Elachi, a former director of NASA's Jet Propulsion Laboratory and a Professor Emeritus at the California Institute of Technology, who led multiple rover missions exploring Mars. Following Elachi's address, conference speakers will analyze topics in entrepreneurship, medical imaging, business education, bioinformatics, and computational biology. Leading authorities in these fields, like Bryan Johnson, the CEO and founder of Blueprint, and Jeff Scheinrock, the assistant dean of the Anderson School of Management's applied management programs at the University of California, Los Angeles, will guide the discussions.

"We want to bring awareness to the different facets of AI," says Scalzo. "This conference promotes a variety of perspectives from different key players in their respective areas."

Scalzo, whose own research interests focus on computational modeling for the medical field, discussed AI's exponential growth in recent years, claiming that the more access the public has to AI's tools, the faster the technology will grow. As a result, he believes it's important to recognize and acknowledge the ethics regarding AI: the consequences and benefits that accompany this technological breakthrough.

Since 2021, Pepperdine University's Seaver College has been committed to examining the development of AI and how humans should interact with and implement such technology. With the College's establishment of the Keck Data Science Institute in 2022, Scalzo and his colleagues have been able to expand and deepen the discussion with events such as the Human-Centered AI conference.

To learn more about this event, visit the Human-Centered AI Conference webpage.


Third-party AI tools pose increasing risks for organizations – MIT Sloan News


As artificial intelligence becomes more powerful and widespread, the accompanying risks are also more real and more abundant. Errors or misuse could lead to reputational damage and loss of customer trust, financial losses, regulatory penalties, and even litigation.

Responsible AI is a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.

To guard against these risks, companies are tasked with developing a responsible AI framework to ensure that AI systems are developed legally and ethically in service of the good of individuals and society.

A particular concern is third-party AI tools or algorithms designed by another company that an organization buys, licenses, or accesses, according to a new research report by MIT Sloan Management Review and Boston Consulting Group.

The report, based on an executive survey of more than 1,240 respondents representing companies in 59 industries and 87 countries, revealed that 78% of organizations use third-party AI tools, and more than half use third-party tools exclusively. This is a concern, because the report also found that more than half (55%) of all AI failures come from third-party tools. Company leadership might not even know about all of the AI tools being used throughout their organizations, a phenomenon known as "shadow AI."

The report outlined five ways companies can reduce risk from AI, particularly from third-party tools:

1. Move quickly to expand responsible AI programs. The gap is widening between companies that lead in implementing responsible AI programs and those that lag behind. To keep up, organizations should broaden the scale and scope of responsible AI programs and make sure they are implemented throughout the organization instead of on an ad hoc basis.

2. Properly evaluate third-party tools. The use of third-party tools is likely to grow. While there isnt an easy way to mitigate the risks posed by these tools, organizations should continue to evaluate their use of third-party AI using a variety of methods, including evaluating a vendors responsible AI practices and adherence to regulatory requirements. The more evaluation methods an organization uses, the more effective their efforts are likely to be. The researchers found that organizations that use seven different methods to evaluate third-party tools are more than twice as likely to uncover AI failures compared with those that use only three.

3. Prepare for regulation. Organizations in highly regulated industries appear to have better practices around risk management, which could contribute to better responsible AI outcomes and greater business benefits. More regulations could be on the way too, as new rules are drafted and begin to take effect at local and national levels. According to the research report, all organizations can benefit from the structured risk management approach of a responsible AI program, particularly when it comes to using or integrating third-party AI tools.

4. Engage CEOs in responsible AI efforts. CEO engagement can boost the benefits of responsible AI programs and thus help mitigate the risks of AI use. The research found that when CEOs play an active role in responsible AI through hiring, target setting, or product-level discussions, the company sees 58% more business benefits than organizations with less-involved CEOs.

5. Double down and invest in responsible AI. Most of all, now is not the time to cut back on resources or teams devoted to ethical or responsible AI, or even to just sustain these efforts at previous levels. AIs adoption has soared, and so have the risks associated with the technology.

"In this climate, not investing in [responsible AI] is tantamount to falling behind and exposing your organization to material risk," the authors write.

Read the 2023 Responsible AI report


Insight: Big Pharma bets on AI to speed up clinical trials – Reuters

LONDON, Sept 22 (Reuters) - Major drugmakers are using artificial intelligence to find patients for clinical trials quickly, or to reduce the number of people needed to test medicines, both accelerating drug development and potentially saving millions of dollars.

Human studies are the most expensive and time-consuming part of drug development as it can take years to recruit patients and trial new medicines in a process that can cost over a billion dollars from the discovery of a drug to the finishing line.

Pharmaceutical companies have been experimenting with AI for years, hoping machines can discover the next blockbuster drug. A few compounds picked by AI are now in development, but those bets will take years to play out.

Reuters interviews with more than a dozen pharmaceutical company executives, drug regulators, public health experts and AI firms show, however, that the technology is playing a sizeable and growing role in human drug trials.

Companies such as Amgen (AMGN.O), Bayer (BAYGn.DE) and Novartis (NOVN.S) are training AI to scan billions of public health records, prescription data, medical insurance claims and their internal data to find trial patients - in some cases halving the time it takes to sign them up.

"I don't think it's pervasive yet," said Jeffrey Morgan, managing director at Deloitte, which advises the life sciences industry. "But I think we're past the experimentation stage."

The U.S. Food and Drug Administration (FDA) said it had received about 300 applications that incorporate AI or machine learning in drug development from 2016 through 2022. Over 90% of those applications came in the past two years and most were for the use of AI at some point in the clinical development stage.

Before AI, Amgen would spend months sending surveys to doctors from Johannesburg to Texas to ask whether a clinic or hospital had patients with relevant clinical and demographic characteristics to participate in a trial.

Existing relationships with facilities or doctors would often sway the decision on which trial sites are selected.

However, Deloitte estimates about 80% of studies miss their recruitment targets because clinics and hospitals overestimate the number of available patients, there are high dropout rates or patients don't adhere to trial protocols.

Amgen's AI tool, ATOMIC, scans troves of internal and public data to identify and rank clinics and doctors based on past performance in recruiting patients for trials.

Enrolling patients for a mid-stage trial could take up to 18 months, depending on the disease, but ATOMIC can cut that in half in the best-case scenario, Amgen told Reuters.
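Amgen has not published ATOMIC's internals, but the general idea the article describes (ranking candidate sites by how well they performed in past trials) can be sketched in a few lines. Everything below, including the field names, the scoring formula, and the example clinics, is an illustrative assumption, not the real system.

```python
# Hypothetical sketch of ranking trial sites by past recruitment performance.
# ATOMIC's actual data sources and scoring are proprietary; this toy version
# rewards sites that deliver on their enrollment estimates and retain patients.

from dataclasses import dataclass

@dataclass
class SiteRecord:
    name: str
    patients_enrolled: int   # patients actually recruited in past trials
    patients_promised: int   # patients the site estimated it could recruit
    dropout_rate: float      # fraction of enrollees who left early

def score(site: SiteRecord) -> float:
    """Higher is better: delivery on estimates times patient retention."""
    if site.patients_promised == 0:
        return 0.0
    delivery = site.patients_enrolled / site.patients_promised
    retention = 1.0 - site.dropout_rate
    return delivery * retention

sites = [
    SiteRecord("Clinic A", patients_enrolled=40, patients_promised=50, dropout_rate=0.10),
    SiteRecord("Clinic B", patients_enrolled=90, patients_promised=80, dropout_rate=0.30),
    SiteRecord("Clinic C", patients_enrolled=20, patients_promised=60, dropout_rate=0.05),
]

ranked = sorted(sites, key=score, reverse=True)
for s in ranked:
    print(f"{s.name}: {score(s):.2f}")
```

Deloitte's estimate that about 80% of studies miss recruitment targets is exactly the failure mode such a score penalizes: a site that promises 60 patients and delivers 20 ranks below one that over-delivers, even with a higher dropout rate.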

Amgen has used ATOMIC in a handful of trials testing drugs for conditions including cardiovascular disease and cancer, and aims to use it for most studies by 2024.

The company said by 2030, it expects AI will have helped it shave two years off the decade or more it typically takes to develop a drug.

The AI tool Novartis uses has also made enrolling patients in trials faster, cheaper and more efficient, said Badhri Srinivasan, its head of global development operations. But he said AI in this context is only as good as the data it gets.

In general, less than 25% of health data is publicly available for research, according to Sameer Pujari, an AI expert at the World Health Organization.

German drugmaker Bayer said it used AI to cut the number of participants needed by several thousand for a late-stage trial for asundexian, an experimental drug designed to reduce the long-term risk of strokes in adults.

It used AI to link the mid-stage trial results to real-world data from millions of patients in Finland and the United States to predict the long-term risks in a population similar to the trial.

Armed with the data, Bayer started the late-stage trial with fewer participants. Without AI, Bayer said it would have spent millions more, and taken up to nine months longer to recruit volunteers.

Now the company wants to take it a step further.

For a study to test asundexian in children with the same condition, Bayer said it plans to use real-world patient data to generate a so-called external control arm, potentially eliminating the need for patients taking a placebo.

That's because the condition is so rare in the age group it would be difficult to recruit patients, and could raise concerns about whether it was ethical to give trial participants a placebo when there are no proven treatments available.

Instead, Bayer aims to mine anonymised real-world data of children with similar vulnerabilities.

Bayer said it hoped that would be enough to help discern how effective the drug is. Finding real-world patients by mining electronic patient data can be done manually, but using AI speeds up the process dramatically.

While unusual, external control arms have been used in the past instead of traditional randomised control arms where half the participants take a placebo - mainly for rare diseases where there are few patients or no existing treatments.

Amgen's drug Blincyto, designed to treat a rare form of leukaemia, received U.S. approval after adopting this approach, although the company had to conduct a follow-up study to confirm the drug's benefit once it was on sale.

Blythe Adamson, senior principal scientist at Roche (ROG.S) subsidiary Flatiron Health, said the advantage of AI was that it let scientists examine real-world patient data quickly, and at scale.

She said it could take months to trawl through data from 5,000 patients using traditional methods: "Now we can learn those same things for millions of patients in days."

Drugmakers typically seek prior approval from regulators to test a drug using an external control arm.

Bayer said it was in discussions with regulators, such as the FDA, about relying on AI to create an external arm for its paediatric trial. The company did not offer additional detail.

The European Medicines Agency (EMA) said it had not received any applications from companies seeking to use AI in this way.

Some scientists, including the FDA's oncology chief, are worried drug companies will try to use AI to come up with external arms for a broader range of diseases.

"When you're comparing one arm without randomization to another arm, you are assuming that you have the same populations in both. That doesn't account for the unknown," said Richard Pazdur, director of the FDA's Oncology Center of Excellence.

Patients in trials tend to feel better than people in the real world because they believe they are getting an effective treatment and also get more medical attention, which could in turn overestimate the success of a drug.

This risk is one of the reasons regulators tend to insist on randomised trials as all patients believe they are getting the drug, even though half are on a placebo.

Gen Li, founder of clinical data analytics firm Phesi, said many companies were exploring AI's potential to reduce the need for control groups.

Regulators, however, say that although AI has the potential to augment the clinical trial process, evidentiary standards for a drug's safety and effectiveness will not change.

"The main risks with AI are that we want to make sure we don't get the wrong answer to the question of whether a drug works," said John Concato, associate director for real-world evidence analytics in the Office of Medical Policy in the FDA's Center for Drug Evaluation and Research.

Reporting by Natalie Grover and Martin Coulter in London; Additional reporting by Julie Steenhuysen in Chicago; Editing by Josephine Mason and David Clarke

Our Standards: The Thomson Reuters Trust Principles.


Can AI detect eye conditions, Parkinson’s, other health issues? – Medical News Today

Experts at Moorfields Eye Hospital and University College London (UCL) Institute of Ophthalmology in England have recently developed an AI system which can detect vision disorders more accurately and efficiently than current methods.

This new technology could also help speed up diagnoses of systemic health issues including stroke, heart attacks, and Parkinson's disease.

The scientists performed a study on RETFound, their world-first foundation model, which used millions of eye scans from the UK's National Health Service (NHS). Their open-source initiative may serve as a template for efforts to help detect and treat blindness with AI.

This novel development brings promising news in time for World Retina Day on September 27, World Sight Day in October, and Diabetic Eye Disease Awareness Month in November.

Senior author Prof. Pearse Keane of UCL Institute of Ophthalmology said in a press release:

"This is another big step towards using AI to reinvent the eye examination for the 21st century, both in the UK and globally. We show several exemplar conditions where RETFound can be used, but it has the potential to be developed further for hundreds of other sight-threatening eye diseases that we haven't yet explored."

The study appears in Nature.

A report from the British Chambers of Commerce recently referred to AI foundation models as a transformative technology for their use of massive quantities of data.

The launch of ChatGPT in November 2022 highlighted the potential of AI models to develop adaptable language tools.

RETFound took a similar approach with retinal images, training on millions of scans. This has enabled the construction of a versatile model for virtually unlimited uses.

AI models have largely depended on human expertise and effort. Medical News Today discussed the challenge with technology developer Dr. Steve Frank, founder of Med*A-Eye Technologies. He was not involved in this research.

Dr. Frank explained to MNT: "AI is data-hungry, and teaching an AI system to perform tasks generally requires vast amounts of training data. Worse, training usually requires the data to be labeled in some way, meaning that you're teaching the system to distinguish one thing from another based on examples that you tell it are one thing or the other. That's traditional supervised learning."

Furthermore, Dr. Frank said, experts may disagree on a piece of data, requiring time-consuming expert panel reviews.

According to the UK researchers, RETFound can match the performance of other AI programs using only 10% of human labels in its dataset.

RETFound achieved this higher efficiency with its self-supervising approach of masking parts of an image and learning to predict the missing parts by itself.

"Self-supervised learning (SSL), which underlies RETFound, dispenses with labeling altogether. With enough training data, a properly structured AI model can learn enough about the training data from the data itself to make meaningful predictions. [...] This approach is of particular value for healthcare AI because the cost of labeling is so high; doctors are already busy saving lives, and their time is quite precious."

Dr. Steve Frank
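The masked-prediction idea behind this kind of self-supervised pretraining (hide random patches of an image and train the model to reconstruct them, so no human labels are needed) can be illustrated with a minimal sketch. This is a toy on random data; it is not RETFound's actual architecture or training code, and the patch sizes and mask ratio are arbitrary choices.

```python
# Toy illustration of masked-patch self-supervised pretraining.
# A real model would learn to predict the hidden patches; here we only
# build the masking step and score a trivial baseline predictor.

import numpy as np

rng = np.random.default_rng(0)

def mask_patches(image: np.ndarray, patch: int, mask_ratio: float):
    """Split a square image into patch x patch tiles and hide a random subset."""
    h, w = image.shape
    patches = [
        image[i:i + patch, j:j + patch]
        for i in range(0, h, patch)
        for j in range(0, w, patch)
    ]
    n_masked = int(len(patches) * mask_ratio)
    masked_idx = rng.choice(len(patches), size=n_masked, replace=False)
    hidden = set(int(k) for k in masked_idx)
    visible = [p for k, p in enumerate(patches) if k not in hidden]
    return patches, visible, masked_idx

def reconstruction_loss(predicted: dict, patches: list, masked_idx) -> float:
    """Mean squared error over the hidden patches only (the training signal)."""
    return float(np.mean([
        np.mean((predicted[int(k)] - patches[int(k)]) ** 2) for k in masked_idx
    ]))

# An 8x8 stand-in for a retinal scan, cut into 2x2 patches, 75% masked:
image = rng.random((8, 8))
patches, visible, masked_idx = mask_patches(image, patch=2, mask_ratio=0.75)

# Baseline "model": predict the mean pixel value of the visible patches.
mean_val = float(np.mean(visible))
predicted = {int(k): np.full_like(patches[int(k)], mean_val) for k in masked_idx}
print(f"masked {len(masked_idx)} of {len(patches)} patches, "
      f"baseline loss {reconstruction_loss(predicted, patches, masked_idx):.4f}")
```

The pretraining loss is computed only on the hidden patches, which is why no labels are required: the image itself supplies the targets. A trained encoder is then fine-tuned with a small number of labeled examples, consistent with the reported 10% label efficiency.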

A 2023 review in the Journal of Clinical Medicine refers to the retina as a window to the body. The study of oculomics uses deep learning to explore correlations between retinal image characteristics and diseases.

The current study's authors believe that RETFound may help improve diagnosis of sight-threatening eye diseases, such as diabetic retinopathy and glaucoma.

The program could also predict systemic disorders including heart failure, stroke, and Parkinsons disease.

Moreover, this AI technology facilitates a non-invasive view of the nervous system.

MNT discussed this study with Atropos Health co-founder Dr. Brigham Hyde, who was not involved in this research. We asked him how AI and deep learning techniques can help with detecting diseases.

"First, imaging techniques aided by AI can often detect diseases a human may miss. Second, AI and deep learning techniques applied to combinations of digital, medical, and experiential data can uncover digital biomarkers for disease, leading to earlier diagnosis," he responded.

Lastly, he added, "risk scoring algorithms deployed at the physician's office can highlight and direct care teams to patients with key risk factors earlier."

The present study employed and evaluated RETFound, a new SSL-based foundation model for retinal images. The authors described a foundation model as trained on a vast quantity of unlabeled data.

In this case, Prof. Keane and his collaborators trained the AI system with a dataset of 1.6 million images from Moorfields Eye Hospital.

"We adapt RETFound to a series of challenging detection and prediction tasks by fine-tuning RETFound with specific task labels, and then validate its performance," their paper reads.

The team considered ocular diseases including diabetic retinopathy and glaucoma, and ocular disease prognosis, in a 1-year period.

Next, they studied a 3-year prediction of cardiovascular events such as stroke, heart failure, and myocardial infarction, as well as Parkinson's disease.

Compared to models pretrained on SL-ImageNet, SSL-ImageNet, and SSL-Retinal, RETFound demonstrated consistently superior performance and label efficiency.

Dr. Frank remarked: "The RETFound results are especially impressive for the sheer number of tasks their system can perform. The accuracies the researchers achieve aren't sufficient for clinical use, but the more conventional systems they test against are mostly worse."

The UCL-Moorfields experts said that RETFound showed equal effectiveness in finding disease across diverse ethnic groups.

PhD researcher Yukun Zhou, the study's lead author, mentioned in a press release: "By training RETFound with datasets representing the ethnic diversity of London, we have developed a valuable base for researchers worldwide to build their systems in healthcare applications such as ocular disease diagnosis and systemic disease prediction."

Dr. Tyler Wagner, vice president of biomedical research at Anumana, who was not involved in the research, had this to say about the study: "While RETFound performs better than the other models compared in the manuscript during external evaluation on a set of patients with different demographics, the authors note the decrease in performance, highlighting the importance of patient diversity in model development."

The study authors hope that their findings will encourage further studies, writing: "Finally, we make RETFound publicly available so others can use it as the basis for their own downstream tasks, facilitating diverse ocular and oculomic research."
