Page 1,283«..1020..1,2821,2831,2841,285..1,2901,300..»

How Generative AI Changes Organizational Culture – HBR.org Daily

HBR EDITOR AMY BERNSTEIN: Nitin, youre a management consultant, you lead Deloittes global AI business. Whats the most interesting conversation youve had recently with a client?

DELOITTE PRINCIPAL NITIN MITTAL: A client, the CFO of the client basically said, If I apply generative AI in my company and the use case that, Nitin, you articulated took me, which is apply in a call center for customer care. Why? Because the marginal cost of conversing with that customer using a virtual digital agent is a zero, and because the marginal cost is zero, I know if I apply it itll drop my cost structure by 60 to 70%. But, what does it do to all the employees that I have who are from a disadvantaged part of the society Now the CFO was white, disadvantaged part of the society, who essentially are earning their daily living and have no other jobs?

AMY BERNSTEIN: I mean, that seems like a perfectly reasonable question. Howd you answer?

NITIN MITTAL: I punted it to a certain degree because its a difficult one to answer.

AMY BERNSTEIN: Yeah.

NITIN MITTAL: The reality is, yeah, itll lead to job losses. And the only way that youll be able to kind of overcome it, you have to reskill yourself for a different job as opposed to being in a call center. Reskill yourself, get a vocation training to be, for example, a prompt engineer who actually prompts and trains the models than being in the call center. The pay is probably the same, but there has to be a willingness both by the individual to get retrained and by the employer to do the retraining.

AMY BERNSTEIN: Welcome to How Generative AI Changes Everything, a special series from the HBR IdeaCast. Read just about any business history or any case study, and you realize just how much success depends on company culture. The unwritten rules of behavior can make the difference between capitalizing on a big shift or missing it all together. You cant have successful innovation without the right culture, you cant compete successfully without the right culture, you cant thrive over the long term without the right culture. And it follows that if you want to bring your organization into a future that includes generative AI, you need to build the right culture for it.

This week, How Generative AI Changes Organizational Culture. Im Amy Bernstein, editor of Harvard Business Review and your host for this episode. In this special series, were talking to experts to find out how this new technology changes workforce productivity and innovation, and were asking how leaders should adopt generative AI in their organizations and what it means for their strategy. Later on in this episode, youre going to hear from Harvard Business School professor Tsedal Neeley. Were going to talk through the known risks and how leaders can respond. But, first, Im talking to Nitin Mittal. He runs the global AI business at Deloitte, and he helped develop the firms own implementation of generative AI. Hes also a coauthor of the book, All-in On AI: How Smart Companies Win Big with Artificial Intelligence. Nitin, thanks for coming on the show.

NITIN MITTAL: Thank you.

AMY BERNSTEIN: When you walk into a clients organization, what are the signs that you look for that say, this organization is ready to move into AI?

NITIN MITTAL: First impressions dont always tell the whole story in terms of what an enterprise may be doing. But, having said that, if an organization already has some kind of a setup, like a center of excellence or a group that is focused on AI and has been experimenting and working with different business units, that is a very positive sign. On the other hand, if they just have a data science group that has been conducting proof of concepts without the connectivity to business, theyre not thinking about the culture of the organization, and they would very likely not be progressing. Those are things to kind of look out for. The other aspect to look out for is the leadership and the human side.

AMY BERNSTEIN: Yes. How do you advise your clients to lead, to shape their organizations into cultures that will embrace AI rather than run from it in fear?

NITIN MITTAL: Yeah. So, what is being noticed and what is being observed in many of these organizations is that the pressure to move ahead at speed and with skill is coming from the employees themselves. If we dont provide them these particular tools, and we dont provide them all the ways of augmenting themselves through generative AI, they are going to find their own ways, and that could lead to unfortunate circumstances where they end up using, lets say, open source models and start leaking an organizations data through the usage of those open source models.

AMY BERNSTEIN: So, you just alluded to the need for guardrails, right?

NITIN MITTAL: That is correct.

AMY BERNSTEIN: So, I wonder then what the role of culture is in all of this. I mean, is there a way communicate whats okay and whats not okay when you, an employee, are out there experimenting with ChatGPT, generative AI, which we want you to do within certain bounds, right? How does culture come into play here?

NITIN MITTAL: My view is that no AI system is going to magically somehow be responsible by itself without the culture of that organization being responsible onto itself to start with. It cannot be dictated by the CEO, it cannot be governed by the board, it cannot be mandated by the leadership. It is the prerogative, it is the sense of accountability of every single person to essentially always think about the right usages, in the right time, for the right areas where AI can be applied.

AMY BERNSTEIN: So, Nitin, as you talk to your clients, are you seeing alignment between the management team and boards, or misalignment? Whats going on there, on the generative AI front?

NITIN MITTAL: I would not necessarily say theres misalignment. Rather what it is say it is, its a lot about questions. The board certainly has a lot of questions of management, but management also has questions. And its all essentially around what is the impact to our business? How fast would that impact materialize? How disruptive this could be? And ultimately, how do we need to respond both culturally and from a safety and responsibility standpoint to this phenomena? Thats the set of questions being asked.

AMY BERNSTEIN: How are those questions being answered?

NITIN MITTAL: Frankly, theyre not necessarily being precisely answered. Everyone is trying to get their arms around it. We have a pretty good idea, but we also have to kind of learn. In Deloitte, for example, we have something called the trustworthy AI framework, and by its very nature, its a framework. It gives a set of guidelines, protocols, and methods in terms of what to think, when to apply those methods, and how to apply those methods. But, every organization also has to make sure that their employees are culturally sensitive to applying it in a responsible manner.

AMY BERNSTEIN: What does that mean?

NITIN MITTAL: The same way, the same way that every employee has a bond in terms of how do they work with their coworkers, how do they show up, what task they actually perform, and consequently, what is the team environment that they want? Think of that bond extending beyond just the human coworker, extending to essentially a non-carbon, non-bipedal coworker that happens to be an intelligent machine.

AMY BERNSTEIN: So, how do companies, that do it right, tease out that bond, that you just described, and turn it into a culture that can guide an organization forward on the use of AI?

NITIN MITTAL: There are perhaps not many companies who have kind of perfected it. But there are certain elements in terms of what is kind of critical to tease this out. First and foremost, education on cultural fluency. What would it take our employees to essentially apply things in a responsible manner, in a safe manner, for the benefit of not only the business, but their customers and society at large?

AMY BERNSTEIN: Does any organization train on cultural fluency in a way that you would want to share with other organizations.

NITIN MITTAL: In pockets. Ive seen it in pockets. Ive seen it in pockets in a few organizations that we serve. Ive also seen it in pockets in Deloitte as an example. But, that cultural fluency has typically extended to the realm of being culturally sensitive, particularly if youre a multinational organization, not necessarily culturally sensitive in the context of the rise of intelligent machines.

AMY BERNSTEIN: So, it sounds as if leaders then have to start making room for these foundational questions. I mean, these are questions weve never had to ask ourselves before, right?

NITIN MITTAL: These are questions we have never had to ask ourselves before, because now, with generative AI, that concept of we, the people, also transcends to we, the people and machines. And thats where the cultural boundaries have to be pushed. What would it mean for a factory worker to have a robot as a coworker? What would it mean for a professional consultant to have an AI model that is augmenting your particular kind of job, and augmenting and aiding the insights that you bring, and consequently being a coworker in your team? What does it mean for a medical professional to essentially have a AI assistant that is aiding with diagnosis? What does it mean?

AMY BERNSTEIN: So, this will call on everyones powers of imagination, but also everyones commitment to accountability and trust.

NITIN MITTAL: Absolutely. This is where I was kind of going earlier. It has to be for everyone, by everyone.

AMY BERNSTEIN: Nitin, youve described what progressive organizations are starting to do, including Deloitte. Where have you seen organizations kind of miss the mark? Where do they go wrong?

NITIN MITTAL: Well, there are definitely telltale signs of it.

AMY BERNSTEIN: Yeah, what are they?

NITIN MITTAL: Frankly, the organizations that absolutely miss the mark is who have got this viewpoint that, Well, this is yet another technology, probably going through a hype cycle, and consequently well just have kind of this particular group in IT, or this data science function that we have, or this set of individuals, kind of just look into it and take it forward. That is when they miss the mark. Rather, those organizations who actually view this as a moment in time where they have to question the basis of how do they compete, how do they thrive, what changes do they need to make, both from a product or a service perspective, but more important from a culture and a people standpoint, actually are the ones who are able to progress forward. If that can be tackled first, you will be a learning organization, you will thrive in a digital economy, and you will redefine the market that youre in.

AMY BERNSTEIN: Nitin, thank you so much.

NITIN MITTAL: Well, thank you.

AMY BERNSTEIN: Coming up after the break, I talk to Harvard Business School professor Tsedal Neeley about adopting generative AI in your organization, and the right ways to do that effectively and ethically. Stay with us.

AMY BERNSTEIN: Welcome back to How Generative AI Changes Organizational Culture. Im Amy Bernstein. Joining me now to discuss how to adopt generative AI within your own company is Tsedal Neeley. Shes a professor at Harvard Business School, and she wrote the HBR Big Idea article, 8 Questions About Using AI Responsibly, Answered. Tsedal, thanks for joining me.

HBS PROFESSOR TSEDAL NEELEY: Im so happy to be with you, Amy. Thank you for having me.

AMY BERNSTEIN: Im so happy youre here. So, I have more than eight questions to ask you, all right?

TSEDAL NEELEY: Great!

AMY BERNSTEIN: In your research, youve studied how global companies and smaller organizations alike become leaders at digital collaboration, remote work, and hybrid work. What about generative AI? Are organizations set up for it?

TSEDAL NEELEY: Currently, organizations are neither set up for it, nor do they fully understand it, but the adoption and the curiosity around it has been extraordinary, and so I think people will start figuring it out very quickly.

AMY BERNSTEIN: What kinds of changes are needed? Theyre cultural, theyre organizational, what kind?

TSEDAL NEELEY: I think the first thing that organizations need to ensure happens is that people understand these technologies fully. To really develop some form of fluency, a minimum level of fluency around what the technology is, what it isnt, what are the limitations, what are the risks, and what are the opportunities. So, everyone needs to start experimenting with it, but its really important to do it very carefully.

AMY BERNSTEIN: Now, I have to raise the specter of change management. What does this mean for change management? Its hard enough under the up till now normal circumstances.

TSEDAL NEELEY: Absolutely. You know what? Imagine change getting motivated from the top-down imperatives or mandates. Here we have a scenario where theres a lot of bottom-up activities.

AMY BERNSTEIN: With what youre describing, so much bottom-up rather than top-down change. What is leadership then?

TSEDAL NEELEY: Leadership, in this kind of scenario, you need digital leaders with digital mindsets to very quickly mobilize and begin organization led experiments and implementations of these tools, because otherwise, youre going to have individuals just experimenting and playing with them, which is actually a very, very good thing, but not understanding how they work. You can easily and unwittingly make a very consequential mistake to an organization. An example of this is uploading proprietary information, organizational confidential materials, because anything you put into these systems get fed into the overall model, which is why leaders have to guide the way these things are implemented. We need to think about these tools no different than the way that all of us had access to the internet 30 years ago. You cant stop it, you cant control it, unless you set the right boundaries and have these ethical codes that people follow, and even ways to protect the company.

AMY BERNSTEIN: So, do we need a new playbook to manage this change?

TSEDAL NEELEY: We need to take our playbook and add technology and speed and buy in and learning onto it. Organization-led opportunities and experiments become important, which is have some people start to work with them and document what theyre learning. Also, think about where do we automate and where are the places where we can do different types of strategic, creative, interpersonal kind of work. Third thing is you have to have a culture of responsible AI use from the start. This is not an afterthought, this has to be embedded in all that you do from the start. People need to be trained, and every single decision they make with their generative AI uses has to have ethical considerations, because its easy to get in trouble around this. Then finally, I would say that you have to pilot, you have to iterate, you have to be open to continuous learning, constant adapting, because you have to have a communication plan where people are open and understand that these changes are happening so fast that we have to be attuned to them and be prepared to implement them. Then finally, the culture change. You have to encourage a culture of flexibility, of innovation, of continuous learning, rewarding people who are adopting the new technologies in the right ways, you have to provide support and resources for those who are struggling, and for those many people are very afraid of these changes. So, youve got to make sure no one gets left behind. This type of change requires skill building, shifts to the nature of work in your organization. Many, many shifts.

AMY BERNSTEIN: So, Tsedal, talk a little bit about skill building, because that can be pretty challenging. You have people who are starting out with different skill levels, but also very different attitudes and levels of acceptance and fear. How do you do skill building in an organization with this nascent technology coming on so hard?

TSEDAL NEELEY: So, imagine a two by two. You know? Youre with an HBS professor, we have to come out with a two by two. You should have expected this.

AMY BERNSTEIN: Knew it, knew it.

TSEDAL NEELEY: And you knew it. Imagine a two by two, and imagine a framework called the Hearts and Minds framework. On one side, you have buy-in, where people have to believe that this is important. And another dimension is the belief that they could do it. Or another word for this is, do I have the self-efficacy for this? So, if you have high buy-in and high sense of efficacy, you are going to be inspired, excited. You are in a great, great spot. But, for those who do not, who may be struggling, it is incumbent on leaders of organizations to do the right type of messaging, to also build awareness and provide resources and support for people to learn these things.

AMY BERNSTEIN: How do you do this at scale over time? How do you sustain this?

TSEDAL NEELEY: Its actually something that weve done many times when it comes to scale building. Number one, individual managers need to understand where their team members are. So, youre bringing it down to the unit of analysis of a team. Team managers need to understand where people are in terms of their buy-in and their sense of efficacy. With that, has to be an organized training guide, learning guide, tutorials. Continuous learning is a mandate in this era of dramatic technological shifts and changes.

AMY BERNSTEIN: It sounds like its sort of the actual learning along with the compassionate piece of leadership, helping people embrace it, the hearts and minds piece.

TSEDAL NEELEY: Absolutely. Do you need to help more with the mind part or do you need to help more with the heart part? The other thing Ill say here is theres a phenomenon in this type of change called contagion. We do this as a group, together we have collective efficacy, and together we get through it. You cant let individuals flounder and get into sense of job insecurity, et cetera. This is why the team level is so important for this.

AMY BERNSTEIN: Thats a great insight. It puts so much of the agency into the hands of the manager, the team leader, it doesnt just happen from the top-down.

TSEDAL NEELEY: Absolutely.

AMY BERNSTEIN: Yes.

TSEDAL NEELEY: It needs to touch every member of the organization.

AMY BERNSTEIN: So, when were talking about hearts and minds, Tsedal, we really have to talk about the fear factor as well. A lot of tasks are going to get automated, and the natural conclusion for many of us to draw is that we will get automated out of our jobs. What do you say to that? What do you say, Tsedal, and what should a manager say to his or her team?

TSEDAL NEELEY: Listen, theres no doubt that theres going to be changes to the nature of jobs. Peoples jobs will shift. But, one thing we know is that every technological revolution has created more jobs than it has destroyed. For many people who are writers, writers are panicked, and I understand that completely, but its important to understand that these technologies work well with humans in the loop, meaning its human intelligence meeting artificial intelligence. Now, the reality is the long-term effects of generative AI are not fully known to us. We know theyre going to be complex, we know theres going to be periods of job displacement, theres going to be a period of job destruction for some industries, and this is why I always come back to the notion of education, training, upskilling and reskilling, and thinking about the various ways in which generative AI cannot help us with interpersonal work, with empathy, with various forms of creativity. So, theres a lot for us to continue to do, but its important to understand that, ultimately, we cant even conceive of the new things that are going to come out of this. So, there will be many more opportunities, many more things, many more industries that we cant even imagine that are going to be formed. Will things remain the same for individuals in terms of jobs, companies and industries? Unlikely.

AMY BERNSTEIN: So, its a very new technology, we dont have a lot of guardrails around it, we dont even really know what its capable of, we get a taste of it if we play with it. What are some of the risks ethically speaking here?

TSEDAL NEELEY: So, generative AI comes with many risks. The first one is it can perpetuate harmful biases, meaning deploying negative stereotypes or minimizing minority viewpoints, and the reason for this is the source of the data for these models, the underlying models, the large language models, are the internet, documents, its really pulling from everywhere and anywhere. As long as we have biases and stereotypes in our societies, these language models will have them as well. The second thing is misinformation from making stuff up, falsehoods, or violating the privacy of individuals by using data that is ingested, embedded in these models without the consent of people, so personal data can get into these. So, these are the ethical considerations that are important to both understand and to develop codes of ethics in your organizations to avoid them, and there are ways to avoid them. By the way, regulation is coming fast, the government is working on it at the state level, at the national level, but regulation still lag adoption.

AMY BERNSTEIN: Lets talk about harmful bias a bit. How do we prevent it?

TSEDAL NEELEY: There are a couple of things to consider. One is to always understand the source of data. So, generative AI may not give you citations or even the right citations, but if theres some information that it spits out, its important to check it and to double check it and to triple check it, to triangulate, to try to find primary sources. So, its important to have diversity in your company to vet these things, and if youre building models, large language models, internal large language models, which is where I think this is going to go for many companies, you need diversity, you need women, you need people of color, who are helping design these systems, you need to set strict rules around documentation, transparency, understanding where the source of all of this data is coming from.

AMY BERNSTEIN: Doing the legwork.

TSEDAL NEELEY: Absolutely, must do the legwork.

AMY BERNSTEIN: Yeah. Yeah. Theres a job that isnt going away, huh?

TSEDAL NEELEY: Exactly. These tools, in my mind, get us started, and we need to do additional work before the output is ready for primetime.

AMY BERNSTEIN: So, you mentioned transparency. What about how you, your team, your organization, is using generative AI? What are the responsibilities there in terms of transparency?

TSEDAL NEELEY: Its interesting because I dont think everyone will be reporting that theyve used ChatGPT for any and every little thing. I mean, thats no different than do we go around telling people, I Googled this, I Googled that. I went on this website, I went on that website, the use of our browser? No way are we going to do that, and no way do we need to do that. It only matters if there are important consequences from the use of these tools.

AMY BERNSTEIN: Right, and I guess it goes back to what you were saying before about citations and double checking that you, as the individual using these tools, have to remember that youre responsible for the truth-

TSEDAL NEELEY: Absolutely.

AMY BERNSTEIN: that youre putting out there. You cannot blame GenAI for your mistakes, because theyre your mistakes.

TSEDAL NEELEY: Theyre your mistakes, and this is where cultural change is important. The responsible AI use culture is going to be crucially important. This is why this is such a big deal for companies. Each individual user has to be responsible for what they put out in the world in their organization by using these tools, which means they have to be extra thoughtful, they have to be extra careful, they have to verify. The oversight is incredibly important. But, is it a shortcut tool? Is it a cheating tool? Absolutely not. We need to celebrate these tools because theyre not going away, and we need to guide people on their best uses.

AMY BERNSTEIN: Right. So, the skills you need are both technical, and then its those timeless leadership skills around integrity and accountability and a sense of fairness, right?

TSEDAL NEELEY: A hundred percent. In fact, the timeless leadership skills will be more important than ever before, because right now were a bit in the wild, wild west, and we inside of our organizations need to determine what are the safeguards? What are the guardrails? What are the ways in which were going to advocate people use these? So that we get the best possible results from them, without getting ourselves, as an organization, in trouble, or without any individuals unwittingly getting themselves in trouble.

AMY BERNSTEIN: So, then, given the kind of small d democratic nature of these tools, how do organizational leaders instill those values to ensure that these tools are used in a way that is fair and equitable?

TSEDAL NEELEY: I love that question, because it takes me right back to one of the most powerful organizational characteristics called trust. Trust, a culture where there is trust, a culture where you have some rules to help people, but you trust people to make the right decisions because leaders are role modeling it. Theres learning and training to help people understand how to use it, and the belief that one of our shared values in our organization is trust. This is no different than hybrid work where you trust people after youve equipped them. Its that same characteristics. I am learning that the more digital we become, the more trust becomes one of the most important shared values that companies need to uphold.

AMY BERNSTEIN: You know, what I find so inspiring about your message, Tsedal, is that youre saying you have got to do the work, youve got to understand this technology as a technology, but equally, you have got to pay attention and communicate as a leader, all those timeless leadership skills, the ones we just discussed, because in order to foster the kind of trust you are describing, you have to communicate not just your competence with the tool, but the values that you bring to its use, and thats the contagion, right?

TSEDAL NEELEY: Thats exactly right. That you cant be a mediocre leader in the world of remote or hybrid work. You cannot be a mediocre leader in the world of generative AI that is poised to transform every organization, every industry in ways that we cant really understand today. So, your leadership fundamentals are incredibly important, and leaders have to lead, they cant micromanage, you cant micromanage your way out of generative AI. Thats impossible. People are using it whether you want it or not. The question is, how do you make sure that you lead the way on generative AI in your organization as opposed to reactively run around trying to damage control? Because it can bring damage too.

AMY BERNSTEIN: So, whats changed is the technology, but the leadership values remain as theyve always been.

TSEDAL NEELEY: The leadership values remain with less flexibility on poor leadership. Theres no hiding on this one, youve got to be right ahead, and every leader has to work on becoming a digital leader with a digital mindset. This is it.

AMY BERNSTEIN: Tsedal, it was so interesting to talk to you. Thank you.

TSEDAL NEELEY: Thank you so much, Amy.

AMY BERNSTEIN: Anytime. Thats Tsedal Neeley, a professor at Harvard Business School. She wrote the article 8 Questions About Using AI Responsibly, Answered. You can find it, and other articles, by experts at hbr.org/techethics. Before that, I talked to Nitin Mittal. He leads Deloittes global AI business and co-wrote the book All-in On AI: How Smart Companies Win Big with Artificial Intelligence.

AMY BERNSTEIN: Next episode, How Generative AI Changes Strategy. HBR editor in chief Adi Ignatius will talk to experts who take stock of the competitive landscape and share how to navigate it effectively for your organization. Thats next Thursday, right here in the HBR IdeaCast feed after the regular Tuesday episode. This episode was produced by Curt Nickisch. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox, and Hannah Bates is our audio production assistant. Special thanks to Maureen Hoch. Thanks for listening to How Generative AI Changes Everything, a special series of the HBR IdeaCast. Im Amy Bernstein.

Read more:

How Generative AI Changes Organizational Culture - HBR.org Daily

Read More..

From railroads to AI: Why new tech is often demonised – The Indian Express

Technological advancements are polarizing. Its not a new phenomenon for innovations to be sneered at, criticized or even demonized. We find skepticism about technology even in the earliest written records that we have about technology theory, technology philosopher and historian Christian Vater told DW.

He said there were various reasons for this skepticism, including the complexity of technological inventions and the associated lack of knowledge or understanding, for example the fear of losing control or even emotionality.

But skepticism toward new technologies is not proof of a general fear of technology, according to Helmuth Trischler, head of research at Deutsches Museum in Munich. Behind this assumption is a limited perception its good that people examine things rationally, he said.

The difference between a rational assessment of possible consequences to technology and an irrational, uncontrolled defensiveness toward technology is also emphasized by Vater, who distinguishes between concern and panic. I consider concern to be very legitimate and extraordinarily necessary, especially if we want to actively, jointly shape a future shaped by technology in an informed democracy, he said. Panic, however, typically leads to uncontrolled running away.

The fact that technological inventions can inspire both concern and panic in equal measure can be seen in the example of the railroad.

Diabolical conveyance: The railroad

Some 200 years after its invention, the railroad is a completely ordinary form of transportation for people and goods around the world and a part of the fabric of modern society. But in its early days, some people perceived the railroad as the work of the devil.

The worlds first public railroad was inaugurated in England in 1825. After that, the steam locomotive made its fast, loud and smoky way across Europe and with it, the fear of trains and of what was known in Germany as Eisenbahnkrankheit or railway sickness. This was thought to be caused by the speed of up to 30 kilometers per hour (18.6 miles per hour) considered fast back then and the bone-rattling vibrations felt while sitting in the carriages.

Even as the railway network grew throughout Victorian England, the criticism of this mode of transportation remained strong, as evidenced by satirical caricatures and illustrated police reports.

Trischler said these reactions are completely understandable within the context of their time. Technological advancements require reorientation, which can spark fears to which people react with dire prognoses and apprehension. The new does, after all, arouse emotions. Technology is basically always associated with emotions, he explained.

Fear of the split atom

But not every technological invention inevitably evokes negative emotions. For instance, when nuclear energy was new, the attitude was different. The first German research reactor was built in Munich in 1957, and four years later, nuclear energy was fed into the countrys power grid for the first time. In the 1960s, atomic energy was seen as an inexpensive and clean alternative to oil and coal and encouraged hopes for a renewed industrial upswing.

The first critical voices grew loud in Germany in 1975, when the construction site of a planned nuclear plant was occupied by protesters. Critics in the southwestern German town of Wyhl warned of climate change, groundwater drawdown and possible security problems in connection with nuclear plants. The anti-nuclear movement gained momentum and incidents such as the accident at Three Mile Island in 1979 in the United States or the meltdown at Chernobyl in 1986 further spread fear and worry among parts of the population. Nuclear energy was a subject of debate in Germany for decades, until the accident at Fukushima in Japan in 2011 finally led to the German government deciding to phase it out for good.

While in some parts of the world, nuclear energy is still seen as a good alternative to fossil fuels, in other countries it evokes almost existential angst. When we think about why people are concerned when it comes to nuclear energy, we can point to the question of nuclear waste, to Chernobyl or Fukushima. In other words, to man-made or nature-dependent situations with technological failures and unsolved technical problems, said Vater.

He and Trischler see a democratic success story in the debate over nuclear energy. Vater said that a society, if it does not want to become technocratic, but wants to remain a participatory democracy, is dependent on goodwill, understanding and support from its members. Trischler added that something can emerge from the debate about technology skepticism, and said that its about a societys struggle for co-determination and joint negotiation.

Man vs. machine?

How fine the line can become between goodwill and skepticism, support and rejection, is illustrated by the current debate over AI. The American computer and cognitive scientist John McCarthy coined the phrase artificial intelligence in 1956 to describe a discipline of computer science whose goal was to create machines with human-like intellectual capabilities.

After decades of developments in the field, debate over the topic has focused of late on, among other things, the chatbot ChatGPT, which was released in November 2022 and immediately sparked controversy. In March, Italy responded by becoming the first country to block the software, at least temporarily. Its now allowed again, but only after proof of the users age is presented.

Despite the many advantages AI promises for example improved health care or increased road safety there is also a great deal of criticism of the technology. The fears seem to run in two directions: Some worry about possible misuse, fakes or disinformation and about their professional future and intellectual property, while others are afraid of future technical developments that could gradually give AI more power and thus result in a loss of human control.

Trischler sees the fear of AI in general as rooted in the complexity of the technology. Worries arise especially with regard to large technical systems that seem anonymous, he said. According to Vater, questions about, for instance, what impact AI might actually have on ones profession are rational concerns as opposed to a blanket fear of the machine.

To predict that the spread of AI will make all human creative effort superfluous, and that machines will take over the world in the near future, that would be panic, he said.

Skepticism raises questions

So is a certain degree of skepticism toward new technologies a normal, understandable human reaction? Christian Vater and Helmuth Trischler think so.

In hindsight, we often see that these fears have not materialized, said Trischler, adding that they are understandable when seen in the context of their time.

The ability to make predictions is useful because it helps us to tune in to the next steps in development as a group, as a society, perhaps even as humanity, said Vater. Its actually the normal situation that things then dont turn out as we expected.

This article was originally written in German.

Original post:

From railroads to AI: Why new tech is often demonised - The Indian Express

Read More..

AI runs amok in 1st trailer for director Gareth Edwards’ ‘The Creator … – Space.com

Recent headlines warn about the perils of artificial intelligence, even as we venture further into a future reliance on AI. So there's probably no better time to drop a first trailer for director Gareth Edwards' dystopian epic, "The Creator."

20th Century Studios, New Regency, and Entertainment One have just unleashed a terrifying new preview for Edwards' topical sci-fi thriller, "The Creator," which infiltrates theaters on September 29, 2023 starring "Tenet's" John David Washington, "Eternals'" Gemma Chan, "Inception's" Ken Watanabe, Sturgill Simpson, Madeleine Yuna Voyles, and Allison Janney.

Here's the official synopsis:

Amid a future war between the human race and the forces of artificial intelligence, Joshua (Washington), a hardened ex-special forces agent grieving the disappearance of his wife (Chan), is recruited to hunt down and kill the Creator, the elusive architect of advanced AI who has developed a mysterious weapon with the power to end the war and mankind itself. Joshua and his team of elite operatives journey across enemy lines, into the dark heart of AI-occupied territory only to discover the world-ending weapon hes been instructed to destroy is an AI in the form of a young child.

Executive produced by Yariv Milchan, Michael Schaefer, Natalie Lehmann, Nick Meyer and Zev Foreman, "The Creator" is directed by Gareth Edwards from an original screenplay by Edwards and Chris Weitz.

Edwards first wowed audiences with his 2010 indie creature feature "Monsters" and 2014's Hollywood kaiju flick, "Godzilla" before signing on to helm 2016's "Rogue One: A Star Wars Story." That "Star Wars" prequel continues to gain admirers as a very serviceable entry for the franchise in light of the negative reception to the most recent "Star Wars" sequels, "Revenge of the Sith" and "The Rise of Skywalker."

Fans of New York Times bestselling author Daniel Wilson and his "Robopocalypse" novel and "Robogenesis" sequel might see narrative similarities in this Aerosmith-scored trailer, which reveals blazing laser firefights, malevolent AI machines, and an overarching plan beyond the mental capacities of us puny human meat sacks.

Nevertheless, it's a striking first look at this fall sci-fi tentpole, with intense combat scenes and a creepy little android child who might hold the key to humanity's fate.

"The Creator" powers up in theaters on Sept. 29, 2023.

View post:

AI runs amok in 1st trailer for director Gareth Edwards' 'The Creator ... - Space.com

Read More..

Azeem on AI: Where Will the Jobs Come from After AI? – HBR.org Daily

AZEEM AZHAR: Hi there, Im Azeem Azhar. For the past decade, Ive studied exponential technologies, their emergence, rapid uptake, and the opportunities they create. I wrote a book about this in 2021. Its called The Exponential Age. Even with my expertise, I sometimes find it challenging to keep up with the fast pace of change in the field of artificial intelligence, and thats why Im excited to share a series of weekly insights with you, where we can delve into some of the most intriguing questions about AI. In todays reflection, I look at an insightful research note from Goldman Sachs, titled The Potentially Large Effects of Artificial Intelligence on Economic Growth, in which the authors explore the labor markets future. The study posits that global productivity could see an impressive uptick, ultimately boosting global GDP by 7%. And I wonder, where will the jobs come from after AI? Lets dig in.

The headline finding was that productivity globally could eventually rise, and you could see a rise in global GDP by 7%, which is no slouch. There were also models that showed that US economic growth could jump from that one to one and a half percent anemic level, up to two and a half, three percent, the sorts of levels that were enjoyed during those halcyon period of the 1950s, which is all pretty exciting. But what I thought was quite interesting was how the researchers dug into the impact on the workforce from all of these productivity changes. So they found some quite interesting findings. I suspect if youve been reading the newsletter thinking about these things, you wouldnt be too surprised by them. But lets just go through them because theyre numerical and theyre quite useful.

So, they found that about two thirds of US occupations were exposed to some degree of automation by AI, and a significant share of those had quite a substantial part share of their workload that could be replaced. So running from healthcare supports down the bottom end to health practices, computer and IT sales and management, finance, legal and office admin, you saw that between on average 25% of tasks to 46% of tasks in the case of office admin could be automated with a much larger impact in general in developed markets than in emerging markets. Its pretty interesting because the researchers suggest, and I think this is a kind of reasonable assertion, that if a job would find about 50% or more of its tasks being automated, it would lend itself to being replaced. Whereas jobs that might have 10 to about 49% of their tasks automated lend themselves to using AI as a sort of complement to the human worker.

Ive looked at this question over the last several years, youve probably read a number of those things, and the question is, what might that actually mean as it plays out? So what we found historically is that when new technologies come around, the firms that make use of them tend to be able to grow their headcount, they grow their employment levels, and its the firms that dont use those technologies that tend to lose out. I talk about this in my book. I have the parable of the two men, Indrek and Fred, who are walking in the Canadian wilds and they stop to take a break. They take their shoes off and a grizzly bear approaches them and one of them pops his shoes on and the other says, Why are you putting your shoes on? Youll never outrun the grizzly bear. And his friend says, I dont need to outrun the grizzly bear, I just need to outrun you.

And that of course, is a competitive dynamic. The firms that are well managed, that can manage these new technologies, that make the investment will perform better, as better performing firms always have, and theyll grow, and in the competitive space that will come at the cost to the underperforming firms. So that should create the kind of incentive for companies to invest in these technology. But they wont do evenly. So we will see some winners and well see some losers.

Wed also expect to see widespread downward wage pressure because these jobs are essentially being able to be done more efficiently. So, a smaller number of people potentially could be required. The other thing to wonder is the extent to which this would necessarily lead to job cuts. And you could say, Look, firms wont do this. These are well paid workers, 80k, 100k, 150k or more a year. And they will be protected in some sense for a certain period of time. But even the most protective firms come to really think about their workforces have gone into cost-cutting mode in the last few months, like McKinsey and Google. And its hard to imagine economy-wide that in this type of economy, the opportunity to streamline and be efficient wont be quite tempting for management.

So, the question is, where might those cuts fall in the firm? I have a hunch, and its no more than that, that if you are a manager in a largish, medium size, or even a sort of bigger end of the small firm, it will be quite appealing to look at the middle of your employment base. Because what you have there is you have people who are quite well paid but are not your top leadership. And the temptation will be to go in and thin those ranks, not so far that you deplete all the tacit knowledge and all the sort of socialized information in the firm, the stuff that isnt codified, but enough to cut costs on the basis that AI-enabled juniors working with a small number of well-trained, experienced, more senior professionals will be able to fill in the gaps. And I suspect that will be a kind of tempting strategy for companies as we move on. And that in a sense is a kind of extension of the delayering of firms that we saw when it started to get rolled out in the 1980s and 1990s.

But what about this 7% productivity growth? So, thats got to be doing something. The economy is going to be growing much faster than it was before, and its going to create new opportunities and new needs. Theres a great survey that the Goldman Sachs authors quote from David Orta. He is this amazing economist, and he points out that 85% of employment growth in the US in the last 80 years has been in jobs that didnt exist in 1940 when the period started. So, we know effectively that the economy creates new work, new classes of work very well, although over an 80-year period. And the thing is that if these technologies are going to be rolled out overnight to millions of workers, the impact will be felt quite fast.

I mean, just take a look at lawyers. Therere somewhere between 700,000 lawyers in the US, if you look at the Bureau of Labor Statistics data, or 1.3 million, if you look at the American Bar Association data. Sorry, I dont know the real number. But based on Goldmans estimates, about 40% of those jobs could be up for being replaced. So thats between 250 and 500,000 people. So, the question is not will new jobs be ultimately created. Its when do they get created in the sort of short time that is available? And we can imagine that new sorts of roles emerge that are complementary to the AI tools that get layered in, ones that are syncretic across the specialist expertise of being a particular type of admin or being a particular type of legal profession, and what is now required to make these technologies work. So, that would be one area.

The second is that the growing economy is going to raise the demand in complementary services, which is what you would expect from economic growth. And of course, there are these new sectors like the bioeconomy and the green economy that are developing rapidly and are being stimulated by things like, in the US, the Inflation Reduction Act, and similar sorts of things in the EU and UK, which should create a demand for new types of private sector jobs.

But its a really hard conundrum because how do you re-skill people? How do you ensure that they actually want to make the move? How do you make sure that they have the resources and the emotional psychological capabilities to make the move? And how do you make sure the jobs that are created are in the places where the people actually live? And I say all of this because I know that is material that weve heard before, but I dont get a sense that I see really strong and solid [inaudible 00:08:14] and interventions, and these are the types of things that need to come from government to tackle what could well be a very sharp transition as these productivity enhancing tools start to get rolled out.

Well, thanks for tuning in. If you want to truly grasp the ins and outs of AI, visit http://www.exponentialview.co, where I share expert insights with hundreds of thousands of leaders each week.

Read the original:

Azeem on AI: Where Will the Jobs Come from After AI? - HBR.org Daily

Read More..

WHO calls for safe and ethical AI for health – World Health Organization

The World Health Organization (WHO) is calling for caution to be exercised in using artificial intelligence (AI) generated large language model tools (LLMs) to protect and promote human well-being, human safety, and autonomy, and preserve public health.

LLMs include some of the most rapidly expanding platforms such as ChatGPT, Bard, Bert and many others that imitate understanding, processing, and producing human communication. Their meteoric public diffusion and growing experimental use for health-related purposes is generating significant excitement around the potential to support peoples health needs.

It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect peoples health and reduce inequity.

While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support health-care professionals, patients, researchers and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs. This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.

Precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.

Concerns that call for rigorous oversight needed for the technologies to be used in safe, effective, and ethical ways include:

WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine whether by individuals, care providers or health system administrators and policy-makers.

WHO reiterates the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health. The 6 core principles identified by WHO are: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; (6) promote AI that is responsive and sustainable.

Read the original here:

WHO calls for safe and ethical AI for health - World Health Organization

Read More..

Prompt Injection: An AI-Targeted Attack – Hackaday

For a brief window of time in the mid-2010s, a fairly common joke was to send voice commands to Alexa or other assistant devices over video. Late-night hosts and others would purposefully attempt to activate voice assistants like these en masse and get them to do ridiculous things. This isnt quite as common of a gag anymore and was relatively harmless unless the voice assistant was set up to do something like automatically place Amazon orders, but now that much more powerful AI tools are coming online were seeing that joke taken to its logical conclusion: prompt-injection attacks.

Prompt injection attacks, as the name suggests, involve maliciously inserting prompts or requests in interactive systems to manipulate or deceive users, potentially leading to unintended actions or disclosure of sensitive information. Its similar to something like an SQL injection attack in that a command is embedded in something that seems like a normal input at the start. Using an AI like GPT comes with an inherent risk of attacks like this when using it to automate tasks, as commands to the AI can be hidden where a user might not expect to see them, like in this demonstration where hidden prompts for a ChatGPT plugin are hidden in YouTube video transcripts to attempt to get ChatGPT to perform actions outside of those the original user would have asked for.

While this specific attack is more of a proof-of-concept, its foreseeable that as these tools become more sophisticated and interconnected in our lives, the risks of a malicious attacker causing harm start to rise. Restricting how much access we give networked computerized systems is certainly one option, similar to sandboxing or containerizing websites so they cant all share cookies amongst themselves, but we should start seeing some thought given to these attacks by the developers of AI tools in much the same way that we hope developers are sanitizing SQL inputs.

See the article here:

Prompt Injection: An AI-Targeted Attack - Hackaday

Read More..

How a family is using AI to plan a trip around the world – Business Insider

"We are going into uncharted territory," Motamedi told Insider. "Because we are literally giving our lives into AI's hands we're kind of a guinea pig for the world." Courtesy of Michael Motamedi

Travel influencers Michael Motamedi and Vanessa Salas typically spend days, if not weeks, researching travel plans for them and their 18-month-old daughter.

But on Wednesday, the family used an AI chatbot to decide on their next destination in under an hour: They were heading to Morocco.

For the next six months, the digital nomad family is relinquishing control of their itinerary to artificial intelligence an experiment that will be the basis of a new web series called "No Fixed Address."

"I cannot explain to you the nerves that I have going into this," Motamedi said in an interview with Insider. "It's nerve-racking when you're not making the decisions. It's kind of a strange, out-of-body experience."

Media not supported by AMP.Tap for full mobile experience.

Motamedi is partnering with GuideGeek, a free AI travel assistant owned by the Matador Network that uses ChatGPT technology, to produce the show. The newly released chatbot will plan nearly every step of the family's journey, from picking out different cities for them visit to deciding where to eat each day. GuideGeek declined to disclose the financial terms of the partnership.

After living in a new country for a month, Motamedi will ask GuideGeek where they should travel to next based on their general interests, such as nice beaches, interesting architecture, and good food. The family will handle real-time logistics like booking flights and finding living accommodations, he said.

"We're going to pick the next place when we're in Morocco," he told Insider. "That's the crazy part about this whole journey I don't know where I'm going to end up in July."

Motamedi and Salas tested the technology out in April while using the chatbot to plan a date night in Mexico City. GuideGeek provided speakeasy and drink recommendations as well as local history facts.

Media not supported by AMP.Tap for full mobile experience.

While its recommendations resulted in a "fantastic night," Motamedi insists the new technology should be used as a helpful tool rather than a substitution for human interaction or online videos or articles based on personal experiences.

"Will I utilize AI as a tool? Of course," he said. "Do I care if the AI had a good time having tea? Not really because it doesn't know how to taste tea."

Despite his confidence in the nascent technology, Motamedi said he (and his mother) are "terrified" to let a robot plan out his family's life for the next six months.

Artificial intelligence chatbots based on large language models like ChatGPT are known to "hallucinate," or make up false information which can have real-life impacts when relying on the bots to make important travel decisions. Motamedi experienced this first-hand when he asked GuideGeek to provide the history of pastry shops in Mexico City, which he realized was inaccurate after talking to a local business owner.

Knowing the chatbot is bound to make mistakes, Motamedi said he does not plan on following its advice blindly and will occasionally fact-check its results.

"My family comes first," he said."Just because Google Maps is telling me to go left and I see a lake in front of me, doesn't mean I'm gonna go into that lake."

Loading...

Continue reading here:

How a family is using AI to plan a trip around the world - Business Insider

Read More..

AI at warp speed: disruption, innovation, and whats at stake – Economic Times

Synopsis

12 mins read, May 21, 2023, 06:00 AM IST

Conversations on tech these days are dominated by generative artificial intelligence and what it means for the worlds future. ChatGPT, Stable Diffusion, MidJourney and Google Bard arae rapidly changing the way we live, work and engage with each other. Those platforms are themselves evolving at an exponential rate based on what they learn from users. The costs involved are enormous, as are the stakes.On a clear morning in early May, Alphabet CEO

Membership Benefits

Access the exclusive Economic Times

Stories, Editorial & Expert opinion

Complete Access with ET Prime

Experience your Economic Times newspaper, the digital way.

Clean experience

with minimal ads

Easy & distraction-free reading with 90% less ads

Sharp Insight-rich,

In-depth stories across 20+ sectors

1500+ Exclusive stories & analysis across sectors to help you stay informed

Get 1-Year Emeritus Insights Subscription worth 19900 for free

Get One Year Times Prime Subscription worth 1199 for free

Get One Year Docubay Subscription worth 999 for free

Stream award-winning international documentaries from more than 100 countries.

Member only Newsletters

Never miss a story that matters

Members Love Us

The stalwarts of the industry trust ET Prime for insightful analysis & unbiased thought pieces

Gift a story

Your membership includes Story Gifting Credits. Now gift exclusive stories to your friends & peers.

Comment & Engage

with ET Prime community

Communicate & build a connection with great minds of the industry

A trusted team of

Journalists & Analysts

Unbiased perspective & detailed reporting by our team of journalists who have in-depth knowledge and years of experience

Read this article:

AI at warp speed: disruption, innovation, and whats at stake - Economic Times

Read More..

Beijing calls on cloud providers to support AI firms – TechCrunch

Image Credits: Photo by Bu Xiangdong/Qianlong.com/VCG / Getty Images (Image has been modified)

As large language models from Western tech firms show the potential to disrupt everything from marketing to teaching to coding, China is rushing to cultivate its home-grown AI pioneers by stepping up state support.

Beijing is now seeking public opinion on a draft policy aimed at developing artificial general intelligence, or AGI, a category of AI that can theoretically carry out all human tasks. The policys goal, in short, is to buttress AI firms by beefing up support from cloud providers and data companies.

Its not uncommon to see the capital city spearheading policymaking in emerging industries. Beijing, for example, was the first in letting driverless robotaxis ferry passengers on open roads under certain restrictions.

The AGI blueprint lays out action plans around three main areas: computing power, training data and applications.

The first strategy calls for closer collaboration between cloud providers, the sources of computing power and universities and companies, which consume large amounts of processing power to train large language models, multimodal learning and other AI. The policy proposes a state-backed, centralized platform that allocates public cloud resources to users based on demand.

Alibaba accounted for over a third of Chinas cloud infrastructure services spending last year, coming in first, according to market research firm Canalys. Huawei, Tencent and Baidu trailed behind.

The second strategy acknowledges the lack of quality Chinese-language data and encourages the compliant cleansing of such datasets, which includes data anonymization, likely an effort to meet Chinas new, stringent privacy law. The process will no doubt be time-consuming and labor-intensive, as weve seen how OpenAI relies on Kenyan workers to manually label training data and remove toxic text.

Beijings big data exchange, launched by the government in 2021 to facilitate data trading across facets of society, will aid the process of data sourcing.

Lastly, the policy lays out a list of potential pilot applications of AI, ranging from using AI in medical diagnosis, drug making, financial risk control and transportation to urban management.

The proposed policy also touches on the importance of software and hardware infrastructure for AI training. Amid an escalating U.S.-China competition, the latter is striving to shore up innovation in key technologies such as semiconductors.

The U.S. already restricts the export of Nvidias powerful AI chip H100 to China. In response, Nvidia came up with a less powerful processor for China to circumvent export controls. Domestic companies, such as tech giant Huawei and startup Biren, are also working on Nvidia alternatives.

The rest is here:

Beijing calls on cloud providers to support AI firms - TechCrunch

Read More..

Amazon is focusing on using A.I. to get stuff delivered to you faster – CNBC

Amazon is increasingly using robotics in its fulfilment centers to carry out repetitive tasks such as lifting heavy packages.

Nathan Stirk | Getty Images News | Getty Images

Amazon is focusing on using artificial intelligence to speed up deliveries by minimizing the distance between its products and customers, a top executive told CNBC.

Stefano Perego, vice president of customer fulfilment and global ops services for North America and Europe at Amazon, outlined how the company is using AI when it comes to logistics.

One area is in transportation, such as mapping and planning routes, taking into account variables like the weather, Perego said.

Another area is when customers search from products on Amazon to help them find the right goods.

But a key focus right now for Amazon is using AI to figure out where to place its inventory.

"I think one area that we consider key in order to lower cost to serve is on inventory placement," Perego said.

"So now, I'm pretty sure you're familiar with the vast selection we offer to our customers. Imagine how complex is the problem of deciding where to place that unit of inventory. And to place it in a way that we reduce distance to fulfill to customers, and we increase speed of delivery."

Amazon has been focusing on a so-called "regionalization" effort to ship products to customers from warehouses closest to them rather than from another part of the country.

But doing so requires technology that is capable of analyzing data and patterns in order to predict what products will be in demand and where.

That's where AI comes in. If a product is nearer to customers, Amazon will be able to make same-day or next-day deliveries, like what its Prime subscription service offers.

Perego said the efforts are progressing well. In the United States, more than 76% of the products customers order are now from fulfilment centers within their region, according to Amazon.

Amazon is also using robotics in its fulfilment centers to help with repetitive tasks such as lifting heavy packages.

The company said that 75% of Amazon customer orders are handled in part by robotics.

There's a debate over how robotics and artificial intelligence such as the ChatGPT AI chatbot developed by startup OpenAI will affect jobs. A Goldman Sachs report earlier this year suggested there could be "significant disruption" to the global labor market, with automation affecting 300 million jobs.

Perego described automation as "collaborative robotics," underlining how Amazon sees humans and technology working together.

"I think that what is happening is really a transformation of the type of jobs," Perego said.

The executive said that when automation and AI become more widespread, they will change, rather than eliminate, the jobs that workers perform.

"Eventually, the type of job that an employee will be called to do in a fulfillment center will be increasingly a high judgment type of job," Perego said. "And the heavy lifting and repetitive tasks will be done through robotics. That's fine. It's a transformation rather than a substitution."

Follow this link:

Amazon is focusing on using A.I. to get stuff delivered to you faster - CNBC

Read More..