To see the future of AI, look back to the lessons from the cloud – GeekWire

More than a decade ago, businesses were faced with a new, disruptive technology. This technology promised to cut operational costs, increase productivity, and allow for collaboration from around the world. It also raised concerns about reliability, security, and government regulations.

A decade later, these are the same promises and concerns businesses have about AI, potentially the most disruptive technology in a generation.

We are hearing from customers that they are excited, skeptical, and worried, and each reaction is warranted. We are headed into an uncertain future as AI upends both the business and consumer worlds, but we are not without clues about what an AI-powered future might look like or how we might proactively prepare for it.

We only have to look to the lessons learned from the disruptive technology that came before it: the cloud.

For many businesses, the cloud was initially viewed as an alternative to hosting servers, data, and applications on-premises. It was inexpensive, quick to deploy, and relieved IT of the ongoing maintenance burden.

The reality, however, is that where a company hosts its technology infrastructure is just one small part of the journey we now call digital transformation. By using hosted services in the cloud, companies gained access to computing power that was inexpensive, resilient, and scalable to their changing needs. This led to spillover effects, including many of the cloud's initial promises, such as increased productivity, collaboration, and a greater focus on data.

There were also unforeseen costs. Many companies were surprised by data transfer fees, usage bills driven by overprovisioning, or poor customer experiences caused by underprovisioning. Security breaches and privacy violations related to cloud-hosted services were commonplace, as were outages that affected many customers simultaneously (from Computerworld in 2014: "Human error root cause of November Microsoft Azure outage"). Few people could have predicted these costs, and most IT teams at the time were simply not trained to handle these brand-new situations.

We see a similar situation happening with AI. Take software development, for example, where generative AI has shown the potential to greatly increase the speed of writing code. There are many examples, and they are indeed impressive: code generation, suggested functions, and how-tos for writing scripts in different languages and frameworks.

But building great software is not just about writing code. In fact, developers have told us that writing code is how they spend just 25% of their time (source: Global DevSecOps Report: The State of AI in Software Development). It's one part of an entire process, spanning testing, security, monitoring, and more, where generative AI is still in its infancy. When we make drastic changes in one area, such as how we write code, we must proactively anticipate unforeseen side effects elsewhere.

As recent headlines such as "Samsung Bans Staff's AI Use After Spotting ChatGPT Data Leak" demonstrate, one of these side effects is how a company's code can be used to further train large language models that its competitors can leverage. Our customers are excited to adopt AI across the software development lifecycle, but are justly concerned about what safeguards are in place to protect their private code and intellectual property.

The onus is on those of us building AI into our products and services to show our customers that they can trust and verify AI-generated code, while exploring ways to use AI elsewhere in software development, such as detecting and explaining security vulnerabilities.

There is no doubt that disruptive technologies generate fear and uncertainty; the same was true of the cloud. At the time, IT departments were often hesitant to hand over the reins of mission-critical hardware and company processes to any outside third party.

There were also legitimate concerns about the future of their jobs. In hindsight, it's easy to see that although IT is no longer primarily focused on managing on-premises hardware, it has not been replaced. If anything, as IT professionals have learned new skills such as cloud scripting, security research, and systems design, they have become more critical to a company's vision than ever before. They are architects, designing the very infrastructure that makes modern software, platform, and infrastructure services possible.

The situation with AI will be similar, offering opportunities for people to bring their ideas to life without needing to be expert coders. At the same time, AI will create upskilling opportunities for those in traditionally high-skill roles to accelerate their careers by applying their existing skills in new ways, just as the cloud did for IT. Reducing the maintenance burden of software will also allow organizations to focus developers on more strategic efforts and spread skilled work across more of them, rather than relying on a single superhero.

When I was at Tableau in 2013, I had the title of Head of Cloud Strategy. A few other forward-thinking companies had similar-sounding roles, such as Chief Cloud Officer. Today, the title sounds silly, but these leaders served an important purpose at the time: They helped businesses wrap their heads around a brand-new framework, evangelizing the benefits of cloud computing, establishing clear guardrails around its adoption, and introducing innovative concepts like infrastructure as code and GitOps.

We are seeing such leaders again: Head of AI, AI Evangelist, and even a CEO of AI at Salesforce. They will all champion AI's possibilities while ensuring their companies adopt it responsibly.

The cloud remains one of the most disruptive technologies of the modern era. Some of the most innovative companies in recent history created their products and services because of cloud computing. Quite a few companies, however, have lost the trust of their customers, and as a result their business, because they failed to adopt the cloud quickly and securely.

AI is poised to be even more disruptive, and although organizations are optimistic about AI, they know that similar failures to think strategically about responsibility could lead to even worse outcomes for data privacy, intellectual property, and, worst of all, trust. For example, in an interview with 60 Minutes, Google and Alphabet CEO Sundar Pichai discussed how AI could be used to spread disinformation through videos of anybody saying anything, himself included.

Like in the early days of the cloud, we need to strike the right balance between caution and optimism. AI will not simply change how we code, write, communicate, or any single part of our businesses. It will change everything, and with the right leaders in place, we will be ready.
