How Should Developers Respond to AI? – The New Stack

The tech-oriented podcast network Changelog released a special edition last month focusing on how developers should respond to the arrival of AI. Jerod Santo, the producer of their Practical AI podcast, had moderated a panel at the All Things Open conference on AI's impact on developers. And several times the panel wondered whether some of the issues presented by AI require a collective response from the larger developer community.

The panelists exploring AI's impact on developers were Quick and Emily Freeman.

Speaking about recent high-profile strikes in other industries, Quick had lauded the general principle and "the power of community, and people being able to come together as a community to stand up for what they think they deserve. I don't know that we're, like, here right now, but I think it's just an example of what people that come together with a common goal can do for an entire industry."

And then his thoughts took an interesting turn. "And maybe we get to a point where we unionize against AI."

"I don't know, that's maybe not. But the power of those connections, I think, can lead to being able to really make positive influence wherever we end up."

"Unionize against AI. You heard it here first," moderator Santo said wryly, then moved on to another topic. (When Freeman warned about prompts that trigger hallucinations of non-existent solutions, quipping that generative AI is "on drugs," Santo joked the audience was hearing "lots of breaking news on this panel.")

As the discussion moved to other areas, it reminded the audience that the issue is not just the arrival of powerful, code-capable AI. The real question is how the developer community will respond to the range of issues raised, from code licensing to the need for responsible guidelines for AI-developing companies. Beyond preserving their careers by adapting to the new technology, developers could help guide the arrival of tools alleviating their own pain points. They could preserve that fundamental satisfaction of helping others, while tackling increasingly complex problems.

But as developers find themselves adapting to the arrival of AI, the first question is whether they'll have to mount a collective response.

"Unionizing against AI" wasn't a specific goal, Quick clarified in an email interview with The New Stack. He'd meant it as an example of "the level of just how much influence can come from a united community... My main thought is around the power that comes with a group of people that are working together." Quick noted what happened when the United Auto Workers went on strike. "We are seeing big changes happening because the people decided collectively they needed more money, benefits, etc. I can only begin to guess at what an AI-related scenario would be, but maybe in the future, it takes people coming together to push for change on regulation, laws, limitations, etc."

Even this remains a concept more than any tangible movement, Quick stressed in his email. "Honestly, I don't have much more specific actions or goals right now. We're just so early on that all we can do is guess." But there is another scenario where Quick thinks community action would be necessary to push for change: the hot-button issue of who owns the code. AI models have famously been trained by ingesting code from public repositories, and during the panel discussion, Quick worried developers might be tempted to abandon open source licenses.

He acknowledged to the audience that "there are obviously much larger issues" and that they can seem "a little overwhelming." But he also believes "there's some evolution that needs to happen, and in a lot of areas: legally, morally, ethically, open sourcedly. There has to be things that catch up, and give some sort of guidelines to this stuff that we have going on." Quick later argued it will follow the trajectory of other advancements that humanity has made, including the need for acknowledging that "there's probably a point where we need to have limitations."

Although he quickly added, "What that means and what that looks like, I don't know."

But soon the discussion got down to specifics. Santo noted there are already ways that a robots.txt file can be updated by individual users to block specific AI agents from crawling their site. Quick suggested flagging GitHub repositories in the same way as a reasonable intermediary step, though he later admitted that it'd be hard to prove where AI-generated code had taken its training data.
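As a sketch of what Santo described: a site's robots.txt can single out individual crawlers by their published user-agent names. The agent strings below (GPTBot and Google-Extended) are real, publicly documented identifiers for AI-related crawlers, but compliance with robots.txt is voluntary on the crawler's part, so this is an opt-out signal rather than an enforcement mechanism.

```
# Ask OpenAI's crawler not to fetch anything on this site.
User-agent: GPTBot
Disallow: /

# Google-Extended is Google's opt-out token for AI training use.
User-agent: Google-Extended
Disallow: /

# All other crawlers (ordinary search bots, etc.) may proceed normally.
User-agent: *
Disallow:
```

An empty `Disallow:` line, as in the final group, means nothing is blocked for the agents it covers.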

But Freeman returned to the role of communities, both developers and users, in addressing companies with a profit-only mentality. "To some degree, between our work and also where we spend our money, we have to tell the market that that is not acceptable."

"So I don't want to live in a world where we're trying to hide from crawlers. I want to live in a world where we have decided on standards and guidelines that lead toward responsible use of that information, so that we all have some compromise around how we're proceeding with this."

At one point Freeman seemed to suggest a cautious choosing-your-battles strategy, telling the audience to "make demands where you can." But one area where she sees that as essential? Calling for responsible development of AI, again meaning guidelines and standards. "We are in the place where it is truly our responsibility to push for this, and push against the sort of market forces that would say, 'We're moving forward quickly with a profit-based approach to this, a profit-first approach.'"

It's a topic she returned to throughout the panel, emphasizing the importance of developers "recognizing our own power and influence on pushing toward a holistic and appropriate approach to responsible AI."

The panel kept returning to the needs of the community. Freeman also agreed with Quick that AI's impact on developers will someday include tools designed to relieve their least-favorite chores, like debugging strange code, though it may take a while to get there. "But I think truly, I keep coming back to this: we have ownership and responsibility over this. And we can kind of determine what this actually looks like in usage."

The biggest surprise came when Santo asked if they were bearish or bullish about the long-term impact of AI on developers. Santo admitted that he was long-term positive, and both his panelists took the same view.

Quick characterized his attitude as "a very super-positive thing," with a goal of easing people's fears about AI replacing their jobs. And Freeman also said with a laugh that she was bullish on AI "because it's happening, right? Like, this is happening. We have to kind of make it our own and lean into it, rather than try and fight it, in my opinion."

Freeman's advice for today's developers? "Learn as much as you can, whether it's about designing prompts or understanding the models that you're using, and recognizing the strengths and the limitations and being ready to adapt and change as we move forward." Just as developers have in the past, it's time to grow with a new technology.

And on the plus side, Freeman anticipates "a ton" of new AI tools being created as venture capitalists pour investment into the AI ecosystem.

Toward the end, Santo asked a provocative question: since detail-oriented programmers take pride in their meticulous carefulness, is AI stealing some of our joy? And Emily Freeman responded: "I think you have a point." Maybe we humans glory in our ability to spot errors quickly, and that pattern recognition is something that makes us really powerful.

But a moment later Freeman conceded that "I think that's the joy for some people; it's not the joy for others." Freeman described her own joy as building "tools that matter to people... I think the spark of joy is going to be different for all of us." But Freeman emphasized that joy and personal growth are important to humans, and will remain so in the future.

And this led back to the larger theme of taking control of how AI arrives in the developer world. "We set the standards here. This is not happening to us. It is happening with us. It is happening by us." Freeman urged developers to "take ownership of that," to identify which areas they want to hand off to AI, and the areas where they want developers to remain, growing and evolving with the newly arrived tools.

"So instead of coding up yet another CREATE/READ/UPDATE/DELETE service for the thousandth time, I want to solve the really complex problems." The challenge of solving new problems at scale is interesting, Freeman argues. "And I think it's that kind of problem-solving, and looking higher up in the stack, and having that holistic view, that will empower us along the way."

In our email interview, we asked Quick if he'd gotten any reactions to the panel. His response? "I think we got an overwhelming response of 'this is something I should be paying attention to.'"
