Microsoft’s Guide to using AI to become more Productive

Hey there, I'm Devansh. I write for an audience of ~200K readers weekly. My goal is to help readers understand the most important ideas in AI and Tech from all important angles- social, economic, and technical. You can find my primary publication AI Made Simple over here, message me on LinkedIn, or reach out to me through any of my social media over here. I work as a consultant for clients looking to integrate AI into their lives- so please feel free to reach out if you think we can work together.

Teams everywhere are concerned about how to integrate AI into their workflows most effectively. If your organization has lots of money to burn, you could pay McKinsey consultants 400 USD/hour to create pretty slides based on recommendations from ChatGPT and spend 5 hours weekly in meetings to explore synergies and best practices. But for those of you without that luxury, one of the best resources is to look at the research done by the productivity teams at major companies. These companies have dedicated teams that interview their employees, study workflows, and extract insights from various internal and external experiments conducted on productivity.

Today, we will be looking at Microsoft's excellent New Future of Work Report 2023 to answer a key question- how can we leverage AI to make our work more productive? We will be studying the report to pull out interesting insights on-

1. How LLMs impact Information Work:

Task completion times for lab studies of Copilot for M365 (Cambon et al. 2023)

In a lab experiment, participants who scored poorly on their first writing task improved more when given access to ChatGPT than those with high scores on the initial task. Peng et al. (2023) also found suggestive evidence that GitHub Copilot was more helpful to developers with less experience. In an experiment with BCG employees completing a consulting task, the bottom half of subjects in terms of skills benefited the most, showing a 43% improvement in performance, compared to the top half, whose performance increased by 17% (Dell'Acqua et al. 2023).

I would take these results with a grain of salt, however. High-skill performers often do different things from their lower-skilled counterparts, things that standardized tests are unable to measure. Premier League player Erling Haaland is a better footballer than me not just because he can beat me on performance-related tests, but also because he does 30 things that I don't. These 30 things are often much more difficult to measure. As we figure out how to use AI more effectively (and how to measure the results better), AI might actually increase the performance disparity between skilled and unskilled workers (most technology tends to reinforce differences, not reduce them). We already see some signs of this.

2. LLMs and Critical Thinking:

3. On Human-AI Collaboration:

4. LLMs for Team Collaboration and Communication:

5. Knowledge Management and Organizational Changes:

6. Implications for Future Work and Society:

We'll spend the rest of this article discussing these ideas in more detail. Let's get right into it.

The following image summarizes the key themes very well-

Generative AI makes a clear, undeniable contribution to reducing the cognitive load from repetitive work, significantly improving the experience: 68% of respondents agreed that Copilot improved the quality of their work; participants with access to Copilot found the task to be 58% less draining than participants without access; and among enterprise Copilot users, 72% agreed that Copilot helped them spend less mental effort on mundane or repetitive tasks.

The impacts on quality are a bit more mixed. In the meeting summarization study, where Copilot users took much less time, we see a slight reduction in performance: their summaries included 11.1 out of 15 specific pieces of information in the assessment rubric, versus 12.4 out of 15 for users who did not have access to Copilot. This is not a huge difference, but it definitely highlights the importance of having a human in the loop to audit the generation. In this sense, LLMs can be very helpful for creating a good first draft quickly- leaving the refinement and improvements to the user (something 85% of the respondents agreed with).
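To make the rubric idea concrete, here is a minimal sketch of how that kind of coverage score could be computed. This is not the study's actual methodology- the rubric items, the example summary, and the exact-match check are all made up for illustration-

```python
# Minimal sketch of rubric-based coverage scoring for an AI-generated
# meeting summary. Illustrative only: real studies use human raters or
# fuzzier matching, not exact substring checks.

RUBRIC = [
    "q3 budget was approved",
    "launch moved to september",
    "priya owns the customer survey",
    # ...one entry per fact the summary is expected to contain
]

def coverage_score(summary: str, rubric: list[str]) -> float:
    """Fraction of rubric items that appear (verbatim) in the summary."""
    text = summary.lower()
    hits = sum(1 for item in rubric if item in text)
    return hits / len(rubric)

draft = "The Q3 budget was approved and the launch moved to September."
print(f"Coverage: {coverage_score(draft, RUBRIC):.0%}")  # 2 of 3 items -> 67%
```

Swap the substring check for an embedding comparison, an LLM judge, or human raters and you have the skeleton of a quality gate you can run on every generated summary before anyone relies on it.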

On more domain-specific tasks, LLMs can introduce a very noob-friendly meta by raising the performance floor- in the other direction, the study of M365 Defender Security Copilot found security novices with Copilot were 44% more accurate in answering questions about the security incidents they examined. You can see something similar for yourself with tools like DALL-E that allow anyone to make good images. This is what leads to the impression that AI can help replace experts in their respective fields. For example, the usage of GitHub Copilot leads to significantly better performance for programmers-

However, the reality is a lot more complicated. While such tools can be very helpful- they also introduce all kinds of unpredictable errors and vulnerabilities into systems. This is where domain expertise is key, since it helps you evaluate and modify the base output to your needs (the first-draft concept shows up again). The most effective usage of LLMs often involves guiding them towards the correct answer. So for knowledge workers- it is crucial to know what to do. LLMs/Copilots can take care of the how.

Using AI for knowledge work always comes with the risk of overreliance and lax evaluations (we humans are prey to something called automation bias, where we give undue weightage to any decision taken by an automated system). This is why a large part of my work involves building rigorous evaluation pipelines, better transparency systems, and controlling for random variance for my clients. Without these, teams can end up with an incomplete picture of their system- leading to catastrophically wrong decisions (cue Air Canada not testing its chatbot, which then promised customers refunds the airline was forced to honor).
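Here is a minimal sketch of what I mean by an evaluation pipeline with a human in the loop. The `generate` and `auto_score` functions are stand-ins for whatever model and evaluator you actually use, and the threshold is arbitrary- the point is simply to surface variance and route weak outputs to a person instead of trusting them blindly-

```python
import random

def generate(task: str) -> str:
    """Stand-in for a call to your LLM/Copilot of choice."""
    return f"Draft answer for: {task}"

def auto_score(output: str) -> float:
    """Stand-in for an automated evaluator (rubric check, LLM judge, etc.)."""
    return random.uniform(0.0, 1.0)

def evaluate_with_review(task: str, n_runs: int = 5, threshold: float = 0.8) -> list[str]:
    """Run the model several times to expose variance, and flag anything
    scoring below the threshold for human review."""
    flagged, scores = [], []
    for _ in range(n_runs):
        output = generate(task)
        score = auto_score(output)
        scores.append(score)
        if score < threshold:
            flagged.append(output)
    print(f"mean={sum(scores) / len(scores):.2f}, min={min(scores):.2f}, "
          f"flagged for human review: {len(flagged)}/{n_runs}")
    return flagged

evaluate_with_review("Summarize our refund policy for delayed flights")
```

The specifics will vary wildly by use case; the non-negotiable part is that some outputs always get routed to human eyes, precisely because automation bias makes us stop looking.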

With all of that covered, let's move on to the next section. How can we use AI to improve critical thinking and creativity? How can humans use AI effectively?

To answer this question, let's first understand the biggest problems faced by a lot of teams- cognitive overload, knowledge fragmentation, and a lack of feedback.

When it comes to reducing cognitive overload, AI-based tools can be used for delegation, planning, and quick load balancing. Once again, the goal here isn't to have AI do this perfectly, but for it to save time for users who would otherwise do this manually.
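As a rough sketch of what AI-assisted triage and load balancing could look like (the `chat` helper, the team roster, and the prompt are all assumptions for illustration- any chat-completion API would slot in), you might have an LLM propose a first-pass assignment that a lead then edits-

```python
# Minimal sketch: ask an LLM for a first-pass task assignment.
# `chat()` is a placeholder for whatever chat-completion client you use;
# the team and backlog below are made-up examples.

def chat(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Azure OpenAI, a local model, ...)."""
    return "<model's proposed assignment goes here>"

team = {
    "Ana": "frontend, 60% booked this sprint",
    "Wei": "backend, 30% booked this sprint",
    "Sam": "data, 90% booked this sprint",
}
backlog = ["fix login redirect bug", "draft Q3 planning doc", "clean up analytics pipeline"]

prompt = (
    "You are helping a team lead balance work. Given each person's focus and current load:\n"
    f"{team}\n\nPropose an owner for each of these tasks, with a one-line reason, "
    f"and flag anyone who looks overloaded:\n{backlog}"
)

draft_plan = chat(prompt)  # the lead reviews and edits this; it is a starting point, not a decision
print(draft_plan)
```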

Next, let's cover knowledge fragmentation. Large organizations have a lot of projects happening, and key people often leave due to turnover, promotions, or retirement. In this environment, keeping track of everything that is happening (and has already been done) becomes impossible- and a lot of effort is wasted reinventing the wheel.

Knowledge fragmentation is a key issue for organizations. Organizational knowledge is distributed across files, notes, emails (Whittaker & Sidner 1992), chat messages, and more. Actions taken to generate, verify, and deliver knowledge often take place outside of knowledge deliverables, such as reports, occurring instead in team spaces and inboxes (Lindley & Wilkins 2023). LLMs can draw on knowledge generated through, and stored within, different tools and formats, as and when the user needs it. Such interactions may tackle key challenges associated with fragmentation, by enabling users to focus on their activity rather than having to navigate tools and file stores, a behavior that can easily introduce distractions (see e.g., Bardram et al. 2019). However, extracting knowledge from communications raises implications for how organization members are made aware of what is being accessed, how it is being surfaced, and to whom. Additionally, people will need support in understanding how insights that are not explicitly shared with others could be inferred by ML systems (Lindley & Wilkins 2023). For instance, inferences about social networks or the workflow associated with a process could be made. People will need to learn how to interpret and evaluate such inferences

This is a theme we see in a few different studies. Google has an excellent publication on what software devs want from AI. Both the 2nd and 3rd reasons mentioned below can be addressed (at least partially) by using AI to aggregate insights across platforms and unify them into one place that people can refer to.

We covered that publication in depth over here. The final section- which talks about concrete steps that orgs must take to fully unlock their AI potential- will be relevant to you, even if you're not an AI/Tech company. For now, the simple takeaway is to encourage active documentation/logging so that your AI has plenty of data, and to invest heavily in AI systems that can interact with that data in a useful manner.
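To make "AI systems that can interact with that data" a little less abstract, here is a minimal sketch of retrieval over whatever your team already logs. Everything here is an assumption for illustration- the folder layout, the keyword-overlap ranking (a real system would use embeddings and a vector store), and the `chat` stub-

```python
# Minimal sketch: pull fragmented knowledge (exported notes, chat logs,
# meeting summaries) into one index and answer questions over it.
# Keyword overlap keeps this dependency-free; real systems would use
# embeddings, a vector store, and access controls.

from pathlib import Path

def chat(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "<answer grounded in the retrieved snippets goes here>"

def load_corpus(root: str) -> dict[str, str]:
    """Read every .txt file under `root` (a hypothetical export folder)."""
    return {str(p): p.read_text(errors="ignore") for p in Path(root).rglob("*.txt")}

def retrieve(question: str, corpus: dict[str, str], k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in ranked[:k]]

corpus = load_corpus("./team_knowledge")  # hypothetical folder of exported docs/notes/chats
snippets = retrieve("Who decided to deprecate the old billing API, and why?", corpus)
print(chat("Answer using only these notes:\n" + "\n---\n".join(snippets)))
```

In practice the hard parts are not the retrieval itself but permissions, freshness, and making it clear to people what is being indexed and surfaced- which is exactly the concern the report raises about inferred knowledge.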

We can summarize the main ideas in this section as follows-

Combine this with the usage of Copilot-like tools for knowledge workers, and you get something really powerful. Let's end with a discussion of the implications and the future of work.

As with any disruptive technology, AI will change not only how we do things, but also, fundamentally, what we do and what becomes important. We're already seeing some of this. Slide 11 brings up an interesting possibility- where knowledge work may shift towards more analysis and critical integration as opposed to raw generation.

As opposed to the outright replacement that many people claim, I think people will simply have to dedicate a lot more time to evaluation. Checking outputs, sources, and the AI's underlying analysis are all a must, and we'll all probably spend a lot more time on that. Thus, there is a lot to be gained by investing in those skills (or building tools for them).

Similarly, soft skills and the general ability to push other people to get shit done will become even more important-

Skills not directly related to content production, such as leading, dealing with critical social situations, navigating interpersonal trust issues, and demonstrating emotional intelligence, may all be more valued in the workplace

-(LinkedIn 2023)

With a powerful tool like AI, accessibility also becomes an important discussion point. There are two important dimensions of accessibility-

The second is critical, but much harder. Open-sourcing research and other important ideas in AI is my goal, and the reason why my primary publication- AI Made Simple- doesn't have any paywalls. However, that's a very small part of what needs to be done. I have some ideas on what we can do to push things forward- but this is something that needs a lot of open conversations among a lot of people. If you have any ideas/want to discuss things with me, shoot me a message and let's talk. Once again, you can find my primary publication AI Made Simple over here, message me on LinkedIn, or reach out to me through any of my social media over here.

If you liked this article and wish to share it, please refer to the following guidelines.

That is it for this piece. I appreciate your time. As always, if you're interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow.

I put a lot of effort into creating work that is informative, useful, and independent from undue influence. If you'd like to support my writing, please consider becoming a paid subscriber to this newsletter. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

Help me buy chocolate milk

PS- We follow a pay what you can model, which allows you to support within your means. Check out this post for more details and to find a plan that works for you.

I regularly share mini-updates on what I read on the microblogging sites X (https://twitter.com/Machine01776819), Threads (https://www.threads.net/@iseethings404), and TikTok (https://www.tiktok.com/@devansh_ai_made_simple)- so follow me there if you're interested in keeping up with my learnings.

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Small Snippets about Tech, AI and Machine Learning over here

AI Newsletter- https://artificialintelligencemadesimple.substack.com/

My grandma's favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let's connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819
